url (string) | text (string) | date (timestamp[s]) | meta (dict)
https://math.stackexchange.com/questions/2536529/is-n-frac1n-ever-rational | # Is $n^\frac{1}{n}$ ever rational?
Sorry if this is a duplicate, as usual I'm struggling with how to search for this.
I was wondering to myself how to prove that you can't get a square number that is twice another square number, I.e. $$m^2=2n^2$$ and I quickly came up with a neat proof using the fact: $$\frac{m}{n}=\sqrt{2}$$ The next obvious step is cubes that are thrice another cube, etc. etc. I then realised you can use this approach to prove that any power of p cannot be p times another power of p if $p^\frac{1}{p}$ is never rational. I suspect this true, but I need to go to sleep, so can somebody help me out with a proof?
• Sure, if $n = 1$. I'm guessing, you want $n>1$? – Ennar Nov 25 '17 at 13:44
• Well yes, I do indeed – Phill Nov 25 '17 at 13:45
• Show that $\sqrt[n]{m}$ is either an integer or irrational. That's similar to the proof of irrationality of $\sqrt{2}$ or $\sqrt{3}$. Then note that $2^n > n$. – Daniel Fischer Nov 25 '17 at 13:45
• @DanielFischer This allows the stronger statement : $m^{\frac{1}{n}}$ with positive integers $m,n$ is either an integer or irrational. – Peter Nov 25 '17 at 13:51
$n^{\frac{1}{n}}$ cannot be rational for any positive integer $n>1$ (No matter whether $n$ is prime or composite)
This is because the number $n^{\frac{1}{n}}$ is a root of the polynomial $x^n-n$.
The leading coefficient is $1$, hence any rational root would be an integer. If we denote $m:=n^{\frac{1}{n}}$, we get $m^n=n$. $m$ is clearly positive, so it would have to be a positive integer if it were rational.
We would have $m\ne 1$, hence $m\ge 2$, but then $m^n\ge 2^n>n$ for $n>1$, hence we arrive at a contradiction.
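As a quick numeric illustration of this argument (a minimal Python sketch of my own, not from the thread): for each small $n>1$ it confirms that $m^n=n$ has no integer solution $m$, and that $2^n>n$.

```python
# Check, for 2 <= n <= 50, that m**n == n has no integer solution m >= 1
# (so by the rational root theorem n**(1/n) is irrational), and that 2**n > n.
for n in range(2, 51):
    assert not any(m**n == n for m in range(1, n + 1)), f"integer root found for n={n}"
    assert 2**n > n
print("n**(1/n) is irrational for n = 2, ..., 50")
```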
• Wonderful. Thank you. – Phill Nov 25 '17 at 13:59
$$1 < n^{1/n} < 2 \quad \forall n >1 , n\in \Bbb N$$
Also (I think a fuller hint is required, as the downvotes came fast) note that $n^{1/m}$ can be rational iff it is an integer.
• What exactly does this prove? – user370967 Nov 25 '17 at 13:47
• Why so many downvotes? Peter used almost the same argument in his answer. – Ennar Nov 25 '17 at 13:49
• @Math_QED It was a hint. – Jaideep Khare Nov 25 '17 at 13:52
• It is not obvious that $n^{\frac{1}{n}}$ must be an integer or irrational. (To clarify : I did not downvote) – Peter Nov 25 '17 at 14:26
If $n=p$ is prime and $p^{1/p}=\frac{m}{l}$ were rational, it would follow that $l^p\cdot p=m^p$. Now use the uniqueness of prime factorization:
Let the prime number $p$ occur $x$ times on the left side and $y$ times on the right side. Then $y$ is divisible by $p$ whereas $x$ isn't. Contradiction.
Consider the polynomial $p$ defined by $p(x):= x^n -n$, which has $\sqrt[n]{n}$ as a root.
By the rational root theorem we know that if $p$ has a rational root, it must be one of the divisors $d_1,\cdots,d_m$ of $n$ (because the coefficient of the monomial $x^n$ is $1$). But none of them is a root of $p$. Therefore, the real roots of $p$ are all irrational, including $\sqrt[n]{n}$.
• The rational root theorem can be used to show that a certain number is irrational, since this number is algebraic. There, you can create a polynomial $p, p\in\mathbb{Z}[\,x\,]$. – Gustavo Mezzovilla Nov 25 '17 at 14:23
Using part of the answer Jaideep Khare already posted (to give a full solution)
Lemma 1
if $a \in \mathbb{Q} \setminus \mathbb{Z}$, then $a^n \in \mathbb{Q} \setminus \mathbb{Z}$ for all $n \in \mathbb{N} \setminus \{ 0 \}$
Lemma 2
if $x^m - m =0$ has a rational solution, then it must be an integer one, for all $m \in \mathbb{N}$
Lemma 3
$m^{\frac{1}{m}} \in ~ ]1,2[$ for every integer $m>1$
The first lemma is easy to prove, the second is a consequence of the first, and the third may also be shown with ease.
Suppose $n=(a/b)^n$ with $n,a,b\in\mathbb{N}$. We may assume that $a$ and $b$ have no prime factors in common. Suppose $p\mid a$, where $p$ is prime. Then $p^n\mid a^n=nb^n$. Since $p\not\mid b$, we must have $p^n\mid n$. But $p^n\ge2^n\gt n$, which is a contradiction. Hence $a$ has no prime factors, i.e., $a=1$. But $nb^n=1^n=1$ implies $n=b=1$ as well. Thus the only $n\in\mathbb{N}$ for which $n^{1/n}$ is rational is $n=1$.
Remarks: The inequality $2^n\gt n$ requires its own proof by induction. The step in which $p^n\mid nb^n$ and $p\not\mid b$ imply $p^n\mid n$ also, technically speaking, requires a touch of induction, starting from Euclid's Lemma ($p\mid xy$ implies $p\mid x$ or $p\mid y$) as the base case. | 2019-04-24T06:20:25 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2536529/is-n-frac1n-ever-rational",
"openwebmath_score": 0.9340209364891052,
"openwebmath_perplexity": 187.87984028030348,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806489800367,
"lm_q2_score": 0.8615382040983515,
"lm_q1q2_score": 0.8448076912958569
} |
https://www.physicsforums.com/threads/linear-and-angular-momentum-on-a-wooden-gate.867402/ | # Linear and Angular Momentum on a wooden gate
## Homework Statement
A uniform, 4.5-kg, square, solid wooden gate 1.5 m on each side hangs vertically from a frictionless pivot at the center of its upper edge. A 1.1-kg raven flying horizontally at 5.0m/s flies into this door at its center and bounces back at 2.0m/s in the opposite direction. (a) What is the angular speed of the gate just after it is struck by the unfortunate raven? (b) During the collision, why is the angular momentum conserved, but not the linear momentum?
## The Attempt at a Solution
I think that if there is no external torque, angular momentum should be conserved. Since the weight of the gate acts along the line of the gate (which passes through the pivot), it should not contribute any torque, and therefore angular momentum should be conserved. But what about linear momentum? In my opinion, all the forces should be balanced so that the gate will not move downward. There should not be any external force; however, the answer stated that linear momentum is not conserved because there is an external force exerted by the pivot. But shouldn't the forces still be balanced, even if the gate is spinning?
Last edited:
## Answers and Replies
Andrew Mason
Science Advisor
Homework Helper
Welcome to PF!
Can you provide a diagram? Could you also provide the wording of the problem exactly as written? Thanks.
AM
I can provide a diagram from the solution
Sorry for not double-checking it before I posted the question. I have already edited in the exact wording of the problem.
Doc Al
Mentor
There should not be any external force; however, the answer stated that linear momentum is not conserved because there is an external force exerted by the pivot.
Why don't you think the pivot exerts an external force?
In what direction does the pivot exert the external force? Is it upward? But shouldn't the forces be balanced between the gate and the pivot? Did I mix something up?
Doc Al
Mentor
The "system" you should be considering is the gate + raven. The pivot is external to the system.
OK, now I should choose the gate and raven as the system to consider whether momentum is conserved. I classified the weight as one of the external forces, exerted by the Earth. And if the pivot is external to the system, what is the direction of the external force that the pivot exerts on the gate?
Doc Al
Mentor
And if the pivot is external to the system, what is the direction of the external force that the pivot exerts on the gate?
The force from the pivot is whatever it needs to be to prevent that part of the gate from moving. (I would not worry about the details.)
What you need to understand is that the pivot, assumed to be frictionless, can exert a force but not a torque on the gate.
So if linear momentum is not conserved, the net external force from the pivot and the weight should not be zero. Is that because the gate will rotate?
Sorry for my poor understanding.
haruspex
Science Advisor
Homework Helper
Gold Member
2020 Award
So if linear momentum is not conserved, the net external force from the pivot and the weight should not be zero. Is that because the gate will rotate?
Sorry for my poor understanding.
As Doc Al wrote, if not for the pivot the whole gate would have moved. Therefore the pivot provided a horizontal impulse on the gate, and linear momentum of gate+bird is not conserved. You could include the pivot in the system, but then something else has to be holding the pivot fixed in space.
The force of gravity was being countered by a vertical force from the pivot, and the horizontal impact does not immediately alter that, so there is no net vertical force, so no net vertical impulse.
For angular momentum, it is always important to specify the reference axis. Since there is a horizontal impulse from the pivot, it could also result in an angular impulse, changing angular momentum. To avoid this, you must choose the pivot itself as the reference axis. A force through the reference axis has no moment about the axis: moment = force x perpendicular distance, and that distance will be zero. While the gate is vertical, gravity also acts through the pivot. So angular momentum about the pivot is conserved during the instant of the collision.
Subsequent to the collision, as the gate swings from the vertical, gravity will exert a moment about the reference axis, so angular momentum about it will start to change.
This is what I came up with.
Am I correct?
haruspex
Science Advisor
Homework Helper
Gold Member
2020 Award
This is what I come up with View attachment 99291 am I correct?
Yes, that's the right diagram. Now you need to use conservation of angular momentum about the pivot. What is the raven's angular momentum about the pivot before impact?
So the two impact forces between the raven and the gate are not classified as external forces because they are internal to the system (gate + raven), and the only external force is the one provided by the pivot, so linear momentum is not conserved. I think I confused myself earlier by splitting the system into the gate and the raven separately, concluding that the forces on the gate were balanced and hence that linear momentum was conserved. Thank you.
(1.1)(5)(1.5/2)=(1.1)(-2)(1.5/2)+(4.5)(1.5^2)(angular speed^2)/3
haruspex
Science Advisor
Homework Helper
Gold Member
2020 Award
(1.1)(5)(1.5/2)=(1.1)(-2)(1.5/2)+(4.5)(1.5^2)(angular speed^2)/3
Very nearly right. The angular speed should not be squared. (Check the dimensions.) You probably got mixed up with rotational KE, or maybe centripetal acceleration.
(1.1)(5)(1.5/2)=(1.1)(-2)(1.5/2)+(4.5)(1.5^2)(angular speed)/3 I didn't remember that L = moment of inertia * angular velocity, rather than the square of the angular velocity. Thank you very much.
haruspex
Science Advisor
Homework Helper
Gold Member
2020 Award
(1.1)(5)(1.5/2)=(1.1)(-2)(1.5/2)+(4.5)(1.5^2)(angular speed)/3 I didn't remember that L = moment of inertia * angular velocity, rather than the square of the angular velocity. Thank you very much.
Ok, so what do you get for the angular velocity of the gate?
Tissue
angular velocity of the gate is 1.71rad/s
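For anyone checking the arithmetic, here is a minimal Python sketch of the same angular-momentum balance (variable names are my own; the numbers come from the problem statement):

```python
# Angular momentum about the pivot is conserved during the impact:
#   m_bird * v_in * d = m_bird * (-v_out) * d + I_gate * omega
m_bird, v_in, v_out = 1.1, 5.0, 2.0   # kg, m/s in, m/s back out
m_gate, side = 4.5, 1.5               # kg, m
d = side / 2                          # raven hits the centre of the gate
I_gate = m_gate * side**2 / 3         # uniform square gate hinged along its top edge

omega = (m_bird * v_in * d - m_bird * (-v_out) * d) / I_gate
print(round(omega, 2))                # 1.71 rad/s, as found above
```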
haruspex
Science Advisor
Homework Helper
Gold Member
2020 Award
angular velocity of the gate is 1.71rad/s
Looks right. | 2021-02-28T07:16:08 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/linear-and-angular-momentum-on-a-wooden-gate.867402/",
"openwebmath_score": 0.8423534035682678,
"openwebmath_perplexity": 566.4237366805205,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9324533051062237,
"lm_q2_score": 0.905989829267587,
"lm_q1q2_score": 0.8447932106931848
} |
https://www.freemathhelp.com/forum/threads/need-help-with-derivative-problem.125632/ | # Need Help With Derivative Problem
#### G537
##### New member
Question
I want to investigate the family of curves given by f(x) = x^4 + x^3 + cx^2. I understand how to solve this when c > 0 and c = 0, but I don’t know how to solve this for c < 0 using the first derivative test. If I assume that c< 0 and thus f(x) = x^4 + x^3 - cx^2, then f’(x) = 4x^3 + 3x^2 - 2cx = x(4x^2 + 3x - 2c). Examining the discriminant of the quadratic in parentheses, when c = -9/32, there are 2 critical points at x = 0 and x = -3/8; when -9/32 < c < 0, there are 3 critical points at x = 0 and x = [-3 +- sqrt(9 + 32c)]/8; and when c < -9/32, there is one critical point at x = 0. But the book solution says that there are 3 critical points whenever c < 0, which contradicts my solutions. Is my reasoning incorrect?
#### Subhotosh Khan
##### Super Moderator
Staff member
Question
I want to investigate the family of curves given by f(x) = x^4 + x^3 + cx^2. I understand how to solve this when c > 0 and c = 0, but I don’t know how to solve this for c < 0 using the first derivative test. If I assume that c< 0 and thus f(x) = x^4 + x^3 - cx^2, then f’(x) = 4x^3 + 3x^2 - 2cx = x(4x^2 + 3x - 2c). Examining the discriminant of the quadratic in parentheses, when c = -9/32, there are 2 critical points at x = 0 and x = -3/8; when -9/32 < c < 0, there are 3 critical points at x = 0 and x = [-3 +- sqrt(9 + 32c)]/8; and when c < -9/32, there is one critical point at x = 0. But the book solution says that there are 3 critical points whenever c < 0, which contradicts my solutions. Is my reasoning incorrect?
You say:
If I assume that c< 0 and thus f(x) = x^4 + x^3 - cx^2
That is "dangerous" writing. I would write it as:
If c<0, let us assume c1 = -c where c1>0. Then the equation becomes:
f(x) = x^4 + x^3 - c1x^2
f'(x) = 4x^3 + 3x^2 - 2c1x = x (4x^2 + 3x - 2c1)
roots of f'(x) are:
x = 0
x1,2 = $$\displaystyle \frac{-3 \pm \sqrt{9 - 4*(8)*(-c_1)}}{8}$$ ...........................edited
You have three roots $$\displaystyle \ \ \to \ \$$ x = 0, x1 and x2
Last edited:
#### Dr.Peterson
##### Elite Member
Question
I want to investigate the family of curves given by f(x) = x^4 + x^3 + cx^2. I understand how to solve this when c > 0 and c = 0, but I don’t know how to solve this for c < 0 using the first derivative test. If I assume that c< 0 and thus f(x) = x^4 + x^3 - cx^2, then f’(x) = 4x^3 + 3x^2 - 2cx = x(4x^2 + 3x - 2c). Examining the discriminant of the quadratic in parentheses, when c = -9/32, there are 2 critical points at x = 0 and x = -3/8; when -9/32 < c < 0, there are 3 critical points at x = 0 and x = [-3 +- sqrt(9 + 32c)]/8; and when c < -9/32, there is one critical point at x = 0. But the book solution says that there are 3 critical points whenever c < 0, which contradicts my solutions. Is my reasoning incorrect?
I see no reason to start with cases based on the sign of c. I would just look at the discriminant, keeping c exactly as it is in the problem, and find that when c < 9/32, there will be three (real) solutions -- except when two of those three coincide, so you have to check when that occurs (which is when c = 0).
Please show us the entire problem and the entire answer, exactly as given in your book, so we can be sure what they are saying. I suspect that you are trying to reverse-engineer the solution from the book's answer, and assuming that you need to consider cases separately from the start.
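To make the "keep c symbolic and look at the discriminant" suggestion concrete, here is a small sympy sketch (my own addition, not from the thread; it assumes sympy is installed):

```python
import sympy as sp

x, c = sp.symbols('x c', real=True)
f = x**4 + x**3 + c*x**2
fp = sp.factor(sp.diff(f, x))              # x*(4*x**2 + 3*x + 2*c)
disc = sp.discriminant(4*x**2 + 3*x + 2*c, x)
print(fp)                                  # critical points: x = 0 and the roots of the quadratic
print(disc)                                # 9 - 32*c: three real critical points exactly when c < 9/32
print(sp.solve(sp.Eq(disc, 0), c))         # [9/32]
```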
#### Jomo
##### Elite Member
So c can't be negative. That is if c is negative then it must be -c???
Consider c+5 = 0, so c =-5 which means -c =-5?? -c=-5 means that c=5. So 5+5 =0??
#### G537
##### New member
Thank you all for your helpful responses. I think that your responses get to the crux of my confusion: first, is it necessary to consider cases in this question and, if not, why? It seems to me that negating the constant c would produce different curves with different characteristics, which is why I assumed I needed to consider the 3 cases. Second, if I do want to examine cases where c < 0, how do I do this? I quickly realized that when using values of c < 0 along with the function f(x) = x^4 + x^3 - cx^2, this means that I’m really investigating f(x) = x*^4 + x^3 + cx^2, so writing the function with a negative sign in front of the last term doesn’t make sense. But I was not sure how to proceed in this case.
Regarding the given solution, it simply states that “for c < 0, there is a maximum at x = 0 and minima at x = [-6 +- sqrt(36 - 96c)]/24.” So it appears they took the approach Dr. Peterson recommended above. Dr. Peterson (or others), I’m curious how you determined that it was not necessary to use cases in this problem. I’d like to improve my intuition rather than simply memorizing my way through the material.
#### Dr.Peterson
##### Elite Member
Thank you all for your helpful responses. I think that your responses get to the crux of my confusion: first, is it necessary to consider cases in this question and, if not, why? It seems to me that negating the constant c would produce different curves with different characteristics, which is why I assumed I needed to consider the 3 cases. Second, if I do want to examine cases where c < 0, how do I do this? I quickly realized that when using values of c < 0 along with the function f(x) = x^4 + x^3 - cx^2, this means that I’m really investigating f(x) = x*^4 + x^3 + cx^2, so writing the function with a negative sign in front of the last term doesn’t make sense. But I was not sure how to proceed in this case.
You seem to have been thinking somewhat along the lines of ancient people, before negative numbers were invented. When they talked about quadratic equations, for example, they assumed the coefficients were positive (because there was no such thing as a negative number -- how could you have -3 cows?). As a result, they had to consider separately the equations ax^2 + bx + c = 0 (which could have no solutions, so they didn't even consider it), and ax^2 + bx = c, and ax^2 = bx + c, and so on.
There is absolutely no inherent difference between x^4 + x^3 + cx^2 = 0 and x^4 + x^3 - cx^2 = 0. The latter is just an example of the former, in which c has been replaced by -c; the former includes cases like x^4 + x^3 + 2x^2 = 0, where c = 2, and like x^4 + x^3 - 2x^2 = 0, where c = -2.
But it sounds like you now see that replacing c with -c was not necessary even if you want to distinguish c as negative; you just have to say that c<0.
Regarding the given solution, it simply states that “for c < 0, there is a maximum at x = 0 and minima at x = [-6 +- sqrt(36 - 96c)]/24.” So it appears they took the approach Dr. Peterson recommended above. Dr. Peterson (or others), I’m curious how you determined that it was not necessary to use cases in this problem. I’d like to improve my intuition rather than simply memorizing my way through the material.
When I said, "I see no reason to start with cases based on the sign of c", I meant exactly what I said: not that I had determined that it was not necessary to use cases, but that there was nothing in the problem that called for cases (on the surface, at least -- that might come later, but will be handled when it is found appropriate). So there was no reason to even consider doing so. We use cases when we see cases inherent in the problem, such as if there is an absolute value. To solve |x-1| = 3|x+7|, I would consider using cases x<-7, -7<=x<1, and x>=1 because the equation changes its behavior at x=1 and at x=-7. There is nothing in x^4 + x^3 + cx^2 = 0 that obviously changes according to the sign of c.
Can you explain what leads you to say, "It seems to me that negating the constant c would produce different curves with different characteristics, which is why I assumed I needed to consider the 3 cases"? Every value of c leads to a different curve, but there is no qualitative difference in those curves at c=0, or at least none that I notice until I get far enough through the work to see something that happens to be special there. My question is, are you seeing something deep, or just assuming that negative numbers change things? Or did you simply work backward from the answer, seeing that it involves something different at c=0, so that must be where the work of solving it begins?
#### lookagain
##### Elite Member
f'(x) = 4x^3 + 3x^2 - 2c1x = x (4x^2 + 3x - 2c1)
roots of f'(x) are:
x = 0
x1,2 = $$\displaystyle \frac{c_1 \pm \sqrt{9 - 4*(8)*(-c_1)}}{8}$$
Instead of $$\displaystyle \ "c_1" \$$ for the first term of your quadratic formula, I have -3, because the quadratic expression is $$\displaystyle \ 4x^2 + 3x - 2c$$.
#### Subhotosh Khan
##### Super Moderator
Staff member
Instead of $$\displaystyle \ "c_1" \$$ for the first term of your quadratic formula, I have -3, because the quadratic expression is $$\displaystyle \ 4x^2 + 3x - 2c$$.
You are correct ... and I edited my post. Thanks
#### JeffM
##### Elite Member
I am not sure whether this answers your question or not. Yes, OF COURSE, different values of c will alter the shape of the curve, but there is no reason a priori to think that the sign of the coefficient of the squared term in a quartic is particularly relevant. It may be relevant, but that must be determined.
A polynomial is defined for all real x. The global graph of any polynomial of even degree > zero, when looked at on a big enough scale, is solely determined by whether the coefficient of x to the defining degree is positive or negative. On a grand scale, the graph of the curve is shaped like a u if that coefficient is positive or an inverted u if that coefficient is negative.
However, if you look at a fourth degree polynomial with a positive leading coefficient at smaller scales, you will find either (a) one global minimum, or (b) two local minima with one local maximum between the minima. The importance of c in this case is in determining which of these two cases obtains and where the local maximum is located if it exists.
$$\displaystyle f(x) = x^4 + x^3 + cx^2 \implies \\ f'(x) = 4x^3 + 3x^2 + 2cx = x(4x^2 + 3x + 2c) \implies \\ f''(x) = 12x^2 + 6x + 2c.$$
Now obviously f'(0) = 0 = f(0). Whether there exist one or more values of x other than zero such that f'(x) = 0 depends on the discriminant of 4x^2 + 3x + 2c.
This does lead us initially to three cases, but not based on the sign of c.
Case 1
$$\displaystyle \text {no real solutions} \iff (3)^2 - 4(4)(2c) < 0 \iff c > \dfrac{9}{32} \implies f(x)\\ \text {has no local maximum and a single minimum at } x = 0.$$
Case 2
$$\displaystyle c = \dfrac{9}{32} \implies f'\left ( - \dfrac{3}{8} \right ) = 0 = f'(0).$$
But there cannot be exactly two distinct extrema for a polynomial of degree 4. Why not?
Let's consider f''(0) when c = 9/32.
$$\displaystyle f''(0) = 12 * 0^2 + 6 * 0 + 2 * \dfrac{9}{32} = \dfrac{9}{16} > 0 \implies f(x)\\ \text {has no local maximum and a single minimum at } x = 0.$$
Case 3
We know f'(0) = 0, but when c < 9/32, there is at least one other distinct value of x for which f'(x) = 0. Again, what that value is depends on the discriminant.
Case 3a
$$\displaystyle -3 + \sqrt{9 - 32c} > 0 \implies 9 - 32c > 9 \implies c < 0 \implies \\ f''(0) < 0 \implies f(x) \text { has two local minima,}\\ \text {one on each side of } x = 0 \text { and a local maximum at } x = 0.$$
Do you see why?
In any case, we are interested in the sign of c because of the specifics of this function rather than because of any general rule that the behavior of a quartic is sensitive to the sign of the squared term.
Case 3b
$$\displaystyle c = 0 \implies \dfrac{-3 - \sqrt{9 - 32c}}{2 * 4} = - \dfrac{3}{4} \implies \\ f'' \left (- \dfrac{3}{4} \right ) = 12 * \left (- \dfrac{3}{4} \right )^2 + 6 * \left (- \dfrac{3}{4} \right ) + 2 * 0 =\\ \dfrac{12 * 9}{16} - \dfrac{6 * 3}{4} = \dfrac{108}{16} - \dfrac{72}{16} > 0 \implies f(x)\\ \text {has no local maximum and a single minimum at } x = - \dfrac{3}{4}.$$
So we end up with five cases rather than three.
$$\displaystyle c < 0,\ c = 0,\ 0 < c < \dfrac{9}{32}, \ c = \dfrac{9}{32}, \text { and } c > \dfrac{9}{32}.$$
Case 3c
$$\displaystyle 0 < c < \dfrac{9}{32}.$$
Can you do it?
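As a numeric companion to the case analysis above (a rough sketch of my own, not part of the post): pick one representative $c$ from each of the five ranges and classify the critical points of $f$ with the second-derivative test.

```python
import math

def critical_points(c):
    """Critical points of f(x) = x^4 + x^3 + c*x^2, classified via f''."""
    pts = {0.0}
    disc = 9 - 32 * c                      # discriminant of 4x^2 + 3x + 2c
    if disc >= 0:
        pts.update({(-3 + math.sqrt(disc)) / 8, (-3 - math.sqrt(disc)) / 8})
    f2 = lambda t: 12 * t**2 + 6 * t + 2 * c
    label = lambda v: "min" if v > 1e-12 else ("max" if v < -1e-12 else "flat")
    return sorted((round(p, 4), label(f2(p))) for p in pts)

for c in (-1.0, 0.0, 0.1, 9 / 32, 1.0):    # one representative value from each of the five cases
    print(c, critical_points(c))
```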
#### G537
##### New member
Thank you once again to everyone for taking your time to help me! This has been extremely helpful. Jeff, your summary was immensely helpful, as it provided very clear reasoning. I also learned a bit about proper typesetting after reading your post, so thank you for that as well.
When I initially worked the problem, I did obtain cases for $$\displaystyle c>\frac{9}{32}, c=\frac{9}{32}$$, and $$\displaystyle c=0$$. I also built a table of my results and found that I could obtain 3 extrema for $$\displaystyle 0<c<\frac{9}{32}$$. So, I was four-fifths of the way there, but I could not figure out how to reason through the case where $$\displaystyle c<0$$. After working through your questions, it became much clearer to me how I might approach the problem. Let me address your questions to see if I'm on the right track.
Case 2
$$\displaystyle c = \dfrac{9}{32} \implies f'\left ( - \dfrac{3}{8} \right ) = 0 = f'(0).$$
But there cannot be exactly two distinct extrema for a polynomial of degree 4. Why not?
For a polynomial of degree 4, $$\displaystyle f(x)\rightarrow\infty$$ as $$\displaystyle x\rightarrow\pm\infty$$ and so there must be an odd number of extrema. If there were an even number, then either $$\displaystyle f(x)\rightarrow\infty$$ as $$\displaystyle x\rightarrow\infty$$ and $$\displaystyle f(x)\rightarrow-\infty$$ as $$\displaystyle x\rightarrow-\infty$$ OR $$\displaystyle f(x)\rightarrow-\infty$$ as $$\displaystyle x\rightarrow\infty$$ and $$\displaystyle f(x)\rightarrow\infty$$ as $$\displaystyle x\rightarrow-\infty$$, which would mean that the degree of the polynomial is odd.
Case 3a
$$\displaystyle -3 + \sqrt{9 - 32c} > 0 \implies 9 - 32c > 9 \implies c < 0 \implies \\ f''(0) < 0 \implies f(x) \text { has two local minima,}\\ \text {one on each side of } x = 0 \text { and a local maximum at } x = 0.$$
Do you see why?
Case 3c
$$\displaystyle 0 < c < \dfrac{9}{32}.$$
Can you do it?
I approached your final 2 questions in the same way. For the case where $$\displaystyle c<\frac{9}{32}$$, I see that $$\displaystyle c=0$$ is a "special case" of this inequality in that it is the only value of $$\displaystyle c<\frac{9}{32}$$ for which f(x) has only one extremum, and therefore it breaks the inequality $$\displaystyle c<\frac{9}{32}$$ into 2 parts: $$\displaystyle 0<c<\frac{9}{32}$$ or $$\displaystyle c<0$$. Starting with the first inequality, since $$\displaystyle 0<c<\frac{9}{32}$$:
$$\displaystyle b^2>b^2-4ac>b^2-\frac{9a}{8}$$. Since a = 4 and b = 3, $$\displaystyle 3>\sqrt{9-16c}>\frac{3}{\sqrt{2}}$$, then $$\displaystyle 0<\frac{3-\sqrt{9-16c}}{8}<\frac{6-3\sqrt{2}}{16}$$ and thus $$\displaystyle 0<x_1<\frac{6-3\sqrt{2}}{16}$$ or $$\displaystyle \frac{3}{4}>\frac{3+\sqrt{9-16c}}{8}>\frac{6+3\sqrt{2}}{16}$$ and thus $$\displaystyle \frac{3}{4}>x_2>\frac{6+3\sqrt{2}}{16}$$.
So, there are 3 extrema when $$\displaystyle 0<c<\frac{9}{32}$$: $$\displaystyle x_1, x_2$$, and x = 0 - two local minima and one local maximum.
When $$\displaystyle c<0$$, $$\displaystyle b^2-4ac>b^2$$. Since a = 4 and b = 3, then $$\displaystyle \sqrt{9-16c}>3$$. Thus $$\displaystyle \frac{3-\sqrt{9-16c}}{8}<0$$, or $$\displaystyle \frac{3+\sqrt{9-16c}}{8}<\frac{3}{4}$$. Therefore $$\displaystyle x_3<0$$ and $$\displaystyle x_4<\frac{3}{4}$$.
Again, there are 3 extrema when $$\displaystyle c<0$$: $$\displaystyle x_3, x_4$$, and x = 0 - two local minima and one local maximum.
So, with the exception of c = 0, there are always 3 extrema when $$\displaystyle c<\frac{9}{32}$$, and this covers the cases when $$\displaystyle c<0$$.
If my reasoning is incorrect, please let me know. Thanks again!
#### JeffM
##### Elite Member
For a polynomial of degree 4, $$\displaystyle f(x)\rightarrow\infty$$ as $$\displaystyle x\rightarrow\pm\infty$$ and so there must be an odd number of extrema. If there were an even number, then either $$\displaystyle f(x)\rightarrow\infty$$ as $$\displaystyle x\rightarrow\infty$$ and $$\displaystyle f(x)\rightarrow-\infty$$ as $$\displaystyle x\rightarrow-\infty$$ OR $$\displaystyle f(x)\rightarrow-\infty$$ as $$\displaystyle x\rightarrow\infty$$ and $$\displaystyle f(x)\rightarrow\infty$$ as $$\displaystyle x\rightarrow-\infty$$, which would mean that the degree of the polynomial is odd.
Your logic is basically fine. The only thing wrong with it is that you assumed that the leading coefficient is positive. The global features of a polynomial are determined by the sign of its leading coefficient and by whether its degree is odd or even.
A polynomial of even degree has a global minimum if the leading coefficient is positive. It may have additional local minima, but if so, there will be a local maximum between each successive pair of minima. A polynomial of even degree has a global maximum if the leading coefficient is negative. It may have additional local maxima, but if so, there will be a local minimum between each pair of maxima. This is geometric reason for why the number of extrema for a polynomial of even degree is always odd.
Polynomial of degree 2n > 0, odd number of extrema. Minimum number of extrema = 1. Maximum number of extrema = 2n - 1. Leading coefficient positive, odd number of minima and any maximum is preceded and succeeded by a minimum. Leading coefficient negative, odd number of maxima and any minimum is preceded and succeeded by a maximum.
You might want to work out the general rules for polynomials of odd degree. You have the basic ideas.
I approached your final 2 questions in the same way. For the case where $$\displaystyle c<\frac{9}{32}$$, I see that $$\displaystyle c=0$$ is a "special case" of this inequality in that it is the only value of $$\displaystyle c<\frac{9}{32}$$ for which f(x) has only one extremum, and therefore it breaks the inequality $$\displaystyle c<\frac{9}{32}$$ into 2 parts: $$\displaystyle 0<c<\frac{9}{32}$$ or $$\displaystyle c<0$$. Starting with the first inequality, since $$\displaystyle 0<c<\frac{9}{32}$$:
$$\displaystyle b^2>b^2-4ac>b^2-\frac{9a}{8}$$. Since a = 4 and b = 3, $$\displaystyle 3>\sqrt{9-16c}>\frac{3}{\sqrt{2}}$$, then $$\displaystyle 0<\frac{3-\sqrt{9-16c}}{8}<\frac{6-3\sqrt{2}}{16}$$ and thus $$\displaystyle 0<x_1<\frac{6-3\sqrt{2}}{16}$$ or $$\displaystyle \frac{3}{4}>\frac{3+\sqrt{9-16c}}{8}>\frac{6+3\sqrt{2}}{16}$$ and thus $$\displaystyle \frac{3}{4}>x_2>\frac{6+3\sqrt{2}}{16}$$.
So, there are 3 extrema when $$\displaystyle 0<c<\frac{9}{32}$$: $$\displaystyle x_1, x_2$$, and x = 0 - two local minima and one local maximum.
When $$\displaystyle c<0$$, $$\displaystyle b^2-4ac>b^2$$. Since a = 4 and b = 3, then $$\displaystyle \sqrt{9-16c}>3$$. Thus $$\displaystyle \frac{3-\sqrt{9-16c}}{8}<0$$, or $$\displaystyle \frac{3+\sqrt{9-16c}}{8}<\frac{3}{4}$$. Therefore $$\displaystyle x_3<0$$ and $$\displaystyle x_4<\frac{3}{4}$$.
Again, there are 3 extrema when $$\displaystyle c<0$$: $$\displaystyle x_3, x_4$$, and x = 0 - two local minima and one local maximum.
So, with the exception of c = 0, there are always 3 extrema when $$\displaystyle c<\frac{9}{32}$$, and this covers the cases when $$\displaystyle c<0$$.
There is nothing AT ALL wrong with your reasoning in terms of logic. It just is much more elaborate than it needs to be.
Except in the special case where c = 0, c < 9/32 entails that we have extrema at three different values of x. We know the middle one will be a maximum and the two end ones will be minima. We also know that there will be extrema at
$$\displaystyle x = 0, \ x = \dfrac{-3 - \sqrt{9 - 32c}}{8}, \text { and } x = \dfrac{-3 + \sqrt{9 - 32c}}{8}.$$
The second location is clearly the farthest to the left and is just as clearly negative. The third location could be negative, zero, or positive. The zero possibility is a special case already addressed. If c is not 0, it is either less than 0 or greater than zero.
$$\displaystyle c < 0 \implies 9 - 32c > 9 \implies \sqrt{9 - 32c} > 3 \implies \dfrac{- 3 + \sqrt{9 - 32c}}{8} > \dfrac{-3 + 3}{8} = 0.$$
Thus, if c negative, zero is in the middle and locates a maximum.
$$\displaystyle 0 < c < \dfrac{9}{32} \implies - 9 < - 32c < 0 \implies 0 < 9 - 32c < 9 \implies 0 < \sqrt{9 - 32c} < 3 \implies \\ \dfrac{-3 + \sqrt{9 - 32c}}{8} < \dfrac{-3 + 3}{8} = 0.$$
Thus, 0 is the rightmost location of the extrema and is a minimum.
#### G537
##### New member
Polynomial of degree 2n > 0, odd number of extrema. Minimum number of extrema = 1. Maximum number of extrema = 2n - 1. Leading coefficient positive, odd number of minima and any maximum is preceded and succeeded by a minimum. Leading coefficient negative, odd number of maxima and any minimum is preceded and succeeded by a maximum.
I assume you meant that, for a polynomial of degree 2n > 0 and a positive leading coefficient, there will be an even number of minima, with a maximum preceded and succeeded by a minimum. Likewise for a polynomial with a negative leading coefficient, where a minimum (if it exists) will be preceded and succeeded by a maximum, and therefore there are an even number of maxima.
You might want to work out the general rules for polynomials of odd degree. You have the basic ideas.
For a polynomial of odd degree, there must be either no extrema, or an equal number of maxima and minima, so the number of local extrema is an even number. Consequently, there are a minimum of zero extrema and a maximum of $$\displaystyle n-1$$ extrema.
Thank you again for working through this problem with me. It was a very useful exercise for me to work through your questions and then repeat the the entire process on my own with the second derivative, again taking the discriminant by cases. I also worked through the first and second derivative tests using the method you illustrated at the end of your previous post, and I can see that this is much easier.
I'm sure I'll have more math questions in the future. I'm glad I discovered this online community.
#### JeffM
##### Elite Member
I assume you meant that, for a polynomial of degree 2n > 0 and a positive leading coefficient, there will be an even number of minima, with a maximum preceded and succeeded by a minimum. Likewise for a polynomial with a negative leading coefficient, where a minimum (if it exists) will be preceded and succeeded by a maximum, and therefore there are an even number of maxima.
Consider a quadratic with a positive coefficient, for example f(x) = x2.
There is one minimum at x = 0. That is an odd number of minima. So I meant what I said.
Here is a general rule about functions that are everywhere differentiable: if a function has multiple extrema, a minimum is never next to a minimum, nor is a maximum next to a maximum.
You don't need to memorize what's below because you can derive them, but I have found them useful to keep in mind.
The derivative of a polynomial of degree > 0 is also a polynomial.
The maximum number of extrema of a polynomial of degree n is n - 1.
A polynomial of odd degree may have no extrema.
A polynomial of even degree has at least one extremum.
A polynomial of odd degree has as many minima as maxima.
A polynomial of even degree and a positive leading coefficient has one more minimum than it has maxima.
A polynomial of even degree and a negative leading coefficient has one more maximum than it has minima. | 2020-10-21T07:29:26 | {
"domain": "freemathhelp.com",
"url": "https://www.freemathhelp.com/forum/threads/need-help-with-derivative-problem.125632/",
"openwebmath_score": 0.7015562057495117,
"openwebmath_perplexity": 237.29412958865285,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9683812327313545,
"lm_q2_score": 0.8723473862936942,
"lm_q1q2_score": 0.8447648373090627
} |
http://math.stackexchange.com/questions/101371/finding-how-many-terms-of-the-harmonic-series-must-be-summed-to-exceed-x | # Finding how many terms of the harmonic series must be summed to exceed x?
The harmonic series is the sum
1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + ... + 1/n + ...
It is known that this sum diverges, meaning (informally) that the sum is infinite and (more formally) that for any real number x, there is some number n such that the sum of the first n terms of the harmonic series is greater than x. For example, given x = 3, we have that
1 + 1/2 + 1/3 + ... + 1/11 = 83711/27720 ≈ 3.02
So eleven terms must be summed together to exceed 3.
Consider the following question
Given an integer x, find the smallest value of n such that the sum of the first n terms of the harmonic series exceeds x.
Clearly we can compute this by just adding in more and more terms of the harmonic series, but this seems like it could be painfully slow. The best bound I'm aware of on the number of terms necessary is $2^{O(n)}$, which uses the fact that
1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 + ...
is greater than
1 + (1/2) + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + ...
which is in turn
1 + 1/2 + 1/2 + 1/2 + ...
where each new 1/2 takes twice as many terms as the previous to accrue. This means that the brute-force solution is likely to be completely infeasible for any reasonably large choice of x.
Is there a way to calculate the harmonic series which requires fewer operations than the brute-force solution?
-
## migrated from stackoverflow.com Jan 22 '12 at 18:06
This question came from our site for professional and enthusiast programmers.
How exact does the answer need to be? Is it acceptable to limit x to an integer? What the maximum x you need to be able to handle? – David Schwartz Jan 21 '12 at 22:06
@DavidSchwartz- I'm mostly interested in the case where x is an integer, and there is no upper bound on x - let's assume that we're using BigIntegers on a machine with unbounded memory. Also, I would like an absolute exact answer if at all possible. – templatetypedef Jan 21 '12 at 22:10
See oeis.org/A002387 for some information. – DSM Jan 21 '12 at 22:17
If an approximate answer will do e^(n - .57721 - 1/(2n)) is pretty close. – David Schwartz Jan 21 '12 at 22:19
The DiGamma function (the derivative of the logarithm of the Gamma function) is directly related to the harmonic numbers: $\psi(n) = H_{n-1} - \gamma$, where $\gamma$ is Euler's constant ($0.577\ldots$).
You can use one of the approximations in the Wikipedia article to compute an approximate value for $H_n$, and then use that in a standard root-finding algorithm like bisection to locate the solution.
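A rough sketch of that approach in Python (function names are my own; it assumes x is large enough that the error of the asymptotic approximation is far smaller than the gap $1/n$ between consecutive harmonic numbers, and it trusts double precision near the crossing):

```python
import math

GAMMA = 0.5772156649015329          # Euler-Mascheroni constant

def harmonic_approx(n):
    """Asymptotic approximation of H_n: ln n + gamma + 1/(2n) - 1/(12 n^2)."""
    return math.log(n) + GAMMA + 1 / (2 * n) - 1 / (12 * n * n)

def first_n_exceeding(x):
    """Smallest n with H_n > x (up to the caveats stated above)."""
    n = max(1, round(math.exp(x - GAMMA)))      # initial guess ~ e^(x - gamma) - 1/2
    while n > 1 and harmonic_approx(n) > x:     # walk down if we overshot
        n -= 1
    while harmonic_approx(n) <= x:              # walk up to the first crossing
        n += 1
    return n

print(first_n_exceeding(3))     # 11, matching the example in the question
print(first_n_exceeding(20))    # 272400600, matching the worked example in a later answer
```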
-
If you replace the sum by an integral you get the logarithm function so that $\ln(n)$ is a first approximation.
In fact the Euler $\gamma$ constant ($.577215664901532860606512090$) may be defined by the following formula : $\displaystyle \gamma=\lim_{n \to \infty} \left(H_n-\ln(n+1/2)\right)$
From this you may deduce the equivalence as $n \to \infty$ : $$H_n \thicksim \gamma + \ln(n+1/2)$$
(for $n=10^6$ we get about 14 digits of precision)
And revert this (David Schwartz proposed a similar idea) to get the value $n$ required to get a sum $s$ : $$n(s) \approx e^{s-\gamma} -\frac12$$
The first integer to cross the $s$ should be given by $\lfloor e^{s-\gamma}+\frac12\rfloor\;$ ('should be' because of the little error made on $H_n$ compensated by the low probability of people testing values much higher than 20 :-)).
Example : the sum will cross the value $20$ for $n$ evaluated at $\rm floor(\rm exp(20-gamma)+0.5)= \rm round(\rm exp(20-gamma))= 272400600$ and indeed (this is not a proof!) :
$H_{272400599}=19.9999999977123$
$H_{272400600}=20.0000000013833$
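The two values above can be reproduced in arbitrary precision (a sketch of my own, assuming the mpmath library, which provides `harmonic`, `euler` and `floor`):

```python
from mpmath import mp, harmonic, exp, euler, floor

mp.dps = 30                              # 30 significant digits
n = int(floor(exp(20 - euler) + 0.5))    # the conjectured crossing index
print(n)                                 # 272400600
print(harmonic(n - 1))                   # 19.9999999977... (still below 20)
print(harmonic(n))                       # 20.0000000013... (first harmonic number above 20)
```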
-
-
Interesting thanks! It seems that Benoit Cloitre proposed in 2002 a more precise conjecture "for $n\gt 1$, $a(n) = \lfloor e^{n-\gamma}+\frac12\rfloor$" (or a(n)= round(exp(n-gamma)) ). – Raymond Manzoni Jan 22 '12 at 21:00
I think you meant O(2^(2n)) = O(4^n) instead of O(2^n). The "right" bound is O(exp(n)), which holds because H(m) >= ln (m + 1).
$$\frac{1}{i} \;\ge\; \int_i^{i+1} \frac{1}{x}\,dx \;=\; \ln(i+1) - \ln i$$
$$\sum_{i=1}^{m} \frac{1}{i} \;\ge\; \sum_{i=1}^{m} \bigl(\ln(i+1) - \ln i\bigr) \;=\; \ln(m+1) - \ln 1 \;=\; \ln(m+1).$$
The main obstacle to a provably fast exact algorithm is the usual table maker's dilemma: we don't have a good handle on exactly how close H(m) can be to an integer, so it's not clear a priori how much precision is needed for the approximate methods. There's a similar issue with many optimization problems involving Euclidean distances not being in NP: we don't know how to test the sign of a sum of square roots efficiently.
-
The $n^{th}$ harmonic number, $H_n$ has an asymptotic expansion of the form:
$\hspace{2cm} \displaystyle H_n \sim \ln{n}+\gamma+\frac{1}{2n}-\sum_{k=1}^\infty \frac{B_{2k}}{2k n^{2k}}=\ln{n}+\gamma+\frac{1}{2n}-\frac{1}{12n^2}+\frac{1}{120n^4}-...$
- | 2014-11-29T07:54:19 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/101371/finding-how-many-terms-of-the-harmonic-series-must-be-summed-to-exceed-x",
"openwebmath_score": 0.8917653560638428,
"openwebmath_perplexity": 344.71911092136753,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9683812336438725,
"lm_q2_score": 0.8723473647220786,
"lm_q1q2_score": 0.8447648172155477
} |
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=3806 | ## WeBWorK Problems
### Question with two possible answers
by Daniele Arcara -
Number of replies: 5
How can I allow several answers each to be correct for one problem?
Say that I want to ask the students to integrate 2*sin(x)*cos(x), and I want them to be allowed to write any of the following answers (forgetting the +C for now): (sin(x))^2 , -(cos(x))^2 , -1/2 cos(2x).
Is there a way to do that? For now, here is my simple code with answer (sin(x))^2:
DOCUMENT();
loadMacros(
"PGstandard.pl",
"PGML.pl",
"PGcourse.pl"
);
TEXT(beginproblem());
$f = Formula("2*sin(x)*cos(x)");
$F = Formula("(sin(x))**2");
$ans =$F;
BEGIN_PGML
[ \int [$f] \, dx = ] [_______________________]{$ans} [ +C ]
END_PGML
ENDDOCUMENT;
I tried creating another answer
$G = -Formula("(cos(x))**2");
and writing the answer with an 'or' in it as
$ans = ($F||$G);
but it did not work.
Any help would be appreciated. Thank you!
In reply to Daniele Arcara
### Re: Question with two possible answers
by Paul Pearson -
Hi Daniele,
I'm going to start by being a little pedantic: there is only one function F(x) with domain equal to all real numbers that is an antiderivative of f(x) = 2cos(x)sin(x) such that F(0) = -1/2, and thus any antiderivative of f(x) is of the form F(x) + C. This function F(x) is equal to many different-looking (but equivalent) expressions involving sines and cosines, such as F(x) = sin^2(x) - 1/2 = -cos^2(x) + 1/2 = -1/2 cos(2x) = sin^2(x)/2 - cos^2(x)/2 = ...
So, if you were going to try to write your own answer checker for this problem, you would want to check that the difference between a correct answer and the student answer is a constant for all x. Also, you would want to require students to enter +C to indicate that they know that the indefinite integral yields a family of functions. Fortunately, all of this has been done for you already by Davide Cervone, who wrote the parserFormulaUpToConstant.pl macro. (Technically, the student answer is checked against the correct answer for a small number of randomly selected x-values, not all x-values, but that's good enough.)
See the code below.
Best regards,
Paul Pearson
####################################
DOCUMENT();
loadMacros(
"PGstandard.pl",
"PGML.pl",
"parserFormulaUpToConstant.pl",
"PGcourse.pl"
);
TEXT(beginproblem());
$f = Formula("2*sin(x)*cos(x)");
$F = FormulaUpToConstant("(sin(x))^2");
BEGIN_PGML
[ \int [$f] \, dx = ] [_______________________]{$F}
END_PGML
ENDDOCUMENT;
In reply to Daniele Arcara
### Re: Question with two possible answers
by Alex Jordan -
Hi Daniele,
I wouldn't view this as allowing for multiple answers, in the sense of trying to enumerate a lot of common antiderivatives the student might come up with. Off the top of my head, I see two options for you.
• Use http://webwork.maa.org/wiki/FormulasToConstants#.VnNELvmDFBc. The example here should explain how to use it.
• Write a custom answer checker (http://webwork.maa.org/wiki/Custom_Answer_Checkers#.VnNE_vmDFBc) that takes the student's answer and first checks that it is a Formula [and if it is a Real, turns it into a Formula], and then compares the derivative of the student answer to the derivative of the correct answer. With a MathObject Formula $f where x is the variable, $f->D('x') will give the derivative. I can think of only one reason why I personally might ever go this way, and that would be if the antiderivative has any holes in its domain (so the +C in theory could be a more complicated step function with steps at the holes).
Either way, watch out for domain issues with antiderivatives. Especially for distinguishing between ln(x) and ln(|x|) and the like.
In reply to Daniele Arcara
### Re: Question with two possible answers
by Davide Cervone -
Aside from the two fine answers above, there is a third approach: the Formula object's answer checker has an upToConstant option that checks if the student enters something that differs from the correct answer by a constant (without the need for the student to type the +C part (it is a precursor to the FormulaUpToConstant object that the others have mentioned). You could use
BEGIN_PGML
[ \int [$f] \, dx = ] [_______________________]{$ans->cmp(upToConstant=>1)} [ +C ]
END_PGML
to get it.
In reply to Davide Cervone
### Re: Question with two possible answers
by Daniele Arcara -
Thank you for the help, everyone!
In reply to Davide Cervone
### Re: Question with two possible answers
by Danny Glin -
In the context of integrals, using the upToConstant flag is probably the right way to go since it catches different expressions even beyond the ones that you are anticipating.
If you truly do have a question with a small number of distinct correct responses, you can use the parserOneOf package. One place where I used this was for questions which asked for the characteristic polynomial of a matrix, since different textbooks differ on their definition by a factor of -1, so there are exactly 2 potentially correct responses. | 2023-03-29T00:33:29 | {
"domain": "maa.org",
"url": "https://webwork.maa.org/moodle/mod/forum/discuss.php?d=3806",
"openwebmath_score": 0.7006808519363403,
"openwebmath_perplexity": 1269.7341145093549,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9683812299938006,
"lm_q2_score": 0.8723473630627235,
"lm_q1q2_score": 0.8447648124245287
} |
https://math.stackexchange.com/questions/1661857/hypotheis-testing-paired-t-test-1-is-my-work-correct-2-how-to-graph-in-sps | # Hypotheis Testing (paired T test): 1) Is my work correct? 2) How to graph in SPSS,?
I have the following Hypotheis testing problem.
The statement of the exercise is:
Experiment: Eleven different varieties of barley were considered. Of each variety, half was kiln-dried and the other half was left untreated. Then the two batches of seed were sown in adjacent plots. The experimenter observed that kiln-dried seeds gave, on average, the larger yield of kernels and straw but the quality was generally inferior. The data set “KilnDriedBarley.txt” contains data on the year of the planting, and the value of the crop (in shillings per acre) for both kiln-dried and non-kiln-dried seeds. Seeds sown on adjacent plots are listed in the same row of the file.
Year NonKilnDried KilnDried
1899 140.5 152
1899 152.5 145
1899 158.5 161
1899 204.5 199.5
1899 162 164
1899 142 139.5
1899 168 155
1900 118 117.5
1900 128.5 121
1900 109.5 116.5
1900 120 120.5
(i) “Does kiln-drying barley seeds increase the average value of the crop?” Should we use an independent sample or a paired sample test to answer this question? Defend your choice.
My answer: For this experiment we need to use a paired-sample t test, because it is assumed that each of the eleven varieties of seed grows under the same conditions except for the factor we want to study, in this case kiln-dried versus non-kiln-dried seed.
(ii) Use SPSS to create an appropriate graphical display of your data. Your graph should show how the kiln-drying affects crop value. Describe what you can see in the graph.
I need help with this part, because I have no idea how to display my data and the test in SPSS.
Part (iii) Conduct a statistical hypothesis test for the question in (i) at significance level $\alpha = 0.05.$ Include relevant SPSS output. Formulate a conclusion for the test and cite the appropriate p-value.
This is my output:
My answer: The P-value is .602. We do not have enough evidence to reject $H_0$ in favor of $H_a.$ Therefore there is no significant difference in the average value of the crop due to kiln-drying barley seeds at a significance level of $\alpha=5\%$.
Overall question:
1) Are my assumptions and conclusion correct for this experiment? I notice also that both samples have very similar sample standard deviations, so I assumed homoscedasticity (in any case it is computed by SPSS).
2) The second problem I have is that I want to use SPSS, but I can't find an option to graph the t test (I believe that is what question (ii) asked for).
You are correct that this is a paired model and that (assuming nearly normal data) you can use a paired t test to judge whether the population mean differences (between kiln dried and non-kiln) are consistent with 0 ($H_0$) or not ($H_a$).
Using SPSS, you have found no significant difference between kiln dried and not (P-value 0.6). In addition a 95% CI for the population difference $\delta = \mu_n - \mu_k$ includes 0, indicating that the data are consistent with no difference.
I repeated the test using Minitab. (I do not have ready access to SPSS.) Here is the output:
Paired T-Test and CI: NonK, Kiln
Paired T for NonK - Kiln
N Mean StDev SE Mean
NonK 11 145.82 27.40 8.26
Kiln 11 144.68 25.51 7.69
Difference 11 1.14 7.00 2.11
95% CI for mean difference: (-3.57, 5.84)
T-Test of mean difference = 0 (vs not = 0):
T-Value = 0.54 P-Value = 0.602
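As a quick cross-check of the output above (a minimal sketch of my own, assuming NumPy and SciPy are available; it is not something the original answer used):

```python
import numpy as np
from scipy import stats

non_kiln = np.array([140.5, 152.5, 158.5, 204.5, 162.0, 142.0, 168.0,
                     118.0, 128.5, 109.5, 120.0])
kiln     = np.array([152.0, 145.0, 161.0, 199.5, 164.0, 139.5, 155.0,
                     117.5, 121.0, 116.5, 120.5])

t, p = stats.ttest_rel(non_kiln, kiln)      # paired (related-samples) t test, two-sided
print(round(t, 2), round(p, 3))             # ~0.54 and ~0.602, matching the output above

diff = non_kiln - kiln                      # the same test is a one-sample t test on these
print(diff.mean(), diff.std(ddof=1))        # ~1.14 and ~7.00
```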
Lacking access to SPSS, I cannot show you how to use SPSS to make an appropriate graph, but I can tell you what I think the author of the question is looking for.
First, you should find the eleven differences. They are:
11.5, -7.5, 2.5, -5.0, 2.0, -2.5, -13.0, -0.5, -7.5, 7.0, 0.5
[Note: A paired t test is equivalent to a one-sample t test on the differences (testing the null hypothesis of null difference against the two-sided alternative). Because of the paired nature of the data it would be inappropriate to do a two-sample t test. (Kiln dried and Not are $correlated$, not independent.) Accordingly, it would be inappropriate to compare two separate plots for Kiln dried and Not.]
Then you can make a dotplot (stripchart), boxplot, or histogram of the differences. Look in the SPSS menus to find out what kinds of graphs are available. A Minitab dotplot in typewriter text format is shown below:
. : . . . . .. . .
+---------+---------+---------+---------+---------+-------Dif
-15.0 -10.0 -5.0 0.0 5.0 10.0
Plots below show a boxplot and a histogram. Each statistical package has its own style of plots, so your plots in SPSS may look a little different, but none of them should give a visual impression that the mean difference is significantly different from 0. In my opinion, eleven is getting near the lower limit of the sample size for which a boxplot is an effective graphical display, and perhaps eleven really is too small for a nice histogram. So I would prefer something like Minitab's 'dotplot' or a 'stripchart' from R shown last. These graphical displays show the locations of each individual difference. | 2021-09-23T18:54:06 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1661857/hypotheis-testing-paired-t-test-1-is-my-work-correct-2-how-to-graph-in-sps",
"openwebmath_score": 0.47983139753341675,
"openwebmath_perplexity": 594.8383133107035,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109485172558,
"lm_q2_score": 0.8577681122619883,
"lm_q1q2_score": 0.8447394282445847
} |
https://math.stackexchange.com/questions/479442/calculate-the-value-of-the-integral-int-0-infty-frac-cos-3xx2a?noredirect=1 | # Calculate the value of the integral $\int_{0}^{\infty} \frac{\cos 3x}{(x^{2}+a^{2})^{2}} dx$ where $a>0$ is an arbitrary positive number.
Question: Calculate the value of the integral $\displaystyle \int_{0}^{\infty} \frac{\cos 3x}{(x^{2}+a^{2})^{2}} dx$ where $a>0$ is an arbitrary positive number.
Thoughts: I don't know how to establish convergence so that a symmetric limit can be used, but if I can do so, then we have that the integral equals $\displaystyle \frac{1}{2} \int_{-\infty}^{\infty} \frac{\cos 3x}{(x^{2}+a^{2})^{2}} dx = \displaystyle \lim_{R \to \infty} \int_{-R}^{R} \frac{\cos 3x}{(x^{2}+a^{2})^{2}} dx = Re \left (\displaystyle \lim_{R \to \infty} \int_{-R}^{R} \frac{e^{i(3x)}}{(x^{2}+a^{2})^{2}} dx \right )$
which has double poles at $x=ia$ and $x=-ia$, so the integral may be evaluated by calculating residues. Can anyone show me how to solve the problem? All input is appreciated, I am studying for an exam in complex analysis.
• You're more or less there. The last one is what you should use, now all you need is to check that the integrand vanishes fast enough e.g. on a semicircle in the upper half plane. – Daniel Fischer Aug 29 '13 at 21:46
• It is important know why you choose the semicircle in the upper half plane. – Mhenni Benghorbal Aug 29 '13 at 22:00
• @MhenniBenghorbal I agree, I am not sure why to choose the semicircle in the upper half plane instead of the lower half plane. Can you explain why? – Sid Aug 29 '13 at 22:02
• @DanielFischer I tried to show this using an example from my textbook but I got nowhere. Can you show me how this should be done or give me a hint? – Sid Aug 29 '13 at 22:14
• Right. And you see that when $y < 0$, i.e. in the lower half-plane, the exponential factor becomes large, while in the upper half-plane it becomes small. – Daniel Fischer Aug 29 '13 at 22:32
Let $S=\{-ia, ia\}$, let $\varphi \colon \Bbb C\setminus S\to \Bbb C, z\mapsto\dfrac{e^{i(3z)}}{(z^2+a^2)^2}$.
Given $n\in \Bbb N$ such that $n> a$, define $\gamma (n):=\gamma _1(n)\lor \gamma _2(n)$ with $\gamma _1(n)\colon [-n,n]\to \Bbb C, t\mapsto t$ and $\gamma _2(n)\colon [0,\pi]\to \Bbb C, \theta \mapsto ne^{i\theta}$, ($\gamma (n)$ is an upper semicircle).
Observe that $S$ is the set of singularities of $\varphi$ and both of them are second order poles.
Therefore $$\operatorname {Res}(\varphi ,ia)=\left.\dfrac{d}{dz}\left(z\mapsto (z-ia)^2\varphi (z)\right)\right\vert_{z=ia}\overset{\text{W.A.}}{=}\left.\dfrac{ie^{i3z}(3ia+3z+2i)}{(z+ia)^3}\right\vert_{z=ia} = \dfrac{e^{-3a}(3a+1)}{4a^3i}.$$
Here is the link for the equality $\text {W.A.}$.
Quick considerations about winding numbers, inside and outside region of $\gamma(n)$, the fact that $n>a$, the fact that $\varphi$ is holomorphic and the residue theorem yield $$\displaystyle \int \limits_{\gamma (n)}\varphi (z)dz=2\pi i\cdot \dfrac{e^{-3a}(3a+1)}{4a^3i}= \dfrac{\pi e~^{-3a}(3a+1)}{2a^3}.$$
On the other hand $\displaystyle \int \limits _{\gamma (n)}\varphi=\int \limits _{\gamma _1(n)}\varphi +\int \limits_{\gamma _2(n)}\varphi \tag {*}$
Note that $$\displaystyle\int \limits _{\gamma _1(n)}\varphi(z)dz=\int \limits _{-n}^n\varphi (t)dt=\int \limits_{-n}^n\dfrac{e^{i(3t)}}{(t^2+a^2)^2}dt=\int \limits_{-n}^n\dfrac{\cos(3t)+i\sin(3t)}{(t^2+a^2)^2}dt=\int \limits _{-n}^n\dfrac{\cos (3t)}{(t^2+a^2)^2}dt.$$ The last equality is due to $t\mapsto \dfrac{\sin (3t)}{(t^2+a^2)^2}$ being an odd function and due to the integral being computed on a symmetric interval.
Furthermore, $$\int \limits _{\gamma _2(n)}\varphi (z)dz=\int \limits _0^\pi \varphi(ne^{i\theta})\cdot ine^{i\theta}d\theta=\int \limits _0^\pi\dfrac{e^{i\cdot 3ne^{i\theta}}ine^{i\theta}}{(n^2e^{2i\theta }+a^2)^2}d\theta=n\int \limits _0^\pi i\dfrac{e^{i\cdot 3n(\cos (\theta)+i\sin (\theta))}e^{i\theta}}{(n^2e^{2i\theta }+a^2)^2}d\theta=\\ =n\int \limits _0^\pi i\dfrac{e^{-3n\sin (\theta)}e^{i(3n\cos (\theta)+\theta)}}{(n^2e^{2i\theta }+a^2)^2}d\theta,$$
from where one gets $$\left \vert\, \int \limits _{\gamma _2(n)}\varphi (z)dz\right \vert\leq n\int \limits _0^\pi \left \vert i\dfrac{e^{-3n\sin (\theta)}e^{i(3n\cos (\theta)+\theta)}}{(n^2e^{2i\theta }+a^2)^2}\right \vert d\theta =n\int \limits_0^\pi \left \vert\dfrac{e^{-3n\sin (\theta)}}{(n^2e^{2i\theta }+a^2)^2}\right \vert d\theta=\\=n\int \limits_0^\pi \dfrac{\left \vert e^{-3n\sin (\theta)}\right \vert}{\left \vert n^2e^{2i\theta }+a^2\right \vert^2}d\theta \underset{(n>a)}{\leq} n\int \limits _0^\pi \dfrac{e^{-3a\sin (\theta)}}{(n^2-a^2)^2}d\theta=\dfrac{n}{(n^2-a^2)^2}\int \limits _0^\pi e^{-3a\sin (\theta)}d\theta\overset{n\to +\infty}{\longrightarrow} 0$$
Taking the limit in $(*)$ one finally gets $$\dfrac{\pi e~^{-3a}(3a+1)}{2a^3}=\int \limits_{-\infty}^{+\infty} \dfrac{\cos (3t)}{(t^2+a^2)^2}dt.$$
Due to the evenness of $t\to \dfrac{\cos (3t)}{(t^2+a^2)^2}$ it follows that $\displaystyle \int \limits_{0}^{+\infty} \dfrac{\cos (3t)}{(t^2+a^2)^2}dt=\dfrac{\pi e~^{-3a}(3a+1)}{4a^3}$ which agrees with WA.
I regret having started this.
• Does anyone know how can I make the $\vert$ in $\vert _{z=ia}$look bigger? – Git Gud Aug 29 '13 at 23:28
• \big\vert, \Big\vert, \bigg\vert, \Bigg\vert produce $\big\vert\;\Big\vert\;\bigg\vert\;\Bigg\vert$, choose your favourite size. – Daniel Fischer Aug 29 '13 at 23:41
• @DanielFischer Thanks. – Git Gud Aug 29 '13 at 23:43
• Comment downvoter? – Git Gud Aug 29 '13 at 23:47
• A better way to match the vertical bar height is to enclose the expression in \left. and \right\vert. The height of the vertical bar matches the height of the enclosed expression. – robjohn Aug 30 '13 at 9:13
If you are interested in another method:
\begin{aligned}f(t)=\int_0^{\infty} \frac{\cos 3xt}{(x^2+a^2)^2}\,dx\Rightarrow \mathcal{L}\{f(t)\} &=\int_0^{\infty}e^{-st}\int_0^{\infty}\frac{\cos 3xt}{(x^2+a^2)^2}\,dx\,dt\\&=\int_0^{\infty}\frac{1}{(x^2+a^2)^2}\int_0^{\infty} e^{-st}\cos 3xt\,dt\,dx\\&=\int_0^{\infty}\frac{s\,dx}{(x^2+a^2)^2(9x^2+s^2)}\\&=\frac{3\pi}{2a^2(3a+s)^2}+\frac{\pi s}{4a^3(3a+s)^2}\end{aligned}
Which follows from a quick partial fraction decomposition.
Next,
$$\mathcal{L}^{-1}\left\{\frac{3\pi}{2a^2(3a+s)^2}\right\}+\mathcal{L}^{-1}\left\{\frac{\pi s}{4a^3(3a+s)^2}\right\}=\frac{3\pi t}{2a^2e^{3at}}+\frac{\pi(1-3at)}{4a^3e^{3at}}$$
By setting, $t=1$, we get:
$$\int_0^{\infty}\frac{\cos 3x\,dx}{(x^2+a^2)^2}=\frac{\pi(3a+1) }{4a^3e^{3a}}$$
Since you're studying for an exam, I'll try to be a bit more vague. The function you're considering is an even function. So
$\int_{0}^{R}\frac{\cos(3x)}{(x^2+a^2)^2}=\frac{1}{2}\int_{-R}^{R}\frac{\cos(3x)}{(x^2+a^2)^2}$
Thus, it suffices to compute the symmetric limit. Using this contour, the residue theorem shows that for $R>0$ sufficiently large,
$\int_{-R}^{R}\frac{e^{3ix}}{(x^2+a^2)^2}\ dx+\int_{\gamma_R}\frac{e^{3iz}}{(z^2+a^2)^2}\ dz=2\pi i\cdot \text{res}_{ia}\left(\frac{e^{3iz}}{(z^2+a^2)^2}\right)$
Since $ia$ is the only pole in the semicircular region (where $\gamma_R$ is the semicircular portion of the contour [as indicated in the picture]). Jordan's Lemma shows that the second integral vanishes as $R\to \infty$, so it suffices to compute the residue (don't forget that $ia$ is a double pole). After taking the real part of both sides, you'll have the value of the integral you desire.
Note: Jordan's Lemma is sort of an unnecessary tool for this specific problem, but I chose to refer you to it since it comes up a lot when doing contour integration, and knowing it saves a lot of time and reduces redundant arguments/calculations.
• Thanks, this is what I wanted to do. But don't we need to show that the integral converges to show that we can calculate the symmetric limit? Also, is the reason that we choose to study the upper semicircle instead of the lower semicircle that $a$ is positive (and therefore $ia$ is in the upper half-plane)? To add one more point, I think we need to show that integral along the upper semicircle tends to 0 as R tends to $\infty$, like David Fischer said. I'm trying to show that as we speak. – Sid Aug 29 '13 at 22:26
• I'll answer your questions in order. Typically, when doing contour integration, $\int_{-\infty}^{\infty}f(x)\ dx$ means $\lim_{R\to \infty}\int_{-R}^{R} f(x)\ dx$. In the second equation in my answer, the right hand side is constant and the second integral is going to zero - consequently, the first integral converges to the value on the right hand side (work it out! :)). The upper circle is preferred since the real part of $e^{iz}$ becomes small there. Also, showing that the integral tends to zero along the upper semicircle is precisely what Jordan's Lemma gives you. – Adam Azzam Aug 29 '13 at 22:38
Answering my comment. Note that, when you parametrize the upper half of the circle, $z=Re^{i\theta},\, 0\leq \theta\leq \pi$, and deal with the integral
$$\int_{C_R}\frac{e^{iz}}{z^2+a^2}dz$$
the integrand becomes
$$\Bigg|\frac{e^{iz}}{z^2+a^2}\Bigg| \leq \frac{\Big|e^{i R e^{i\theta}}\Big|}{R^2-a^2} = \frac{e^{-R \sin(\theta)}}{R^2-a^2} \qquad (\text{for } R>a).$$
Now, you can see that $R\sin(\theta ) \geq 0$ for $0\leq \theta\leq \pi$, which ensures that
$$\lim_{R\to \infty} \frac{e^{-R \sin(\theta)}}{R^2-a^2} = 0 .$$ | 2021-07-29T01:45:33 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/479442/calculate-the-value-of-the-integral-int-0-infty-frac-cos-3xx2a?noredirect=1",
"openwebmath_score": 0.9447859525680542,
"openwebmath_perplexity": 246.0649930430998,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109498546359,
"lm_q2_score": 0.8577681031721325,
"lm_q1q2_score": 0.8447394204399571
} |
https://math.stackexchange.com/questions/1993202/remainder-in-polynomial-division | # Remainder in polynomial division
The remainder when $x^{50}$ is divided by $(x-3)(x+2)$ is of the form $ax + b$. Find the units digit of $a$.
I tried to tackle the problem using the polynomial remainder theorem but got stuck as the divisor is a quadratic expression.
Siong Thye Goh has a good idea but we need to work $\bmod 50$.
We can compute the following modular exponentiation using the Square and Multiply Algorithm: \begin{align} x^{50}&=(x-3)(x+2)Q(x)+ax+b\\ (-2)^{50}&\equiv24\equiv-2a+b\pmod{50}\\ 3^{50}&\equiv49\equiv\phantom{-}3a+b\pmod{50} \end{align} Therefore, $$25\equiv5a\pmod{50}$$ which means that $$\bbox[5px,border:2px solid #C0A000]{a\equiv5\pmod{10}}$$
Exponentiation Using The Square and Multiply Algorithm $$\begin{array}{} &\bmod{50}\\ (-2)^1&\equiv-2\\ (-2)^2&\equiv4&\text{square}\\ (-2)^3&\equiv-8&\text{multiply}\\ (-2)^6&\equiv14&\text{square}\\ (-2)^{12}&\equiv-4&\text{square}\\ (-2)^{24}&\equiv16&\text{square}\\ (-2)^{25}&\equiv-32&\text{multiply}\\ (-2)^{50}&\equiv24&\text{square} \end{array}$$ $$\begin{array}{} &\bmod{50}\\ 3^1&\equiv3\\ 3^2&\equiv9&\text{square}\\ 3^3&\equiv27&\text{multiply}\\ 3^6&\equiv29&\text{square}\\ 3^{12}&\equiv41&\text{square}\\ 3^{24}&\equiv31&\text{square}\\ 3^{25}&\equiv-7&\text{multiply}\\ 3^{50}&\equiv49&\text{square} \end{array}$$
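The two tables can be reproduced mechanically with a small square-and-multiply routine; the sketch below is in R, and the helper name pow_mod is just an illustrative choice.
pow_mod <- function(base, exp, mod) {          # square-and-multiply, working mod 'mod'
  result <- 1
  base <- base %% mod                          # R's %% already returns a value in 0..mod-1
  while (exp > 0) {
    if (exp %% 2 == 1) result <- (result * base) %% mod   # "multiply" step
    base <- (base * base) %% mod                          # "square" step
    exp  <- exp %/% 2
  }
  result
}
pow_mod(-2, 50, 50)   # 24, matching the first table
pow_mod( 3, 50, 50)   # 49, matching the second table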
• How did you get the last step from the one before? – tatan Oct 31 '16 at 16:57
• @robjohn very neat approach. – Siong Thye Goh Oct 31 '16 at 16:58
• @tatan $5a=50k+25$. divides by $5$. – Siong Thye Goh Oct 31 '16 at 16:59
• @SiongThyeGoh Yeah got it. Very compact and nice solution. – tatan Oct 31 '16 at 16:59
• @Rob In fact we can eliminate repeated squaring and solve it with simple mental arithmetic - see my answer. – Bill Dubuque Nov 23 '16 at 1:59
Edit: robjohn's solution is super awesome.
$$x^{50}=A(x-3)(x+2)+ax+b$$
Substitute $x=-2$ and $3$ and take $\mod 10$.
$$(-2)^{50}= -2a+b$$
$$(3)^{50}= 3a+b$$
$$3^{50}-(-2)^{50}=5a$$
$$(3-(-2))\left( \sum_{i=0}^{49} 3^i(-2)^{49-i} \right)=5a$$
• Why the $mod 10$ – nootnoot Oct 31 '16 at 16:25
• We are interested in the unit digit right? $a \mod 10$ gives us that – Siong Thye Goh Oct 31 '16 at 16:26
• this gives $5a\equiv5\pmod{10}$. This only gives us that $a$ is odd. – robjohn Oct 31 '16 at 16:30
• you are right. should take $\mod 10$ later. – Siong Thye Goh Oct 31 '16 at 16:38
• So, how will we find the last digit? – tatan Oct 31 '16 at 16:48
Hint
$$x^{50}=(x-3)(x+2)q(x)+ax+b$$ (Division algorithm)
Now, take $$x=3$$ and $$x=-2$$ to get two equations. Two variables and two equations. Hope you get it.
This is the complete problem. (Use the hint and try yourself first)
Taking $$x=3$$,
$$3^{50}=3a+b,$$
Taking $$x=-2$$
$$(-2)^{50}=2^{50}=-2a+b$$
Subtracting, we get
$$3^{50}-2^{50}=5a\implies 9^{25}-4^{25}=5a$$
Firstly, observe that the LHS is divisible by $$(9-4)=5$$(Why?). So, you get an integer value of $$a$$. (Just for a check)
$$\therefore a= 9^{24}+9^{23}\cdot 4+ 9^{22}\cdot 4^2+...+4^{24}$$
Now, you may use modular arithmetic.
$$a\equiv 1-4+6-4+6-4+...+6 \equiv 1+12\times 2\equiv 5\pmod{10}$$ (Why?)
Hope you get it.
Below we solve it simply - with purely mental arithmetic. By polynomial division with remainder followed by evaluation at $\,x=3,$ and $\,x=-2\,$ we obtain
\begin{align} x^{50} &= (x\!-\!3)(x\!+\!2)\, q(x) + ax + b\\ \Rightarrow\quad 3^{50} &= 3a + b\\ (-2)^{50} &= -2a + b\\ 3^{50}\!-(-2)^{50} &= 5a \end{align}
Note $\ 3\equiv -2\pmod{5}\,\Rightarrow\, 3^{50}\equiv (-2)^{50}\pmod{25}\$ by the Lemma below.
Thus $\,5a = \color{#0a0}{3^{50}-(-2)^{50}}\equiv 0\pmod{25},\,$ so ${\rm mod}\ 50\,$ either $5a\equiv 0$ or $\,5a\equiv 25.\,$ But since $\,5a\,$ is obviously $\rm\color{#0a0}{odd}$, it must be $\,5a\equiv 25\pmod{50}.\,$ Hence $\,a\equiv 5\pmod{10},\,$ by cancelling $\,5$.
Lemma $\ \ c \equiv d \pmod n\,\Rightarrow\, c^{nk} \equiv d^{nk} \pmod{n^2}$.
Proof $\$ By hypothesis $\ c = d+nj\,$ for some integer $\,j\,$ so by the Binomial Theorem $$c^{nk} = (d+nj)^{nk} = d^{nk} + (\color{#c00}nk)(\color{#c00}nj) d^{nk-1} + (\color{#c00}nj)^{\color{#c00} 2}(\cdots) \equiv d^{nk}\!\! \pmod{\!\color{#c00}{n^2}}$$
Remark This method of solving for $\,a\,$ may be viewed as Lagrange (or Newton) interpolation, which is a special case of CRT = Chinese remainder theorem. | 2019-07-20T05:40:51 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1993202/remainder-in-polynomial-division",
"openwebmath_score": 0.9723093509674072,
"openwebmath_perplexity": 665.6748013428753,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109538667758,
"lm_q2_score": 0.8577680977182187,
"lm_q1q2_score": 0.8447394185103687
} |
https://math.stackexchange.com/questions/2895366/divisibility-relation-on-the-set-s-2-6-7-14-15-30-70-105-210 | Divisibility Relation On the Set $S = \{ 2, 6, 7, 14, 15, 30, 70, 105, 210 \}$: Hasse Diagram, Maximal, Minimal Elements, Greatest, Least elements
Consider the divisibility relation on the set
$$S = \{ 2, 6, 7, 14, 15, 30, 70, 105, 210 \}$$
It is given that this relation is a partial order on $$S$$.
(i) Draw the Hasse diagram for this partial order.
(ii) Find all maximal elements and all minimal elements of S.
(iii) Does $$S$$ have a greatest element? Does $$S$$ have a least element? If so, write them down; if not, explain why not.
I'm not sure if this is a reasonable problem to seek review for from math.stackexchange, but I perhaps I can at least check my solutions for (ii) and (iii).
For (i), my Hasse diagram is a mess: there are lines criss-crossing through other lines. I'm not sure if this is allowed, but if not, I'm not sure how else it can be done?
For (ii), I got that the maximal elements of $$S$$ are $$\{ 210 \}$$, since a maximal element of a subset $$S$$ of some partially ordered set (poset) is an element of $$S$$ that is not smaller than any other element in $$S$$, and the minimal elements of $$S$$ are $$\{ 2, 7, 15 \}$$, since a minimal element of a subset $$S$$ of some partially ordered set is defined as an element of $$S$$ that is not greater than any other element in $$S$$.
For (iii), I got that $$S$$ has a greatest element $$\{ 210 \}$$, since the greatest element of a subset $$S$$ of a partially ordered set (poset) is an element of $$S$$ that is greater than every other element of $$S$$; for the least element, I wrote that $$S$$ does not have a least element, since there is no element of $$S$$ that is less than every other element of $$S$$.
I would greatly appreciate it if people could please take the time to review this.
• your answers are correct. In the (iii) part, you mistyped that $S$ has no minimal elements, It has minimal elements as you have already found them but no least element. Aug 26 '18 at 19:29
• @AnuragA thank your for the confirmation. I will fix my typo now. Aug 26 '18 at 19:38
Here is your Hasse diagram for the divisibility relation on $$S$$.
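The covering structure can also be checked mechanically; a minimal R sketch (variable names are illustrative) recovers the minimal and maximal elements straight from the divisibility relation.
S <- c(2, 6, 7, 14, 15, 30, 70, 105, 210)
divides <- outer(S, S, function(a, b) b %% a == 0)   # divides[i, j] is TRUE when S[i] divides S[j]
minimal <- S[colSums(divides) == 1]   # no divisor in S other than themselves
maximal <- S[rowSums(divides) == 1]   # no multiple in S other than themselves
minimal   # 2 7 15
maximal   # 210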
• I will point out that $15$ is a minimal element as well. Depending on what you prefer to prioritize, your Hasse diagram can look differently. Some people prefer to have all minimal elements appearing level with one another at the bottom, while others prefer to avoid as many crossings as possible. I will also point out that you are missing an edge: $105 = 3\times 5 \times 7$ so there should have been an edge from $7$ to $105$ heading up in the diagram as well. You can draw this diagram as a planar graph if you wish, but crossings seem unavoidable if all minimal are level at bottom. Aug 26 '18 at 19:49
• @JMoravitz Thanks for pointing out the missing edge. As far as $15$ being minimal is concerned OP already had figured that out (see his post). Aug 26 '18 at 19:59 | 2021-10-16T20:52:21 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2895366/divisibility-relation-on-the-set-s-2-6-7-14-15-30-70-105-210",
"openwebmath_score": 0.7835308909416199,
"openwebmath_perplexity": 153.99651571483744,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9848109485172559,
"lm_q2_score": 0.8577681013541613,
"lm_q1q2_score": 0.8447394175024373
} |
https://www.physicsforums.com/threads/square-roots.333285/ | # Square Roots
1. Aug 29, 2009
### S_David
Hello,
My calculus book says that readers who are writting $$\sqrt{9}$$ as $$\pm3$$ must stop doing that, because it is incorrect. The question is: why is it incorrect?
Regards
2. Aug 29, 2009
### arildno
Because the square root of a number A is DEFINED to be the unique, non-negative number whose square equals A.
3. Aug 29, 2009
### S_David
I didn't understand. For any real number a there are two square roots: a positive square root, and a negative square root. How is the square root is unique?
4. Aug 29, 2009
### fleem
When you encounter a square root in an equation, use the +/- thing. If somebody asks you, "What is the square root of four", say "two". Its just what mathematicians have decided we will mean, when "square root" is used in each of those contexts.
5. Aug 29, 2009
### arildno
Incorrect.
For any non-negative number "a", the equation:
$$x^{2}=a$$
has two SOLUTIONS:
$$x_{1}=\sqrt{a},x_{2}=-\sqrt{a}$$
The $\sqrt{a}$ is a non-negative number.
6. Aug 29, 2009
### HallsofIvy
Staff Emeritus
To expand on arildno's point: S David, would you say that the solution to $x^2= 5$ is $\sqrt{5}$ or $\pm \sqrt{5}$? I suspect you will say the latter and the point is that the whole reason we need the "$\pm$" is because $\sqrt{5}$ itself only gives one of them: the positive root.
7. Aug 29, 2009
### S_David
Referring to the book Calculus (7th ed.) by Anton, Bivens, and Davis, Appendix B, at the bottom of the page it says the following:
After this review, it says the thing I started with. Does this differ from what I said in post #3 in this thread?
8. Aug 29, 2009
### arildno
The SYMBOL $\sqrt{a}$ always signifies a non-negative number.
Therefore, $-\sqrt{a}$ is always a non-positive number.
Colloquially, we call this "the negative square root of a", whereas if we want to be über-precise, we ought to call it "the (additive) negative OF the square root of a"
(alternatively, "minus square root of a", in complete agreement of calling -2 for "minus two")
9. Aug 29, 2009
### slider142
The symbol $\sqrt{9}$ is shorthand for "the principal square root of 9" (not simply "a square root of 9") where the principal square root is a function. A function has only a single output for each input, therefore equating it to the symbols $\pm 9$ which is shorthand for the set {9, -9} is an error. | 2017-08-20T00:43:13 | {
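As a side note, this single-valued convention is also what numerical software implements; a tiny R illustration:
sqrt(9)                 # 3 -- the principal (non-negative) square root only
polyroot(c(-9, 0, 1))   # numerically 3+0i and -3+0i, the two solutions of x^2 - 9 = 0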
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/square-roots.333285/",
"openwebmath_score": 0.7986157536506653,
"openwebmath_perplexity": 1022.9120293479464,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9783846691281407,
"lm_q2_score": 0.8633916134888614,
"lm_q1q2_score": 0.8447291180913112
} |
http://math.stackexchange.com/questions/489803/why-is-p-land-p-lor-q-equivalent-to-p | Why is $p \land (p \lor q)$ equivalent to $p$?
How can you prove with equivalence laws that $p \land (p\lor q)$ is equivalent to $p$? I know you have to get rid of $q$, but I'm not sure how.
-
To prove using equivalence laws, we need your list of equivalence laws. There are many possible "standard" lists. – user7530 Sep 10 '13 at 19:35
I have to use laws like distributivity, De Morgan, True/False-elimination – Guest001 Sep 10 '13 at 19:39
We have: \begin{align*} p \wedge (p \vee q) &= (p \vee F) \wedge (p \vee q) & \text{identity for } \vee \\ &= p \vee (F \wedge q) & \text{distributivity of } \vee \text{ over } \wedge \\ &= p \vee F & \text{annihilator for } \wedge \\ &= p & \text{identity for } \vee. \end{align*}
-
Note, that $F$ is an arbitrary false statement, such as $p \wedge \neg p$ – AlexR Sep 10 '13 at 19:44
thank you very much! That's what I was looking for – Guest001 Sep 10 '13 at 19:46
@Guest001 If you feel satisfied with this answer, you may accept it. – Doug Spoonwood Sep 11 '13 at 1:30
When proving simple tautologies like this one, I like to break it into cases: if $p$ is $T$ and if $p$ is $F$.
If $p$ is $T$, then we have $T \wedge (T\vee q) = T\wedge T = T$.
If $p$ is $F$, then we have $F \wedge (F\vee q) = F$.
-
This is usually what I do myself. A good approach! – New_to_this Sep 10 '13 at 20:14
Here is a third way, - just pointing out, that there are several ways;
\begin{align*} p\wedge (p\vee q)&\equiv (p\wedge p)\vee(p\wedge q)\ - \ Distributive \ law\\ &\equiv p\vee(p\wedge q) \ - \ Idempotent \ law\\ &\equiv p \ - \ Absorption \ law \end{align*}
-
In that case: $$(p ∧ (p ∨ q)) ⇔ (p ∧ (p ∨ q)) ∨ (p ∧ ¬p) ⇔ (p ∧ ((p ∨ q) ∨ ¬p)) ⇔ p$$ | 2014-07-23T06:10:18 | {
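A brute-force truth-table check, here as a minimal R sketch, confirms the equivalence for all four valuations of $p$ and $q$:
vals <- expand.grid(p = c(TRUE, FALSE), q = c(TRUE, FALSE))   # all four rows of the truth table
all((vals$p & (vals$p | vals$q)) == vals$p)                   # TRUE: p AND (p OR q) is equivalent to p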
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/489803/why-is-p-land-p-lor-q-equivalent-to-p",
"openwebmath_score": 0.999895453453064,
"openwebmath_perplexity": 1062.0320968485446,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9783846716491915,
"lm_q2_score": 0.8633916099737806,
"lm_q1q2_score": 0.8447291168288642
} |
https://math.stackexchange.com/questions/2216989/finding-eigenvectors-to-eigenvalues-and-diagonalization | Finding eigenvectors to eigenvalues, and diagonalization
I just finished solving a problem on finding eigenvectors corresponding to eigenvalues, however, I'm not sure if it is correct. I was wondering if someone could check my work:
For the matrix $W = \begin{bmatrix} 1 & 2 \\ 3 & 2\\ \end{bmatrix}$, I must find the eigenvectors corresponding to the eigenvalues, as well as a diagonal matrix similar to W.
I was able to find that the eigenvalues were equal to $\lambda = 4, -1$. Then, I used the equation $(A - \lambda I)v = 0$ to solve for the vector.
When $\lambda = 4$, I set up the equation $\begin{bmatrix} 1 & 2 \\ 3 & 2\\ \end{bmatrix} - \begin{bmatrix} 4 & 0 \\ 0 & 4\\ \end{bmatrix}$ = $\begin{bmatrix} -3 & 2 \\ 3 & -2\\ \end{bmatrix}$, which gave me the eigenvector $\begin{bmatrix} 2\\ 3\\ \end{bmatrix}$.
For $\lambda = -1$, I did the exact same procedure and received the eigenvector which gave me the eigenvector $\begin{bmatrix} 1\\ -1\\ \end{bmatrix}$.
Did I do this part correctly? How do I find a diagonal matrix similar to $W$?
• Change the rest of the A's to W's as well! – NickD Apr 4 '17 at 2:37
• You have two distinct eigenvalues for a $2\times2$ matrix, so you can write down the similar diagonal matrix without further ado: it’s just a matrix with the eigenvalues along its diagonal. – amd Apr 4 '17 at 3:00
• thank you so much for your help. could please help me a last question I have here? math.stackexchange.com/questions/2217044/… – user400359 Apr 4 '17 at 3:26
we can use Row operations to obtain a diagonal matrix similar to W
W = \begin{bmatrix} 1 & 2 \\ 3 & 2\\ \end{bmatrix} $r_1-r_2=R_1$ gives $$W = \begin{bmatrix} -2 & 0 \\ 3 & 2\\ \end{bmatrix}$$ then $R_2=2r_2$ gives W = \begin{bmatrix} -2 & 0 \\ 6 & 4\\ \end{bmatrix} now $R_2=r_2+3r_1$ gives $W = \begin{bmatrix} -2 & 0 \\ 0 & 4\\ \end{bmatrix}$ and $R_1=\frac{1}{2}r_1$ gives $W = \begin{bmatrix} -1 & 0 \\ 0 & 4\\ \end{bmatrix}$ which is in diagonal form, as required, as you can see the diagonal entries are the eigenvalues you calculated
• How do you know that the matrix you have arrived upon is similar to W? – Doug M Apr 4 '17 at 3:34
I think it is worth the exercise to verify that
$W\mathbf v = \lambda \mathbf v$
$W \begin {bmatrix} 2\\3 \end{bmatrix} = 4\begin {bmatrix} 2\\3 \end{bmatrix}$ and $W \begin {bmatrix} 1\\-1 \end{bmatrix} = -\begin {bmatrix} 1\\-1 \end{bmatrix}$
which it does...in both cases.
In which case:
$W\begin{bmatrix} \mathbf v_1&\mathbf v_2 \end{bmatrix} = \begin{bmatrix} \mathbf v_1&\mathbf v_2 \end{bmatrix}\begin{bmatrix} \lambda_1\\&\lambda_2\end{bmatrix}$
Let $P = \begin{bmatrix} \mathbf v_1&\mathbf v_2 \end{bmatrix}$ and $\Lambda = \begin{bmatrix} \lambda_1\\&\lambda_2\end{bmatrix}$
$WP = P\Lambda\\ P^{-1}WP = \Lambda$
$\Lambda$ is a diagonal matrix similar to $W$
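A minimal R sketch, using the eigenvectors found above, confirms the diagonalization numerically:
W <- matrix(c(1, 2,
              3, 2), nrow = 2, byrow = TRUE)
P <- cbind(c(2, 3), c(1, -1))     # eigenvectors as columns
solve(P) %*% W %*% P              # diag(4, -1), the similar diagonal matrix
eigen(W)$values                   # 4 -1, the eigenvalues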
• Okay, so the part I have done is correct. How do I find a diagonal matrix similar to W ? – user400359 Apr 4 '17 at 2:35
• @stackofhay42 subtract the second row away from the first then take 3 lots of the first row away from the second? – user395952 Apr 4 '17 at 2:38
• I have outlined the theory and the process. $\begin{bmatrix}0.2&0.2\\0.6&-0.4\end{bmatrix}W \begin{bmatrix}2&1\\3&-1\end{bmatrix}= \begin{bmatrix}4\\&-1\end{bmatrix}$ – Doug M Apr 4 '17 at 2:39 | 2021-05-12T01:33:38 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2216989/finding-eigenvectors-to-eigenvalues-and-diagonalization",
"openwebmath_score": 0.9777132868766785,
"openwebmath_perplexity": 186.43378995745522,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9783846659768267,
"lm_q2_score": 0.8633916099737807,
"lm_q1q2_score": 0.8447291119313921
} |
https://stats.stackexchange.com/questions/181167/what-is-the-autocorrelation-for-a-random-walk | # What is the autocorrelation for a random walk?
Seems like it is really high, but this is counterintuitive to me. Can somebody please explain? I am very confused by this issue and would appreciate a detailed, insightful explanation. Thanks a lot in advance!
(I wrote this as an answer to another post, which was marked as a duplicate of this one while I was composing it; I figured I'd post it here rather than throw it away. It looks like it says quite similar things to whuber's answer but it is just different enough that someone might get something out of this one.)
A random walk is of the form $y_t = \sum_{i=1}^t \epsilon_i$
Note that $y_t = y_{t-1}+ \epsilon_t$
Hence $\text{Cov}(y_t,y_{t-1})=\text{Cov}(y_{t-1}+ \epsilon_t,y_{t-1})=\text{Var}(y_{t-1})$.
Also note that $\sigma^2_t=\text{Var}(y_t) = t\,\sigma^2_\epsilon$
Consequently $\text{corr}(y_t,y_{t-1})=\frac{\sigma_{t-1}^2}{\sigma_{t-1}\sigma_t} =\frac{\sigma_{t-1}}{\sigma_t}=\sqrt{\frac{t-1}{t}}=\sqrt{1-\frac{1}{t}}\approx 1-\frac{1}{2t}$.
Which is to say you should see a correlation of almost 1 because as soon as $t$ starts to get large, $y_t$ and $y_{t-1}$ are almost exactly the same thing -- the relative difference between them tends to be fairly small.
You can see this most readily by plotting $y_t$ vs $y_{t-1}$.
We can now see it somewhat intuitively -- imagine $y_{t-1}$ has drifted down to $-20$ (as we see it did in my simulation of a random walk with standard normal noise term). Then $y_t$ is going to be pretty close to $-20$; it might be $-22$ or it might be $-18.5$ but it's nearly certain to be within a few units of $-20$. So as the series drifts up and down, the plot of $y_t$ vs $y_{t-1}$ is going to nearly always stay within quite a narrow range of the $y=x$ line... yet as $t$ grows the points will cover greater and greater stretches along that $y=x$ line (the spread along the line grows with $\sqrt{t}$, but the vertical spread remains roughly constant); the correlation must approach 1.
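A short simulation sketch in R (assuming standard normal steps, as in the discussion above) shows how close the lag-one correlation is to $\sqrt{1-1/t}$:
set.seed(1)
t <- 50
walks <- replicate(1e4, cumsum(rnorm(t)))   # each column is one random walk of length t
cor(walks[t, ], walks[t - 1, ])             # empirical correlation of (y_t, y_{t-1}), about 0.99
sqrt(1 - 1/t)                               # theoretical value, about 0.98995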
In the context of your previous question, a "random walk" is one realization $(x_0, x_1, x_2, \ldots, x_n)$ of a binomial random walk. Autocorrelation is the correlation between the vector $(x_0, x_1, \ldots, x_{n-1})$ and the vector of the next elements $(x_1,x_2, \ldots, x_n)$.
The very construction of a binomial random walk causes each $x_{i+1}$ to differ from each $x_i$ by a constant. After running the walk for a while, the values of $x_i$ will have wandered away from the initial value $x_0$ and thereby will usually cover a good range, typically proportional to $\sqrt{n}$ in length. Thus the lag-1 scatterplot of the $(x_i, x_{i+1})$ pairs will consist of points lying only on the lines $y=x\pm 1$, on average being close to the line $y=x$. The residuals will be close to $\pm 1$. Therefore, in the vast majority of realizations, the variance of the residuals (about $1$) compared to the variance of the values (roughly on the order of $(\sqrt{n}/2)^2 = n/4$) will be small. We would expect $R^2$ to be approximately
$$R^2 \approx 1 - \frac{1}{n/4} = 1 - \frac{4}{n}.$$
Here is a picture of $n=1000$ steps in a random walk (on the left) and its lag-1 scatterplot (on the right). Color coding is used to help you find corresponding points in the two plots. Notice that $R^2$ is very close indeed to $1 - 4/n$ in this case.
Here is the R code that produced the images.
set.seed(17)
n <- 1e3
x <- cumsum((runif(n) <= 1/2)*2-1) # Binomial random walk at x_0=0
rho <- format(cor(x[-1], x[-n]), digits=3) # Lag-1 correlation
par(mfrow=c(1,2))
plot(x, type="l", col="#e0e0e0", main="Sample Path")
points(x, pch=16, cex=0.75, col=hsv(1:n/n, .8, .8, .2))
plot(x[-n], x[-1], asp=1, pch=16, col=hsv(1:n/n, .8, .8, .2),
main="Lag-1 Scatterplot",
xlab="Current value", ylab="Next value")
mtext(bquote(rho == .(rho))) | 2020-12-02T19:18:34 | {
"domain": "stackexchange.com",
"url": "https://stats.stackexchange.com/questions/181167/what-is-the-autocorrelation-for-a-random-walk",
"openwebmath_score": 0.846843421459198,
"openwebmath_perplexity": 379.3009861494492,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9783846691281407,
"lm_q2_score": 0.8633916047011595,
"lm_q1q2_score": 0.8447291094935584
} |
https://math.stackexchange.com/questions/118536/prove-the-map-has-a-fixed-point | # Prove the map has a fixed point
Assume $K$ is a compact metric space with metric $\rho$ and $A$ is a map from $K$ to $K$ such that $\rho (Ax,Ay) < \rho(x,y)$ for $x\neq y$. Prove A have a unique fixed point in $K$.
The uniqueness is easy. My problem is to show that a fixed point exists. $K$ is compact, so every sequence has a convergent subsequence. Construct a sequence $\{x_n\}$ by $x_{n+1}=Ax_{n}$; then $\{x_n\}$ has a convergent subsequence $\{ x_{n_k}\}$, but how can I show there is a fixed point using $\rho (Ax,Ay) < \rho(x,y)$?
• (1) I think you need to assume $K$ is complete. (2) You have a convergent subsequence; the only thing you can do now is examine the behavior of its limit ...
– Neal
Mar 10, 2012 at 13:09
• @Neal: A metric space is compact iff it is complete and totally bounded, so completeness comes for free with compactness. Mar 10, 2012 at 13:17
• Oh, I totally missed "compact" in the question. My bad.
– Neal
Mar 11, 2012 at 0:24
Define $f(x):=\rho(x,A(x))$; it's a continuous map. (Note $$\rho(x,Ax)\le\rho(x,y)+\rho(y,Ay)+\rho(Ay,Ax)\quad\forall x, y\in K$$ or $$\rho(x,Ax)-\rho(y,Ay)\le\rho(x,y)+\rho(Ax,Ay).$$ Reversing the roles of $x,y$ to get $$\left|\rho(x,Ax)-\rho(y,Ay)\right|\le\rho(x,y)+\rho(Ax,Ay)<2\delta \quad \text{ whenever }\rho(x,y)<\delta.$$ That is, $f$ is actually uniformly continuous.)
Let $\alpha:=\inf_{x\in K}f(x)$, then we can find $x_0\in K$ such that $\alpha=f(x_0)$, since $K$ is compact. If $\alpha>0$, then $x_0\neq Ax_0$ and $\rho(A(Ax_0),Ax_0)<\rho(Ax_0,x_0)=\alpha$, which is a contradiction. So $\alpha=0$ and $x_0$ is a fixed point. The assumption on $A$ makes it unique.
Note that completeness wouldn't be enough in this case, for example consider $\mathbb R$ with the usual metric, and $A(x):=\sqrt{x^2+1}$. It's the major difference between $\rho(Ax,Ay)<\rho(x,y)$ for $x\neq y$ and the existence of $0<c<1$ such that for all $x,y,$: $\rho(Ax,Ay)\leq c\rho(x,y)$.
• Nice proof!Thank you! :)) Mar 10, 2012 at 13:53
• How do we show that f(x):=ρ(x,A(x)) is indeed continuous? Apr 2, 2012 at 9:22
• @Jacques: $\delta: x \mapsto (x,x)$ is continuous, $A$ is continuous, so $g:(x,y) \mapsto (x,A(y))$ is continuous, and $d:(x,y) \mapsto d(x,y)$ is continuous, so $f(x) = (d\circ g \circ \delta)(x)$ is a composition of continuous maps, hence it is continuous. Alternatively, use the triangle inequality and the reverse triangle inequality a few times.
– t.b.
Apr 2, 2012 at 9:52
• Can someone clarify about uniqueness? Nov 9, 2015 at 2:57
• @Niebla In general if we have $\rho(A(x), A(y))<\rho(x,y)$ - note that the inequality is strict - $A$ can only have one fixed point. Let $a, b$ be two fixed points, then $\rho(A(a), B(b))<\rho(a, b)$, which is a contradiction since both sides of this strict inequality are equal. Dec 27, 2015 at 16:53
I don't have enough reputation to post a comment to reply to @андрэ 's question regarding where in the proof it is used that $$f$$ is a continuous function, so I'll post my answer here:
Since we are told that $$K$$ is a compact set and the map $$x \mapsto f(x) = \rho(x, A(x))$$ is continuous from $$K$$ into $$\mathbb{R}$$, its image $$f(K)$$ is also a compact set. Compact subsets of $$\mathbb{R}$$ are closed and bounded, which gives not only the existence of $$\inf_{x\in K} f(x)$$ but also that this infimum is attained at some point of $$K$$.
If it were possible to show directly that $$f(K)$$ is a closed and bounded subset of $$\mathbb{R}$$, it would again be compact and the infimum would still be attained. However, I am not aware of how you would do this in this case without relying on the continuity of $$f$$.
you don't need to prove completeness or define any sequence. Define a nonnegative real function $$h(x) = \rho(x,f(x) )$$ This is continuous, so its minimum is achieved at some point $$x_0.$$ If $$h(x_0) >0,$$ we see that $$h(f(x_0) ) = \rho( f(x_0), f(f(x_0 )) < \rho( x_0, f(x_0)) = h(x_0)$$ Put together, $$h(f(x_0) ) < h(x_0)$$ Thus the assumption of a nonzero minimum of $$h$$ leads to a contradiction. Therefore the minimum is actually $$0,$$ so $$h(x_0) = 0,$$ so $$f(x_0) = x_0$$
• Please do not post the same answer to multiple questions. If you believe that an answer is appropriate for more than one question, please post only one answer, and nominate the other question for closure as a duplicate. Feb 17 at 23:22 | 2022-06-29T01:47:30 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/118536/prove-the-map-has-a-fixed-point",
"openwebmath_score": 0.9580180048942566,
"openwebmath_perplexity": 104.50866656229547,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9783846659768267,
"lm_q2_score": 0.8633916064586998,
"lm_q1q2_score": 0.8447291084922909
} |
https://math.stackexchange.com/questions/611761/alternating-sum-of-binomial-coefficients-given-n-in-mathbb-n-prove-sumn | # Alternating sum of binomial coefficients: given $n \in \mathbb N$, prove $\sum^n_{k=0}(-1)^k {n \choose k} = 0$
Let $$n$$ be a positive integer. Prove that \begin{align} \sum_{k=0}^n \left(-1\right)^k \binom{n}{k} = 0 . \end{align}
I tried to solve it using induction, but that got me nowhere. I think the easiest way to prove it is to think of a finite set of $$n$$ elements, but I can't find the solution.
• Note that both proofs below fail for $n=0$. – Carsten S Dec 18 '13 at 15:22
• @CarstenSchultz Because $$\sum_{k=0}^0 (-1)^k\binom{0}{k} = 1,$$ the result doesn't hold for $n = 0$. – Daniel Fischer Dec 18 '13 at 15:48
• @DanielFischer, I am aware of that ;) But I do not know, if $0$ is in Franck's $\mathbb N$. – Carsten S Dec 18 '13 at 16:58
• doesn't $\mathbb N$ start at 1? – FranckN Dec 18 '13 at 17:00
• @FranckN There are different conventions. Some let $\mathbb{N}$ start with $0$, some with $1$. Given the problem statement, it is overwhelmingly likely that the problem author belongs to the latter group. – Daniel Fischer Dec 18 '13 at 17:02
Using Binomial Theorem for positive integer exponent $n$
$$(a+b)^n=\sum_{0\le r\le n}\binom nr a^{n-r}b^r$$
Set $\displaystyle a=1,b=-1$ in the above identity
I think the easiest way to prove it is to think of a finite set of $$n$$ elements,
If you think of it that way, it's the number of even sized ($$(-1)^k = 1$$) subsets of $$\{1,\,\dotsc,\,n\}$$ minus the number of odd-sized ($$(-1)^k = -1$$) subsets.
The map
$$\varphi \colon S \mapsto \begin{cases} S\cup \{1\} &, 1 \notin S\\ S \setminus \{1\} &, 1 \in S \end{cases}$$
that "flips $$1$$", i.e. adds $$1$$ to $$S$$ if $$1\notin S$$ and removes it if $$1\in S$$, is a bijection between the set of even-sized and the set of odd-sized subsets. Thus $$\{1,\, \dotsc,\,n\}$$ has as many even-sized subsets as odd-sized, i.e.
$$\sum_{k=0}^n (-1)^k\binom{n}{k} = 0$$
for all $$n \geqslant 1$$.
• but what happen if n its an odd number? I think it doesn't apply – FranckN Dec 18 '13 at 14:55
• Take a subset $S$ that doesn't contain $1$. If $S$ has an odd number of elements, then $S\cup \{1\}$ has an even number of elements, and vice versa. – Daniel Fischer Dec 18 '13 at 14:58
Please allow me to give a less direct proof. Let $p$ be the product of $n$ different primes $q_1,\ldots,q_n$.
We know $$\sum_{d \mid p}\mu(d)=0,$$ where $\mu$ is the Möbius function.
Each divisor $d$ of $p$ is the product of primes from the set $\{q_1,\ldots,q_n\}$, and will satisfy $\mu(d)=1$ or $\mu(d)=-1$, depending on the parity of the number of primes dividing $d$.
It follows that there as many ways to choose an odd number of primes as ways to choose an even number of primes.
Equivalently, $$\sum_{0\leq 2k \leq n}\binom{n}{2k}=\sum_{0\leq 2k+1 \leq n}\binom{n}{2k+1},$$ it follows that $$\sum_{k=0}^n\binom{n}{k}(-1)^k=0.$$
As this question just got revived, I thought I'd add another bijective proof. Namely, we are trying to show that the number of subsets of $\{1,2,\dots,n\}$ with an odd number of elements is equal to the number of subsets with an even number of elements. To that end, given any subset $S$, just take the symmetric difference with $\{1\}$, i.e. $S\to S\triangle \{1\}$.
Here is a different tack: If you drop the term for $k=0$, this sum is the negation of the Euler characteristic of the $(n-1)$-dimensional simplex, whose faces of dimension $k-1$ correspond to subsets of $\{1,\dots,n\}$ with cardinality $k$. The simplex is a contractible space, so its Euler characteristic is the same as that of a point, namely $1$. Putting back in the term for $k=0$, we see that the original sum is $1-1=0$.
This is way more work than necessary (appealing to homotopical invariance of the Euler characteristic), but it's fun, and it's suggestive of the idea that alternating sums can sometimes be dealt with topologically.
Even though this question is pretty old, and the OP probably will not see the answer, I think it's worthwhile to provide a proof by induction, which the OP (and maybe others) had problems with and surprisingly no one has posted yet.
Since the statement is true for $n=1$, suppose it holds for $n=m$. Then the statement for $n=m+1$ follows from $$\require\cancel \sum_{k=0}^{m+1} (-1)^k {m+1 \choose k} =\sum_{k=0}^{m} (-1)^k {m \choose k} \\ \cancel{{m \choose 0}-{m+1 \choose 0}}+\sum_{k=1}^{m} (-1)^k {m \choose k}-(-1)^k{m+1 \choose k}-(-1)^{m+1}{m+1 \choose m+1}=0 \\ \sum_{k=1}^m (-1)^{k+1}\left({m+1 \choose k}-{m \choose k}\right)+(-1)^{m+2}{m+1 \choose m+1}=0,$$ which, recalling the property $\displaystyle {a \choose b}+{a \choose b+1}={a+1 \choose b+1},$ is equivalent to $$\sum_{k=1}^m (-1)^{k+1} {m \choose k-1}+(-1)^{m+2}{m+1 \choose m+1}=0 \\ \sum_{k=0}^{m-1} (-1)^{k} {m \choose k}+(-1)^{m}{m \choose m}=0 \\ \sum_{k=0}^{m} (-1)^k {m \choose k}=0,$$ and this is is precisely our inductive hypothesis.
Alternatively, prove a more general identity below: $$\sum_{r=0}^k\,(-1)^r\binom{n}{r}=(-1)^k\binom{n-1}{k}\,,$$ for all integers $n,k\geq 0$. When $k=n$, we have $$\sum_{r=0}^{n}\,(-1)^r\binom{n}{r}=(-1)^n\binom{n-1}{n}=0\,.$$ | 2020-01-18T20:44:39 | {
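A numerical spot-check of both identities, as a minimal R sketch:
partial <- function(n, k) sum((-1)^(0:k) * choose(n, 0:k))     # sum_{r=0}^k (-1)^r C(n,r)
all(sapply(1:12, function(n) partial(n, n) == 0))              # TRUE: the full alternating sum vanishes
partial(7, 3) == (-1)^3 * choose(6, 3)                         # TRUE: matches (-1)^k C(n-1,k)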
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/611761/alternating-sum-of-binomial-coefficients-given-n-in-mathbb-n-prove-sumn",
"openwebmath_score": 0.9542222619056702,
"openwebmath_perplexity": 155.7491201492834,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9783846678676151,
"lm_q2_score": 0.863391602943619,
"lm_q1q2_score": 0.8447291066856806
} |
http://trimuihoinach.com/how-to-hiuj/e3c0b1-%2Aa-function-f%3Aa%E2%86%92b-is-invertible-if-f-is%3A%2A | # *a function f:a→b is invertible if f is:*
Definition. A function f : A → B is called invertible if there exists a function g : B → A such that g∘f = i_A and f∘g = i_B, that is, g(f(x)) = x for every x ∈ A and f(g(y)) = y for every y ∈ B. The function g is called the inverse of f and is denoted by f^{-1}; equivalently, f(x) = y ⇔ f^{-1}(y) = x. When an inverse exists it is unique, for if g and h are both inverses of f then g = g∘(f∘h) = (g∘f)∘h = h.
A function f : A → B is invertible if and only if f is bijective, i.e. both one-one and onto.
Invertible ⟹ bijective: suppose g : B → A satisfies g∘f = i_A and f∘g = i_B. If x_1, x_2 ∈ A and f(x_1) = f(x_2), then x_1 = g(f(x_1)) = g(f(x_2)) = x_2, so f is injective. If now y ∈ B, put x = g(y); then f(x) = f(g(y)) = y, so f is surjective.
Bijective ⟹ invertible: for each b ∈ B there is at least one a ∈ A with f(a) = b (because f is onto) and at most one such a (because f is one-one), so setting g(b) := a defines a function g : B → A, and by construction g(f(a)) = a and f(g(b)) = b. So g is indeed an inverse of f.
Why onto is needed: if B = {p, q, r} and the range of f is only {p, q}, then the element r has no pre-image in A, so no map g : B → A can satisfy f(g(r)) = r, and such an f is not invertible. A typical worked example is the function f : A → B defined by f(x) = (x − 2)/(x − 3): one first checks that f is one-one and onto (for suitable A and B) and then solves y = (x − 2)/(x − 3) for x to write down f^{-1}.
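A toy illustration on finite sets, as a minimal R sketch (the particular map below is just an arbitrary permutation):
A <- 1:4
f <- c(3, 1, 4, 2)                  # f[i] is the image of i; a bijection of A onto itself
g <- order(f)                       # the inverse map: g[b] is the unique a with f[a] = b
all(g[f] == A) && all(f[g] == A)    # TRUE: g o f and f o g are both the identity on A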
"domain": "trimuihoinach.com",
"url": "http://trimuihoinach.com/how-to-hiuj/e3c0b1-%2Aa-function-f%3Aa%E2%86%92b-is-invertible-if-f-is%3A%2A",
"openwebmath_score": 0.8342097401618958,
"openwebmath_perplexity": 805.1243433361718,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9572777987970315,
"lm_q2_score": 0.8824278788223264,
"lm_q1q2_score": 0.8447286174361702
} |
https://math.stackexchange.com/questions/126613/sum-with-binomial-coefficients-sum-k-0n2n-choose-2k | Sum with binomial coefficients: $\sum_{k=0}^{n}{2n\choose 2k}$
I'm reviewing material for a test and I came across an example that I cannot do. How can I calculate this sum: $\displaystyle\sum_{k=0}^{n}{2n\choose 2k}$?
$$(1+1)^{2n}= \displaystyle\sum_{k=0}^{2n}{2n\choose k}$$ $$(1-1)^{2n}= \displaystyle\sum_{k=0}^{2n}(-1)^k{2n\choose k}$$
OR Second solution:
You can use the formula
$${2n\choose 2k}={2n-1\choose 2k}+{2n-1\choose 2k-1}$$ to prove that
$$\displaystyle\sum_{k=0}^{n}{2n\choose 2k}=\displaystyle\sum_{k=0}^{2n-1}{2n-1\choose k}$$
• I like first solution :-) – xan Mar 31 '12 at 17:50
$\binom{2n}{2k}$ is the number of subsets of $\{1,\dots,2n\}$ of size $2k$. When you sum these binomial coefficients over all $k$ from $0$ through $n$, you’re counting the number of subsets of $\{1,\dots,2n\}$ whose cardinalities are even. For $n>0$ exactly half of the subsets have even cardinalities, so the sum is $\frac12(2^{2n})=2^{2n-1}$.
Clearly $\{1\}$ has one even subset, $\varnothing$, and one odd subset, $\{1\}$. Suppose that $\{1,\dots,n\}$ has $2^{n-1}$ even and $2^{n-1}$ odd subsets. Now look at the $2^{n+1}$ subsets of $\{1,\dots,n+1\}$. Half of them are $2^n$ subsets of $\{1,\dots,n\}$, of which $2^{n-1}$ are even and $2^{n-1}$ are odd. The other $2^n$ subsets all contain $n+1$. The even ones are obtained by adding $n+1$ to an odd subset of $\{1,\dots,n\}$, so there are $2^{n-1}$ of them. The odd ones are obtained by adding $n+1$ to an even subset of $\{1,\dots,n\}$, so there are $2^{n-1}$ of them as well. Thus, $\{1,\dots,n+1\}$ has $2^{n-1}+2^{n-1}=2^n$ even subsets and the same number of odd subsets.
This does fail for $n=0$, since the empty set has only one subset, itself, and therefore has one even and no odd subsets. In that case $$\sum_{k=0}^n\binom{2n}{2k}=\binom00=1\;.$$
• This is beautiful what the combinatorial interpretation can do.. without counting :-) – xan Mar 31 '12 at 17:49
• The counting is probably easier if you think instead of binary sequences of length $2n$ (these are exactly the subsets). Erasing the last digit of a binary string is a bijection between the binary strings of length 2n with even number of 1's and all the binary strings of length 2n-1... – N. S. Mar 31 '12 at 18:06
• @N.S.: Matter of taste. It’s just about six of one and half a dozen of the other, but for elementary presentations I prefer my version. – Brian M. Scott Mar 31 '12 at 18:09
from binomial theorem we have
$$\sum_{i=0}^{2m}\binom{2m}{i}x^{i}=(1+x)^{2m}$$
for $x=1$ and $x=-1$ we get
$$\sum_{i=0}^{2m}\binom{2m}{i}=\sum_{k=0}^{2m}\binom{2m}{2k}+\sum_{k=1}^{2m}\binom{2m}{2k-1}=2^{2m}$$
$$\sum_{i=0}^{2m}\binom{2m}{i}(-1)^{i}=\sum_{k=0}^{2m}\binom{2m}{2k}-\sum_{k=1}^{2m}\binom{2m}{2k-1}=0$$ suming these equations we get $$2\sum_{k=0}^{2m}\binom{2m}{2k}=2^{2m}$$ finally
$$\sum_{k=0}^{2m}\binom{2m}{2k}=2^{2m-1}$$
Using line integrals: taking $r>1$, \eqalign{2\pi i\sum_{k=0}^n\binom{2n}{2k} &= \sum_{k=0}^n\int_{|z|=r}\frac{(z + 1)^{2n}}{z^{2k+1}}\,dz = \sum_{k=0}^\infty\int_{|z|=r}\frac{(z + 1)^{2n}}{z^{2k+1}}\,dz = \int_{|z|=r}\frac{(z + 1)^{2n}}z\sum_{k=0}^{\infty}\frac1{z^{2k}}\,dz\cr &= \int_{|z|=r}\frac{(z + 1)^{2n}}z\,\frac1{1 - 1/z^2}\,dz = \int_{|z|=r}\frac{z(z + 1)^{2n-1}}{z-1}\,dz = 2\pi i\,2^{2n-1}. }
With $\ds{n \in \mathbb{N}_{\ \geq\ 0}}$:
\begin{align} \sum_{k = 0}^{n}{2n \choose 2k} & = \sum_{k = 0}^{2n}{2n \choose k}{1 + \pars{-1}^{k} \over 2} = {1 \over 2}\sum_{k = 0}^{2n}{2n \choose k}1^{k} + {1 \over 2}\sum_{k = 0}^{2n}{2n \choose k}\pars{-1}^{k} \\[5mm] & = {1 \over 2}\pars{1 + 1}^{2n} + {1 \over 2}\bracks{1 + \pars{-1}}^{2n} = \bbx{2^{2n - 1} + {1 \over 2}\,\delta_{n0}} \end{align} | 2019-07-21T15:20:30 | {
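A minimal R sketch spot-checks the identity $\sum_{k=0}^{n}\binom{2n}{2k}=2^{2n-1}$ for the first few $n\ge 1$:
n <- 1:10
even_sums <- sapply(n, function(m) sum(choose(2 * m, seq(0, 2 * m, by = 2))))
all(even_sums == 2^(2 * n - 1))   # TRUE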
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/126613/sum-with-binomial-coefficients-sum-k-0n2n-choose-2k",
"openwebmath_score": 0.9839687943458557,
"openwebmath_perplexity": 338.1926603292671,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9763105259435195,
"lm_q2_score": 0.8652240791017536,
"lm_q1q2_score": 0.8447273757268304
} |
https://www.physicsforums.com/threads/tips-for-this-geometry-problem-please.951279/ | Tips for this geometry problem please
Homework Statement
Find x. L1 and L2 are parallel
Choices:
a)100
b)120
c)140
d)150
e)135
Homework Equations
From the image, the angles of the polygon in blue should satisfy:
6θ + 90 + 4θ + 2θ + 90 + x = 540
12θ + x = 360
x = 360 - 12θ
The Attempt at a Solution
I couldn't figure out how to advance from there so I resorted to a dirty trick using the values from the choices.
Noticing that every value of each choice is a multiply of 5 then the value of θ in the equation must be:
θ = 5n
So it follows that:
x = 360-12(5n)
Then by trial and error:
x = 180 for n = 3
x = 120 for n = 4
x = 60 for n = 5
From the image x should be obtuse so then I conclude that x must be equal to 120
and the answer would be the letter 'b'. I know I got it right cause my book says that's the correct answer. However since this is a section of my book were no development of the solution is presented (just the final answer) I'm asking for some tips on other methods to solve this. Perhaps I'm overlooking something...
Attachments: two images (not reproduced here)
jambaugh
Gold Member
I found it most efficacious to solve for $\theta$ first. Note that from the lower left moving rightward on L2 you can follow the turns until you're moving leftward along L1 and you will have turned $180^\circ$ by a sequence of angles $\theta, 2\theta, 3\theta, (7-4)\theta$. So with that you can solve for $\theta$.
I also note that the drawing is very much not to scale w.r.t. angles.
I'm interested to see another way of solving for X that doesn't involve using the polygon like I did.
And yes, the drawing is not to scale at all.
If the lower θ and upper 4θ were both zero, then X would be 180°, so the answer is whatever the upper one is less the lower one, subtracted from 180°. So without solving for θ (which jambaugh already did) it looks to me like X = 180° - (4θ - θ), or 180° - 3θ.
Does that help?
yeah tbh I was kinda lazy to think about it, just following jambaugh's way I can find the following is true:
(θ) + (90) + (180 - x) + (90 - 4θ) = 180
-3θ + 360 - x = 180
180 - 3θ = x
By the same reasoning we already know:
θ + 2θ + 3θ + 3θ = 180
9θ = 180
θ = 20
so:
180 - 3(20) = x
x = 120 | 2020-04-09T05:12:19 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/tips-for-this-geometry-problem-please.951279/",
"openwebmath_score": 0.7044897675514221,
"openwebmath_perplexity": 869.1222211716821,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9626731158685837,
"lm_q2_score": 0.8774767922879692,
"lm_q1q2_score": 0.8447233177342293
} |
https://math.stackexchange.com/questions/2642674/consider-the-following-sequence-1234567891011121314-9999899999100000-ho | # Consider the following sequence $1234567891011121314 . . . 9999899999100000$, how many times the block “2016” appears?
Consider the following sequence $1234567891011121314 . . . 9999899999100000$. How many times does the block "$2016$" appear?
My try: Easily we can find $...$$2015 2016 2017$$...$ as our first block; after that we can see this kind of block $\;$ $...1201612017...$ (this repeats 9 times, changing the first digit of $12016$ through the numbers $1$ to $9$).
Ex: $12016,22016 ...$ = $9$ blocks
And the same thing happens with $20161,20162...$ changing the last digit = $9$ blocks
So $1+9+9=19$ blocks found by me with brute force.
Are there more blocks or just these and i'm missing something?
In the case I missed some blocks (very likely), is there some algebraic or elegant way to find them?
• Clearly infinitely many times. – Piquito Feb 9 '18 at 1:52
• @Piquito: I believe we stop at $100000$ so this is a finite string. – Ross Millikan Feb 9 '18 at 1:54
• @Ross Millikan: I did not read well and I thought that the number did not stop. By the way in this case the number is transcendent (a Mahler's result). – Piquito Feb 10 '18 at 20:25
This is more of a rewrite of Brian Tung's answer, since I haven't generated any computer code to find these, but I figure this is ever so slightly more elegant:
The two things to simplify are:
• The numbers in order never appear with a leading "0", and
• The numbers rotate where 2016 can start for each length of digits.
In every case, we just look at two forms for the number of digits adjacent and see where 2016 appears, and rotate the possible wildcard locations. Whatever digit leads in this rotation, it can't be a 0, which eliminates a few edges. 2017 also doesn't appear between digits, like in "9991000", because it doesn't start with a 9.
First off, "2016" contains 4 digits. If we are looking at the sequence before we hit "1000", we would want 2016 to appear in a subsequence like "abcabd". However, every digit in 2016 is distinct, so it could only appear at the end as "cabd". This can't work, since the "a" digit is a first digit, so a = 0 would mean we don't have 3 digits. A similar sort of argument will work with 3, 2, and 1 digit numbers.
For the 6 digit numbers, we can see 2016 isn't a part of 100000. This leaves the case of 4 and 5 digit repetitions.
For 4 digits, we have the form "abcdabc". 2016 appears as "abcd", "bcda", and "cdab". "dabc" is out, because that would make a = 0 which is a leading digit. We have three instances where 2016 appears in the 4 digit numbers then. Note I am not including the last character of the second number, since that just double counts the very first "abcd". That pattern continues throughout.
For 5 digits, we have a free digit, which we could call X, because 2016 is one digit short. We need to fill "abcdeabcd". Let's put the X after 2016, which I believe is doable without loss of generality. Then we compare the letter strings to "2016X"
Then "abcde" = 2016X gives 10 more, "bcdea" gives 9 more since X cannot be 0 here, "cdeab" gives 10 more, and "deabc" another 10. "eabcd" is illegal because a always is 0, so this is the total count of 39 for 5 digits.
Adding up both cases, 39 + 3 = 42 appearances.
This type of argument would also generalize to different numbers, especially ones without zeroes in the decimal expansion. We can see with some algebra that for a k digit number, not starting with 9 and without zeroes in its decimal representation, in the first 10^n numbers we expect about:
$$\sum_{d = k}^n d * \lfloor{10^{d - k}}\rfloor$$
many appearances, where d is the current number of digits we are considering in the argument here.
So we would expect 50 + 4 = 54 for a number like 1234, but because of the zeroes for numbers like 2016, things get a little complicated. It seems at a glance that you just subtract 11 and 1 from the cases for 2016, so it might be you subtract multiples of these when you add zeroes in. Not sure. I'd also have to think a little bit harder about the cases that involved the edges of digits: numbers like "9100" that appear at "991000" and every higher power.
But hopefully that gives a little insight. Listing the occurrences in order seems to get a result with less understanding than even some half-baked arguments here.
The sequence $2016$ appears $42$ times in all (verified by computer search, I'm afraid):
• $1620, 1621$
• $2016$
• $6201, 6202$
• $16X20, 16X21$ ($X = 0$ to $9$)
• $2016X$ ($X = 0$ to $9$)
• $X2016$ ($X = 1$ to $9$)
• $6X201, 6X202$ ($X = 0$ to $9$)
I can't think of any really elegant way of doing this.
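For anyone who wants to reproduce the computer search, here is a minimal brute-force check (my addition, not part of the original answer); it just concatenates the digits of $1$ through $100000$ and slides a window of length four:

```python
# Count occurrences of the block "2016" in the digit string 123456...100000.
s = "".join(str(k) for k in range(1, 100001))
count = sum(1 for i in range(len(s) - 3) if s[i:i + 4] == "2016")
print(count)  # prints 42, matching the enumeration above
```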
Yes, you are missing some. For example, there are $6201\ 6202$ and $66201\ 66202$
• Is this a comment, a hint to the complete answer or the complete answer itself, as the OP has requested an elegant way to solve the problem? – Gaurang Tandon Feb 9 '18 at 1:59
• Only an elegant way if its possible, because if i had given more time to the problem i might have found the 42 numbers. – Rodrigo Pizarro Feb 9 '18 at 2:03
We split cases:
• Case 1: 2016 in one "number".
• Case 2: 201 in one number, 6 in the next.
• Case 3: 20 in one number, 16 in the next
(2 in one number and 016 in the next cannot occur)
## Case 1:
We split cases again:
• Sub-case 1: $2016$, there is $1$ possibility.
• Sub-case 2: $?2016$, there are $9$ possibilities.
• Sub-case 3: $2016?$, there are $10$ possibilities.
So in this case there are $20$ possibilities.
## Case 2:
The "first" number and the "next" one both start with $6$, and the next one ends with $202$. So we split cases again:
• Sub-case 1: $6201$, there is $1$ possibility.
• Sub-case 2: $6?201$, there are $10$ possibilities.
Therefore, there are $11$ possibilities in this case.
## Case 3:
Both numbers start with $16$, the "next" number ends with $21$. We split cases of the "first" number:
• Sub-case 1: $1620$, there is $1$ possibility.
• Sub-case 2: $16?20$, there are $10$ possibilities.
Therefore, there are $11$ possibilities in this case.
The number of blocks of $2016$ is therefore the sum of the numbers of possibilities: $$20+11+11=42\text.$$
• It stops at 100000 – Rodrigo Pizarro Feb 9 '18 at 2:00
• You've counted some non-working cases, I believe. For instance Case 3/Sub-case 4 includes 1620X, which doesn't work for any X. – Brian Tung Feb 9 '18 at 2:02
• I think if you correct Case 1 so that you restrict to numbers up to 100,000, and then eliminate Case 2/Sub-case 3 and Case 3/Sub-case 4, you end up with the right answer. – Brian Tung Feb 9 '18 at 2:04
• @RodrigoPizarro fixed – user_194421 Feb 9 '18 at 2:05
• @BrianTung fixed – user_194421 Feb 9 '18 at 2:05 | 2019-07-17T17:15:40 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2642674/consider-the-following-sequence-1234567891011121314-9999899999100000-ho",
"openwebmath_score": 0.7518031597137451,
"openwebmath_perplexity": 730.4795505886345,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9626731105140616,
"lm_q2_score": 0.8774767954920548,
"lm_q1q2_score": 0.8447233161202475
} |
http://inside-science.com/kasoor-prateek-ppvcw/moving-average-smoothing-7812f3 | What are Moving Average or Smoothing Techniques? Inherent in the collection of data taken over time is some form of random variation. Smoothing removes that random variation and shows the trends and cyclic components, making it easier to see overall trends, especially in a chart. The most straightforward method is called a simple moving average: choose a number of nearby points and average them to estimate the trend. Choosing a window width is like choosing an amount of smoothing; the larger the interval used to calculate a moving average, the more smoothing occurs, since more data points are included in each calculated average. When calculating a simple moving average, it is beneficial to use an odd number of points so that the calculation is symmetric. For instance, if the data are traffic counts from a single intersection over three consecutive days, smoothing all the data together (say with a 5-hour span, by linear index) indicates the overall cycle of traffic flow through the intersection.
The "simple" average or mean of all past observations is only a useful estimate for forecasting when there are no trends; if there are trends, use estimates that take the trend into account. An average is computed by adding all the values and dividing the sum by the number of values. For example, the average of the values 3, 4, 5 is 4. Another way of computing the average is by adding each value divided by the number of values, or $$\bar{x} = \frac{1}{n} \sum_{i=1}^{n}{x_i} = \left(\frac{1}{n}\right) x_1 + \left(\frac{1}{n}\right) x_2 + \cdots + \left(\frac{1}{n}\right) x_n.$$ The multipliers $\left(\frac{1}{n}\right)$ are the weights and, of course, they sum to 1.
In a Simple Moving Average the price data have an equal weight in the computation of the average, and the oldest price data are removed from the moving average as a new price is added to the computation. A moving average filter is commonly used with time series data to smooth out short-term fluctuations and highlight longer-term trends or cycles; it is also called a moving mean (MM) or rolling mean and is a type of finite impulse response filter. Moving averages with different time frames can provide a variety of information, and variations include simple, cumulative, and weighted forms.
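To make the equal-weight version concrete, here is a minimal sketch (my addition, not from the original page) of a simple moving average filter; the window length and the sample data below are made up for illustration:

```python
import numpy as np

def simple_moving_average(x, window=5):
    # Every point in the window gets the same weight 1/window, as in the
    # formula above; "valid" mode keeps only fully covered positions.
    weights = np.ones(window) / window
    return np.convolve(x, weights, mode="valid")

prices = np.array([3, 4, 5, 7, 6, 8, 9, 11, 10, 12], dtype=float)
print(simple_moving_average(prices, window=3))
```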
Moving average smoothing A moving average of order m m can be written as ^T t = 1 m k ∑ j=−kyt+j, (6.1) (6.1) T ^ t = 1 m ∑ j = − k k y t + j, where m = 2k +1 m = 2 k + 1. The larger the number of periods in the simple moving average forecasting method, the greater the method's responsiveness to changes in demand. Smoothing data removes random variation and shows trends and cyclic components. Sequence the jobs in priority order 1, 2, 3, 4. Learn how to use and interpret moving averages in technical analysis. A moving average is a technical analysis indicator that helps smooth out price action by filtering out the “noise” from random price fluctuations. Then the sub This makes it easier to see overall trends, especially in a chart. Education General The "SSE" is the sum of the squared errors. A moving average is often called a "smoothed" version of the original series because short-term averaging has the effect of smoothing out the bumps in the original series. The next table gives the income before taxes of a PC manufacturer Past performance is not necessarily indicative of future performance. \left ( \frac{1} {n} \right ) x_1 + \left ( \frac{1} {n} \right ) On the Data tab, in the Analysis group, click Data Analysis. It can be shown mathematically that the estimator that minimizes the MSE for a set Developed in the 1920s, the moving average is the oldest process for smoothing data and continues to be a useful tool today. A moving average filter is commonly used with time series data to smooth out short-term fluctuations and highlight longer-term trends or cycles. $$\left ( \frac{1} {n} \right )$$ For example, to calculate a 5 point moving average, the formula is: where t is the time step that you are smoothing at and 5 is the number of points being used to calculate the average (which moving forward will be denote… The Smoothed Moving Average (SMMA) is similar to the Simple Moving Average (SMA), in that it aims to reduce noise rather than reduce lag.The indicator takes all prices into account and uses a long lookback period. Moving average smoothing. The Exponential smoothing is a rule of thumb technique for smoothing time series data using the exponential window function.Whereas in the simple moving average the past observations are weighted equally, exponential functions are used to assign exponentially decreasing weights over time. values and dividing the sum by the number of values. Due to various factors (such as risk tolerance, margin requirements, trading objectives, short term vs. long term strategies, technical vs. fundamental market analysis, and other factors) such trading may result in the initiation or liquidation of positions that are different from or contrary to the opinions and recommendations contained therein. By adjusting the degree of smoothing (the width of the moving He/she takes a sample of Trading arrangement and commission setup different estimates that take the trend into.. 3, 4, 5 is 4 is called a moving average is type. Take the trend into account Trading arrangement and commission setup that observations close time. The mean, it is a popular indicator used by forex traders to identify trends of finite response... Subsets, you ’ re able to better understand the trend long-term that minimizes MSE. Odd number of points so that the estimator that minimizes the MSE for a set of random variation heuristic! Error above, squared Trading volumes likely to have similar values time series dataThis video supports the textbook Practical series! 
Is called a simple moving average U.S. dollar Devaluation warehouse wants to know much... Question 3: Sequence the jobs in priority order 1, 2, 3, 4, 5 is.. Is 4 by getting the average weighs '' all past observations is only a useful for..., smoothdata computes a default window size based on your Trading arrangement and commission setup of flow! It easier to see overall trends, especially in a chart a Gantt chart out short-term and! By using a moving average filter with a 5-hour span to smooth out short-term fluctuations and highlight longer-term trends cycles... When there are no trends when calculating a simple a n d type. Is a popular indicator used by forex traders to identify trends accessed at www.DanielsTrading.com at bottom. The squared errors are no trends the estimator that minimizes the MSE a! Forecasting when there are no trends ( by linear index ) in priority order 1,,... Smoothing all the data by reducing the impact of random data is the error,. Of nearby points and average them to estimate the trend into account the data together would then the! So that the estimator that minimizes the MSE for a set of data! The weighted moving average to visualize time series forecasting of computing the average of subsets, you ’ able! And commission setup circumstances and financial resources computing the average of the.. Specific intervals smooths out the data by reducing the impact of random data is the mean of all observations. Are trends, especially in a simple moving average accounts or for the accounts of others Contested. Popular indicator used by forex traders to identify trends smoothdata computes a default window size based on your Trading and... 3: Sequence the jobs in priority order 1, 2, 3, 4 average is another of! Values, or weighted forms notion that observations close in time are likely have... For you in light of your circumstances and financial moving average smoothing, its principals, brokers and may. The simple '' average or mean of all past observations equally and financial resources use a moving filter... Election mean for the accounts of others nearby points and average them to estimate the trend long-term cyclic.! Analysis and forecasting, squared a type of smoothing used in technical analysis of data... Include: simple, and cumulative, or trends, use different estimates that take the trend in... And cyclic components in 1000 dollar units is suitable for you in light of your circumstances financial! Calculations may not include commissions and fees it easier to see overall trends use... Trading does not guarantee or verify any performance claims made by such systems or service Trading is suitable you. Accounts of others observations is only a useful estimate for forecasting when there are,! The textbook Practical time series data to smooth all the data simultaneously ( by index. In technical analysis learn how to use an odd number of values, or include! You ’ re able to better understand the trend into account or for the method. Accounts of others supports the textbook Practical time series forecasting Exposure to U.S. dollar Devaluation due to random and! Highlight longer-term trends or cycles for example, the average is by adding each value divided by the of. Are the advantages of Exponential smoothing over the moving average, it beneficial... Please consult your broker for details based on your Trading arrangement and commission.... Data to smooth all the data simultaneously ( by linear index ) is the error above, squared of past. 
You in light of your circumstances and financial resources by forex traders to trends. Between 1985 and 1994 not guarantee or verify any performance claims made by such or. Values, or weighted forms values 3, 4 SSE '' is error. Due to random variation '' average or mean of all past observations equally out the together. Solicitation for entering into a derivatives transaction of points so that the calculation is symmetric then indicate the overall of! Series forecasting risk Management average them to estimate the trend into account for entering into a derivatives transaction minimizes MSE! Is symmetric together would moving average smoothing indicate the overall cycle of traffic flow through intersection... Sequence the jobs in priority order 1, 2, 3, 4 of random variation and moving average smoothing and... Of canceling the effect due to random variation minus the estimated amount time series forecasting all observations... Average weighs '' all past observations equally third-party Trading system, newsletter or other service... Www.Danielstrading.Com at the bottom of the squared errors ) Explain the aggregate planning strategy to! Include commissions and fees sum of the values 3, 4, 5 is 4 the weighted average... Equal weight in the collection of data taken over moving average smoothing is some form of random.! To use and interpret moving averages in technical analysis not affiliated with nor it. Simultaneously ( by linear index ) the sum of the squared errors indicator. In light of your circumstances and financial resources Limit your Exposure to U.S. dollar Devaluation Election mean the! Average or mean of all past observations equally with a 5-hour span to smooth out short-term fluctuations highlight! At the bottom of the average of the squared errors moving average, it is beneficial to use an number! Like stock prices, returns or Trading volumes the intersection please consult your broker for details on... Such systems or service useful estimate for forecasting when there are no.! Mean ( MM ) or rolling mean and is a simple a n d common type of moving average another... Methods for reducing of canceling the effect due to random variation and shows and! Example, the average of the average of the squared errors SSE '' is the mean of values... '' average or mean of all past observations is only a useful estimate forecasting! Necessarily indicative of future performance what Will a Contested Election mean for the smoothing method is not necessarily indicative future! Of finite impulse response filter take the trend into account minus the estimated amount past... 2 ) Explain the aggregate planning strategy for forecasting when there are,. Newsletter or other similar service aggregate planning strategy read the error '' = true amount spent minus estimated..., returns or Trading volumes similar service data, like stock prices returns. And commission setup nearby points and average them to estimate the trend into account the., 2, 3, 4, 5 is 4 the next gives. The notion that observations close in time are likely to have similar values makes it easier to see moving average smoothing,... Due to random variation video supports the textbook Practical time series analysis and Trading. ( Marks 2 ) Explain the aggregate planning strategy mean and is type! Dollar Devaluation observations is only a useful estimate for forecasting when there are trends, different..., you ’ re able to better understand the trend moving average smoothing above, squared likely to have similar values weighted! 
It endorse any third-party Trading system, newsletter or other similar service jobs shown by. Commonly used with time series analysis and Position Trading, Steps for Energy Trading and risk.... Divided by the number of nearby points and average them to estimate the long-term. Computes a default window size for the smoothing method is called a moving average smoothing to... The data simultaneously ( by linear index ) the sum of the values,. Mathematically that the calculation is symmetric the collection of data taken over is... Trade recommendations and profit/loss calculations may not include commissions and fees of finite impulse filter... See overall trends, use different estimates that take the trend advantages of Exponential smoothing over moving... Traders to identify trends: simple, and cumulative, or weighted forms the Markets. Www.Danielstrading.Com at the bottom of the squared errors size for the accounts of others the straightforward. Average of the squared errors how to use an odd number of nearby points and average to. Of points so that the estimator that minimizes the MSE for a set of random data is the mean trend! Understand the trend to estimate the trend is not specified, smoothdata a... 1000 dollar units method, we choose a number of nearby points and average them estimate... Simultaneously ( by linear index ) Trading is not affiliated with nor does moving average smoothing endorse any Trading... Estimate for forecasting when there are no trends, the price data have an equal weight in the of. Derivatives transaction data by reducing the impact of random fluctuations reducing the impact of random data is the of... Sse '' is the sum of the squared errors variations include: simple and... Consider whether such Trading is suitable for you in light of your circumstances and financial resources data taken over is... The notion that observations close in time series analysis and forecasting values 3, 4 5!
| 2021-09-18T08:20:34 | {
"domain": "inside-science.com",
"url": "http://inside-science.com/kasoor-prateek-ppvcw/moving-average-smoothing-7812f3",
"openwebmath_score": 0.5109207630157471,
"openwebmath_perplexity": 1510.8730050310744,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9664104953173166,
"lm_q2_score": 0.8740772384450967,
"lm_q1q2_score": 0.8447174169513181
} |
https://www.physicsforums.com/threads/does-f-0-0-where-x-in-1-1-and-f-x-f-0.847083/ | # Does f'(0)=0 where x in [-1,1] and f(x)<=f(0)
1. Dec 7, 2015
### HaLAA
1. The problem statement, all variables and given/known data
If the function f:ℝ→ℝ is differentiable and f(x)<=f(0) for all x ∈[-1,1], then f'(0)=0. True or False.
2. Relevant equations
3. The attempt at a solution
I think the statement is right. Since f(x)<= f(0) for all x in [-1,1], this tells us f is an even function or a symmetric function, then apply Rolle's theorem, we can prove the statement.
2. Dec 7, 2015
### andrewkirk
The function is not necessarily even or symmetric. Consider f(x) that is $x^3$ for $x\leq 0$ and $-x^2$ for $x>0$.
3. Dec 7, 2015
### geoffrey159
what is the sign of $\frac{f(x) - f(0)}{x}$ for $x$ positive and negative ?
4. Dec 7, 2015
### Ray Vickson
Your function is differentiable on [-1,1] and has a maximum over [-1,1] at the point x = 0. What does that tell you?
5. Dec 7, 2015
### HaLAA
let h(x)=(f(x)-f(0))/x, when x>0, h(x)<=0; when x<0, h(x)>=0; f(x) is bounded above and the supremum is attained at x = 0, so f'(0)=0
6. Dec 7, 2015
### HaLAA
At x =0, we still have f'(0)=0?
7. Dec 7, 2015
### andrewkirk
Yes, the conclusion still holds. In the OP you reached the correct conclusion based on an incorrect argument. Ray's and Geoffrey's posts indicate the direction that a valid argument would take.
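For reference, here is a sketch of the argument those hints point to (my paraphrase, not a quote from the thread): since $f(x)\le f(0)$ on $[-1,1]$, the difference quotient satisfies $$\frac{f(x)-f(0)}{x}\le 0 \ \text{ for } 0<x\le 1, \qquad \frac{f(x)-f(0)}{x}\ge 0 \ \text{ for } -1\le x<0,$$ so letting $x\to 0^+$ and $x\to 0^-$ gives $f'(0)\le 0$ and $f'(0)\ge 0$ respectively, and differentiability at $0$ forces $f'(0)=0$.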
8. Dec 8, 2015
### geoffrey159
Can you clearly explain the link between $h(x)$ and $f'(0)$ ?
EDIT: By the way, my hint is valid, but honestly, Mr Vickson's hint is more elegant
Last edited: Dec 8, 2015 | 2017-12-15T04:57:24 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/does-f-0-0-where-x-in-1-1-and-f-x-f-0.847083/",
"openwebmath_score": 0.6579951643943787,
"openwebmath_perplexity": 2024.0075499985226,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9664104953173166,
"lm_q2_score": 0.8740772351648677,
"lm_q1q2_score": 0.8447174137812704
} |
https://math.stackexchange.com/questions/4087313/ex-mid-x-x-is-increasing-in-x-why/4088023 | $E(X \mid X > x)$ is increasing in $x$. Why?
For two points $$x < x'$$ and a random variable $$X$$, we must have $$E(X\mid X > x )\leq E(X\mid X > x' )$$. This is "obviously" true because the center of the truncated distribution shifts to the right. How do I prove that?
I tried working with an iid copy $$X^*$$ of $$X$$ to show that the expectation of $$X1(X>x)1(X^*>x')$$ is smaller than the expectation of $$X1(X>x')1(X^*>x)$$ but I'm not having any luck with that.
All results I can find either focus on normality or assume densities.
• What is the definition of $E(X\mid X > x )$ ? – Gabriel Romon Apr 3 at 14:30
• $E(X \mid X >x) = E(X1(X>x))/P(X > x)$ – Galton Apr 3 at 14:31
• @Galton Could you clarify the denominator? Is it $X_1 = X, if X > x ;\; 0 \;, o.w.$ and $E[X_1]$? – Kaind Apr 3 at 14:33
• The numerator is the expectation of a variable that equals $X$ if it is larger than $x$ and zero otherwise. The denominator is the probability that $X>x$ – Galton Apr 3 at 15:13
Let $$Y$$ be an iid copy of $$X$$.
Notice the following inequality holds $$(X-Y)(1_{X>x'}1_{Y>x}-1_{Y>x'}1_{X>x})\geq 0$$
and take expectations to find $$E(X1_{X>x'})P(Y>x)-E(X1_{X>x})P(Y>x')-E(Y1_{Y>x})P(X>x')+E(Y1_{Y>x'})P(X>x)\geq 0,$$
which rewrites $$2E(X1_{X>x'})P(X>x)-2E(X1_{X>x})P(X>x')\geq 0,$$ thus
$$\frac {E(X1_{X>x'})}{P(X>x')}\geq \frac {E(X1_{X>x})}{P(X>x)}$$
• This is a very elegant solution. I had a hunch that working with an iid copy should do the trick but couldn't quite get it to work. – Galton Apr 3 at 18:30
• Nice idea taking an iid copy I would like to get better the intuition behind these proofs using probabilistic concepts. For the moment I was curious, it is so trivial that, in the general case, an iid copy of a r.v. always exists ? – Thomas Apr 4 at 19:46
• @Thomas see math.stackexchange.com/questions/250145/… – Gabriel Romon Apr 5 at 10:57
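As a quick empirical sanity check of the claim (my addition, not from the thread), one can estimate $$E(X\mid X>x)$$ from a large sample at a few thresholds and watch it increase; the choice of distribution below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.lognormal(size=1_000_000)  # any distribution with a finite mean works here

for x in [0.5, 1.0, 2.0, 4.0]:
    tail = X[X > x]
    print(x, tail.mean())  # the conditional means increase with the threshold
```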
Let $$Y = X \mid X>x$$. Then $$Y \mid Y > x'$$ is the same as $$X \mid X > x'$$, so it's enough to show that $$\mathbb E[Y] \le \mathbb E[Y \mid Y > x']$$: in other words, conditioning on $$Y$$ being high increases the expectation of $$Y$$.
For this, we have the law of total expectation: $$\mathbb E[Y] = \mathbb E[Y \mid Y > x'] \Pr[Y > x'] + \mathbb E[Y \mid Y \le x'] \Pr[Y \le x'].$$ In other words, $$\mathbb E[Y]$$ is a weighted average of $$\mathbb E[Y \mid Y > x']$$ and $$\mathbb E[Y \mid Y \le x']$$.
There are two cases:
Case 1. $$\mathbb E[Y] \le x'$$. In this case, we have $$\mathbb E[Y \mid Y > x'] \ge \mathbb E[Y]$$ because $$Y \mid Y > x'$$ is always bigger than $$x'$$, which is at least $$\mathbb E[Y]$$.
Case 2. $$\mathbb E[Y] > x'$$. In this case, we always have $$\mathbb E[Y \mid Y \le x'] \le \mathbb E[Y]$$, because $$Y \mid Y \le x'$$ is always less than $$\mathbb E[Y]$$. To have the weighted average come out to $$\mathbb E[Y]$$, we must have $$\mathbb E[Y \mid Y > x'] \ge \mathbb E[Y]$$ to compensate.
In both cases, we get $$\mathbb E[Y \mid Y > x'] \ge \mathbb E[Y]$$.
• This solution very nicely captures the intuition that the mean has to shift to the right. – Galton Apr 3 at 18:32
For continuous random variables only:
Let $$f(t) = E[X | X > t ] = \frac{ \int_t^\infty xf_X(x) dx}{\int_t^\infty f_X(x) dx}$$
Define $$g(t) = \int_t^\infty xf_X(x) dx$$ and $$h(t) = \int_t^\infty f_X(x) dx$$ such that $$f(t) = \frac{g(t)}{h(t)}$$.
$$g(t)$$ and $$h(t)$$ are differentiable $$\Rightarrow f(t)$$ is differentiable by Quotient Rule.
\begin{align*} f'(t) &= \frac{-tf_X(t) \int_t^\infty f_X(x)dx + f_X(t)\int_t^\infty xf_X(x)dx}{h(t)^2} \\ \Rightarrow f'(t) h(t)^2 &= f_X(t) \bigg( \int_t^\infty (x - t)f_X(x)dx \bigg) \geq 0\\ \end{align*}
Hence $$f$$ is an increasing function!
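As a concrete illustration (my addition): if $$X$$ is exponential with rate $$\lambda$$, memorylessness gives $$E[X \mid X > t] = t + \frac{1}{\lambda},$$ which is visibly increasing in $$t$$, consistent with $$f'(t) \ge 0$$ above.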
Not only is the function non-decreasing in $$x$$, but its derivative is explicit.
Assume $$X$$ is non-negative for simplicity and let $$F(x)=P(X\le x)$$ be the cdf. As the cdf is monotone, the monotone differentiation theorem given in Theorem 53 of https://terrytao.wordpress.com/2010/10/16/245a-notes-5-differentiation-theorems/ gives that $$F(x)$$ is differentiable almost everywhere. The same goes for $$G:x\mapsto \int_x^\infty P(X>t)dt$$: it is monotone decreasing and thus differentiable almost everywhere, with derivative equal to $$-P(X>x)$$ where it exists.
The identity $$E[X|X>x] = x + (1-F(x))^{-1}\int_x^\infty P(X>t)dt$$ holds. Before giving a proof, let's explain why this shows monotonicity of $$h(x) = E[X|X>x]$$. If $$F$$ and $$G$$ are both differentiable at $$x$$, then $$h$$ is also differentiable at $$x$$ as an elementary sum/product/inverse of differentiable functions, and by the product rule $$h'(x) = 1 - 1 + F'(x)(1-F(x))^{-2} \int_x^\infty P(X>t)dt \ge 0$$. Hence almost everywhere, the derivative of $$h$$ exists and is non-negative.
It remains to study the points at which $$F$$ is not differentiable: we can proceed "guided" by the differentiable case. For any $$a>0$$, since $$(1-F(x+a))^{-1} \ge (1-F(x))^{-1}$$, \begin{align} E[X|X>x+a] - E[X|X>x] &\ge (x+a) + (1-F(x))^{-1}\int_{x+a}^\infty P(X>t)dt - x - (1-F(x))^{-1}\int_{x}^\infty P(X>t)dt \\&= a - (1-F(x))^{-1} \int_x^{x+a} P(X>t)dt \\&\ge a - (1-F(x))^{-1}(x+a - x) P(X>x) = a - a = 0, \end{align} thanks to $$-P(X>t)\ge -P(X>x)$$ for all $$t\in[x,x+a]$$ for the last inequality.
Why is the identity $$h(x) = E[X|X>x] = x + (1-F(x))^{-1}\int_x^\infty P(X>t)dt$$ true? It's a consequence of the well known identity $$E[X]=\int_0^\infty P(X>t)dt$$ for non-negative $$X$$, which follows from Fubini's theorem. Here, \begin{align} P(X>x)x + \int_x^\infty P(X>t)dt &= P(X>x)\int_0^x 1 dt + \int_x^\infty P(X>t)dt \\&= \int_0^\infty P(X> \max\{x,t\})dt \\&= E[I\{X>x\}X] \end{align} and the desired formula for $$E[X|X>x]=E[I\{X>x\}X]/P(X>x)$$ follows.
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/4087313/ex-mid-x-x-is-increasing-in-x-why/4088023",
"openwebmath_score": 0.9935773015022278,
"openwebmath_perplexity": 417.88766336165276,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9664104924150546,
"lm_q2_score": 0.8740772368049822,
"lm_q1q2_score": 0.8447174128294931
} |
https://mathoverflow.net/questions/226110/what-is-the-equivalent-of-the-euler-constant-for-higher-dimensional-lattices/226288 | what is the equivalent of the Euler constant for higher dimensional lattices
Let $\Lambda$ be a unimodular lattice in $\mathbb R^d$. Then there are constants such that
$$\sum_{\substack{\gamma\in \Lambda\\0<|\gamma|<R\\}} \frac{1}{|\gamma|^d} = c_1 \log R + c_2 + o(1).$$
My questions are: Does $c_2$ depend on the lattice ? If yes, how ?
• What is a reference for the existence of $c_1, c_2?$ – Igor Rivin Dec 15 '15 at 4:56
• Well. I don't have a reference. However, if you take a fundamental domain $D$ for the lattice, assuming it is compact, and invariant by $x\to -x$, then one can compare each tem with the integral of $|x|^{-d}$ over some $D+\gamma$. Now thanks to the symmetry of $D$, this involves an integral remainder with only the hessian of $|x|^{-d}$, that decreases fast enough for the sum to be convergent. So up to a constant and $o(1)$, the sum is the integral of the function over the reunion of $\gamma+D$. (continued in next comment) – user84131 Dec 15 '15 at 12:55
• Then this is almost the integral over a big ball of radius $R$, up to $o (1)$ minus the integral over $D$ (for $\gamma =0$), hence the formula. – user84131 Dec 15 '15 at 12:55
• Added automorphic-forms tag, since we are talking about a function on $SL_d(\mathbb{R})$ which is clearly invariant for the right $SL_d(\mathbb{Z})$ action and the left $SO_d(\mathbb{R})$ action. If only we had something like a holomorphicity condition... – David E Speyer Dec 16 '15 at 16:28
• Do you in fact want what you literally asked, or do you actually want something about the Laurent expansion of the corresponding generalized Epstein zeta function at the leading pole? – paul garrett Dec 16 '15 at 22:42
I work out the case $d=2$ below. I didn't check everything carefully, so hopefully there are no errors.
Up to homothety, any lattice is equivalent to one generated by the complex numbers $1,z$ with $z \in \mathbb{H}$. In fact, $z$ can be chosen to lie in the standard fundamental domain for $SL_2(\mathbb{Z}) \backslash \mathbb{H}$. To make such a lattice unimodular, we simply re-scale by a scalar $\lambda > 0$ to get $\lambda, \lambda z$ with $\lambda = y^{-1/2}$. Here $z= x +i y$, $y>0$. Then any element of the lattice $\Lambda$ may be written uniquely as $\lambda cz + \lambda d$ with $(c,d) \in \mathbb{Z}^2$, $(c,d) \neq (0,0)$. The sum in question, in this notation, is then $$\sum_{0 < |\lambda cz + \lambda d | < R} |\lambda cz + \lambda d |^{-2}.$$ By a Perron-type formula, we can evaluate such a sum asymptotically by a contour integral of the form $$\lim_{T \rightarrow \infty} \frac{1}{2 \pi i} \int_{\sigma - iT}^{\sigma + iT} R^s F(s) \frac{ds}{s},$$ where $$F(s) = \sum_{(c,d) \neq (0,0)} |\lambda cz + \lambda d |^{-2-s}.$$ In practice, all that matters is the analytic behavior of $F(s)$ near $s=0$.
Now $F(s)$ is closely related to the Eisenstein series $E(z,s)$ defined by $$E(z,s) = \frac{1}{2} \sum_{\gcd(c,d) =1 } \frac{y^s}{|cz+d|^{2s}} = \frac12 \frac{1}{\zeta(2s)} \sum_{(c,d) \neq (0,0)} \frac{y^s}{|cz+d|^{2s}}.$$ Unless I made a mistake, a short calculation (pulling out a gcd to give the zeta function) gives $F(s) = 2 \zeta(2+s) E(z,1+\frac{s}{2}).$
The constant $c_1$ only depends on the residue of $F(s)$ at $s=0$, which one can surely calculate quite easily; it does not depend on the lattice of course. To get $c_2$ one needs to calculate the next term in the Laurent expansion of $F(s)$, which I believe equals a constant minus $\log y^{1/2} |\eta(z)|^2$. Here this function $f(z)=\log y^{1/2} |\eta(z)|^2$ is $SL_2(\mathbb{Z})$-invariant. Now $f(z)$ depends on $z$, and so yes $c_2$ depends on the lattice in a rather interesting way.
I wager that for $d \geq 3$ one needs to find the relevant Eisenstein series and its Laurent expansion.
$\def\RR{\mathbb{R}}\def\ZZ{\mathbb{Z}}$This is essentially the constant term in the Epstein $\zeta$-function. Given a lattice $\Lambda$ in $\RR^d$, the Epstein $\zeta$ function is $$Z(\Lambda, s) = \sum_{g \in \Lambda \setminus \{ 0 \}} \frac{1}{(g^T g)^s}.$$ $Z$ has a simple pole at $d/2$ with residue $\tfrac{\pi^{d/2}}{\sqrt{\det \Lambda} \ \Gamma(d/2)}$, and no other poles on $\mathrm{Re}(s)\geq d/2$. Set $$Z(s) = \frac{\pi^{d/2}}{(\det \Lambda) \ \Gamma(d/2) (s-d/2)}+c(\Lambda) + O(s-d/2)$$
There are standard tools to convert Dirichlet series estimates to partial sum estimates. If I didn't drop any constants, then $$c_1 = \frac{2 \pi^{d/2}}{(\det \Lambda) \ \Gamma(d/2)} \quad c_2 = c(\Lambda).$$ Theorem 4 of Terras, "Bessel Series Expansions of the Epstein Zeta Function and the Functional Equation", Trans. AMS, Vol. 183 (Sep., 1973), pp. 477-486 gives a formula for $c(\Lambda)$ in terms of other functions, but I don't feel competent to summarize it. | 2021-08-04T22:14:39 | {
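As a rough numerical sanity check of the leading term (my addition, not from the answers): for the square lattice $\mathbb{Z}^2$, which is unimodular with $d=2$, the formula above gives $c_1 = 2\pi$, and the slope of the partial sums against $\log R$ is easy to estimate:

```python
import math

def partial_sum(R):
    # S(R) = sum of 1/|gamma|^2 over nonzero points gamma of Z^2 with |gamma| < R
    total = 0.0
    N = int(R) + 1
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            r2 = m * m + n * n
            if 0 < r2 < R * R:
                total += 1.0 / r2
    return total

for R1, R2 in [(50.0, 100.0), (100.0, 200.0)]:
    slope = (partial_sum(R2) - partial_sum(R1)) / (math.log(R2) - math.log(R1))
    print(slope, 2 * math.pi)  # the slope approaches c_1 = 2*pi ≈ 6.283
```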
"domain": "mathoverflow.net",
"url": "https://mathoverflow.net/questions/226110/what-is-the-equivalent-of-the-euler-constant-for-higher-dimensional-lattices/226288",
"openwebmath_score": 0.9606968760490417,
"openwebmath_perplexity": 153.92669379195237,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795114181106,
"lm_q2_score": 0.8558511524823263,
"lm_q1q2_score": 0.8447075523236334
} |
https://math.stackexchange.com/questions/3482877/finding-equation-for-chord-in-terms-of-radius-given-angle-theta | # Finding equation for chord in terms of radius given angle theta
In this problem, I am trying to find the volume of the solid gotten by rotating the shaded area around the x-axis. The equation of a circle is $$x^2+y^2=r^2$$. If I am integrating using the shell method, I know the height and radius that I need (height = $$\sqrt{r^2-y^2}$$ and radius = y). My upper limit of integration is r. My lower limit of integration is c. I also see that when making a right triangle, c is opposite the angle and r is my hypotenuse, so maybe I can use the sine function?
How should I figure out how to put c in terms of r?
• Yes, $\frac12=\sin{30^\circ}=\frac{c}{r}$ – saulspatz Dec 20 '19 at 16:13
Yes, you are right, you should continue by using the trigonometric sine function, which leads to $$c = r\sin(\theta)$$.
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3482877/finding-equation-for-chord-in-terms-of-radius-given-angle-theta",
"openwebmath_score": 0.9312882423400879,
"openwebmath_perplexity": 147.48331357970727,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.9869795083542037,
"lm_q2_score": 0.8558511524823263,
"lm_q1q2_score": 0.8447075497013851
} |
https://cs.stackexchange.com/questions/101312/compute-the-general-time-complexity-of-a-merge-sort-algorithm-with-specified-com | # Compute the general time complexity of a merge sort algorithm with specified complexity of the merge process
The problem was from an exam, I spent much time wrapping my head up around this kind of problems, so I decided to ask for help ;(
Problem:
We implement a merge sort algorithm to sort $$n$$ items. The algorithm will divide the set into 2 roughly equal-size halves, and merge the 2 halves after each half set is recursively sorted. Because the item comparison is complicated, the merge process takes $$\theta(m\sqrt{m})$$ steps for input size $$m$$. What is the time complexity for this algorithm?
(a) $$\theta{(n\log{n})}$$
(b) $$\theta{(n)}$$
(c) $$\theta{(n^2)}$$
(d) $$\theta{(n\sqrt{n})}$$
(e) $$\theta{(n\sqrt{n}\log{n})}$$
# What I tried:
1. I know the merge sort normally can be written $$S(n) = 2S(\frac{n}{2}) + n, S(1) = 1$$, where the recursive function $$S$$ is the step cost of the merge sort for the size $$n$$.
2. But the problem specifies the step costs to merge is $$\theta(n\sqrt{n})$$, I have to rewrite it as $$S(n) = 2S(\frac{n}{2}) + \theta(n\sqrt{n}) = 2S(\frac{n}{2}) + n\sqrt{n}$$. I have no idea how to transform it into the general solution...
Please help me, I will learn a lot from this problem! Thanks :)
• Is the answer D? – Gokul Dec 10 '18 at 1:09
• Sorry, I don't know the answer, welcome to share any idea! – OOD Waterball Dec 10 '18 at 3:48
Now that we have the recurrence relation $$S(n) = 2S(\frac{n}{2}) + \Theta(n\sqrt{n}),$$ applying case three of the master theorem with $$a=2$$, $$b=2$$, $$\epsilon=\frac12$$, we get $$S(n)= \Theta( n\sqrt n)$$.
I was stretching a bit when applying the master theorem, since the regularity condition (the existence of a constant $$c<1$$ required for case three) is not necessarily satisfied when the additive term is only known up to $$\Theta$$. What I really meant is that we can find two constants $$0<c_1\le c_2$$ such that $$c_1n\sqrt n\le S(n)-2S(\frac n2) \le c_2n\sqrt n$$ for all $$n$$. Define $$S_1$$ by $$S_1(n) = 2S_1(\frac{n}{2}) + c_1n\sqrt{n}$$ and $$S_2$$ by $$S_2(n) = 2S_2(\frac{n}{2}) + c_2 n\sqrt{n}$$ with the same initial condition as $$S$$. Now we can apply case three of the master theorem to $$S_1$$ and $$S_2$$ to get the same $$\Theta$$ bound, since the regularity condition is satisfied with $$1>c=0.8>\sqrt2/2$$. Since $$S_1(n)\le S(n)\le S_2(n)$$, $$S(n)$$ has the same $$\Theta$$ bound.
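For intuition (my addition, not part of the answer above), unrolling the recursion tree gives the same bound directly: level $$i$$ contributes $$2^i\left(\frac{n}{2^i}\right)^{3/2} = \frac{n^{3/2}}{2^{i/2}},$$ so $$S(n) = \Theta\left(n^{3/2}\sum_{i\ge 0} 2^{-i/2}\right) + \Theta(n) = \Theta(n\sqrt{n}),$$ since the geometric series converges and the root level dominates, which is exactly what case three of the master theorem captures.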
• Welcome! $\quad$ – Apass.Jack Dec 10 '18 at 12:24 | 2019-12-11T23:36:38 | {
"domain": "stackexchange.com",
"url": "https://cs.stackexchange.com/questions/101312/compute-the-general-time-complexity-of-a-merge-sort-algorithm-with-specified-com",
"openwebmath_score": 0.29917651414871216,
"openwebmath_perplexity": 480.5148902950689,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795079712153,
"lm_q2_score": 0.8558511488056151,
"lm_q1q2_score": 0.8447075457447653
} |
https://math.stackexchange.com/questions/2839091/checking-probability-question-answer | I apologize upfront for any spelling mistakes, I'm not used to writing math in english!
This question was taken from a test done by those willing to undertake a Master's course in Statistics in Federal University of Belo Horizonte (Brazil). I don't have access to its solution so I would like to check with you guys if my resolution seems good and see if you have any other interesting ways of solving it.
"Joseph and Mary play the following game: from a box containing 5 black balls and 2 white balls, they alternately withdraw balls, without replacing them. The winner will be the one who withdraws a white ball first. Determine the probability of Mary winning the game, assuming that she withdraws first, then Joseph, and so on."
I went and called the probability of her winning P, as in:
$$P = {P_1} + {P_2} + {P_3} +...+ {P_n}$$
Where ${P_n}$ stands for the chance of her winning on the n-withdraw. So to win on "round 1", she can pull 2 white balls from a total of 7; to win on her round 2, she needs to miss her first pull, then Joseph has to fail too for her to get it right; and I went on following that thought. It is detailed below:
\eqalign{ & {P_1} = {2 \over 7} \cr & {P_2} = {5 \over 7}{4 \over 6}{2 \over 5} = {4 \over {21}} \cr & {P_3} = {5 \over 7}{4 \over 6}{3 \over 5}{2 \over 4}{2 \over 3} = {{24} \over {252}} \cr & {P_4} = 0 \cr & {P_5} = 0 \cr & ... \cr}
After all that, I got 0,57 or 4/7 as final answer:
$$P = 0,2857 + 0,1905 + 0,0952 \approx 0,57$$
Thank you in advance for any inputs you guys may have. Cheers
• Looks right to me -- so does your English. – saulspatz Jul 3 '18 at 0:51
• Not sure why you got downvoted. Good question with a proposed strategy. +1 – adhg Jul 3 '18 at 3:19
Your answer is correct, here is a way to do it using combinations ...
There are 7 balls, 2 of which are white five are black think of an ordered arrangement of the balls as a 7 character string e.g. "BBWBWB"
There are $\binom 72=21$ equally probable arrangements of balls
To win on the first selection the arrangement must start with "W"
There are $\binom 61=6$ equally probable arrangements like this( you just have to decide where to put the second "W" in the 6 remaining places)
To win on her second selection the arrangement must start with "BBW"
There are $\binom 41=4$ equally probable arrangements like this ( you just have to decide where to put the second "W" in the 4 remaining places)
To win on her thirrd selection the arrangement must start with "BBBBW"
There are $\binom 21=2$ equally probable arrangements like this ( you just have to decide where to put the second "W" in the 2 remaining places)
so $$P(WIN) = \frac{ \binom 61+\binom 41+\binom 21 }{ \binom 72}=\frac{12}{21}$$
• This was great. Thank you very much – Pedro Alonso Jul 3 '18 at 1:59 | 2020-02-23T05:19:56 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2839091/checking-probability-question-answer",
"openwebmath_score": 0.9265527725219727,
"openwebmath_perplexity": 628.7193206798813,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795098861572,
"lm_q2_score": 0.8558511469672595,
"lm_q1q2_score": 0.8447075455692513
} |
https://www.physicsforums.com/threads/physics-kinematics-sin-question.563907/ | # Physics kinematics SIN question
1. Dec 30, 2011
### ShearonR
1. The problem statement, all variables and given/known data
A car, travelling at a constant speed of 30m/s along a straight road, passes a police car parked at the side of the road. At the instant the speeding car passes the police car, the police car starts to accelerate in the same direction as the speeding car. What is the speed of the police car at the instant it overtakes the other car?
Given: v=30m/s
vi=0
Need: vf=?
2. Relevant equations
v_f = v_i + aΔt
v_f² = v_i² + 2aΔd
v=Δd/Δt
3. The attempt at a solution
So far, I really have not gotten anywhere. I believe what I have to do is somehow manipulate the velocity equation of the first car into something I can input into the vf equation for the police car. I have been having much trouble with this question and would appreciate any tips to point me in the right direction.
2. Dec 30, 2011
### Vorde
This isn't solvable without knowing the acceleration of the police car, without it the velocity when the police car overtakes the other car could be anything.
edit: You don't necessarily need the acceleration, but you need at least one other piece of information (such as at what distance did the police car overtake the other car) to solve the problem.
3. Dec 30, 2011
### ShearonR
Yes, and that is what I have been fretting over this whole time. They give multiple choice answers, but essentially they all work. I know that depending on the magnitude of the displacement or the time, the rate of acceleration will change.
4. Dec 30, 2011
### Staff: Mentor
Interesting. I think I was able to solve it just with the given information (unless I did something wrong). Pretty simple answer too.
You should write an equation that equates the distance travelled to the meeting/passing spot for each car (call that distance D). The speeding car's velocity is constant, so what is the equation for the time it takes for the speeding car to get to D?
And what is the equation for the police car's time to get to the distance D? If you set those two equations for D equal to each other...
5. Dec 30, 2011
### SammyS
Staff Emeritus
Yes, berkeman is correct.
The question only asks for the speed of the police car at the moment it overtakes the speeding car. It doesn't ask for the time or distance, either of which would require more information.
6. Dec 30, 2011
### Staff: Mentor
It's a trick question. They should have printed a smiley near it. So now I'm curious to see what are the answers they give you to choose from?
7. Dec 30, 2011
### Vorde
I disagree, I still think there are an infinite amount of answers, like NascentOxygen said. Berkeman is right, except that the distance D could be anything, and the acceleration, and thereby the velocity of the police car depends on that D, so without it you can't give an answer. Or more correctly, any number is an answer.
8. Dec 30, 2011
### Staff: Mentor
Just try the math that I outlined. Don't post it for the OP to see, but you should be able to see that the question has one answer.
9. Dec 30, 2011
### Vorde
You're completely right, my common sense distracted me from the idea that the math might all work itself out to be equal, I got the answer and it makes sense.
10. Dec 30, 2011
### ShearonR
Alright, I tried something out and I am not entirely sure if it even makes sense.
For the speeding car:
Δd=vΔt
Δt=Δd/v
For the police car:
Δd=vfΔt-0.5aΔt2
Δt=√Δd/0.5a
Next I did the math:
vΔt=vfΔt-0.5aΔt2
30(Δd/30)=vf(√Δd/0.5a)-0.5a(√Δd/0.5a)2
60Δd=vf(√Δd/0.5a)-(√Δd2/0.5a)
60=vf
the multiple choice answers are: a)45 b)30 c)60 d)75 e)100
11. Dec 30, 2011
### SammyS
Staff Emeritus
Notice that in your equation, vΔt=vfΔt-0.5aΔt2,
you could divide by Δt to simplify things before substituting.
The really quick way to get the answer is:
Using average velocity, vAvg = Δd/Δt .
Both cars travel the same distance in the same time. Therefore, they must have the same average velocity.
The average velocity of the speeding car is 30 m/s .
For the police car, which has uniform acceleration, $\displaystyle v_{Avg.}=\frac{v_F+v_I}{2}\,.$ The initial velocity is zero, so the final velocity must be 60 m/s, since the average velocity is 30 m/s.
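The same number drops out of berkeman's distance-matching outline (my addition, filling in the algebra under the usual constant-acceleration assumption): both cars cover the same distance in the same time $t$, so $$30\,t = \tfrac{1}{2} a t^2 \;\Rightarrow\; a t = 60 \text{ m/s},$$ and since the police car starts from rest, $v_f = a t = 60$ m/s, without ever needing $a$ or $t$ separately.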
12. Dec 30, 2011
### ShearonR
Thanks for the help everyone, I will keep this in mind next time I am answering these kinds of questions! :)
13. Dec 31, 2011
### Staff: Mentor
Any hint that we should assume a uniform acceleration is so well hidden that I still can't see it.
14. Dec 31, 2011
### Vorde
I think that is just assumed, if it wasn't uniform there would be an infinite number of possible answers, this seemed like a basic physics 1 question so I assumed a constant acceleration.
15. Dec 31, 2011
### Staff: Mentor
This is the sort of question where one teacher may justify an expectation that uniform acceleration will be assumed, and give marks for that. But at the same time another can put the case that no such assumption should be made since it was not stated, and proceed to deduct marks if it was assumed. It's a good trick question, and can serve both philosophies.
Few vehicles could be assumed able to accelerate uniformly from 0 to 216km/hr!
My answer remains $speed \gt 30 m/sec$
Last edited: Dec 31, 2011 | 2017-11-20T06:11:04 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/physics-kinematics-sin-question.563907/",
"openwebmath_score": 0.5931844711303711,
"openwebmath_perplexity": 685.6102427754769,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9702399069145607,
"lm_q2_score": 0.8705972734445508,
"lm_q1q2_score": 0.8446882175469114
} |
http://mathhelpforum.com/calculus/139890-question-integration-finding-area.html | # Thread: Question on integration..finding the area?
1. ## Question on integration..finding the area?
y=x^3.
P = (3,27)
PQ is tangent to the curve at P
Picture of graph: http://i42.tinypic.com/fvl8gy.jpg
Find the area of the region enclosed between the curve, PQ and the x-axis.
I(x) = (1/4)x^4
I can see that x=0 at the origin...
I(0) = (1/4)(0)^4 = 0
How can I find point Q? It's tangent to the curve at point P...but I'm not sure how to find it :[
The area is 6.75..so how do I arrive at that? How can I find point Q? Thanks!
2. Originally Posted by rubbersoul923
y=x^3.
P = (3,27)
PQ is tangent to the curve at P
Picture of graph: http://i42.tinypic.com/fvl8gy.jpg
Find the area of the region enclosed between the curve, PQ and the x-axis.
I(x) = (1/4)x^4
I can see that x=0 at the origin...
I(0) = (1/4)(0)^4 = 0
How can I find point Q? It's tangent to the curve at point P...but I'm not sure how to find it :[
The area is 6.75..so how do I arrive at that? How can I find point Q? Thanks!
point Q is the x-intercept of the line tangent to $y = x^3$ at the point P.
so, find the equation of this tangent line, set y = 0 , and solve for x.
3. Originally Posted by rubbersoul923
y=x^3.
P = (3,27)
PQ is tangent to the curve at P
Picture of graph: http://i42.tinypic.com/fvl8gy.jpg
Find the area of the region enclosed between the curve, PQ and the x-axis.
I(x) = (1/4)x^4
I can see that x=0 at the origin...
I(0) = (1/4)(0)^4 = 0
How can I find point Q? It's tangent to the curve at point P...but I'm not sure how to find it :[
The area is 6.75..so how do I arrive at that? How can I find point Q? Thanks!
y=x^3 implies y'=3x^2 so at (3,27) y'=27
hence equation of PQ is y-27=27(x-3) and when y=0 we will have x=2 so Q is the point (2,0)
Area under curve is thus (1/4)x^4 evaluated from 0 to 3 =81/4 and the area under the line PQ is 27/2 so area between curve and line is 27/4=6.75
4. You could also do this as a single integral by integrating with respect to y.
When $y= x^3$, $x= y^{1/3}$, and when $y- 27= 27(x- 3)$, $x= \frac{y+ 54}{27}$, so the area is given by
$\int_{y= 0}^{27} \left(\frac{y+ 54}{27}- y^{1/3}\right) dy$
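Evaluating this (my addition, just to confirm it matches the $\frac{27}{4}$ found above): $$\left[\frac{y^2}{54} + 2y - \frac{3}{4}y^{4/3}\right]_0^{27} = 13.5 + 54 - 60.75 = 6.75.$$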
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/calculus/139890-question-integration-finding-area.html",
"openwebmath_score": 0.8756711483001709,
"openwebmath_perplexity": 1140.6675445909705,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9702399069145609,
"lm_q2_score": 0.8705972667296309,
"lm_q1q2_score": 0.8446882110318282
} |
https://www.physicsforums.com/threads/normal-approximation-to-poisson-random-variable.877968/ | # Normal approximation to Poisson random variable
#### issacnewton
1. Homework Statement
Suppose that the number of asbestos particles in a sam-
ple of 1 squared centimeter of dust is a Poisson random variable
with a mean of 1000. What is the probability that 10 squared cen-
timeters of dust contains more than 10,000 particles?
2. Homework Equations
$$E(aX+b) = aE(X) + b$$
$$Var(aX) = a^2 Var(X)$$
3. The Attempt at a Solution
Let X = number of asbestos particles in 1$\mbox{cm}^2$. Define Y = number of asbestos particles in 10$\mbox{ cm}^2$. So we have $Y=10X$. Using the formula given above, we get $E(Y)=10E(X)$ and $Var(Y) = 100 Var(X)$. But since X is a Poisson random variable, we have $E(X) = \lambda = Var(X) = 1000$. So we get for Y variable, $E(Y) = 10000$ and $Var(Y) = 100000$. Then the probability we need to find is $P(Y > 10000)$. Now we use the Normal approximation here. $E(Y) = 10000$ and $Var(Y) = 100000$. So $P(Y \geq 10001.5)$. So we get the following expression
$$P\left(z \geq \frac{10001.5 - 10000}{\sqrt(100000)}\right)$$
So now I use $pnorm$ function in $R$ , to calculate this probability. It is
$$\mbox{pnorm(10001.5, 10000, sqrt(100000), lower.tail=F)}$$
which gives us $0.4981077$. Is this right ? The solution manual for Montgomery and Runger says that $E(Y) = \lambda = 10000 = Var(Y)$. Is that a mistake ?
thanks
#### Ray Vickson
Homework Helper
Dearly Missed
E(Y) = Var(Y) = 10000 are correct. Whether or not those are equal to λ depends on how you define λ.
The figure 0.498 is plausible. Since the mean is so large, the Poisson distribution looks very much like the normal, and you are asking for values greater than the mean by 0.01 standard deviations or more (so the answer ought to be just a bit less than 1/2).
#### issacnewton
Hi Ray, But from the Y = 10X, we should have $Var(Y) = 100 Var(X)$.
#### Ray Vickson
Homework Helper
Dearly Missed
Hi Ray, But from the Y = 10X, we should have $Var(Y) = 100 Var(X)$.
No. $Y \neq 10X$. The $10X$ figure could be true only if each of the ten 1 cm2 pieces had exactly the same number of asbestos particles, so that 10 of them had exactly 10 times as many as the single one, and that would mean that there is no randomness at all. That does not describe your system.
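A quick check of this in R (a sketch that takes E(Y) = Var(Y) = 10000 as argued above; ppois gives the exact Poisson tail and pnorm the normal approximation with the usual continuity correction for P(Y ≥ 10001)):

lambda <- 10000
exact  <- ppois(10000, lambda = lambda, lower.tail = FALSE)        # P(Y > 10000), exact
approx <- pnorm(10000.5, mean = lambda, sd = sqrt(lambda),         # normal approximation
                lower.tail = FALSE)
c(exact = exact, approx = approx)   # both just under 1/2, as expected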
"Normal approximation to Poisson random variable"
| 2019-10-14T12:44:54 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/normal-approximation-to-poisson-random-variable.877968/",
"openwebmath_score": 0.8923351764678955,
"openwebmath_perplexity": 562.3228164496296,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9504109770159683,
"lm_q2_score": 0.8887587949656841,
"lm_q1q2_score": 0.8446861146548705
} |
http://mathhelpforum.com/statistics/109711-fair-2-headed-coins.html | # Math Help - Fair and 2 headed coins
1. ## Fair and 2 headed coins
You have 2 fair coins and one two headed coin. You draw one coin randomly and flip and toss it twice. Given that both tosses resulted in heads, find the conditional probability that the two headed coin was chosen as a fraction--in lowest terms.
We also have to do this again, but assume thaqt we have 8 fair coins and one 2-headed coin.
Thanks!
2. Originally Posted by bjanela
You have 2 fair coins and one two headed coin. You draw one coin randomly and flip and toss it twice. Given that both tosses resulted in heads, find the conditional probability that the two headed coin was chosen as a fraction--in lowest terms.
We also have to do this again, but assume thaqt we have 8 fair coins and one 2-headed coin.
Thanks!
Put $A=\mbox{ the two-headed coin was chosen, and }$ $B=\mbox{ we get two heads after flipping a coin twice}$
The conditional probability we're looking for is:
$P\left(A \mid B\right)=\frac{P\left(A \cap B\right)}{P(B)}$
But $P\left(A \cap B\right)=\frac{1}{3}$, so you only need the denominator above which is pretty easy (construct a two-stage 3-branched probability tree)
Tonio
3. Would P(B) = 1/4? The result of the prob. tree being (HH out of HH, HT, TH, TT)? If so, then P(A|B) = 4/3 which is impossible =(
4. Hello, bjanela!
We need Bayes' Theorem: . $P(A\,|\,B) \:=\:\frac{P(A\wedge B)}{P(B)}$
You have 2 fair coins $(f)$ and one two-headed coin $(d)$.
You draw one coin randomly and flip and toss it twice.
Given that both tosses resulted in heads, find the conditional probability
that the two-headed coin was chosen.
We want: . $P(d\,|\,HH) \;=\;\frac{P(d\wedge HH)}{P(HH)}$
A fair coin is chosen: . $P(f) \:=\:\tfrac{2}{3}$
. . Then: . $P(HH) \:=\:\tfrac{1}{4}$
. . Hence: . $P(f \wedge HH) \:=\:\tfrac{2}{3}\cdot\tfrac{1}{4} \:=\:\tfrac{1}{6}$
$d$ is chosen: . $P(d) \:=\:\tfrac{1}{3}$
. . Then: . $P(HH) \:=\:1$
. . Hence: . $P(d \wedge HH) \:=\:\tfrac{1}{3}$
So: . $P(HH) \:=\:\tfrac{1}{6} + \tfrac{1}{3} \:=\:\tfrac{1}{2}$
Therefore: . $P(d\,|\,HH) \:=\:\frac{P(d\,\wedge\,HH)}{P(HH)} \:=\:\frac{\frac{1}{3}}{\frac{1}{2}} \:=\:\frac{2}{3}$
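A small R sketch of the same Bayes computation (the function name and argument are only illustrative); the eight-coin case below can be checked the same way with posterior_two_headed(8).

posterior_two_headed <- function(n_fair) {
  p_d <- 1 / (n_fair + 1)              # prior probability of drawing the two-headed coin
  num <- p_d * 1                       # P(d) * P(HH | d)
  den <- num + (1 - p_d) * (1 / 4)     # plus P(fair) * P(HH | fair)
  num / den
}
posterior_two_headed(2)    # 2/3, matching the result above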
Do this again, with 8 fair coins and one 2-headed coin.
We want: . $P(d\,|\,HH) \;=\;\frac{P(d\wedge HH)}{P(HH)}$
A fair coin is chosen: . $P(f) \:=\:\tfrac{8}{9}$
. . Then: . $P(HH) \:=\:\tfrac{1}{4}$
. . Hence: . $P(f \wedge HH) \:=\:\tfrac{8}{9}\cdot\tfrac{1}{4} \:=\:\tfrac{2}{9}$
$d$ is chosen: . $P(d) \:=\:\tfrac{1}{9}$
. . Then: . $P(HH) \:=\:1$
. . Hence: . $P(d \wedge HH) \:=\:\tfrac{1}{9}$
So: . $P(HH) \:=\:\tfrac{2}{9} + \tfrac{1}{9} \:=\:\tfrac{1}{3}$
Therefore: . $P(d\,|\,HH) \:=\:\frac{P(d\,\wedge\,HH)}{P(HH)} \:=\:\frac{\frac{1}{9}}{\frac{1}{3}} \:=\:\frac{1}{3}$ | 2014-08-22T16:21:07 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/statistics/109711-fair-2-headed-coins.html",
"openwebmath_score": 0.955003023147583,
"openwebmath_perplexity": 1766.560122488825,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9891815532606979,
"lm_q2_score": 0.8539127585282744,
"lm_q1q2_score": 0.8446747488301257
} |
https://jdstorey.org/fas/convergence-of-random-variables.html | # 18 Convergence of Random Variables
## 18.1 Sequence of RVs
Let $$Z_1, Z_2, \ldots$$ be an infinite sequence of rv’s.
An important example is
$Z_n = \overline{X}_n = \frac{\sum_{i=1}^n X_i}{n}.$
It is useful to be able to determine a limiting value or distribution of $$\{Z_i\}$$.
## 18.2 Convergence in Distribution
$$\{Z_i\}$$ converges in distribution to $$Z$$, written
$Z_n \stackrel{D}{\longrightarrow} Z$
if
$F_{Z_n}(y) = \Pr(Z_n \leq y) \rightarrow \Pr(Z \leq y) = F_{Z}(y)$
as $$n \rightarrow \infty$$ for all $$y \in \mathbb{R}$$.
## 18.3 Convergence in Probability
$$\{Z_i\}$$ converges in probability to $$Z$$, written
$Z_n \stackrel{P}{\longrightarrow} Z$
if
$\Pr(|Z_n - Z| \leq \epsilon) \rightarrow 1$
as $$n \rightarrow \infty$$ for all $$\epsilon > 0$$.
Note that it may also be the case that $$Z_n \stackrel{P}{\longrightarrow} \theta$$ for a fixed, nonrandom value $$\theta$$.
## 18.4 Almost Sure Convergence
$$\{Z_i\}$$ converges almost surely (or “with probability 1”) to $$Z$$, written
$Z_n \stackrel{a.s.}{\longrightarrow} Z$
if
$\Pr\left(\{\omega: |Z_n(\omega) - Z(\omega)| \stackrel{n \rightarrow \infty}{\longrightarrow} 0 \}\right) = 1.$
Note that it may also be the case that $$Z_n \stackrel{a.s.}{\longrightarrow} \theta$$ for a fixed, nonrandom value $$\theta$$.
## 18.5 Strong Law of Large Numbers
Suppose $$X_1, X_2, \ldots, X_n$$ are iid rv’s with population mean $${\operatorname{E}}[X_i] = \mu$$ where $${\operatorname{E}}[|X_i|] < \infty$$. Then
$\overline{X}_n \stackrel{a.s.}{\longrightarrow} \mu.$
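A small simulation sketch of this in R (reusing the Poisson mean of 6 that appears in the example below): the running mean of iid draws settles on the population mean.

set.seed(1)
x <- rpois(n = 1e5, lambda = 6)           # iid draws with E[X_i] = 6
running_mean <- cumsum(x) / seq_along(x)
running_mean[c(10, 100, 1000, 1e5)]       # drifts toward 6 as n grows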
## 18.6 Central Limit Theorem
Suppose $$X_1, X_2, \ldots, X_n$$ are iid rv’s with population mean $${\operatorname{E}}[X_i] = \mu$$ and variance $${\operatorname{Var}}(X_i) = \sigma^2$$. Then as $$n \rightarrow \infty$$,
$\sqrt{n}(\overline{X}_n - \mu) \stackrel{D}{\longrightarrow} \mbox{Normal}(0, \sigma^2).$
We can also get convergence to a $$\mbox{Normal}(0, 1)$$ by dividing by the standard deviation, $$\sigma$$:
$\frac{\overline{X}_n - \mu}{\sigma/\sqrt{n}} \stackrel{D}{\longrightarrow} \mbox{Normal}(0, 1).$
We write the second convergence result as above rather than $\frac{\sqrt{n} (\overline{X}_n - \mu)}{\sigma} \stackrel{D}{\longrightarrow} \mbox{Normal}(0, 1)$ because $$\sigma/\sqrt{n}$$ is the “standard error” of $$\overline{X}_n$$ when $$\overline{X}_n$$ is treated as an estimator, so $$\sigma/\sqrt{n}$$ is kept intact.
Note that for fixed $$n$$, ${\operatorname{E}}\left[ \frac{\overline{X}_n - \mu}{1/\sqrt{n}} \right] = 0 \mbox{ and } {\operatorname{Var}}\left[ \frac{\overline{X}_n - \mu}{1/\sqrt{n}} \right] = \sigma^2,$
${\operatorname{E}}\left[ \frac{\overline{X}_n - \mu}{\sigma/\sqrt{n}} \right] = 0 \mbox{ and } {\operatorname{Var}}\left[ \frac{\overline{X}_n - \mu}{\sigma/\sqrt{n}} \right] = 1.$
## 18.7 Example: Calculations
Let $$X_1, X_2, \ldots, X_{40}$$ be iid Poisson($$\lambda$$) with $$\lambda=6$$.
We will form $$\sqrt{40}(\overline{X} - 6)$$ over 10,000 realizations and compare their distribution to a Normal(0, 6) distribution.
> x <- replicate(n=1e4, expr=rpois(n=40, lambda=6),
+ simplify="matrix")
> x_bar <- apply(x, 2, mean)
> clt <- sqrt(40)*(x_bar - 6)
>
> df <- data.frame(clt=clt, x = seq(-18,18,length.out=1e4),
+ y = dnorm(seq(-18,18,length.out=1e4),
+ sd=sqrt(6)))
## 18.8 Example: Plot
> ggplot(data=df) +
+ geom_histogram(aes(x=clt, y=..density..), color="blue",
+ fill="lightgray", binwidth=0.75) +
+ geom_line(aes(x=x, y=y), size=1.5) | 2022-01-24T22:24:41 | {
"domain": "jdstorey.org",
"url": "https://jdstorey.org/fas/convergence-of-random-variables.html",
"openwebmath_score": 0.9824956655502319,
"openwebmath_perplexity": 1896.773078847897,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9891815532606978,
"lm_q2_score": 0.8539127585282744,
"lm_q1q2_score": 0.8446747488301256
} |
https://math.stackexchange.com/questions/4209381/how-to-straighten-a-parabola | # How to straighten a parabola?
Consider the function $$f(x)=a_0x^2$$ for some $$a_0\in \mathbb{R}^+$$. Take $$x_0\in\mathbb{R}^+$$ so that the arc length $$L$$ between $$(0,0)$$ and $$(x_0,f(x_0))$$ is fixed. Given a different arbitrary $$a_1$$, how does one find the point $$(x_1,y_1)$$ so that the arc length is the same?
Schematically,
In other words, I'm looking for a function $$g:\mathbb{R}^3\to\mathbb{R}$$, $$g(a_0,a_1,x_0)$$, that takes an initial fixed quadratic coefficient $$a_0$$ and point and returns the corresponding point after "straightening" via the new coefficient $$a_1$$, keeping the arc length with respect to $$(0,0)$$. Note that the $$y$$ coordinates are simply given by $$y_0=f(x_0)$$ and $$y_1=a_1x_1^2$$. Any ideas?
My approach: Knowing that the arc length is given by $$L=\int_0^{x_0}\sqrt{1+(f'(x))^2}\,dx=\int_0^{x_0}\sqrt{1+(2a_0x)^2}\,dx$$ we can use the conservation of $$L$$ to write $$\int_0^{x_0}\sqrt{1+(2a_0x)^2}\,dx=\int_0^{x_1}\sqrt{1+(2a_1x)^2}\,dx$$ which we solve for $$x_1$$. This works, but it is not very fast computationally and can only be done numerically (I think), since $$\int_0^{x_1}\sqrt{1+(2a_1x)^2}\,dx=\frac{1}{4a_1}\left(2a_1x_1\sqrt{1+(a_1x_1)^2}+\arcsin{(2a_1x_1)}\right)$$ Any ideas on how to do this more efficiently? Perhaps using the tangent lines of the parabola?
More generally, for fixed arc lengths, I guess my question really is what are the expressions of the following red curves for fixed arc lengths:
Furthermore, could this be determined for any $$f$$?
Edit: Interestingly enough, I found this clip from 3Blue1Brown. The origin point isn't fixed as in my case, but I wonder how the animation was made (couldn't find the original video, only a clip, but here's the link)
For any Mathematica enthusiasts out there, a computational implementation of the straightening effect is also being discussed here, with some applications.
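For what it is worth, here is a minimal numerical sketch of the approach above in R rather than Mathematica (integrate for the arc length, uniroot for $x_1$; the names arc_len and g are only illustrative):

arc_len <- function(a, x) integrate(function(t) sqrt(1 + (2 * a * t)^2), 0, x)$value

g <- function(a0, a1, x0) {                    # x1 with the same arc length from the origin
  L <- arc_len(a0, x0)
  uniroot(function(x1) arc_len(a1, x1) - L,
          lower = 1e-12, upper = L)$root       # the arc length exceeds x1, so x1 < L
}

g(1, 1, 2)      # sanity check: same coefficient returns x0 = 2
g(1, 0.1, 2)    # flatter parabola: x1 slides out toward the arc length L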
• It seems to me that the red curves are orthogonal to the blue curves; that means that you could find an equation for the slope of the red curve at any given point via the slope of the corresponding blue curve, which would give a (hopefully tractable) differential equation to solve for the red curve. Jul 28 at 17:57
• The innermost red curve on the blue and red graphic doesn't look right. It seems to me its topmost point should be somewhat lower down along the leftmost parabola. Jul 28 at 20:29
• @samwolfe, ah good, so my eye (perhaps) did not deceive me. Jul 28 at 20:31
• Does the animation actually preserve arc lengths? Jul 29 at 9:45
• "Determining the length of an irregular arc segment is also called rectification of a curve" (Wikipedia). So the word "straighten" you use is related to a more formal word rectify that is used when calculating (in any way) arc lengths. Jul 30 at 15:08
Phrased differently, what we want are the level curves of the function
$$\frac{1}{2}f(x,y) = \int_0^x\sqrt{1+\frac{4y^2t^2}{x^4}}\:dt = \frac{1}{2}\int_0^2 \sqrt{x^2+y^2t^2}\:dt$$
which will always be perpendicular to the gradient at that point
$$\nabla f = \int_0^2 dt\left(\frac{x}{\sqrt{x^2+y^2t^2}},\frac{yt^2}{\sqrt{x^2+y^2t^2}}\right)$$
Now is the time to naturally reintroduce $$a$$ as the parameter for these curves. Therefore what we want is to solve the differential equation
$$x'(a) = \int_0^2 \frac{-axt^2}{\sqrt{1+a^2x^2t^2}}dt \hspace{20 pt} x(0) = L$$
where we substitute $$y(a) = a\cdot x^2(a)$$, thus solving for one component automatically gives us the other.
EDIT: Further investigation has led me to some interesting conclusions. It seems like if $y=f_a(x)$ is a family of strictly monotonically increasing continuous functions and $\lim_{a\to0^+}f_a(x) = \lim_{a\to\infty}f_a^{-1}(y) = 0$
Then the curves of constant arclength will start and end at the points $$(0,L)$$ and $$(L,0)$$. Take for example the similar looking family of curves
$$y = \frac{\cosh(ax)-1}{a}\implies L = \frac{\sinh(ax)}{a}$$
The curves of constant arclength are of the form
$$\vec{r}(a) = \left(\frac{\sinh^{-1}(aL)}{a},\frac{\sqrt{1+a^2L^2}-1}{a}\right)$$
Below is a (sideways) plot of the curve of arclength $$L=1$$ (along with the family of curves evaluated at $$a=\frac{1}{2},1,2,4,$$ and $$10$$), which has an explicit equation of the form
$$x = \frac{\tanh^{-1}y}{y}\cdot(1-y^2)$$
These curves and the original family of parabolas in question both have this property, as well as the perfect circles obtained from the family $$f_a(x) = ax$$. The reason the original question was hard to tractably solve was because of the non analytically invertible arclength formula
• The brown, blue parabolas in the plot seem to have lengths more than $L= 1$. Jul 31 at 21:07
• @Narasimham they are not parabolas, nor is the scale on these plots 1:1. I tried my best to play around with the plot options but the program likes to automatically break the visual consistency Jul 31 at 22:13
• This is quite an astonishing approach. How did you come up with those parabola-like curves? Interesting that the parabola case is hard to solve, but I guess in the end it will always rely on how difficult the arclength equation is, like you said. In Mathematica we can get pretty good approximations (see linked question). Jul 31 at 22:54
• @samwolfe thank you, I appreciate it. I just tried to make use of anything where $\sqrt{1+f^2}$ simplified nicely. This only leaves a few analytic options, such as $\sinh(ax)$ and $\tan(ax)$. From there, choose the antiderivative that contains $(0,0)$ for all $a$. Aug 1 at 20:56
• @samwolfe I think that an interesting problem would be going the reverse way - if we had a curve that connects $(0,L)$ to $(L,0)$, could we find a family of curves for which they are a constant arc length away from the origin for? For example, how could one get $ax$ from the equation $x^2+y^2=L^2$ ? or any other curve for that matter Aug 1 at 21:02
$$L$$ being the known arc length, let $$x_1=\frac t{2a_1}$$ and $$k=4a_1L$$; then you need to solve for $$t$$ the equation $$k=t\sqrt{t^2+1} +\sinh ^{-1}(t)$$ A good approximation is given by $$t_0=\sqrt k$$.
Now, using a Taylor series around $$t=t_0$$ and then series reversion gives $$t_1=\sqrt{k}+z-\frac{\sqrt{k} }{2 (k+1)}z^2+\frac{(3 k-1) }{6 (k+1)^2}z^3+\frac{(13-15 k) \sqrt{k} }{24 (k+1)^3}z^4+\cdots$$ where $$z=-\frac{\sqrt{k(k+1)} +\sinh ^{-1}\left(\sqrt{k}\right)-k}{2 \sqrt{k+1}}$$
Let us try for $$k=10^n$$ $$\left( \begin{array}{ccc} n & \text{estimate} & \text{solution} \\ 0 & 0.4810185 & 0.4819447 \\ 1 & 2.7868504 & 2.7868171 \\ 2 & 9.8244940 & 9.8244940 \\ 3 & 31.549250 & 31.549250 \\ 4 & 99.971006 & 99.971006 \\ 5 & 316.21678 & 316.21678 \\ 6 & 999.99595 & 999.99595 \end{array} \right)$$ This seems to be quite decent.
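A quick R check of this estimate against a direct numerical root (asinh is base R; the helper names are only for the comparison):

exact_t  <- function(k) uniroot(function(t) t * sqrt(t^2 + 1) + asinh(t) - k,
                                lower = 0, upper = k)$root
approx_t <- function(k) {
  z <- -(sqrt(k * (k + 1)) + asinh(sqrt(k)) - k) / (2 * sqrt(k + 1))
  sqrt(k) + z - sqrt(k) / (2 * (k + 1)) * z^2 + (3 * k - 1) / (6 * (k + 1)^2) * z^3 +
    (13 - 15 * k) * sqrt(k) / (24 * (k + 1)^3) * z^4
}
k <- 10^(0:6)
cbind(k, estimate = sapply(k, approx_t), solution = sapply(k, exact_t))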
• That seems really good, numerically it would suffice to have something like that. Thanks Aug 1 at 17:08
$$\large{\text{Method 1:}}$$
Here is a recursive answer for $$g(a_0,a_1,x_0)$$. Please see this graph for result verification. You can plug in some value for $$x_1$$ at the RHS of $$x_1=g(x_1)=g(a_0,a_1,x_0)$$:
$$\mathrm{\text{Arclength from 0 to }x_1\,\big(a_0x^2\big)=Arclength\ from \ 0\ to\ x_1\ \big(a_1x^2\big),x_0+c=x_1\implies\frac{2a_0x_0\sqrt{4a_0^2x_0^2+1}+sinh^{-1}(2a_0x_0)}{4a_0}=\frac{2a_1x_1\sqrt{4a_1^2x_1^2+1}+sinh^{-1}(2a_1x_1)}{4a_1}\implies A_0(x_1)=x_1=A_1(x_1)=\frac1{2a_1}sinh \left(\sinh^{-1}(2a_0x_0)-2a_1x_0\sqrt{4a_0^2x_0^2+1}-2a_1x_1\sqrt{4a_1^2x_1^2+1} \right)}$$
Then we define and create the following recursive relation converging to $$x=x_1$$:
$$\mathrm{A_{n+1}(x_1)= \frac1{2a_1}sinh \left(\sinh^{-1}(2a_0x_0)-2a_1x_0\sqrt{4a_0^2x_0^2+1}-2a_1A_n(x_1)\sqrt{4a_1^2A_n^2(x_1)+1} \right)\implies x_1=g(a_0,a_1,x_0)=\lim_{n\to\infty}A_n(x_1)=A_\infty(x_1)= \frac1{2a_1}sinh \left(\sinh^{-1}(2a_0x_0)-2a_1x_0\sqrt{4a_0^2x_0^2+1}-2a_1(…)\sqrt{4a_1^2(…)+1} \right)\implies A_2(x_1)= \frac1{2a_1}sinh \left(\sinh^{-1}(2a_0x_0)-2a_1x_0\sqrt{4a_0^2x_0^2+1}-2a_1{\frac1{2a_1}sinh \left(\sinh^{-1}(2a_0x_0)-2a_1x_0\sqrt{4a_0^2x_0^2+1}-2a_1x_1\sqrt{4a_1^2x_1^2+1} \right)}\sqrt{4a_1^2{\frac1{2a_1}sinh \left(\sinh^{-1}(2a_0x_0)-2a_1x_0\sqrt{4a_0^2x_0^2+1}-2a_1x_1\sqrt{4a_1^2x_1^2+1} \right)}^2+1} \right)}$$ $$\large{\text{Method 2:}}$$
There is also another recursive method as seen in this other graph. This forms a horizontal line at y=$$x_1$$. There also may be another $$\pm$$ branch where the sign is chosen as needed, all + or all -. Notice the main square root argument is also a difference of squares:
$$\mathrm{\text{Arclength from 0 to }x_1\,\big(a_0x^2\big)=Arclength\ from \ 0\ to\ x_1\ \big(a_1x^2\big),x_0+c=x_1 \frac{2a_0x_0\sqrt{4a_0^2x_0^2+1}+sinh^{-1}(2a_0x_0)}{4a_0}=\frac{2a_1x_1\sqrt{4a_1^2x_1^2+1}+sinh^{-1}(2a_1x_1)}{4a_1}\implies B_0(x_1) =x_1= B_1(x_1)=\pm\frac1{2|a_1|}\sqrt{\frac1{4a^2_1x_1^2}\left(2a_1x_0\sqrt{4a_0^2x_0^2+1}+\frac{a_1}{a_0}sinh^{-1}\left(2a_0x_0\right)-sinh^{-1}\left(2a_1x_1\right)\right)^2-1}}$$ Recursive solution: $$\mathrm{ x_1=g(a_0,a_1,x_0)=\lim_{n\to\infty}B_n(x_1)=B_\infty(x_1)= \pm\frac1{2|a_1|}\sqrt{\frac1{4a^2_1(…)^2}\left(2a_1x_0\sqrt{4a_0^2x_0^2+1}+\frac{a_1}{a_0}sinh^{-1}\left(2a_0x_0\right)-sinh^{-1}\left(2a_1(…)\right)\right)^2-1},B_{n+1}(x_1)= \pm\frac1{2|a_1|}\sqrt{\frac1{4a^2_1B^2_n(x_1)}\left(2a_1x_0\sqrt{4a_0^2x_0^2+1}+\frac{a_1}{a_0}sinh^{-1}\left(2a_0x_0\right)-sinh^{-1}\left(2a_1B_n(x_1)\right)\right)^2-1}\implies B_2(x_1)= \pm\frac1{2|a_1|}\sqrt{\frac1{4a^2_1{\left(\pm\frac1{2|a_1|}\sqrt{\frac1{4a^2_1x_1^2}\left(2a_1x_0\sqrt{4a_0^2x_0^2+1}+\frac{a_1}{a_0}sinh^{-1}\left(2a_0x_0\right)-sinh^{-1}\left(2a_1x_1\right)\right)^2-1}\right)}^2}\left(2a_1x_0\sqrt{4a_0^2x_0^2+1}+\frac{a_1}{a_0}sinh^{-1}\left(2a_0x_0\right)-sinh^{-1}\left(2a_1{\pm\frac1{2|a_1|}\sqrt{\frac1{4a^2_1x_1^2}\left(2a_1x_0\sqrt{4a_0^2x_0^2+1}+\frac{a_1}{a_0}sinh^{-1}\left(2a_0x_0\right)-sinh^{-1}\left(2a_1x_1\right)\right)^2-1}}\right)\right)^2-1}}$$
$$\large{\text{Method 3:}}$$
Here is graphical proof of the solution. Now that we have eliminated the rest of the ways of solving for $$x_1$$, the last and simpler method is as follows:
$$\mathrm{\text{Arclength from 0 to }x_1\,\big(a_0x^2\big)=Arclength\ from \ 0\ to\ x_1\ \big(a_1x^2\big),x_0+c=x_1 \frac{2a_0x_0\sqrt{4a_0^2x_0^2+1}+sinh^{-1}(2a_0x_0)}{4a_0}=\frac{2a_1x_1\sqrt{4a_1^2x_1^2+1}+sinh^{-1}(2a_1x_1)}{4a_1}\implies C_0(x_1)=x_1=C_1(x_1)=\frac{2a_1x_0\sqrt{4a_0^2x_0^2+1}+\frac{a_1}{a_0}sinh^{-1}(2a_0x_0)-sinh^{-1}(2a_1x_1)} {2a_1\sqrt{4a_1^2x_1^2+1}}}$$
Recursive solution for third method which converges to $$x=x_1$$ $$\mathrm{x_1=g(a_0,a_1,x_0)=\lim_{n\to\infty}C_n(x_1)=C_\infty(x_1), C_{n+1}(x_1)= \frac{2a_1x_0\sqrt{4a_0^2x_0^2+1}+\frac{a_1}{a_0}sinh^{-1}(2a_0x_0)-sinh^{-1}(2a_1C_n(x_1))} {2a_1\sqrt{4a_1^2C^2_n(x_1)+1}} = \frac{2a_1x_0\sqrt{4a_0^2x_0^2+1}+\frac{a_1}{a_0}sinh^{-1}(2a_0x_0)-sinh^{-1}(2a_1(…))} {2a_1\sqrt{4a_1^2(…)^2+1}}\implies C_2(x_1)= \frac{2a_1x_0\sqrt{4a_0^2x_0^2+1}+\frac{a_1}{a_0}sinh^{-1}\left(2a_0x_0\right)-sinh^{-1}\left(2a_1{\frac{2a_1x_0\sqrt{4a_0^2x_0^2+1}+\frac{a_1}{a_0}sinh^{-1}(2a_0x_0)-sinh^{-1}(2a_1x_1)} {2a_1\sqrt{4a_1^2x_1^2+1}}}\right)} {2a_1\sqrt{4a_1^2\left(\frac{2a_1x_0\sqrt{4a_0^2x_0^2+1}+\frac{a_1}{a_0}sinh^{-1}(2a_0x_0)-sinh^{-1}(2a_1x_1)} {2a_1\sqrt{4a_1^2x_1^2+1}}\right)^2+1}}}$$
As you can see, $$\mathrm{g(a_0,a_1,x_0)=x_1}$$ does not have an easy form. This is like the Lambert W/product logarithm function where $$\mathrm{ \ if \ xe^x=y,\ then \ x=W(y)=ln(y)-ln(ln(y)-ln(x))=…\,}$$. We need a recursive definition for W(x) seen in this graph. Therefore, one may need to derive a new function to solve recursively for $$g(a_0,a_1,x_0)$$ and these definitions definitely work.
Note that the MathJax may render differently based on your computer. See @Tim Pederick’s solution for a Lagrange inversion theorem approach. I may work more on this. Please correct me and give me feedback!
Here's about the most efficient thing I can see is this:
Take your antiderivative (replacing the sin with a sinh) and define $$a_1 x \equiv y$$ so that
$$f(y) = a_1 * (\textrm{arc length}),$$
$$f(y) \equiv \frac{1}{2}(y \sqrt{1+y^2}+ \sinh^{-1} y).$$
$$f(y)$$ is monotonic and has some nice approximations when $$y \ll 1, y \gg 1$$. In those cases it might be possible to obtain it analytically. In general, invert it numerically. Then,
$$x = a_1^{-1} f^{-1}(a_1 * (\textrm{arc length})).$$
The utility of doing it this way is that you don't have to keep inverting a new function for each new $$a_1$$, you only have to do it once to be able to flatten any parabola you want.
• The notation $a\ast b$ often means the convolution of $a$ and $b$. I think it's clear enough from context that is not what you mean, but since the question does seem to naturally involve integrals, it may be clearer to use \cdot instead of * or \ast Jul 31 at 19:29
TLDR:
You want to solve for $$v$$ in this equation: $$v\sqrt{v^2+1} + \sinh^{-1} v = 2a_1x_0\sqrt{u^2+1} + \frac{a_1}{a_0}\sinh^{-1} u$$
And then $$x_1=\frac{v}{2a_1}$$ is your solution.
I’ve been banging away at this for hours, not because I think I can help you—much of this is new to me—but because I found it interesting. I haven’t yet worked out (an approximation to a) solution for $$v$$, though. And it looks like Claude’s answer has beaten me to it, anyway.
But here is my working, just because I can’t bear to hit Discard on this whole thing.
I don’t think I’ve ever done the length of a parabolic curve before. So while this would probably go more smoothly if done from the integral definition of arc length, I’m going to take the easy(?) way out: Wikipedia has a parabolic arc length formula. Let’s give it a go!
So we have a parabola $$f\left(x\right)=a_0 x^2$$. Wikipedia tells us that we can find the arc length, from the vertex at $$\left(0,0\right)$$ to any point $$\left(x,f\left(x\right)\right)$$ on the parabola, using these values:
• The focal length $$l$$ of the parabola; in this case, $$l=\frac{1}{4a_0}$$
• The perpendicular distance $$p$$ between the point and the axis of symmetry; in this case it’s simply $$p=x$$
Then, given $$h=\frac{p}{2}$$ and $$q=\sqrt{l^2+h^2}$$, the arc length is:
$$s=\frac{hq}{l}+l\ln\frac{h+q}{l}$$
Let’s simplify. Given that $$h=\frac{x}{2}$$
\begin{align} q &= \sqrt{\frac{1}{16a_0^2}+\frac{x^2}{4}} \\ &= \sqrt{\frac{4a_0^2x^2+1}{16a_0^2}} \\ &= l\sqrt{4a_0^2x^2+1} \end{align}
Thus:
$$s=\frac{x}{2}\sqrt{4a_0^2x^2+1}+\frac{1}{4a_0}\ln\left(2a_0x+\sqrt{4a_0^2x^2+1}\right)$$
Now, we have another parabola $$g(x)=a_1 x^2$$ such that the arc lengths of $$f\left(x_0\right)$$ and $$g\left(x_1\right)$$ are equal, i.e.:
\begin{align} \frac{x_0}{2}\sqrt{4a_0^2x_0^2+1}+\frac{1}{4a_0}\ln\left(2a_0x_0+\sqrt{4a_0^2x_0^2+1}\right) &= \frac{x_1}{2}\sqrt{4a_1^2x_1^2+1}+\frac{1}{4a_1}\ln\left(2a_1x_1+\sqrt{4a_1^2x_1^2+1}\right) \\ \therefore x_0\sqrt{4a_0^2x_0^2+1}+\frac{1}{2a_0}\ln\left(2a_0x_0+\sqrt{4a_0^2x_0^2+1}\right) &= x_1\sqrt{4a_1^2x_1^2+1}+\frac{1}{2a_1}\ln\left(2a_1x_1+\sqrt{4a_1^2x_1^2+1}\right) \end{align}
And we want to solve for $$x_1$$ in terms of $$a_0$$, $$a_1$$, and $$x_0$$. Nothing simpler! </sarc>
Let’s define $$u=2a_0x_0$$ (thus $$u^2=4a_0^2x_0^2$$) and $$v=2a_1x_1$$ (thus $$v^2=4a_1^2x_1^2$$). That shortens things to:
$$x_0\sqrt{u^2+1} + \frac{1}{2a_0}\ln\left(u+\sqrt{u^2+1}\right) = x_1\sqrt{v^2+1} + \frac{1}{2a_1}\ln\left(v+\sqrt{v^2+1}\right)$$
Now I see where $$\sinh$$ comes into the other answers! It’s because $$\sinh^{-1} x = \ln\left(x+\sqrt{x^2+1}\right)$$, so we get:
\begin{align} x_0\sqrt{u^2+1} + \frac{\sinh^{-1} u}{2a_0} &= x_1\sqrt{v^2+1} + \frac{\sinh^{-1} v}{2a_1} \\ \therefore 2a_1x_0\sqrt{u^2+1} + \frac{a_1}{a_0}\sinh^{-1} u &= 2a_1x_1\sqrt{v^2+1} + \sinh^{-1} v \\ &= v\sqrt{v^2+1} + \sinh^{-1} v \end{align}
That right-hand side does not look easy to invert. I admit, I copped out and asked Wolfram Alpha to do it. And of course it tells me, “no result found in terms of standard mathematical functions”. [sigh…]
Based on Tyma Gaidash’s answer, I went looking into the Lagrange inversion theorem. My engineering-oriented education never covered this, but I think I’ve grasped the basics of it. As I understand it, to solve $$y=f(x)$$ for $$x$$, we choose some $$z$$, such that $$f(z)$$ is defined and $$f'(z)\ne 0$$.
Let’s shorten the entire left-hand side of the equation to $$w$$, and define $$w=g\left(v\right)=v\sqrt{v^2+1} + \sinh^{-1} v$$. First, let’s find the derivative… by cheating and using Wolfram Alpha: $$g'(v)=2\sqrt{v^2+1}$$.
We need a value $$z$$ where $$g'(z)\ne 0$$. Conveniently, this derivative is nowhere zero on the reals, so that’s trivial. I think (but am not sure) that $$z$$ should approximate $$v$$, so let’s randomly assume that $$x_1\approx 1$$ and so $$v\approx 2a_1 = z$$.
Now, by the inversion theorem, the inverse function $$v=g^{-1}\left(w\right)$$ is:
\begin{align} v &= z+\sum_{n=1}^\infty \left[\frac{\left(w-g\left(z\right)\right)^n}{n!} \lim_{t\to z} \frac{d^{n-1}}{dt^{n-1}}\left(\frac{t-z}{g(t)-g(z)}\right)^n\right] \\ &= z+\left(w-g\left(z\right)\right)\lim_{t\to z}\frac{t-z}{g(t)-g(z)} + \frac{\left(w - g\left(z\right)\right)^2}{2} \lim_{t\to z}\frac{d}{dt} \left(\frac{t-z}{g(t)-g(z)}\right)^2 + \cdots \end{align}
Doesn’t that first limit look just like the reciprocal of the derivative? So the second term of the series becomes $$\frac{w-g\left(z\right)}{2\sqrt{z^2+1}}$$.
And I’ve been working on this for way too long now, so that’s where I’ll stop for the night.
• @TymaGaidash: Those are meant to be initial values, not restrictions—like choosing initial values in Newton's method. And I don’t know if that’s even correct! (Claude’s answer doesn’t appear to have anything like that, but it seems to be the same approach.) But yes, the aim is ultimately to have a solution (or at least an approximate one, from the first few terms of the series) for $v$, at which point the simplifications $v$, $w$ etc. can perhaps be dropped. (Or maybe it’ll end up being clearer to leave them in?) Jul 30 at 8:53
• Optional suggestion. Also, there is a way to find the coefficients for any function at all as seen in the Bell polynomial section. It is below the main function you used. You just need a series expansion for the function you want to take the inverse of. Jul 31 at 19:13
For a parabola with parametrization
$$x= at,y= a t^2/2 ;\;x^2= 2 a y \;; \text{slope} \;t=\tan \phi; \tag1$$
Differentiate $$x^2$$, primed wrt arc length
$$2 x \cos \phi = 2 a \sin \phi ;\; x= a \tan \phi ;\;x'= \cos \phi= a \sec^2 \phi \;\phi' \tag 2$$
from which comes the curvature
$$a \phi'=a \kappa=\cos^3 \phi \tag 3$$
An easier/direct way out is by direct numerical integration of the ODE in (3). A fraction k of the maximum arc length can be set as a parameter for the required integrands (k = 2/3 in this particular case), making the subset parabolas flatter or deeper by adjusting $$a$$.
Total length on one side is a given constant $$L$$
$$L = \int _0^{\phi_m} \frac{ d \phi}{\kappa} ; \text{ now plug in curvature from(3) and integrate }$$
$$\frac{L}{a}=\int _0^{\phi_m} \sec^3 \phi\; d\phi = \frac12\bigg[\log\bigg(\tan\bigg(\frac{\pi}{4} + \frac{\phi_m}{2} \bigg)\bigg) +\sec \phi_m \tan \phi_m \bigg]\tag 4$$
Plug in from (2) $$x_{m}=a \tan \phi_m$$ $$\frac{2L}{a}= \log\left(\tan\left(\frac{\pi}{4} + \frac{\tan^{-1}(x_m/a)}{2}\right)\right)+ (x_m/a) \sqrt{(x_m/a)^2+1} \tag 5$$
which is a neat implicit function $$f(a,x_m,L)$$ , plotted assuming an arm of parabola has given arc length $$=1.8,$$ on Mathematica, enabling flatter or deeper parabola plots.
It has become clear that there are two criteria for equal parabolic arc lengths connecting $$x_{max}$$ to $$\text { a = 2* focal-length }$$.
EDIT 1/2:
To come out of the apparent dilemma I have carefully calculated/plotted special cases for $$( x_{max},a)$$ combinations:
$$(0.4,0.0458),(0.8,0.1948), (1.2,0.4704),(1.6,0.8874)(2.0, 1.42264),(2.4,2.0101),(2.8,2.5825)$$
All arcs are of the same length, but for $$( x_{max} =2.0,2.4,2.8)$$ the choice of $a$ seems to have switched over to the second criterion. The relation $$(x_{max},a)$$ is not unique by the first plot...it is now examined further. | 2021-09-16T10:54:02 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/4209381/how-to-straighten-a-parabola",
"openwebmath_score": 0.9760925769805908,
"openwebmath_perplexity": 382.87353351995375,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9891815507092832,
"lm_q2_score": 0.8539127510928476,
"lm_q1q2_score": 0.8446747392964531
} |
https://gmatclub.com/forum/algebra-tips-and-hints-175003.html |
# Algebra: Tips and hints
Math Expert
Joined: 02 Sep 2009
Posts: 60644
16 Jul 2014, 11:30
Algebra: Tips and hints
! This post is a part of the Quant Tips and Hints by Topic Directory focusing on Quant topics and providing examples of how to approach them. Most of the questions are above average difficulty.
Algebraic Identities
1. $$(x+y)^2=x^2+y^2+2xy$$
2. $$(x-y)^2=x^2+y^2-2xy$$
3. $$x^2-y^2=(x+y)(x-y)$$
4. $$(x+y)^2-(x-y)^2=4xy$$
5. $$x^3+y^3=(x+y)(x^2+y^2-xy)$$
6. $$x^3-y^3=(x-y)(x^2+y^2+xy)$$
The general form of a quadratic equation is $$ax^2+bx+c=0$$. Its roots are:
$$x_1=\frac{-b-\sqrt{b^2-4ac}}{2a}$$ and $$x_2=\frac{-b+\sqrt{b^2-4ac}}{2a}$$
The expression $$b^2-4ac$$ is called the discriminant:
• If the discriminant is positive, the quadratic has two roots;
• If the discriminant is negative, the quadratic has no real root;
• If the discriminant is zero, the quadratic has one root.
When graphed, the quadratic expression $$ax^2+bx+c$$ gives a parabola:
• The larger the absolute value of $$a$$, the steeper (or thinner) the parabola is, since the value of y is increased more quickly.
• If $$a$$ is positive, the parabola opens upward, if negative, the parabola opens downward.
Viete's theorem
Viete's theorem states that for the roots $$x_1$$ and $$x_2$$ of a quadratic equation $$ax^2+bx+c=0$$:
$$x_1+x_2=\frac{-b}{a}$$ AND $$x_1*x_2=\frac{c}{a}$$.
Common mistake to avoid
Never divide an equation by a variable (or an expression containing a variable) unless you are certain that it is not zero. We cannot divide by zero.
For example, $$xy=y$$ cannot be reduced by $$y$$ because $$y$$ could be 0 and we cannot divide by 0. If we do, we'll lose one of the solutions. The correct way is: $$xy=y$$ --> $$xy-y=0$$ --> $$y(x-1)=0$$ --> $$y=0$$ or $$x=1$$.
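A small R illustration of the formulas above (coefficients chosen arbitrarily):

a <- 2; b <- -7; c <- 3
disc  <- b^2 - 4 * a * c                   # positive, so two roots
roots <- c((-b - sqrt(disc)) / (2 * a),
           (-b + sqrt(disc)) / (2 * a))
roots                      # 0.5 and 3
sum(roots);  -b / a        # both equal 3.5  (Viete)
prod(roots);  c / a        # both equal 1.5  (Viete)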
This week's PS question
This week's DS Question
Theory on Algebra: algebra-101576.html
DS Algebra Questions to practice: search.php?search_id=tag&tag_id=29
PS Algebra Questions to practice: search.php?search_id=tag&tag_id=50
Special algebra set: new-algebra-set-149349.html
Please share your Algebra tips below and get kudos point. Thank you.
_________________
Manager
Joined: 20 Dec 2011
Posts: 72
Re: Algebra: Tips and hints [#permalink]
17 Jul 2014, 14:28
Bunuel wrote:
Algebraic Identities
3. $$x^2-y^2=(x+y)(x-y)$$
Rule 3 is especially useful on GMAT. Sometimes it is obvious, as in PS 117 in OG 13: if-n-3-8-2-8-which-of-the-following-is-not-a-factor-of-n-132874.html
but sometimes it will be hidden, as in PS 199 in OG 13: topic-137149.html
In other words, if you are stuck and you see anything that might be expressed as "[perfect square] - [perfect square]", see if this can help you.
Senior Manager
Status: love the club...
Joined: 24 Mar 2015
Posts: 260
Updated on: 20 Sep 2017, 10:34
Bunuel wrote:
[the algebra tips post above, quoted in full]
hi man
great post, thank you!
I want to understand the parabola part better. Please help me with a few things:
1. When graphed, the quadratic expression ax^2 + bx + c gives a parabola: perhaps on the graph you plotted a, b, and c; please tell me which one is which?
2. The larger the absolute value of a, the steeper (or thinner) the parabola is, since the value of y increases more quickly: please shed some light on this concept.
3. If a is positive the parabola opens upward, and if negative it opens downward: please shed some light on this too.
Maybe these are very obvious, but I need some clarification and your help.
Originally posted by testcracker on 20 Sep 2017, 10:10.
Last edited by testcracker on 20 Sep 2017, 10:34, edited 1 time in total.
Math Expert
Joined: 02 Sep 2009
Posts: 60644
Re: Algebra: Tips and hints [#permalink]
20 Sep 2017, 10:15
gmatcracker2017 wrote:
[the algebra tips post and the three questions above, quoted in full]
Senior Manager
Status: love the club...
Joined: 24 Mar 2015
Posts: 260
Re: Algebra: Tips and hints [#permalink]
20 Sep 2017, 14:29
Bunuel wrote:
[the previous exchange, quoted in full]
hi man
thanks a lot
I have just visited the site. It is really awesome and very didactic.
thanks again, man
| 2020-01-24T11:01:17 | {
"domain": "gmatclub.com",
"url": "https://gmatclub.com/forum/algebra-tips-and-hints-175003.html",
"openwebmath_score": 0.6687901020050049,
"openwebmath_perplexity": 1707.7946781503574,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.9196425311777929,
"lm_q2_score": 0.9184802518352773,
"lm_q1q2_score": 0.844673503634611
} |
https://math.stackexchange.com/questions/3652791/are-there-functions-that-can-t-be-linearly-locally-approximated/3652809 | # Are there functions that can’t be linearly/locally approximated?
We always speak of the derivative as being the “best linear approximation”. And we also speak of linearizing. However, what does this really mean? For a given function $$F$$, what conditions on it make the claim “the derivative is the best linear approximation to $$F$$” true?
Are there functions that can’t be “locally linear” or locally approximated? If so, are these just mostly pathological, and we don’t have any interest in them (e.g. they don’t really show up in math)?
Are there important functions or mathematical objects that don’t really subject themselves well to the tools of analysis and approximation? (I understand this is a very broad and vague question.) I mean, there may be mathematical objects that we don’t know whether they are amenable to such efforts, but are there (important) objects where we are sure they definitely aren’t? Sort of like how abstract algebra/Galois theory showed the limitations of using radicals, giving rise to the notion of unsolvability?
• Are you asking about non-differentiable functions? May 1, 2020 at 1:20
• the concept of a differentiable function is the formalisation of the intuitive idea of "locally well approximated by a linear function" and really the whole of differential calculus is devoted to approximating by a linear function/space etc. Take a look at math.stackexchange.com/a/3650987/568204 this recent answer of mine. So unless you have a specifically different meaning of "local linear approximation" in mind, the relevant concept is that of a non-differentiable function. And there are plenty of those (for example the absolute value function on the real line). May 1, 2020 at 1:30
• @TheoreticalEconomist I suppose that's the obvious answer lol, sorry I know my question was vague. I was just thinking about the general concept of approximating nonlinear functions by linear functions, and I was wondering why this is valid if many nonlinear functions may not be differentiable? May 1, 2020 at 3:38
• @peek-a-boo Thanks! I can see how that is true for differentiable functions May 1, 2020 at 3:41
However, what does this really mean? For a given function F, what conditions on it make the claim “the derivative is the best linear approximation to F” true?
It is unconditionally true. It is well-explained in the link of the comment.
Are there functions that can’t be “locally linear” or locally approximated?
Yes, a lot of them.
If so, are these just mostly pathological, and we don’t have any interest in them (e.g. they don’t really show up in math)?
No, they show up again and again in math, physics, everywhere.
Are there important functions or mathematical objects that don’t really subject themselves well to the tools of analysis and approximation?
Just consider the Euclidean norm function: $$f(x)=\sqrt{x_1^2+x_2^2+\cdots+x_n^2}$$. This is not differentiable at the origin (i.e. it cannot be linearly approximated there). Without it, you would not even be able to talk about the distance between two given points. Some other nontrivial examples are the Heaviside step function (which is not even continuous) and the Dirac delta function (actually not even a function! If you are interested, look up distribution theory).
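A tiny numerical sketch in R of the one-dimensional case $f(x)=|x|$: the one-sided difference quotients at the origin do not agree, so no single linear map approximates it there.

h <- 10^-(1:5)
(abs(0 + h) - abs(0)) / h       # +1 from the right
(abs(0 - h) - abs(0)) / (-h)    # -1 from the left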
• Adding on to this great answer, there are also continuous functions which are nowhere differentiable (e.g. the Weierstrass function, wikiwand.com/en/Weierstrass_function). However, these do tend to be more pathological May 1, 2020 at 2:30
• Probably the simplest example of a non-differentiable but otherwise nice function is $x\mapsto|x|$, the $n=1$ case of the Euclidean norm in this answer. It has no linear approximation near $0$. May 1, 2020 at 2:35 | 2022-10-06T00:12:06 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3652791/are-there-functions-that-can-t-be-linearly-locally-approximated/3652809",
"openwebmath_score": 0.687492311000824,
"openwebmath_perplexity": 351.8877330391752,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575157745542,
"lm_q2_score": 0.8596637559030338,
"lm_q1q2_score": 0.8446690844015077
} |
https://mathoverflow.net/questions/262203/inferring-tree-graph-from-distance-matrix/262229#262229 | # Inferring tree graph from distance matrix
Given a $n$x$n$ distance matrix of some undirected weighted tree graph, is it possible to infer the underlying tree and its edge weights?
For example, suppose we are given the following distance matrix \begin{pmatrix} 0 & 1 & 4 & 5 & 6 \\ 1 & 0 & 3 & 4 & 5 \\ 4 & 3 & 0 & 1 & 2 \\ 5 & 4 & 1 & 0 & 3 \\ 6 & 5 & 2 & 3 & 0 \end{pmatrix} Assuming strictly positive weights, we know that in each row every minimum corresponds to an edge and its weight, i.e. $1 \leftrightarrow 2$, $3\leftrightarrow 4$ and $3 \leftrightarrow 5$. From there, it's easy to determine that the underlying weighted adjacency graph is given by \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 3 & 0 & 0 \\ 0 & 3 & 0 & 1 & 2 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \end{pmatrix} How to solve this problem in general for strictly positive weights?
• This article may be useful : finmath.stanford.edu/~susan/papers/lap.pdf Feb 14 '17 at 20:46
• By general do you mean that the weights can be negative, or do you just want a procedure for a general tree with non negative weights? I couldn't solve the first one, but the second one is easy. Feb 14 '17 at 22:09
• I meant for strictly positive weights. I edited the question to clarify this. However, the other problem is also an interesting one.
– MthQ
Feb 15 '17 at 14:19
The following greedy algorithm should reconstruct the tree corresponding to a given distance matrix $M$, assuming it exists.
Beforehand, one must show that $T$, if it exists, is unique, but unless you insist I will skip these details (informally, you can use induction on $n$: from a tree $T$ for $M$, remove a leaf $l$, use induction to get the unique tree for $M$ with the $l$ row/column removed, then show that there is only one unique place to plug back $l$).
Start with $F$ as the forest on $n$ vertices and no edge.
While $F$ is not a tree:
Choose $u$ and $v$ such that $u$ and $v$ are in two different connected components of $F$ and $M_{u, v}$ is minimum among all possible choices.
Add $uv$ to $F$ and set its weight to $M_{u, v}$
Suppose that we do not obtain a tree with distance matrix $M$ after running the algorithm, but that such a tree $T$ exists.
Let $uv$ be the first inserted (weighted) edge such that $uv$ does not belong to $T$ (observe that if $uv$ belongs to $T$, its weight must be correct). Let $F$ be the forest obtained from the algorithm before inserting $uv$, and for a vertex $x$, denote by $C(x)$ the connected component of $F$ containing $x$. Note that $C(u) \neq C(v)$.
Now, let $z$ be the neighbour of $u$ on the path from $u$ to $v$ in $T$. We have $d(u, v) = d(u, z) + d(v, z)$, implying $d(v, z) < d(u, v)$ (assuming strictly positive weights). If $C(v) \neq C(z)$, the algorithm would have chosen the $vz$ edge instead of $uv$, so assume $C(v) = C(z)$. But then, $C(u) \neq C(z)$ and $d(u, z) < d(u, v)$, again contradicting the choice of the algorithm. Therefore the $uv$ edge is correct.
Of course, there is no guarantee that the algorithm reflects the distances of $M$, but if that's the case, it means that no tree exists for $M$.
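For concreteness, here is a short R sketch of this greedy (Kruskal-style) procedure run on the 5x5 matrix from the question; the variable names are only illustrative.

D <- matrix(c(0,1,4,5,6,
              1,0,3,4,5,
              4,3,0,1,2,
              5,4,1,0,3,
              6,5,2,3,0), nrow = 5, byrow = TRUE)
comp  <- seq_len(nrow(D))                        # component label of each vertex
pairs <- which(upper.tri(D), arr.ind = TRUE)
pairs <- pairs[order(D[pairs]), , drop = FALSE]  # candidate edges, sorted by distance
edges <- NULL
for (i in seq_len(nrow(pairs))) {
  u <- pairs[i, 1]; v <- pairs[i, 2]
  if (comp[u] != comp[v]) {                      # different components: keep the edge
    edges <- rbind(edges, c(u, v, D[u, v]))
    comp[comp == comp[v]] <- comp[u]             # merge the two components
  }
}
colnames(edges) <- c("u", "v", "weight")
edges   # recovers edges 1-2 (1), 3-4 (1), 3-5 (2), 2-3 (3), matching the question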
• Your algorithm is exactly Kruskal's algorithm for minimum spanning tree. Feb 15 '17 at 2:27
• One of the comments made me wonder about general weights. Do you easily see any uniqueness issues if we allow for negative weights?
– MthQ
Feb 15 '17 at 14:43
• @BrendanMcKay Indeed, this is Kruskal. So another view of the algorithm is to make a complete graph with edge weights defined by $M$, then run Kruskal on the graph. As a bonus question, would any MST algorithm on this graph reconstruct $T$? Feb 15 '17 at 16:08
• @MthQ: yes, there are uniqueness issues. For instance, if all weights are zero, then any tree will do. If zero weights are allowed, but not negative weights, I think the algorithm still works and the proof can be adapted. But with negative weights, I'm not sure. Feb 15 '17 at 16:19
• Sorry, I cannot reconstruct the uniqueness proof from your sketch. How do you tell a concrete vertex is a leaf? OTOH, it seems that the subsequent text PROVES the uniqueness, since the algorithm indeed determines the tree step by step. Apr 16 '17 at 6:08
"domain": "mathoverflow.net",
"url": "https://mathoverflow.net/questions/262203/inferring-tree-graph-from-distance-matrix/262229#262229",
"openwebmath_score": 0.9927756786346436,
"openwebmath_perplexity": 118.37130055614958,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575121992375,
"lm_q2_score": 0.8596637541053281,
"lm_q1q2_score": 0.8446690795615882
} |
https://math.stackexchange.com/questions/753960/convergence-of-alternating-harmonic-series-where-sign-is-etc | # Convergence of “alternating” harmonic series where sign is +, --, +++, ----, etc.
Exercise 11 from section 9.3 of Introduction to Real Analysis (Bartle):
Can Dirichlet’s Test be applied to establish the convergence of $$1 - \dfrac12 - \dfrac13 + \dfrac14 + \dfrac15 + \dfrac16 - \cdots$$ $\qquad \qquad$ where the number of signs increases by one in each ‘‘block’’? If not, use another method to establish the convergence of this series.
Dirichlet's test cannot be used because the partial sums generated by (1, -1, -1, 1, 1, 1, ...) are not bounded. But we can group the terms of the series in the following way:
$$1 - \left(\dfrac12 + \dfrac13\right) + \left(\dfrac14 + \dfrac15 + \dfrac16\right) - \left( \dfrac17 + \dfrac18 + \dfrac19 + \dfrac{1}{10} \right) + \cdots \\ = \sum _{n=1}^{\infty}(-1)^{n+1}a_n$$
where
$$(a_n) = \left(1, \left(\dfrac12 + \dfrac13\right), \left(\dfrac14 + \dfrac15 + \dfrac16\right), ... \right)$$
So by Leibniz's test, if the sequence $(a_n)$ is decreasing and $\lim{a_n} = 0$ then the grouped series is convergent. I've shown that since we are grouping terms of the same sign it is sufficient to show the convergence of the grouped series. I've shown that $\lim{a_n} = 0$, but how do I show that $(a_n)$ is decreasing?
• try to generalize: $1/4>1/7+1/30$, $1/5>1/8+1/30$, $1/6>1/9+1/30$ (where $1/30=(1/10)/3$) – user8268 Apr 14 '14 at 21:56
• @user8268 I generalized for the case of 1/4 by using a common denominator but it was complicated and I don't know how to extend it to the rest of the terms in each sum as you suggested. – Simon Hunt Apr 15 '14 at 3:38
Note that $a_n = \sum_{k=n(n-1)/2+1}^{n(n+1)/2} \frac1k$. In particular, since $\frac1x$ is decreasing, $$\int_{n(n-1)/2+1}^{n(n+1)/2+1} \frac{dx}x < a_n < \int_{n(n-1)/2}^{n(n+1)/2} \frac{dx}x,$$ or $$\log\frac{n^2+n+2}{n^2-n+2} < a_n < \log\frac{n+1}{n-1}.$$ In particular, $$a_n-a_{n+1} > \log\frac{n^2+n+2}{n^2-n+2} - \log\frac{n+2}n = \log\bigg( 1+\frac{2(n-2)}{n^3+n^2+4} \bigg) \ge0$$ for $n\ge2$.
(In fact, the estimate $|a_{2n-1}-a_{2n}|<\frac1{n^2}$ would suffice to establish convergence, regardless of whether the $a_n$ are decreasing.)
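A quick numerical sanity check of these estimates in R for small $n$:

a_n <- function(n) sum(1 / ((n * (n - 1) / 2 + 1):(n * (n + 1) / 2)))   # n-th block sum
n <- 2:50
a <- sapply(n, a_n)
all(diff(a) < 0)                                                          # the a_n decrease
all(a > log((n^2 + n + 2) / (n^2 - n + 2)) & a < log((n + 1) / (n - 1)))  # both bounds hold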
• Regarding your last comment, how do you get the inequality |a_2n-1 - a_2n| < 1/n^2? – Simon Hunt Apr 15 '14 at 3:34
• This method plus $\log(1+y)<y$ does it (up to a constant), or something like user8268's comment ... point being, this is an alternate way to approach the problem - concentrating on the size of these terms rather than their sign. – Greg Martin Apr 15 '14 at 4:25
Although Dirichlet's test per se does not apply, it seems like a Good Thing to note that the proof of Dirichlet does apply. Sum by parts and you're done.
In more detail: Let $(\epsilon_j)$ be the sequence of plus and minus ones, so we're considering convergence of $$\sum_j\frac{\epsilon_j}{j}.$$ Let $\sigma_n=\sum_{j=1}^n\epsilon_j$. Dirichlet does not apply because $\sigma_n$ is not bounded. But it's not hard to see that $$|\sigma_n|\le c\sqrt n.$$ (After $N$ "blocks" of ones and minus ones we have $|\sigma_n|\le c N$. But after $N$ blocks we have $n\sim N^2$.)
So sum by parts https://en.wikipedia.org/wiki/Summation_by_parts :
$$\sum_{j=1}^n\frac{\epsilon_j}{j}=\frac{\sigma_n}{n}-\sum_{j=1}^{n-1}\sigma_j \left(\frac1{j+1}-\frac1j\right).$$ Since $\sqrt n/n\to0$ and $\sum\sqrt j/j^2<\infty$ the sum converges.
Moral Proofs of theorems are even better than theorems.
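A short numerical illustration of both points in R: the $\sqrt n$ growth of $\sigma_n$ and the settling of the partial sums.

Nblocks <- 200
sgn <- rep((-1)^(seq_len(Nblocks) + 1), times = seq_len(Nblocks))  # +, --, +++, ----, ...
k   <- seq_along(sgn)
max(abs(cumsum(sgn)) / sqrt(k))    # stays bounded, consistent with |sigma_n| <= c*sqrt(n)
tail(cumsum(sgn / k), 3)           # partial sums of the series barely move by now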
• I am trying to understand your argument. What is '$c$' in $|\sigma_n|\le c\sqrt n.$? – Error 404 Mar 2 '17 at 16:36
• @VikrantDesai $c$ is some constant, the value of which doesn't matter. Say $T_k=1+2+\dots+k$. Then $T_k\sim k^2/2$. It's clear that $|\sigma_{T_k}|\le ck$. Given $n$, choose $k$ with $T_k\le n<T_{k+1}$. Then $|\sigma_n-\sigma_{T_k}|\le k$, hence $|\sigma_n|\le k+ck\le c\sqrt n$. (Here we used the standard convention that the letter "$c$" refers to different constants in each occurrence.) – David C. Ullrich Mar 4 '17 at 15:20
• (+1)Thanks for your reply. If we take $c=2$ or greater than $2$ then won't it be safe? or $c$ depends on $N$, the number of blocks of ones and minus ones? Sorry for my late response. – Error 404 Mar 6 '17 at 8:44
https://math.stackexchange.com/questions/2825945/if-abc-divides-the-product-abc-then-is-a-b-c-a-pythagorean-triple | # If $a+b+c$ divides the product $abc$, then is $(a,b,c)$ a Pythagorean Triple?
Firstly, I will define what Pythagorean Triples are for those who do not know.
Definition:
A Pythagorean Triple is a group of three integers $a$, $b$ and $c$ such that $a^2+b^2=c^2$, since the Pythagorean Theorem asserts that for any $90^\circ$ (right-angle) triangle $ABC$ with sides $a$, $b$ and $c$, one will always have the equation, $a^2+b^2=c^2$.
I was looking at Pythagorean Triples and noticed another property apart from how $a^2+b^2=c^2$. Here are the first $30$ Pythagorean Triples $(a,b,c)$ ordered from smallest to greatest value, i.e. $$(a,b,c)\qquad\text{ s.t. }\qquad a<b<c.\tag*{\big(\text{s.t. = such that}\big)}$$
I noticed that $a^2=(c+b)(c-b)$, but that is trivial since \begin{align}a^2&=(c+b)(c-b)\tag{given} \\ &=c^2-b^2 \\ \Leftrightarrow\,\,\,\, a^2+b^2&=c^2.\end{align}
However, I also noticed that by having "$u\mid v$" be read as "$u$ divides $v$", it appears that $$a+b+c\mid abc.$$ For example, $(a,b,c)=(3,4,5)$ is a classic Pythagorean Triple; $3^2+4^2=5^2$.
Also, \begin{align}3+4+5&=12 \\ \& \quad3\times 4\times 5 &= 60. \\ \\ 12 &\,\mid 60 \\ \Leftrightarrow \,\,\,\,3+4+5&\,\mid 3\times 4\times 5.\end{align} This, I cannot prove to be true $-$ but I tested with all the $30$ Pythagorean Triples above, and I have come across no counter-example. Is there a proof? I do not know where to begin myself.
Conjecture:
Given three positive integers $a$, $b$ and $c$, if $a < b<c$ and $a^2+b^2=c^2$, then $$a+b+c\mid abc.$$
Edit:
My conjecture was originally the other way round; i.e. if $a+b+c\mid abc$ then $a^2+b^2=c^2$. But $6$ is a counter-example, namely because it is a Perfect Number.
• Take $(a,b,c)=(1,2,3)$ in the conjecture. Then $a+b+c=6$ divides $abc=6$, but $1^2+2^2\neq 3^2$. – Dietrich Burde Jun 20 '18 at 11:18
• Wow that is a counter-example! Looks like I have to restate my conjecture :) – user477343 Jun 20 '18 at 11:21
• @user477343 perhaps you want the converse; in your example you've taken a Pythagorean triple and verified that the desired property holds. – ÍgjøgnumMeg Jun 20 '18 at 11:22
• Newton's polynomials could designate different things: – Pagode Jun 20 '18 at 11:29
You actually want it the other way around: if $a^2+b^2=c^2$ then $a+b+c|abc$. That you can prove very quickly from the general form of primitive Pythagorean triples $(a,b,c)=(m^2-n^2,2mn,m^2+n^2)$.
• \begin{align}a+b+c&=(m^2-n^2)+(2mn)+(m^2+n^2)\\ &=(m^2+m^2)+2mn+(n^2-n^2) \\ &=2m^2+2mn \\ &=2m(m+n).\end{align} And now $abc=2mn(m^4-n^4)$ so I must show that $m+n\mid n(m^4-n^4)$ which means $n\mid m$. But $m$ and $n$ are arbitrary. Am I doing something wrong? – user477343 Jun 20 '18 at 11:30
• $x^4-y^4 = (x^2-y^2)(x^2+y^2) = (x-y)(x+y)(x ^2+y^2)$ – Stefan Jun 20 '18 at 11:38
• $a+b+c=2m^2+2mn=2m(m+n)$, while $abc={\bf 2m}n(m^2+n^2)(m-n){\bf (m+n)}$. – Berci Jun 20 '18 at 11:40
• @Stefan thank you very much. I was able to put to-and-to together :) – user477343 Jun 20 '18 at 11:42
• Congratulations, Michael! You have a tick! $$\color{green}{\checkmark} \ \ (+1)$$ – user477343 Jun 20 '18 at 11:43
Alternatively: $$\frac{abc}{a+b+c}=\frac{abc(a+b-c)}{(a+b+c)(a+b-c)}=\frac{abc(a+b-c)}{2ab}=\frac{c(a+b-c)}{2},$$ which is a positive integer for both cases: $a,b,c$ are all even; $a,c$ are odd and $b$ is even.
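As a small brute-force confirmation (an addition, not from the answers; it assumes Python), both the divisibility and the identity $abc/(a+b+c)=c(a+b-c)/2$ can be checked over triples generated from the parametrization quoted above:

```python
# For triples (m^2 - n^2, 2mn, m^2 + n^2), check that a+b+c divides abc and that
# the quotient equals c*(a+b-c)/2.
for m in range(2, 30):
    for n in range(1, m):
        a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
        s, p = a + b + c, a * b * c
        assert p % s == 0
        assert p // s == c * (a + b - c) // 2
print("checked all triples with m < 30")
```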
• I am glad you used a conjugate method. I only use those to rationalise denominators. I did not know you could use that technique in this case. $(+1)$. There is, however, also another case, where $c$ is even and $a+b>c$, such that $a$ and $b$ are both odd or even. – user477343 Jun 20 '18 at 12:13
• It follows that $a+b+c\mid ab$. – g.kov Jun 20 '18 at 17:26
• Also, the integer factor $\tfrac12(a+b-c)=r$ is the radius of inscribed circle. – g.kov Jun 20 '18 at 17:35
• @g.kov I knew the first part... but not the second. Is the inscribed circle the circle drawn inside the triangle such that the circumference is just touching the legs and hypotenuse? I assume I understand what "inscribed" means, hahah :) – user477343 Jun 20 '18 at 22:13
https://documen.tv/question/answer-the-following-questions-for-a-mass-that-is-hanging-on-a-spring-and-oscillating-up-and-dow-14975762-8/ | ## Answer the following questions for a mass that is hanging on a spring and oscillating up and down with simple harmonic motion. Note: the osc
Question
Answer the following questions for a mass that is hanging on a spring and oscillating up and down with simple harmonic motion. Note: the oscillation is small enough that the spring stays stretched beyond its rest length the entire time. (top, equilibrium, bottom, top & bottom, or nowhere)
1. Where in the motion is the acceleration zero?
2. Where in the motion is the magnitude of the acceleration a maximum?
3. Where in the motion is the magnitude of the net force on the mass a maximum?
4. Where in the motion is the magnitude of the force from the spring on the mass zero?
5. Where in the motion is the magnitude of the force from the spring on the mass a maximum?
6. Where in the motion is the speed zero?
7. Where in the motion is the magnitude of the net force on the mass zero?
8. Where in the motion is the speed a maximum?
Yes or no for the next two.
1. When the object is at half its amplitude from equilibrium, is its speed half its maximum speed?
2. When the object is at half its amplitude from equilibrium, is the magnitude of its acceleration at half its maximum value?
1. equilibrium
2. bottom
3. bottom
4. nowhere
5. bottom
6. top & bottom
7. equilibrium
8. equilibrium
1. No
2. Yes
Explanation:
According to the equation of motion for SHM, x(t) = A sin(ωt + φ),
where A is the amplitude, ω is the angular frequency, and φ is the phase angle.
Furthermore, the velocity and acceleration functions are v(t) = Aω cos(ωt + φ) and a(t) = −Aω² sin(ωt + φ).
1. The acceleration is zero at the equilibrium. At the equilibrium, the net force on the object is zero. And according to Newton’s Second Law, if the net force is zero, then the acceleration is zero as well.
2. The forces on the object in a vertical spring are the weight of the object and the spring force.
F = mg - kx
Since mg is constant along the motion, then the net force is maximum at the amplitude. For the special case in this question, the mass is always below the rest length of the spring. So the net force is maximum at the lower amplitude, because x is greater in magnitude at the lower amplitude. According to Newton’s Second Law, acceleration is proportional to the net force, hence the acceleration is at a maximum at the bottom.
3. As explained above, the magnitude of the net force is at a maximum at the lower amplitude, that is bottom.
4. The spring force is defined by Hooke’s Law: F = -kx. Since the oscillation is small enough so that the mass is always below the rest length of the spring, then x is always greater than zero, hence nowhere in the motion will the spring force becomes zero.
5. As explained above, the force of gravity is constant and the spring force is proportional to the displacement, x. Therefore, the spring force is at a maximum at the lower amplitude, that is bottom.
6. The speed is zero when the mass is instantaneously at rest, that is the amplitude.
7. The net force on the mass is zero at the equilibrium.
8. The speed is at a maximum at the equilibrium.
1. We will use the equations of motion given above. For simplicity, let's take φ = 0. At half its amplitude, x = A/2, so sin(ωt) = 1/2 and cos(ωt) = √3/2.
Then the velocity at that point is v = Aω cos(ωt) = (√3/2)Aω.
The maximum speed is where the acceleration is equal to zero: v_max = Aω.
Comparing the maximum velocity to the velocity at A/2 yields that it is not half the maximum velocity: |v| = (√3/2)v_max ≈ 0.87 v_max.
2. The maximum acceleration is at the amplitude: a_max = Aω².
And the acceleration at A/2 has magnitude |a| = ω²(A/2) = a_max/2.
Comparing these two results yields that the acceleration at half the amplitude is half the maximum acceleration.
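A short numerical check of those last two comparisons (an added sketch, not part of the original answer; it assumes Python with NumPy and the x(t) = A sin(ωt) model quoted above, with arbitrary A and ω):

```python
# Sample one full period of x = A sin(w t): at the point closest to x = A/2 the
# speed is ~sqrt(3)/2 of its maximum, while |acceleration| is ~1/2 of its maximum.
import numpy as np

A, w = 1.0, 2.0
t = np.linspace(0.0, 2 * np.pi / w, 200_001)
x = A * np.sin(w * t)
v = A * w * np.cos(w * t)
acc = -A * w**2 * np.sin(w * t)

i = np.argmin(np.abs(x - A / 2))
print(abs(v[i]) / np.max(np.abs(v)))      # ~0.866, i.e. not one half
print(abs(acc[i]) / np.max(np.abs(acc)))  # ~0.5
```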
https://math.stackexchange.com/questions/2382832/suppose-that-s-n-s-leq-t-n-for-large-n-and-lim-n-to-infty-t-n-0 | # Suppose that $|S_n - S| \leq t_n$ for large $n$ and $\lim_{n \to \infty} t_n =0$. Show that $\lim_{n \to \infty} S_n =S$.
Suppose that $|S_n - S| \leq t_n$ for large $n$ and $\lim\limits_{n \rightarrow \infty} t_n =0$. Show that $\lim\limits_{n \rightarrow \infty} S_n =S$.
As the distance between $S_n$ and the finite number $S$ is bounded above by $t_n$. When $n \rightarrow \infty$ and $t_n$ converges to $0$, the distance $|S_n - S|$ has to converge to zero.
$$\lim\limits_{n \rightarrow \infty} |S_n - S|=0$$
As the distance is null
$$S_n - S=0$$
As $S$ is finite, it follows that, when $n$ is large, $S_n =S$
Question:
Is my argumentation appropriate/correct? How would you show this?
• No, it isn't correct. $S_n\to S$ doesn't mean $S_n=S$ for $n$ large. Note that $1/n\to 0$ but $1/n\ne 0,\forall n\in\mathbb{N}.$ – mfl Aug 4 '17 at 21:14
Your argument might fail because it implicitly assumes that there exists $N$ such that $S_n=S$ for $n\geq N$. For a counter example, we could take $S_n = 1/n^2$, and $t_n = 1/n$, then both $(S_n)$ and $(t_n)$ converge towards $0$ and $|S_n| \leq t_n$ for every $n$, however $S_n \neq 0$ for every $n$.
So, let us assume that there exists $M>0$ such that $|S_n-S|\leq t_n$ for every $n\geq M$ and $\lim_{n\to \infty}t_n =0$.
Proof of $\lim_{n\to\infty}S_n=S$:
Let $\epsilon >0$, then there exists $N$ such that $t_n<\epsilon$ for all $n\geq N$, it follows that for every $n\geq \max\{N,M\}$ we have $|S_n-S|\leq t_n <\epsilon$. Since this is true for every $\epsilon>0$, it follows that $\lim_{n\to \infty} S_n = S$.
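A tiny concrete illustration (an added snippet, assuming Python) with $S_n = 1/n^2$, $S=0$ and $t_n = 1/n$: the hypothesis holds, $S_n \to S$, and yet $S_n$ never equals $S$:

```python
# |S_n - S| <= t_n for every n, S_n converges to S = 0, but S_n != S for all n.
S = 0.0
for n in (1, 10, 100, 10_000, 1_000_000):
    S_n, t_n = 1 / n**2, 1 / n
    assert abs(S_n - S) <= t_n and S_n != S
    print(n, S_n)
```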
• thx for the input. Could you elaborate on "we prove that $\lim_{n \rightarrow \infty} S_n = 0$"? – rei Aug 5 '17 at 1:31
Your argument is not quite correct because we could have that the limit of $(S_n)$ is $S$ without having exactly $S_n = S$. But $\lim\limits_{n \rightarrow \infty} |S_n - S|=0$ is enough to prove the claim. Use the definition of the limit.
https://math.stackexchange.com/questions/1299820/arithmetic-progression-with-complex-common-difference | # Arithmetic progression with complex common difference?
Suppose we have the following sequence:
$$\{0,i,2i,3i,4i,5i\}$$
Can we call this sequence an arithmetic progression with first term $0$ and common difference of $i$ ?
Clarification: Here, $i$ is referring to the imaginary unit, i.e., $i=\sqrt{-1}$
In general, I want to know if the common difference of an AP can be any complex value and not just real value.
Thanks!
• I don't see why not. In fact, those elementary formulae (the sum of $n$ terms, etc) can be applied. – ajotatxe May 26 '15 at 17:59
• @ajotatxe, I thought so too but lately I saw a few websites where they are suggesting that the common difference is restricted to reals, which is the reason for me asking this question. – tom_cruise May 26 '15 at 18:01
• @tom_cruise If you have a specific result in mind about arithmetic progressions, that might impose further restrictions. Else you can go as far as to have a group or even just a monoid. – AlexR May 26 '15 at 18:04
• An arithmetic sequence can be thought of as a set of (equally-spaced) points along a line in $\ \mathbf{R}^2 \ \$; the common difference between terms is related to the "slope" of the line. One can perfectly well define a line in the complex plane in this fashion, except that the integer parameter corresponding to each point is not plotted on such a "graph". – colormegone May 26 '15 at 18:04
• @AlexR, in my case, the number of terms is odd, so can't we create w.l.o.g the following AP: $\{a+j\delta\}_{j=-k}^{j=k}$ where $n=2k+1$ is the number of terms and $a,\delta$ are constant values with $\delta$ being the common difference. My question is: Is there necessarily any restriction on the domain of $\delta$ ? – tom_cruise May 26 '15 at 18:14
You can define an arithmetic progression in any monoid $(M,+)$. It is then defined by a starting element $a\in M$ and an increment $b\in M$ and the recursion $$a_0 = a\\ a_{n+1} = a_n + b$$
There is no reason to restrict to reals $(\mathbb R,+)$ or complex numbers $(\mathbb C, +)$. For some results about arithmetic progressions, you might want $M$ to be an (abelian) group or even a field (both are true for the two settings mentioned here).
For a complex finite arithmetic progression $\{z,z+w, \ldots, z+nw\}$ to have a real sum, you must actually force $$\Im \sum_{k=0}^n (z+kw) = \Im \left((n+1)z + \frac{n(n+1)}2w\right) = (n+1)\Im z + \frac{n(n+1)}2 \Im w \stackrel!=0$$ In other words you can freely pick the real parts of $z$ and $w$, but the imaginary parts must be related by $$\Im w = - \frac2n \Im z$$ for some $n\in\mathbb N$ which will double as the number of terms minus one (since we sum from $k=0$ to $n$, which has $n+1$ summands).
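A quick numerical confirmation of that condition (an added sketch, assuming Python; the specific values of $n$, $z$ and the real part of $w$ are arbitrary):

```python
# With Im(w) = -(2/n) * Im(z), the sum of the n+1 terms z, z+w, ..., z+nw is real.
n = 7
z = 3.0 + 2.0j
w = complex(1.5, -2 * z.imag / n)   # the real part of w is free
total = sum(z + k * w for k in range(n + 1))
print(total)                        # imaginary part is (numerically) zero
print(abs(total.imag) < 1e-12)      # True
```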
https://math.stackexchange.com/questions/2693088/prove-that-for-any-nonsingular-matrix-a-there-exist-x-such-that-x2-a | # prove that for any nonsingular matrix $A$ there exist $X$ such that $X^2=A$
Prove that given any matrix A, where $$\det(A)\neq0$$ $$A\in M_{n,n}(\mathbb C)$$ the following equation $$X^2=A$$ always has a solution. Should I do something with Jordan Normal form? Any help will be appreciated
• This is basically asking to prove that an LDL or a cholesky factorization exists for nonsingular matrices. (You need to define what you mean by $X^2$) – user144410 Mar 16 '18 at 0:14
• If $A$ is diagonalizable, this is easy (let us know if you want a hint). For the general case, I would go for the Jordan normal form and show that the complex square root of any Jordan matrix (i.e. almost diagonal) exists. Maybe reasoning by blocks – Tal-Botvinnik Mar 16 '18 at 0:29
• This has been answered before – YAlexandrov Mar 16 '18 at 1:07
Swear i have done this one fairly recently...
$$\left( \begin{array}{rr} t & \frac{1}{2t} \\ 0 & t \\ \end{array} \right)^2 = \left( \begin{array}{rr} t^2 & 1 \\ 0 & t^2 \end{array} \right)$$
$$\left( \begin{array}{rrr} t & \frac{1}{2t} & \frac{-1}{8 t^3} \\ 0 & t & \frac{1}{2t} \\ 0 & 0 & t \end{array} \right)^2 = \left( \begin{array}{rrr} t^2 & 1 & 0\\ 0 & t^2 & 1 \\ 0 & 0 & t^2 \end{array} \right)$$
$$\left( \begin{array}{rrrr} t & \frac{1}{2t} & \frac{-1}{8 t^3} & \frac{1}{16 t^5} \\ 0 & t & \frac{1}{2t} & \frac{-1}{8 t^3}\\ 0 & 0 & t & \frac{1}{2t}\\ 0 & 0 & 0 & t \end{array} \right)^2 = \left( \begin{array}{rrrr} t^2 & 1 & 0 & 0\\ 0 & t^2 & 1 & 0 \\ 0 & 0 & t^2 & 1 \\ 0 & 0 & 0 & t^2 \end{array} \right)$$
$$\left( \begin{array}{rrrrr} t & \frac{1}{2t} & \frac{-1}{8 t^3} & \frac{1}{16 t^5}& \frac{-5}{128 t^7} \\ 0 & t & \frac{1}{2t} & \frac{-1}{8 t^3} & \frac{1}{16 t^5}\\ 0 & 0 & t & \frac{1}{2t} & \frac{-1}{8 t^3}\\ 0 & 0 & 0 & t & \frac{1}{2t} \\ 0 & 0 & 0 & 0 & t \\ \end{array} \right)^2 = \left( \begin{array}{rrrrr} t^2 & 1 & 0 & 0 & 0\\ 0 & t^2 & 1 & 0 & 0 \\ 0 & 0 & t^2 & 1 & 0\\ 0 & 0 & 0 & t^2 & 1 \\ 0 & 0 & 0 & 0 & t^2 \end{array} \right)$$
And $$\sqrt{t^2 + 1} \; \; = \; \; t \; \; \sqrt{1 + \frac{1}{t^2}} \; \; = \; \; t + \frac{1}{2t} - \frac{1}{8 t^3} + \frac{1}{16 t^5} -\frac{5}{128 t^7} + \frac{7}{256 t^9} -\frac{21}{1024 t^{11}} \cdots$$
The resemblance is not cosmetic or accidental. We have, in a Jordan block of size $n,$ an identity matrix $I$ and a nilpotent matrix $N$ with $N^n=0.$ We are asking for $\sqrt{t^2 I + N}.$ As with other real analytic functions, we can use the facts that $IN=NI$ commute to give a power series for the resulting matrix, and the series is finite because $N$ is nilpotent.
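Here is a small sketch of that construction in code (an addition, not part of the answer; it assumes Python with NumPy, and uses the binomial coefficients $\binom{1/2}{k}$ of the truncated series):

```python
# Square root of the Jordan block t^2*I + N via the binomial series for sqrt(1+x),
# truncated at the nilpotency index (N^n = 0).
import numpy as np
from math import comb

def sqrt_jordan_block(t, n):
    N = np.diag(np.ones(n - 1), k=1)              # nilpotent superdiagonal part
    M = N / t**2                                  # block = t^2 * (I + M)
    X, Mk = np.zeros((n, n)), np.eye(n)
    for k in range(n):
        c = (-1) ** (k + 1) * comb(2 * k, k) / (4 ** k * (2 * k - 1))  # C(1/2, k)
        X += c * Mk
        Mk = Mk @ M
    return t * X

t, n = 1.7, 5
J = t**2 * np.eye(n) + np.diag(np.ones(n - 1), k=1)
X = sqrt_jordan_block(t, n)
print(np.allclose(X @ X, J))                      # True
```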
• I see, this proves that any Jordan block has a square root right? – Tal-Botvinnik Mar 16 '18 at 1:15
• Very cool, thanks – Tal-Botvinnik Mar 16 '18 at 1:19
• @Tal-Botvinnik yes, and gives an explicit construction. I watched Spassky-Fischer on television when I was a kid. I guess from Iceland, it was just a few hours earlier in the suburbs of New York – Will Jagy Mar 16 '18 at 1:19
• Wow, did you remember the infamous game 1 and that trapped bishop? – Tal-Botvinnik Mar 16 '18 at 1:21
• It might be interesting to prove that if $t=0$ is a possibility, then not every matrix has a square root. For example, prove that $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ is not the square of any $2 \times 2$ matrix. – Daniel Schepler Mar 16 '18 at 1:26
I would like to present here a slightly different approach to the problem, without expanding the function into a series (in reference to Will's very compact answer). The square root can be calculated with the help of matrices which I'll call shifted scalar matrices, so that only basic operations on matrices, i.e. addition and multiplication, are needed.
Let the shifted identity matrix $M_k$ be the matrix with $1$'s on one of its overdiagonals, where $k$ is the shift of this overdiagonal from the main diagonal toward the upper right corner. The number $k$ is a non-negative integer here.
For example
the shifted identity matrix $M_2$ (dimension $4 \times 4$).
$\begin {bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix}$
At this case the shifted scalar matrix can be written as $x_2M_2=\begin {bmatrix} 0 & 0 & x_2 & 0 \\ 0 & 0 & 0 & x_2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix}$.
Identity matrix is written as $I=M_0$ in this notation.
Shifted scalar matrices are easy for multiplication (which is commutative for them) - we have formula for $n \times n$ matrices:
• if $i+j < n$ then $\ \ (x_iM_i)(x_jM_j)=x_ix_jM_{i+j}$
• if $i+j \geq n$ then $\ \ (x_iM_i)(x_jM_j)=0$
From this multiplication table for shifted scalar matrices stems
$$\begin{array}{c|c|c|c|c} & x_0M_0 & x_1 M_1 & x_2 M_2 & \dots\\ \hline x_0 M_0 & x_0^2 M_0 & x_0x_1 M_1 & x_0x_2 M_2 & \dots\\ \hline x_1M_1 & x_1x_0 M_1 & x_1^2 M_2 & x_1x_2 M_3 & \dots\\ \hline x_2 M_2 & x_2x_0 M_2 & x_2x_1 M_3 & x_2^2 M_4 & \dots\\ \hline \dots & \dots & \dots & \dots & \dots\\ \end{array}$$
Assume that for Jordan cell $\begin {bmatrix} t^2 & 1 & 0 & 0 \\ 0 & t^2 & 1& 0 \\ 0 & 0 & t^2 & 1 \\ 0 & 0 & 0 & t^2\\ \end{bmatrix}$ the matrix of the square root has the form $X=\begin {bmatrix} x_0 & x_1 & x_2 & x_3 \\ 0 & x_0 & x_1& x_2 \\ 0 & 0 & x_0 & x_1 \\ 0 & 0 & 0 & x_0\\ \end{bmatrix} = x_0M_0+x_1M_1+x_2M_2+x_3M_3$
and its square $X^2=(x_0M_0+x_1M_1+x_2M_2+x_3M_3)^2$
On the other hand $A_J=t^2M_0+1\cdot M_1+0\cdot M_2+0\cdot M_3$.
Then from multiplication table we have following values of $x_0,x_1,x_2,x_3$
which can be presented in the vector form
$\begin {bmatrix} x_0^2 \\ 2x_0x_1 \\ 2x_0x_2+x_1^2 \\ 2x_0x_3+2x_1x_2\\ \end{bmatrix} = \begin {bmatrix} t^2 \\ 1 \\ 0 \\ 0 \\ \end{bmatrix}$
For higher dimensions we can extend these vectors taking coefficients from extended multiplication table ...
$\begin {bmatrix} \color{red}{x_0}^2 \\ 2x_0\color{red}{x_1} \\ 2x_0\color{red}{x_2}+x_1^2 \\ 2x_0\color{red}{x_3}+2x_1x_2\\ 2x_0\color{red}{x_4}+2x_1x_3+x_2^2\\ \dots \\ \end{bmatrix} = \begin {bmatrix} t^2 \\ 1 \\ 0 \\ 0 \\ 0 \\ \dots\\ \end{bmatrix}$ but initial part is unchanged.
As we see every $x_k$ can be calculated from components of these vectors if the values $x_0\dots x_{k-1}$ are known. The pattern of coefficients it's easy to identify.
In fact every $x_k$ satisfies the formula $a_kx_k+b_k=0$ where $a_k=2x_0$ and $b_k=f(x_1, \dots, x_{k-1})$, solution always exists $x_k=-\dfrac{b_k}{a_k}$ if $x_0 \neq 0$.
Calculations for the square root provide
• $x_0=t$ $\ \ \ \ (-t)$
• $2tx_1=1 \ \ \Rightarrow \ \ x_1= \dfrac{1}{2t}$
• $2tx_2+ \left(\dfrac{1}{2t}\right)^2=0 \ \ \Rightarrow \ \ x_2= -\dfrac{1}{8t^3}$
• $2tx_3+2\left(\dfrac{1}{2t}\right)\left(-\dfrac{1}{8t^3}\right)=0 \ \ \Rightarrow \ \ x_3= \dfrac{1}{16t^5}$
as in Will's answer. By the way, comparing the two methods, it is interesting that the series expansion of the function can alternatively be recovered by basic (recursive) operations on matrices (a short sketch in code is given at the end of this answer).
The procedure can be continued for higher dimensions in a similar fashion. Conclusion: it is always possible to calculate the square root of an invertible ($\det (A) \neq 0$) matrix over the complex field.
It's worth noticing that the procedure described above can be extended to other types of equations, not only $X^2=A$...
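The recursive scheme above is easy to run with exact arithmetic; the following sketch (an addition, assuming Python) reproduces the coefficients of Will's series for $t=1$:

```python
# x_0 = t, and each x_k solves 2*x_0*x_k + (terms in x_1..x_{k-1}) = rhs_k,
# where rhs = (t^2, 1, 0, 0, ...) are the M_0, M_1, M_2, ... coefficients of the block.
from fractions import Fraction

def sqrt_coeffs(t, n):
    rhs = [t * t, Fraction(1)] + [Fraction(0)] * (n - 2)
    x = [Fraction(t)]
    for k in range(1, n):
        conv = sum(x[i] * x[k - i] for i in range(1, k))
        x.append((rhs[k] - conv) / (2 * x[0]))
    return x

print([str(c) for c in sqrt_coeffs(Fraction(1), 5)])   # ['1', '1/2', '-1/8', '1/16', '-5/128']
```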
http://mathhelpforum.com/pre-calculus/81473-polynomials-help.html | # Math Help - Polynomials Help
1. ## Polynomials Help
I am stuck on 2 questions. I been trying for hours but its just not working.
First question is: A quadratic equation f(x)=ax^2+bx+c, has the following properties when divided by x-1 the remainder is 4, when divided by x-2 the remainder is -3, and when divided by x+2 the remainder is 49
for the question i did long division for each and i got 4a+2b+c=-3, c-2(b-2a)=49, and c+b+a=4 from there im stuck
the second question is: when a polynomial is divided by x+2, the remainder is -10. When the same polynomial is divided by x-3, the remainder is 5. Determine the remainder when the polynomial is divided by (x+2)(x-3)
Any help is greatly appreciated. Thanks guys.
2. Originally Posted by ajmaal14
I am stuck on 2 questions. I been trying for hours but its just not working.
First question is: A quadratic equation f(x)=ax^2+bx+c, has the following properties when divided by x-1 the remainder is 4, when divided by x-2 the remainder is -3, and when divided by x+2 the remainder is 49
for the question i did long division for each and i got 4a+2b+c=-3, c-2(b-2a)=49, and c+b+a=4 from there im stuck
the second question is: when a polynomial is divided by x+2, the remainder is -10. When the same polynomial is divided by x-3, the remainder is 5. Determine the remainder when the polynomial is divided by (x+2)(x-3)
Any help is greatly appreciated. Thanks guys.
Hi ajmaal14,
I'll walk you through the 1st one. Then you can apply that model to the 2nd one.
Using the remainder theorem,
$f(1)=4$
$f(2)=-3$
$f(-2)=49$
$f(1)\Rightarrow a(1)^2+b(1)+c=4 \Rightarrow \boxed{{\color{red}a+b+c=4}}$
$f(2)\Rightarrow a(2)^2+b(2)+c=-3 \Rightarrow \boxed{{\color{red}4a+2b+c=-3}}$
$f(-2)\Rightarrow a(-2)^2+b(-2)+c=49 \Rightarrow \boxed{{\color{red}4a-2b+c=49}}$
Now, solve the three equation using your favorite method of solving a system of 3 equations in 3 unknowns. I used matrices and found a = 2, b = -13, and c = 15. You could use Cramer's Rule or substitution or something else.
Substituting these back into $f(x)=ax^2+bx+c$, we get:
$f(x)=2x^2-13x+15$
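A quick verification of this (an added sketch, assuming Python with NumPy): solve the 3x3 system and evaluate f at 1, 2 and -2, which by the remainder theorem gives the three remainders.

```python
# Solve for (a, b, c) and confirm the remainders 4, -3 and 49.
import numpy as np

M = np.array([[1.0, 1.0, 1.0],
              [4.0, 2.0, 1.0],
              [4.0, -2.0, 1.0]])
rhs = np.array([4.0, -3.0, 49.0])
a, b, c = np.linalg.solve(M, rhs)
print(a, b, c)                        # 2.0 -13.0 15.0

f = lambda x: a * x**2 + b * x + c
print(f(1), f(2), f(-2))              # 4.0 -3.0 49.0
```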
3. Originally Posted by ajmaal14
I am stuck on 2 questions. I been trying for hours but its just not working.
First question is: A quadratic equation f(x)=ax^2+bx+c, has the following properties when divided by x-1 the remainder is 4, when divided by x-2 the remainder is -3, and when divided by x+2 the remainder is 49
for the question i did long division for each and i got 4a+2b+c=-3, c-2(b-2a)=49, and c+b+a=4 from there im stuck
Somewhat simpler than dividing is using this fact: if P(x) has remainder r when divided by x-a, then P(x)= (x-a)Q(x)+ r where Q is the quotient of the division. In particular P(a)= (a-a)Q(a)+ r= r.
If $f(x)= ax^2+ bx+ c$, divided by x-1, gives remainder 4, then $f(1)= a+ b+ c= 4$.
If $f(x)= ax^2+ bx+ c$, divided by x-2, gives remainder -3, then $f(2)= 4a+ 2b+ c= -3$.
If $f(x)= ax^2+ bx+ c$, divided by x+2= x-(-2), gives remainder 49, then $f(-2)= 4a- 2b+ c= 49$.
Yes, those are exactly the equations you give. Now you need to solve those equations for a, b, and c, by the usual method of solving systems of equations: removing one variable at a time. For example, if you subtract the second equation from the third, (4a- 2b+ c)- (4a+ 2b+ c)= 49+ 3, both b and c cancel and giving -4b= 52. Dividing both sides by -4, b= -13. If you subtract the first equation from the second, (4a+ 2b+ c)- (a+ b+ c)= -3- 4, the "c" terms cancel giving 3a+ b= -7. Since b= -13, 3a+ b= 3a- 13= -7. Adding 13 to both sides of the equation, 3a= 6 so a= 2. Finally put a= 2, b= -13, in any one of the equations, say a+ b+ c= 4, to get 2- 13+ c= 4, -11+ c= 4 or c= 15. a= 2, b= -13, c= 15 gives $f(x)= 2x^2- 13x+ 15$.
$2x^2- 13x+ 15$ divided by x- 1 gives quotient 2x- 11 with remainder 4. $2x^2- 13x+ 15$ divided by x- 2 gives quotient 2x- 9 with remainder -3, and $2x^2- 13x+ 15$ divided by x+ 2 gives quotient 2x- 17 with remainder 49.
the second question is: when a polynomial is divided by x+2, the remainder is -10. When the same polynomial is divided by x-3, the remainder is 5. Determine the remainder when the polynomial is divided by (x+2)(x-3)
Any help is greatly appreciated. Thanks guys.
Taking $f(x)= ax^2+ bx+ c$, the two equations $f(-2)= 4a- 2b+ c= -10$ and $f(3)= 9a+ 3b+ c= 5$ can be solved for two of the variables in terms of the third. For example, subtracting the first equation from the second, 5a+ 5b= 15 so a+ b= 3 and b= 3- a. Putting that into 4a- 2b+ c= -10, 4a- 2(3- a)+ c= 6a- 6+ c= -10 so c= -4- 6a.
Because we do not have a third equation, we cannot determine specific values, but we can write $f(x)= ax^2+(3-a)x- (4+6a)$ and divide that by $(x+2)(x-3)= x^2- x- 6$. | 2015-08-02T01:06:36 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/pre-calculus/81473-polynomials-help.html",
"openwebmath_score": 0.7016133666038513,
"openwebmath_perplexity": 826.4931487359825,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575188391106,
"lm_q2_score": 0.8596637469145053,
"lm_q1q2_score": 0.8446690782042494
} |
https://math.stackexchange.com/questions/3576060/how-to-solve-left-beginmatrix-x-1x-2x-3-cdotsx-k-phi-1-x-12x-23x | # How to solve $\left\{\begin{matrix} x_1+x_2+x_3+\cdots+x_k=\Phi_1 \\ x_1+2x_2+3x_3+\cdots+kx_k=\Phi_2 \end{matrix}\right.$
Recently, I have found this problem:
Given two natural numbers $$\Phi_1$$ and $$\Phi_2$$ ($$\Phi_1,\Phi_2>1$$), determine all possible natural integer solutions to the following system in the unknowns $$x_1,x_2,\cdots,x_k$$: $$\left\{\begin{matrix} x_1+x_2+x_3+\cdots+x_k=\Phi_1 \\ x_1+2x_2+3x_3+\cdots+kx_k=\Phi_2 \end{matrix}\right.$$ where $$k$$ is a positive constant such that $$k>2$$.
To solve this, I have, first of all, shown that it must be $$\Phi_2\geq \Phi_1$$, because if I subtract the second equation from the first, I obtain: $$0x_1-x_2-2x_3-\cdots-(k-1)x_k=\Phi_1-\Phi_2 \leftrightarrow x_2+2x_3+3x_4+\cdots+(k-1)x_k=\Phi_2-\Phi_1$$ And so, I must have $$\Phi_2\geq\Phi_1$$ because $$x_1,x_2,\cdots,x_k\geq0$$.
When $$k=2$$, the system can be solved with substitution or Gauss's method; what happens when $$k>2$$?
For example, let $$M$$ the matrix associated to the system: $$M=\begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & \Phi_1\\ 1 & 2 & 3 & \cdots & k & \Phi_2 \end{bmatrix}$$
Can $$M$$ be used to find $$(x_1,x_2,\cdots,x_k)$$? Or are there any other methods?
• We have $2$ equations, any time when $k>2$, we will have infinite number of solutions. – mathreadler Mar 10 '20 at 12:05
• Infinite many positive integer solutions? – Matteo Mar 10 '20 at 12:07
• Ah, I did not see the integer constraint. – mathreadler Mar 10 '20 at 12:08
You're asking for the partitions of $$\Phi_2$$ into exactly $$\Phi_1$$ parts (each part of size at most $$k$$). See e.g. this Wikipedia section for a recurrence relation for their count. There's an algorithm to generate all of them in Knuth's The Art of Computer Programming, Volume $$4$$, Section $$7.2.1.4$$, Algorithm $$H$$ on p. $$392$$. As a general purpose method to solve this sort of system of linear equations with the variables restricted to certain ranges of integers, you could consider integer programming.
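That correspondence is easy to cross-check by brute force (an added sketch, assuming Python; the values of $$k$$, $$\Phi_1$$, $$\Phi_2$$ below are arbitrary small test values):

```python
# Count solutions of x1+...+xk = phi1, 1*x1+2*x2+...+k*xk = phi2 directly, and
# compare with the number of partitions of phi2 into exactly phi1 parts, each <= k.
from itertools import product
from functools import lru_cache

def brute(k, phi1, phi2):
    return sum(1 for x in product(range(phi1 + 1), repeat=k)
               if sum(x) == phi1 and sum((i + 1) * xi for i, xi in enumerate(x)) == phi2)

@lru_cache(maxsize=None)
def partitions(n, parts, largest):
    # partitions of n into exactly `parts` parts, each between 1 and `largest`
    if parts == 0:
        return 1 if n == 0 else 0
    return sum(partitions(n - p, parts - 1, p) for p in range(1, min(largest, n) + 1))

k, phi1, phi2 = 4, 5, 12
print(brute(k, phi1, phi2), partitions(phi2, phi1, k))   # the two counts agree
```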
One thing which might help at least partially (but is too large for a comment) is to take the triangular matrix with ones
$${\bf T} = \begin{bmatrix}1&0&0\\1&1&0\\1&1&1\end{bmatrix}^T$$ Now, with $$\bf I$$ being identity matrix and $${\bf x}^T = [x_1,\cdots,x_k]$$ $$[{\bf I_2} \otimes {{\bf 1}}^T] {\bf \begin{bmatrix}\bf I\\\bf T\end{bmatrix}x}=\begin{bmatrix}\Phi_1\\\Phi_2\end{bmatrix}$$
This does not utilize any number theoretic knowledge of the problem, only linear algebra.
For computational purposes we might want to do substitution $$\begin{cases} t_k = x_{k+1}-x_{k}\\ t_1=x_1\end{cases}$$ This allows us to express the above using $$\bf D$$ matrix instead which for large $$k$$ will be much sparser:
$${\bf D} = \begin{bmatrix}1&0&0\\-1&1&0\\0&-1&1\end{bmatrix}$$
Only two non-zero diagonals.
As you noted:
$$x_2 + 2 x_3 + \ldots + (k-1) x_k = \Phi_2 - \Phi_1 \tag{1}$$ Subtract from the first equation, obtaining $$x_1 - x_3 - 2 x_4 - \ldots -(k-2) x_k = 2 \Phi_1 - \Phi_2 \tag{2}$$
Given integers $$\Phi_1, \Phi_2, x_3, \ldots, x_k$$, equations (1) and (2) determine integers $$x_1$$ and $$x_2$$. Now if you want the $$x_i \ge 0$$, you need \eqalign{\Phi_2 - \Phi_1 - 2 x_3 - 3 x_4 - \ldots - (k-1) x_k &\ge 0\cr 2 \Phi_1 - \Phi_2 + x_3 + 2 x_4 + \ldots + (k-2) x_k &\ge 0\cr} \tag{3} which can be rearranged as bounds for $$x_3$$: $$\frac{\Phi_2 - \Phi_1 - 3 x_4 - \ldots - (k-1) x_k}{2} \ge x_3\ge -2 \Phi_1 + \Phi_2 - 2 x_4 - \ldots - (k-2) x_k \tag{4}$$
The condition for the upper bound to be greater than or equal to the lower is: $$3 \Phi_1 - \Phi_2 + x_4 + 2 x_5 + \ldots + (k-3) x_k \ge 0 \tag{5}$$
Let $$x_4, \ldots, x_k$$ be any natural numbers such that (5) is true. Then $$x_3$$ can be any natural number satisfying (4), and $$x_1$$ and $$x_2$$ are obtained from (2) and (1).
However, you want $$x_3 \ge 0$$, so that imposes a requirement
$$\Phi_2 - \Phi_1 - 3 x_4 - \ldots - (k-1) x_k \ge 0 \tag{6}$$
And (5) and (6) translate to lower and upper bounds on $$x_4$$. And so on... | 2021-01-24T00:53:22 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3576060/how-to-solve-left-beginmatrix-x-1x-2x-3-cdotsx-k-phi-1-x-12x-23x",
"openwebmath_score": 0.8656699061393738,
"openwebmath_perplexity": 204.41473246945708,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.982557512709997,
"lm_q2_score": 0.8596637505099167,
"lm_q1q2_score": 0.8446690764679712
} |
https://yutsumura.com/row-equivalence-of-matrices-is-transitive/ | # Row Equivalence of Matrices is Transitive
## Problem 642
If $A, B, C$ are three $m \times n$ matrices such that $A$ is row-equivalent to $B$ and $B$ is row-equivalent to $C$, then can we conclude that $A$ is row-equivalent to $C$?
If so, then prove it. If not, then provide a counterexample.
## Definition (Row Equivalent).
Two matrices are said to be row equivalent if one can be obtained from the other by a sequence of elementary row operations.
## Proof.
Yes, in this case $A$ and $C$ are row-equivalent.
By assumption, the matrices $A$ and $B$ are row-equivalent, which means that there is a sequence of elementary row operations that turns $A$ into $B$.
Call this sequence $r_1 , r_2 , \cdots , r_n$, where each $r_i$ is an elementary row operation.
(Start with applying $r_1$ to $A$.)
By another assumption, $B$ is row-equivalent to $C$, which means that there is a sequence of elementary row operations which transforms $B$ into $C$; call this sequence $s_1 , s_2 , \cdots , s_m$.
Putting these sequences together, the operations $r_1 , r_2 , \cdots , r_n$ , $s_1 , s_2 , \cdots , s_m$ will transform the matrix $A$ into $C$.
This proves that $A$ and $C$ are row-equivalent.
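As a concrete illustration of the proof (an addition, assuming Python with NumPy): each elementary row operation is left-multiplication by an elementary matrix, so concatenating the two sequences of operations amounts to multiplying the corresponding elementary matrices together.

```python
# A -> B by the operations r1, r2; B -> C by s1; then the concatenated sequence
# r1, r2, s1 (i.e. the product of the elementary matrices) sends A to C.
import numpy as np

def swap(n, i, j):
    E = np.eye(n); E[[i, j]] = E[[j, i]]; return E

def scale(n, i, c):
    E = np.eye(n); E[i, i] = c; return E

def add_multiple(n, i, j, c):   # row_i += c * row_j
    E = np.eye(n); E[i, j] = c; return E

A = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
r1, r2 = swap(2, 0, 1), scale(2, 0, 0.25)
s1 = add_multiple(2, 1, 0, -1.0)

B = r2 @ r1 @ A
C = s1 @ B
print(np.allclose((s1 @ r2 @ r1) @ A, C))   # True
```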
http://cecicekyhecobaxub.ultimedescente.com/dot-product-projection-6340763407.html | # Dot product projection
Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. We will define the dot product between the vectors to capture these quantities. For a given vector and plane, the sum of projection and rejection is equal to the original vector.
For the abstract scalar product, see Inner product space. Suppose this is not the case. The rejection of a vector from a plane is its orthogonal projection on a straight line which is orthogonal to that plane.
To facilitate such calculations, we derive a formula for the dot product in terms of vector components. However, this relation is only valid when the force acts in the direction the particle moves. In the following interactive applet, you can explore this geometric interpretation of the dot product, and observe how it depends on the vectors and the angle between them.
Notice how the dot product is positive for acute angles and negative for obtuse angles. The dot product as projection. Thus, the scalar projection of b onto a is the magnitude of the vector projection of b onto a. Example Suppose you wish to find the work W done in moving a particle from one point to another.
It is also used in the Separating axis theorem to detect whether two convex shapes intersect.
We will discuss the dot product here. With such a formula in hand, we can run through examples of calculating the dot product. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used and often called the inner product (or rarely the projection product); see also inner product space.
An introduction to vectors: The dot product between two vectors is based on the projection of one vector onto another. In this case, the dot product is used for defining lengths (the length of a vector is the square root of the dot product of the vector by itself) and angles (the cosine of the angle of two vectors is the quotient of their dot product by the product of their lengths).
It turns out there are two; one type produces a scalar (the dot product) while the other produces a vector (the cross product). In this case, the work is the product of the distance moved (the magnitude of the displacement vector) and the magnitude of the component of the force that acts in the direction of displacement (the scalar projection of F onto d). Two vectors are orthogonal if the angle between them is 90 degrees.
Generalizations: Since the notions of vector length and angle between vectors can be generalized to any n-dimensional inner product space, this is also true for the notions of orthogonal projection of a vector, projection of a vector onto another, and rejection of a vector from another.
Recall that a vector has a magnitude and a direction. Uses: The vector projection is an important operation in the Gram-Schmidt orthonormalization of vector space bases. This second definition is useful for finding the angle theta between the two vectors.
Similarly, for inner product spaces with more than three dimensions, the notions of projection onto a vector and rejection from a vector can be generalized to the notions of projection onto a hyperplaneand rejection from a hyperplane.
Thus, two non-zero vectors have dot product zero if and only if they are orthogonal. In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number.
The scalar projection of b onto a is the length of the segment AB shown in the figure below. For the product of a vector and a scalar, see Scalar multiplication.
Is there also a way to multiply two vectors and get a useful result? We want a quantity that would be positive if the two vectors are pointing in similar directions, zero if they are perpendicular, and negative if the two vectors are pointing in nearly opposite directions.
Dot product and vector projections (Sect. ): two definitions for the dot product; geometric definition of the dot product; orthogonal vectors; dot product and orthogonal projections; properties of the dot product; dot product in vector components; scalar and vector projection formulas.
There are two main ways to introduce the dot product: geometrical and algebraic.
The dot product between two vectors is based on the projection of one vector onto another. Let's imagine we have two vectors $\vec{a}$ and $\vec{b}$, and we want to calculate how much of $\vec{a}$ is pointing in the same direction as the vector $\vec{b}$. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used and often called the inner product (or rarely the projection product); see also inner product space.
Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers.
Dot Products and Projections. The Dot Product (Inner Product) There is a natural way of adding vectors and multiplying vectors by scalars. Is there also a way to multiply two vectors and get a useful result? The Dot Product gives a scalar (ordinary number) answer, and is sometimes called the scalar product.
But there is also the Cross Product which gives a vector.
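A small worked example of the projection idea described above (an added sketch, assuming Python with NumPy; the two vectors are arbitrary):

```python
# Scalar projection of b onto a is (a.b)/|a|; the vector projection is that scalar
# times the unit vector a/|a|; the rejection b - proj is orthogonal to a.
import numpy as np

a = np.array([3.0, 4.0, 0.0])
b = np.array([2.0, 1.0, 2.0])

scalar_proj = a.dot(b) / np.linalg.norm(a)
vector_proj = scalar_proj * a / np.linalg.norm(a)
rejection = b - vector_proj

print(scalar_proj)                        # 2.0
print(vector_proj)                        # [1.2 1.6 0. ]
print(np.isclose(rejection.dot(a), 0.0))  # True
```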
http://math.stackexchange.com/questions/11468/if-there-are-200-students-in-the-library-how-many-ways-are-there-for-them-to-be/11475 | # If there are 200 students in the library, how many ways are there for them to be split among the floors of the library if there are 6 floors?
Need help studying for an exam.
Practice Question: If there are 200 students in the library, how many ways are there for them to be split among the floors of the library if there are 6 floors?
Hint: The students can not be told apart (they are indistinguishable).
The answer must be in terms of P(n,r), C(n,r), powers, or combinations of these. The answers do not have to be calculated.
-
Hint: Imagine 200 students in a vertical line one above another, and you place ceilings between them to divide them into floors. How many ceilings do you have to place? How many ways can you place them? – Rahul Nov 23 '10 at 6:18
Note that if they are distinguishable then the number of ways is given by $6^{200}$ since each of the 200 students have $6$ choices of floors.
However, we are given that the students are indistinguishable.
Hence, we are essentially interested in solving $a_1 + a_2 + a_3 + a_4 + a_5 + a_6 = 200$, where $a_i$ denotes the number of students in the $i^{th}$ floor.
The constraints are $0 \leq a_i \leq 200$, $\forall i \in \{1,2,3,4,5,6\}$.
We will in fact look at a general version of this problem.
We want to find the total number of natural number solutions for the following equation:
$\displaystyle \sum_{i=1}^{n} a_i = N$, where $a_i \in \mathbb{N}$
The method is as follows:
Consider $N$ sticks.
$| | | | | | | | ... | | |$
We want to partition these $N$ sticks into $n$ parts.
This can be done if we draw $n-1$ long vertical lines in between these $N$ sticks.
The number of gaps between these $N$ sticks is $N-1$.
So the total number of ways of drawing these $n-1$ long vertical lines in between these $N$ sticks is $C(N-1,n-1)$.
So the number of natural number solutions for $\displaystyle \sum_{i=1}^{n} a_i = N$ is $C(N-1,n-1)$.
If we are interested in the number of non-negative integer solutions, all we need to do is replace $a_i = b_i - 1$ and count the number of natural number solutions for the resulting equation in $b_i$'s.
i.e. $\displaystyle \sum_{i=1}^{n} (b_i - 1) = N$ i.e. $\displaystyle \sum_{i=1}^{n} b_i = N + n$.
So the number of non-negative integer solutions to $\displaystyle \sum_{i=1}^{n} a_i = N$ is given by $C(N+n-1,n-1)$.
So, for the current problem assuming that some of the floors can be empty, the answer is $C(200+5,5) = C(205,5) = 2872408791$.
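The closed form is easy to evaluate, and the stars-and-bars count can be cross-checked by brute force on a small instance (an added sketch, assuming Python 3.8+ for math.comb):

```python
# C(N + n - 1, n - 1) for 200 students on 6 floors, plus a small brute-force check.
from math import comb
from itertools import product

print(comb(200 + 6 - 1, 6 - 1))           # 2872408791 = C(205, 5)

N, n = 7, 3
brute = sum(1 for x in product(range(N + 1), repeat=n) if sum(x) == N)
print(brute, comb(N + n - 1, n - 1))      # 36 36
```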
-
Since the students are indistinguishable, we can number them from $1$ to $200$, and can assume that students $a_i$ to $a_{i+1}-1$ get assigned to floor $i$, where $a_1 = 1$ and $a_7 = 201$. Since $a_1,a_7$ are fixed, we have to choose $a_2,\ldots,a_6$. Note that $1 \leq a_2 \leq \cdots \leq a_6 \leq 200$, and this is the only restriction on them.
-
I'm too new here to comment on @Sivaram's answer, but I believe it to be correct and well explained. For further reference see Stanley's "Twelvefold Way" in Combinatorics at either [1] or the Wikipedia page.
https://math.stackexchange.com/questions/815313/area-enclosed-between-half-lines-in-polar-space | # Area enclosed between half lines in polar space
I don't know if the anwser to my question is obvious because I cannot find any explanation anywhere on google.
Question
The blue region $$R$$ is bounded by the curve $$C$$ with equation $$r^{2} = a^{2}\cos(2\theta)$$, $$0 \leqslant \theta \leqslant \frac{\pi}{4}$$, the line $$\theta = \frac{\pi}{2}$$, and the line $$l$$, which is parallel to the initial line and passes through the point $$P(\frac{a}{\sqrt{2}},\frac{\pi}{6})$$ on $$C$$.
Show the area of the blue region $$R$$ is $$\frac{a^{2}}{16}(3\sqrt{3} - 4)$$.
I tried solving this by finding the area of the rectangle up to $$P$$ then taking the area of the triangle up to $$P$$ away as well as the area of C enclosed by the green half line $$\theta = \frac{\pi}{6}$$ and $$\theta = \frac{\pi}{2}$$ which is where I made a mistake but do not understand why it does not work for $$\theta = \frac{\pi}{2}$$.
I've graphed the curve and shaded the regions as seen in the picture:
I know the general method of answering questions like these but what I am really asking is
Given a polar curve , if I choose two half lines (in this example the green one with equation $$\theta = \frac{\pi}{6}$$ and another half line that does enclose the curve BUT is greater than $$\theta = \frac{\pi}{4}$$ i.e. it is any half line that encloses the curve but isn't the closest one to it which is the pink one $$\theta = \frac{\pi}{4}$$ , why do I get a different area than the actual area?
Is the curve not enclosed by any half line that is greater than $$\frac{\pi}{4}$$ and $$\frac{\pi}{6}$$ ?
How do I determine the other half line to enclose the curve that is correct then?
In the past I've come across several questions where I could avoid this situation but now since my exams are approaching fast I feel I need to understand this properly.
The portion of R above the $\pi/4$ line (let us call it R1) is half of a square with side $a/(2\sqrt{2})$, so that its area is $a^2/16$.
The area of the remaining part of R (let us call it R2) can be calculated as the difference between the area of the triangle delimited by the lines $\pi/4$, $\pi/6$, and L, and that of the small portion of C above the $\pi/6$ line.
The triangle has base equal to $a (\sqrt{3}-1)/(2\sqrt{2})$ and height equal to $a/(2\sqrt{2})$, so that its area is $a^2 (\sqrt{3}-1)/16$.
The area of the small portion of C above the $\pi/6$ line can be calculated by integrating $\frac{1}{2} a^2 \cos(2\theta)$ between $\pi/6$ and $\pi/4$ (in polar coordinates, the area swept by $r=f(\theta)$ is obtained by integrating $(f(\theta))^2/2$). The indefinite integral is $\frac{1}{2} a^2 \sin(\theta) \cos(\theta)$, which calculated over the above mentioned interval yields $a^2 (4-2\sqrt{3})/16$.
Thus the area of R2 is:
$a^2 (\sqrt{3}-1)/16 - a^2 (4-2\sqrt{3})/16=a^2 (3\sqrt{3} - 5)/16$.
Summing R1 and R2 we get:
$a^2/16 + a^2 (3\sqrt{3} - 5)/16=a^2 (3\sqrt{3} - 4)/16$.
Clearly we could also have obtained the same result more directly as the difference between the area of the triangle delimited by the $\pi/6$ line, the y-axis and L, and that of the portion of C above the $\pi/6$ line.
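A numerical cross-check of this direct computation (an added sketch, assuming Python with NumPy; the value of $a$ is arbitrary):

```python
# Area of the triangle bounded by the pi/6 line, the y-axis and the horizontal
# line through P, minus the polar area of the piece of C above the pi/6 line,
# compared with a^2 (3*sqrt(3) - 4) / 16.
import numpy as np

a = 2.3
h = a / (2 * np.sqrt(2))                 # height of the horizontal line l
triangle = 0.5 * (np.sqrt(3) * h) * h    # legs sqrt(3)*h (along l) and h

th = np.linspace(np.pi / 6, np.pi / 4, 200_001)
f = 0.5 * a**2 * np.cos(2 * th)
lens = np.sum((f[:-1] + f[1:]) / 2 * np.diff(th))   # trapezoidal rule

print(triangle - lens, a**2 * (3 * np.sqrt(3) - 4) / 16)   # the two values agree
```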
• Why do I get an area different to the actual one if I choose the half line to be greater than $\frac{\pi}{4}$ – Nubcake May 30 '14 at 22:24
• When I first worked it out I used pi/2 as the second half line and I believe I did get a different area , I'll try this again now. – Nubcake May 30 '14 at 22:51
• You do not get a different area if you choose a half line greater than $\pi/4$. In this case, R1 decreases and R2 increases, but their sum does not change. – Anatoly May 30 '14 at 22:55
• I've ended up with $\frac{3\sqrt{3}a^{2}}{16}$ when I used pi/2. – Nubcake May 30 '14 at 22:59
• So I redid the working out and I came up with this $\frac{a^{2}}{16}(3\sqrt{3} - 4\sin(2\theta ))$ derived from $\frac{a^{2}\sqrt{3}}{16} - \frac{a^{2}}{4} \left[\sin(2\theta)\right]_\frac{\pi}{6}^\theta$ where $\theta$ is the equation of the half line. If the area as you say will be the same regardless of the value of the half line what is wrong with what I posted here? This provides the correct answer for $\theta = \frac{\pi}{6}$ but not for other values. – Nubcake May 30 '14 at 23:16
https://stats.stackexchange.com/questions/372484/joint-probability-distribution-of-geometric-distribution | # Joint probability distribution of geometric distribution
Let $$X$$ and $$Y$$ be independent and identically distributed $$(i.i.d.)$$ r.v.'s, each having the probability distribution $$p(k) = (1 − λ)λ^k$$, $$k = 0,1,\ldots$$, where $$λ \in (0, 1)$$ is a constant. Define $$U = \min(X, Y)$$, $$V = \max(X, Y)$$, $$W = V − U$$. Determine the joint probability distribution of $$U$$ and $$W$$ (taking care with $$W = 0$$) and verify that $$U$$ and $$W$$ are independent r.v.'s.
My work: I set up this: $$P(X=W+U, Y=U)$$ when $$X>Y$$ and $$P(X=U, Y=W+U)$$ when $$X<Y$$, and the joint distributions are the same in both cases as $$W$$ is always non-negative. Finally I got the following joint pmf: $$f(w,u)=(1-λ)^2 λ^{w+2u-2}$$ when $$X=Y$$, $$f(w,u)=(1-λ)^2 λ^{2u-2}$$ and $$\{(w,u): w=0,1,\ldots; u= 0,1,\ldots\}$$ is this the correct joint pmf? what will be the final joint pmf?
• What did you try? If this is some sort of an assignment, consider adding the self-study tag and read the tag wiki. – StubbornAtom Oct 18 '18 at 7:16
• Your $W$ is just $|X-Y|$. This might help: math.stackexchange.com/questions/2685256/… – StubbornAtom Oct 18 '18 at 7:33
• I saw your mentioned problem. I have solved that problem from Casella and Burger. But in this particular problem, isn't W is always greater than or equal to 0? – Dihan Oct 18 '18 at 8:11
• Yes, of course $W$ is non-negative. I was thinking about breaking the problem into cases $X\ge Y$ and $X<Y$. – StubbornAtom Oct 18 '18 at 8:17
• I set up this: $P(X=W+U, Y=U)$ when $X>Y$ and $P(X=U, Y=W+U)$ when $X<Y$. and the joint distributions are the same in both cases as $W$ is always positive – Dihan Oct 18 '18 at 8:18
Both values of $$U$$ and $$W$$ are non-negative integers, say $$u$$ and $$w$$ respectively. We need to find which values of $$(x,y)$$ are associated with $$(u,w).$$ This is tantamount to solving the simultaneous equations
$$\begin{array}{rl} \min(x,y) &=u \\ \max(x,y)-\min(x,y)&=w \end{array}$$
for $$(x,y).$$ There are two possibilities: $$\min(x,y)=x$$ or $$\min(x,y)=y.$$ In the first case, $$x=u$$ whence $$y=u+w.$$ In the second case $$y=u$$ whence $$x=u+w.$$ These cases overlap when $$w=0,$$ which occurs when $$X=Y.$$
Therefore, by the probability axioms, when $$w\ne 0$$
$$\Pr((U,W)=(u,w)) = \Pr((X,Y)=(u,u+w)) + \Pr((X,Y)=(u+w,u))$$
and otherwise when $$w=0$$
$$\Pr((U,W)=(u,0)) = \Pr((X,Y)=(u,u)).$$
The independence of $$X$$ and $$Y$$ means their probabilities multiply, immediately giving
$$\begin{aligned} \Pr((U,W)=(u,w)) &= \left\{\begin{array}{rl}(1-\lambda)\lambda^u\,(1-\lambda)\lambda^{u+w} + (1-\lambda)\lambda^{u+w}\,(1-\lambda)\lambda^u & \text{if }w\ne 0 \\ (1-\lambda)\lambda^u\, (1-\lambda)\lambda^u &\text{if } w=0\end{array}\right. \\ &= (1-\lambda)^2\lambda^{2u+w}\left\{\begin{array}{rl}2 & \text{if }w\ne 0 \\ 1 &\text{if } w=0.\end{array}\right. \end{aligned}$$
A convenient way to write that last expression in brackets uses the binary indicator function $$\mathcal{I}:$$
$$\mathcal{I}(w\ne 0) + 1 = \left\{\begin{array}{rl}2 & \text{if }w\ne 0 \\ 1 &\text{if } w=0.\end{array}\right.$$
Thus
$$\Pr((U,W)=(u,w)) = (1-\lambda)^2\ \left(\color{blue}{\lambda^{2u}}\right)\ \left(\color{red}{\left(\mathcal{I}(w\ne 0) + 1\right)\lambda^w}\right).$$
This is a product of a normalizing constant $$(1-\lambda)^2,$$ a function of $$u$$ alone (in blue), and a function of $$w$$ alone (in red), demonstrating $$U$$ and $$W$$ are independent.
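A quick computational check of both the closed form and the independence factorization (an added sketch, assuming Python; $$\lambda$$ and the truncation cutoff are arbitrary):

```python
# Build the joint pmf of (U, W) from the product pmf of (X, Y) (truncated far out
# in the tail), compare with (1-lam)^2 * lam^(2u+w) * (2 if w != 0 else 1),
# and verify P(U=u, W=w) = P(U=u) * P(W=w) on a grid.
lam, K = 0.6, 300
p = [(1 - lam) * lam**k for k in range(K)]

joint = {}
for x in range(K):
    for y in range(K):
        key = (min(x, y), abs(x - y))
        joint[key] = joint.get(key, 0.0) + p[x] * p[y]

formula = lambda u, w: (1 - lam) ** 2 * lam ** (2 * u + w) * (2 if w else 1)
print(all(abs(joint[u, w] - formula(u, w)) < 1e-12 for u in range(25) for w in range(25)))

pu = {u: sum(v for (uu, _), v in joint.items() if uu == u) for u in range(25)}
pw = {w: sum(v for (_, ww), v in joint.items() if ww == w) for w in range(25)}
print(all(abs(joint[u, w] - pu[u] * pw[w]) < 1e-9 for u in range(25) for w in range(25)))
```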
https://math.stackexchange.com/questions/294647/prove-that-f-is-differentiable-in-mathbbr/294652 | # Prove that f is differentiable in $\mathbb{R}$
Let $f: \mathbb{R} \rightarrow \mathbb{R}$ some function that all $x$ and $y$ in $\mathbb{R}$ satisfies: $$\left|f(x)-f(y)\right| \le (x-y)^2$$
• Prove that f is differentiable in any point in $\mathbb{R}$.
• Prove that f is constant.
• FYI, it's "differentiable" :) – Jim Feb 4, 2013 at 16:47
• @Jim thank's :) Feb 4, 2013 at 16:55
• A function with the property that $|f(x)-f(y)| \le C |x-y|$ for some $C \gt 0$ and all $x,y$ is said Lipschitz continuous. There's a story that makes the rounds in math grad schools that a PhD student was defending a thesis on marvelous properties of functions with Lipschitz exponent greater than 1, meaning something of the sort illustrated here by a square power. Possibly apocryphal, as the story goes an implication was pointed out by a defense committe member other than the thesis adviser, that all such functions are (as here) constant. Feb 4, 2013 at 17:01
• For another approach, see Davide Giraudo's answer in this post. Feb 4, 2013 at 17:04
You have $|f(x)-f(y)|\le |x-y||x-y|$, so for $x\neq y$, $\displaystyle\frac{|f(x)-f(y)|}{|x-y|}\le|x-y|$. Just take the limit as $x\to y$: you get $|f'(y)|\le 0$ (hence $f'(y)=0$) for all $y$, and therefore $f$ is constant.
• Please edit to use the absolute value in the denoninator Feb 4, 2013 at 16:56
• @Barbara Thank you for pointing it out Feb 4, 2013 at 16:59
I would like to point out that it is possible to prove the assertion without proving differentiability in the first place.
For each x we have
$$|f(x)-f(0)| = \left|\sum_{k=1}^n f\left(\frac{k}{n} \cdot x\right)-f\left(\frac{k-1}{n}\cdot x\right)\right|$$
Thus by the triangle inequality
$$|f(x)-f(0)| \le \sum_{k=1}^n \left|f\left(\frac{k}{n}\cdot x\right)-f\left(\frac{k-1}{n}\cdot x\right)\right|$$
Since for each $k$
$$\left|f\left(\frac{k}{n}\cdot x\right)-f\left(\frac{k-1}{n}\cdot x\right)\right| \le \left|\frac{k}{n}\cdot x-\frac{k-1}{n}\cdot x\right|^2 = \left|\frac{x}{n}\right|^2$$
It follows that
$$|f(x)-f(0)| \le \sum_{k=1}^n \frac{x^2}{n^2} = \frac{x^2}{n}$$
And thus
$$|f(x)-f(0)|\le \limsup_{n\rightarrow\infty} \frac{x^2}{n} = 0$$
Hence for all $x$
$$f(x)=f(0)$$
Thus f is constant. It follows that f is differentiable everywhere ;)
By definition of what it means to be differentiable, you want to prove that the limit $$\lim_{x\to y} \frac{f(x)- f(y)}{x-y}$$ exists for all $y\in \mathbb{R}$. That will follow (in this case) from showing that $$\lim_{x\to y} \left\lvert\frac{f(x)- f(y)}{x-y}\right\rvert$$ exists. Now you have that $$0\leq \left\lvert\frac{f(x)- f(y)}{x-y}\right\rvert = \frac{\lvert f(x)- f(y)\rvert}{\lvert x-y\lvert} \leq \frac{\lvert x - y\rvert^2}{\lvert x - y\rvert} = \lvert x - y\rvert.$$ Now you have $\lvert x - y\rvert \to 0$ as $x \to y$. So by the Squeeze Theorem you must also have $$\lim_{x\to y} \left\lvert\frac{f(x)- f(y)}{x-y}\right\rvert = 0$$ And so $$\lim_{x\to y} \frac{f(x)- f(y)}{x-y} = 0$$ That means that $f$ is differentiable and that the derivative at any number $y$ is zero: $f'(y) = 0$.
As others have already mentioned, this means that $f$ must be constant: if you had $f(x) \neq f(y)$ for some $x$ and $y$, then by the Mean Value Theorem you would have a $c$ between $x$ and $y$ such that $0\neq f(x) - f(y) = f'(c)(x-y) = 0$. This is a contradiction, so indeed $f(x) = f(y)$ for all $x$ and $y$.
• It's not the same to show the absolute value of the difference quotient has a limit, but to show the difference quotient tends to zero it suffices to show the absolute value of the difference quotient tends to zero. Feb 4, 2013 at 19:29
• @MatthewLeingang: I edited. Thanks. Feb 4, 2013 at 19:37
For each $x\in\mathbb R$, for each $y\in\mathbb R$ with $y\neq x$, we have $0\leq |\frac{f(x)-f(y)}{x-y}|\leq |x-y|$. Let $y\to x$. Then by squeezing we obtain $f'(x)=0$ for all $x\in\mathbb R$ so that $f$ is constant.
• I think you mean $y \to x$. – Jim Feb 4, 2013 at 16:51
• Shouldn't there be absolute values as in $\le \vert x-y \vert$? Feb 4, 2013 at 16:54
Let $a\in\Bbb R$ be fixed and $x\neq a$ then $$\left| \frac{f(x)-f(a)}{x-a}\right|\le|x-a|$$ Since $\lim_{x\to a}|x-a|=0$, we get $\lim_{x\to a} \left| \frac{f(x)-f(a)}{x-a}\right|=0$. So $f$ is differentiable at $a$.
And $f'(a)=0$ for all $a$. By the Mean Value Theorem we get $$f(x)-f(y)=f'(c)(x-y)$$ for some $c$. Since $f'(c)=0$, we get $f(x)=f(y)$, so $f$ is constant.
https://math.stackexchange.com/questions/311506/convexity-and-minimum-of-a-vector-function | # Convexity and minimum of a vector function
Prove that the function $f:\mathbb{R}^n\to \mathbb{R}$ given by $f(x)=x^T \cdot x$ is strictly convex. Use this result to find the absolute minimum by equating the derivative to zero.
I am not sure how to prove that a vector function is convex. Is there a general method to do this? Also, I tried differentiating the function and I got $2x^T$dx as a result for the differential, which would mean that $2x^T$ is the derivative. However, does this give me the absolute minimum? Or did I make a mistake in differentiating? Thanks in advance.
• You probably meant $f:\mathbb{R}^n\longrightarrow\mathbb{R}$. – Julien Feb 22 '13 at 21:51
When the Hessian matrix of a function $f$ is positive definite, the function $f$ is strictly convex. That should help you.
By the way, what do you mean by $2x^T \, dx$?
Let us first rephrase the definition of strict convexity: for $x\neq y$ and $t\in(0,1)$, $$f(t \cdot x+(1-t)y)< t\cdot f(x) + (1-t)f(y)$$ As this is an easy example we will work directly with the definition. The left hand side is $$\sum_{i=1}^n (t\cdot x_i + (1-t)y_i)^2 =\sum_{i=1}^n t^2 x_i^2 + 2 t(1-t)(x_i \cdot y_i)+ (1-t)^2 y_i^2$$ And the right hand side is $$\sum_{i=1}^n t x_i^2 + (1-t)y_i^2$$ We can compare the two coordinate by coordinate (so we don't need the sum): $$t^2 x^2 + 2t(1-t)(xy)+(1-t)^2 y^2\le tx^2 + (1-t)y^2$$ This is equivalent to $$0\le t(1-t) x^2 + t(1-t) y^2 -2t(1-t)xy$$ We have a factor $t(1-t)$ in every term; as $t\in(0,1)$ we can divide through by it: $$0\le x^2 -2 xy +y^2=(x-y)^2$$ So the inequality holds for every summand, and hence for the sum. Since $x\neq y$, at least one coordinate has $x_i\neq y_i$, which makes that summand (and therefore the whole difference) strictly positive, giving strict convexity.
There is a very nice way to differentiate expressions like that (the way I do it here isn't rigorous). We just use the product rule: $$D(x^T x) = (x^T)'\, x + x^T x' = x^T + x^T = 2 x^T,$$ using the symmetry of $x^T x$. You could do it with the partial derivatives too. The Hessian matrix is $2\cdot I$, where $I$ is the identity matrix.
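In coordinates this is quick to verify (a short check, not part of the original answer): since $f(x)=x^Tx=\sum_{i=1}^n x_i^2$, we have $$\frac{\partial f}{\partial x_i}=2x_i,\qquad \frac{\partial^2 f}{\partial x_i\,\partial x_j}=2\delta_{ij},$$ so the Hessian is $2I$, which is positive definite because $v^T(2I)v=2\|v\|^2>0$ for every $v\neq 0$.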
• Thank you! Well, I thought that that is the differential of this function. I obtained it by working out f(x+dx)-f(x). – dreamer Feb 22 '13 at 21:03
• And could you please show me how I can complete the proof of convexity? I am not familiar with the concept 'positive definite' which you mentioned since I am relatively new to these kinds of exercises. – dreamer Feb 22 '13 at 21:05
• i don't know what the $dx$ should mean over there. ok give me some time – Dominic Michaelis Feb 22 '13 at 21:06
• Sorry, that is the way I learned to compute differentials and through that derivatives. How else would you compute the derivative then to do the second part of the question. Thanks for all your help, I really appreciate it. I really want to get better at this but I find it very hard sometimes. – dreamer Feb 22 '13 at 21:09
• I think you meant strictly convex in your first sentence. For otherwise $f(x)=0$ is convex but the Hessian is not definite. – Julien Feb 22 '13 at 21:50
You may prove convexity from first principle. $f(x)$ is said to be strictly convex if and only if $f(px+qy)<pf(x)+qf(y)$ for any $x\not=y$ and for every $0<p<1 \ (q=1-p)$. This can be proved by verifying that $$pf(x)+qf(y)-f(px+qy)=pq(x-y)^T(x-y)=pq\|x-y\|^2>0,$$ where $\|v\|$ denotes the length (i.e. Euclidean norm) of a vector $v$.
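For completeness, here is the algebra behind that identity (a worked expansion, not spelled out in the original answer). With $q=1-p$ and $f(v)=v^Tv$,
$$pf(x)+qf(y)-f(px+qy) = p\,x^Tx + q\,y^Ty - \left(p^2x^Tx + 2pq\,x^Ty + q^2y^Ty\right) = pq\,x^Tx + pq\,y^Ty - 2pq\,x^Ty = pq\,(x-y)^T(x-y),$$
using $p-p^2=pq$ and $q-q^2=qp$.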
To find the minimum of $f$, note that $f(x)=x^Tx=\|x\|^2\ge0$ and $f(x)=0$ only when $x=0$. Hence the absolute minimum of $f$ occurs at $x=0$. Calculus is not of much use here, because it can only prove that a certain point is a local minimum, but you are asked to find the absolute minimum of $f$. But anyway, since you have $f'(x)=2x^T$, setting $f'(x)=0$ would give you back the critical point $x=0$. To show that $x=0$ is indeed an absolute minimum, you still need to argue that $f(x)\ge f(0)=0$ for every $x$.
Edit: It's worth mentioning (thanks to Dominic Michaelis) that every local minimum of a convex function is a global minimum, but in general, calculus is only helpful for screening out local minima among critical points. Extra work is often required to locate global minima.
• As it is convex we know that a local minimum implies a global – Dominic Michaelis Feb 22 '13 at 21:39
• @DominicMichaelis You are right, but considering what the OP knows, I think this result is even more alien to the oP than positive definiteness is. – user1551 Feb 22 '13 at 21:45
• Thank you very much for your help! – dreamer Feb 23 '13 at 8:26 | 2019-07-24T00:22:46 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/311506/convexity-and-minimum-of-a-vector-function",
"openwebmath_score": 0.8986556529998779,
"openwebmath_perplexity": 153.47120432236747,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808678600415,
"lm_q2_score": 0.8615382076534743,
"lm_q1q2_score": 0.8445494218931325
} |
http://origostudio.hu/jessica-marie-nal/0e820b-cardinality-of-surjective-functions | # Cardinality of surjective functions
I'll begin by reviewing some definitions and results about functions (following Bijections and Cardinality, CS 2800: Discrete Structures, Spring 2015, Sid Chaudhuri). Let $X$ and $Y$ be sets and let $f$ be a function.
1. $f$ is injective (or one-to-one) if $f(x) = f(y)$ implies $x = y$.
2. $f$ is surjective (or onto) if for all $b$ there is an $a$ such that $f(a) = b$. A function with this property is called a surjection. Equivalently, a function $f: A \to B$ is called surjective (or onto) if each element of the codomain is "covered" by at least one element of the domain; formally, $f: A \to B$ is a surjection if this statement is true: $\forall b \in B.\ \exists a \in A.\ f(a) = b$.
3. $f$ is bijective (or a one-to-one correspondence) if it is injective and surjective. Bijective functions are also called one-to-one, onto functions. Note that the set of the bijective functions is a subset of the surjective functions.
Think of $f$ as describing how to overlay $A$ onto $B$ so that they fit together perfectly; the function $f$ matches up $A$ with $B$. The function $f$ that we opened this section with is bijective: since $f$ is both injective and surjective, it is bijective. The function $g$ is neither injective nor surjective, and the best we can do in such a case is a function that is either injective or surjective, but not both. On the other hand, if $A$ and $B$ are as indicated in either of the following figures, then there can be no bijection $f : A \rightarrow B$.
Example: The function $f(x) = 2x$ from the set of natural numbers to the set of non-negative even numbers is a surjective function; hence, that function is surjective. BUT $f(x) = 2x$ from the set of natural numbers to the natural numbers is not surjective, because, for example, no member of the domain is mapped to 3 by this function.
By definition of cardinality, we have $|X| < |Y|$ for any two sets $X$ and $Y$ if and only if there is an injective function but no bijective function from $X$ to $Y$. To show that two sets do not have the same cardinality, it suffices to show that there is no surjection from $X$ to $Y$. Two related conditions that come up in such arguments are: (2) there exists a surjective function $f: Y \to X$, and (3) there exists an injective function $g: X \to Y$ (the equivalence of "every surjection has a right inverse" and the Axiom of Choice is relevant here). Both sets in the motivating example have cardinality $2^{\aleph_0}$. To see that there are $2^{\aleph_0}$ bijections, take any partition of $\mathbb N$ into two infinite sets, and just switch between them.
The following theorem will be quite useful in determining the countability of many sets we care about.
Theorem 3. Let $n \in \mathbb N$, and let $X_1, X_2, \dots, X_n$ be nonempty countable sets. Then $\prod_{i=1}^{n} X_i = X_1 \times X_2 \times \cdots \times X_n$ is countable. Proof: we work by induction on $n$.
"domain": "origostudio.hu",
"url": "http://origostudio.hu/jessica-marie-nal/0e820b-cardinality-of-surjective-functions",
"openwebmath_score": 0.82492595911026,
"openwebmath_perplexity": 638.2945109652293,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.980280871316566,
"lm_q2_score": 0.8615382040983515,
"lm_q1q2_score": 0.8445494213860415
} |
http://math.stackexchange.com/questions/206328/cancelations-and-logarithms | # cancelations and logarithms
When faced with the problem of multiplying fractions, for example $$\frac 5 2 \cdot \frac 8 3\cdot \frac{9}{35}$$ we know that we can permute the numerators, or equivalently, permute the denominators, getting $$\frac{5}{35}\cdot\frac 8 2 \cdot \frac 9 3$$ and then cancel: $$\frac 1 7 \cdot \frac 4 1 \cdot \frac 3 1.$$ Similarly when multiplying logarithms $$(\log_2 5)(\log_3 8)(\log_5 81)$$ we can permute the arguments, or equivalently, permute the bases: $$(\log_2 5)(\log_3 8)(\log_5 81) = (\log_2 8)(\log_3 81)(\log_5 5)=3\cdot4\cdot1= 12.$$ So we could say that in $(\log_2 5)(\log_3 8)(\log_5 81)$, we "cancel" the $5$s, getting $(\log_2 8)(\log_3 81)$. Or that in $(\log_2 5)(\log_3 8)(\log_5 81)$ we "cancel" the $2$ and the $8$, getting $3(\log_3 5)(\log_5 81)$, and then "cancel" the base $3$ and the $81$, getting $3\cdot4\log_5 5$ and then "cancel" the $5$s, getting $3\cdot4\cdot1$. Or that in $(\log_2 5)(\log_3 8)(\log_5 81)$ we "cancel" the $3$ and the $81$, getting $4\cdot(\log_2 5)(\log_5 8)$, and then "cancel" the $5$s, getting $4\cdot1\cdot\log_2 8$, etc.
However . . . . . . in the case of fractions, we can multiply numerators and multiply denominators, and say that $$\frac 5 2 \cdot \frac 8 3\cdot \frac{9}{35} = \frac{5\cdot8\cdot9}{2\cdot3\cdot35},$$ so that we can say that in our cancelations, we are dividing both the numerator and the denominator of one fraction by the same thing. Is there some way to do something analogous with logarithms and get something like $\log_{2,3,5} 5,8,81$, where the commas represent whatever operation is appropriate, which conceivably would be different in the base from what it is in the argument?
I'm not precisely sure if this addresses your question, but isn't the logarithm cancelling method precisely the same as the fraction cancelling once you read $\log_a b = \dfrac{\ln b}{\ln a}$ ? – Ragib Zaman Oct 3 '12 at 3:15
@RagibZaman : The identity at the end of your comment is of course the basis of this whole thing, but I don't understand how it means that it's "precisely the same thing". – Michael Hardy Oct 3 '12 at 3:18
About the question in your last two lines: not exactly, but pretty close if we first pass to one single common base. With your example:
$$\log_25\log_38\log_5 81=\frac{\log 5}{\log 2}\frac{\log 8}{\log 3}\frac{\log 81}{\log5}=\frac{\log 5}{\log 5}\frac{\log 8}{\log2}\frac{\log 81}{\log 3}=1\cdot3\cdot4=12$$
Here, "log" can be the natural one, the vulgar one or logarithm to any base.
I think you've very nearly got it. The commas in $2,3,5$ can mean $\exp((\log 2)(\log 3)(\log 5))$, where the logarithm and exponential function are both to the same base, and we don't care what base it is, and the commas in $5,8,81$ would mean $\exp((\log 5)(\log 8)(\log 81))$, where that same base is still used throughout, and then $\log_{2,3,5} 5,8,81$ really is the same thing as $(\log_2 5)(\log_3 8)(\log_5 81)$. Dunno why I didn't think of this. – Michael Hardy Oct 3 '12 at 3:28
Indeed so, @MichaelHardy. – DonAntonio Oct 3 '12 at 3:40
The commas in $2,3,5$ can mean $\exp((\log 2)(\log 3)(\log 5))$, where the logarithms and the exponential function are both to the same base, and we don't care what base it is, and the commas in $5,8,81$ would mean $\exp((\log 5)(\log 8)(\log 81))$, where that same base is still used throughout, and then $\log_{2,3,5} 5,8,81$ really is the same thing as $(\log_2 5)(\log_3 8)(\log_5 81)$. Dunno why I didn't think of this.
Later clarification in response to a comment below:
Say we let $x\circ y\circ z\circ\cdots = \exp_b((\log_b x)(\log_b y)(\log_b z)\cdots)$. Then $$(\log_p q)(\log_r s)(\log_t u)\cdots =\log_{{}\,p\,\circ\,r\,\circ\,t\,\circ\,\cdots} (q\circ s\circ u\circ\cdots).$$
It's not clear to me what you mean by the above. Could you please elaborate. – Bill Dubuque Oct 4 '12 at 20:59
Say we let $x\circ y\circ z\circ\cdots = \exp_b((\log_b x)(\log_b y)(\log_b z)\cdots)$. Then $(\log_p q)(\log_r s)(\log_t u)\cdots$ $=\log_{{}\,p\,\circ\,r\,\circ\,t\,\circ\,\cdots} (q\circ s\circ u\circ\cdots)$. That's what I meant. – Michael Hardy Oct 4 '12 at 21:54
So, was your goal simply to find an operation $\,\circ\,$ such that $$\rm log_{\ a\circ b\circ c}(x\circ y\circ z)\ =\ \frac{log(x\circ y\circ z)}{log(\ a\circ b\circ c)}\ =\ \frac{log(x)\,log(y)\,log(z)}{log(a)\,log(b)\,log(c)}$$ If so, it would help to edit your question to make that more clear. – Bill Dubuque Oct 4 '12 at 22:10
Certainly that was not my goal, since (as I said in the comments below the answer I "accepted") I hadn't even noticed the obvious fact pointed out by "DonAntonio" in that answer. But I wrote "where the commas represent whatever operation is appropriate", so identifying that operation if it existed was at least a part of the question. – Michael Hardy Oct 4 '12 at 22:14
Ok, so what was your original problem? That's the best sense I can make of it so far. – Bill Dubuque Oct 4 '12 at 22:16 | 2014-10-23T05:58:23 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/206328/cancelations-and-logarithms",
"openwebmath_score": 0.9464320540428162,
"openwebmath_perplexity": 231.35307638303271,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808684361289,
"lm_q2_score": 0.8615382058759129,
"lm_q1q2_score": 0.8445494206469444
} |
https://iq.opengenus.org/set-partition-problem/ | # Set Partition Problem (Same sum)
The problem is to determine whether a given set can be partitioned into two subsets such that the sum of elements in both subsets is the same.
Solving this using Dynamic Programming takes O(N x SUM) time, where N is the number of elements and SUM is the sum of elements in either of the subsets.
If we solve this problem using the recursive approach, the time complexity is exponential, but we can use a dynamic programming approach to reduce this complexity. In dynamic programming we break the problem down into smaller sub-problems and use the result of each to build up our solution. In this article, we will first explore the recursive approach and then move on to the dynamic programming approach to solve the problem efficiently.
# Example
arr[] = {2,6,3}
Output : False
The array can't be partitioned into equal sum sets
arr[] = {2,2,3,3}
Output : True
Array can be partitioned as {2,3} and {2,3}
Before moving further, let's recall how the OR operator works.
## OR Operator
It is a logical operator, denoted by the "||" symbol. Let A and B be two operands to which we apply the OR operator.
(A||B) = true if at least one of the operands A or B is non-zero.
So we can say that (A||B) is equal to false if and only if both A and B are zero.
Property:
• A=1,B=0 => (A||B) = 1
• A=0,B=0 => (A||B) = 0
• A=0,B=1 => (A||B) = 1
• A=1,B=1 => (A||B) = 1
# Method
To solve this problem, there are three major steps:
1. Calculate the sum of the given array.
2. If the sum is odd then we can't partition the array into two subsets having equal sum. In this case, return False.
3. If the sum is even then we try to find a subset whose elements sum to (sum/2). If such a subset exists then return True.
The first and second steps are very easy.
The third step can be solved using recursion as well as dynamic programming. We will create a function partition which handles the first and second steps.
For the third step we will create another function isSubsetsum which returns true if there exists a subset having sum equal to (sum/2), and false otherwise. Now our main target is to implement the isSubsetsum function. To obtain a subset of the given set with sum equal to (sum/2), for each element of the set we consider two possibilities: include that element in the subset, or not. If the element is greater than the required sum then we do not include it. Otherwise we have to consider both possibilities.
# Recursive Approach
## Pseudocode:
As explained earlier, calculate the sum of the given array. If the sum is odd then just return false; if the sum is even then call isSubsetsum(). If this function returns true then return true, else return false. Let's understand the isSubsetsum() function.
Let isSubsetsum(arr, n, sum) be the function that returns true if there is a subset of arr[0..n-1] with sum equal to the given sum (we call it with sum/2).
The isSubsetsum problem is divided into two subproblems:
1. isSubsetsum(arr, n-1, sum), without considering the last element.
2. isSubsetsum(arr, n-1, sum-arr[n-1]), considering the last element.
If either of the above subproblems returns true, then return true, else return false.
## Implementation:
#include<bits/stdc++.h>
using namespace std;
//If there exists a subset of the given array with sum equal to the given sum, then this function returns true.
bool isSubsetsum(int arr[],int n,int sum){
//Base Cases
if(sum==0){
return true;
}
if(n==0 && sum!=0){
return false;
}
//If last element is greater than sum,then ignore it.
if(arr[n-1]>sum){
return isSubsetsum(arr,n-1,sum);
}
//If last element is not greater than sum then we are considering both the possibilities i.e. either include the last element or exclude it.If any possibility returns true then this function returns true.
return isSubsetsum(arr,n-1,sum) || isSubsetsum(arr,n-1,sum-arr[n-1]);
}
//This function returns true if arr[] can be partitioned into two subsets of equal sum.
bool partition(int arr[],int n){
int sum=0;
//calculate sum of the elements in array.
for(int i=0;i<n;i++){
sum+=arr[i];
}
//If sum is odd,there cannot be two subsets with equal sum.
if(sum%2!=0){
return false;
}
//If sum is even,then call function isSubsetsum()
return isSubsetsum(arr,n,sum/2);
}
int main(){
int arr[] = {2,6,3};
int n=3;
if(partition(arr,n)){
cout<<"True"<<endl;
}else{
cout<<"False"<<endl;
}
return 0;
}
Output:False
• Time Complexity : O(2^n)
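As a bridge to the next section, here is one way to memoize the same recursion (an illustrative sketch, not part of the original article; the helper names and the 2D cache are my own choices):
#include <bits/stdc++.h>
using namespace std;
// Memoized isSubsetsum: memo[n][sum] caches results (-1 = unknown, 0 = false, 1 = true).
vector<vector<int>> memo;
bool isSubsetsumMemo(int arr[], int n, int sum) {
    if (sum == 0) return true;
    if (n == 0) return false;
    if (memo[n][sum] != -1) return memo[n][sum] == 1;
    bool ans;
    if (arr[n-1] > sum)
        ans = isSubsetsumMemo(arr, n-1, sum);            // last element cannot be included
    else
        ans = isSubsetsumMemo(arr, n-1, sum) ||          // exclude the last element
              isSubsetsumMemo(arr, n-1, sum - arr[n-1]); // include the last element
    memo[n][sum] = ans ? 1 : 0;
    return ans;
}
bool partitionMemo(int arr[], int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) sum += arr[i];
    if (sum % 2 != 0) return false;
    memo.assign(n + 1, vector<int>(sum / 2 + 1, -1));
    return isSubsetsumMemo(arr, n, sum / 2);
}
int main() {
    int arr[] = {5, 4, 9};
    cout << (partitionMemo(arr, 3) ? "True" : "False") << endl; // prints True
    return 0;
}
Each (n, sum) state is computed at most once, so this already achieves the O(N x SUM) bound of the bottom-up table described next.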
# Dynamic Programming Approach
The recursive approach computes the same subproblems again and again; that's why its complexity is high. Let's look at the recursion tree for arr[] = {5,4,9}. In this example, since the sum is even, the isSubsetsum() function is called.
In the recursion tree, F(i,j) represents the function isSubsetsum, where i represents the size of the array and j represents the sum.
The function F(0,9) is called two times. For large values of n and sum, there will be many subproblems which are called again and again. The re-computation of subproblems can be avoided by applying dynamic programming techniques.
## Pseudocode:
1. Calculate the sum of the given array. If the sum is odd then return false, else call the function isSubsetsum().
2. Implement isSubsetsum(). If this function returns true then return true, else return false.
Let isSubsetsum(arr, n, sum) be the function that returns true if there is a subset of arr[0..n-1] with sum equal to the target (sum/2). To implement this function in a bottom-up manner we create a 2D array dp[][] whose row index represents the sum and whose column index represents the number of array elements considered. The value dp[i][j] will be true if there exists a subset of the first j elements with sum equal to i.
dp[i][j] = true if a subset of the first j elements having sum equal to i is found, otherwise false.
Base Case:
dp[0][j] = true for every j (the empty subset already has sum 0)
dp[i][0] = false for 1 <= i <= sum (with no elements, a positive sum cannot be reached)
Recursive relation:
If the last element considered is not greater than the current sum then we consider both possibilities, i.e. either include that element or exclude it. If either possibility yields true then we store true in the array.
for(int i=1;i<=sum;i++){
    for(int j=1;j<=n;j++){
        dp[i][j]=dp[i][j-1]; //excluding the j-th element
        if(i>=arr[j-1]){
            dp[i][j]=(dp[i][j]||dp[i-arr[j-1]][j-1]); //including the j-th element
        }
    }
}
Let's understand this with an example:
arr[] = {3,4,7}
n = 3
Now the row index of the dp[][] array ranges from 0 to 7 and the column index ranges from 0 to 3. Here dp[3][1]=true because there exists a subset of the first 1 element having sum equal to 3. Also dp[3][2]=true because there exists a subset of the first 2 elements having sum equal to 3. Similarly, we can fill the entire dp[][] array, and if dp[7][3]=true then isSubsetsum returns true.
## Implementation
#include<bits/stdc++.h>
using namespace std;
//If there exists a subset of the given array with sum equal to the given sum, then this function returns true.
bool isSubsetsum(int arr[],int n,int sum){
bool dp[sum+1][n+1];
//If sum is 0,then store true.
for(int i=0;i<=n;i++){
dp[0][i]=true;
}
//If n=0 but sum!=0 then store false.
for(int i=1;i<=sum;i++){
dp[i][0]=false;
}
//Filling the dp array.
for(int i=1;i<=sum;i++){
for(int j=1;j<=n;j++){
dp[i][j]=dp[i][j-1]; //Excluding last element
if(i>=arr[j-1]){
dp[i][j]=(dp[i][j]||dp[i-arr[j-1]][j-1]); //If last element is not greater than sum then we are considering both the possibilities i.e. either include the last element or exclude it.If any possibility returns true then store true in the array.
}
}
}
return dp[sum][n];
}
//This function returns true if arr[] can be partitioned in two subsets of equal sum.
bool partition(int arr[],int n){
int sum=0;
//calculate sum of the elements in array.
for(int i=0;i<n;i++){
sum+=arr[i];
}
//If sum is odd,there cannot be two subsets with equal sum.
if(sum%2!=0){
return false;
}
//If sum is even,then call function isSubsetsum()
return isSubsetsum(arr,n,sum/2);
}
int main(){
int arr[] = {5,4,9};
int n=3;
if(partition(arr,n)){
cout<<"True"<<endl;
}else{
cout<<"False"<<endl;
}
return 0;
}
Output:True
## Step by step Explanation
dp[i][j]=(dp[i][j]||dp[i-arr[j-1]][j-1])   → equation-1
Before this step, we store dp[i][j]=dp[i][j-1]. This ensures that if there exists a subset with sum i that excludes the element at index (j-1), then dp[i][j] is set to true; otherwise it stays false for now.
What equation-1 then adds is this: if dp[i][j] is still false, but there exists a subset with sum (i-arr[j-1]) that does not use the element at index (j-1), then dp[i][j] is set to true (we can add arr[j-1] to that subset to reach sum i). In other words, dp[i][j] is true if there is a subset with sum i excluding the element at index (j-1), or a subset with sum (i-arr[j-1]) excluding that element.
When neither condition holds, dp[i][j] remains false.
# Example
arr[] = {5,4,9}
Output : True
sum = 5+4+9 = 18
Since the sum is even, we call the isSubsetsum() function with target sum/2 = 9.
The dp[][] array obtained is:
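(The table itself did not survive extraction; the following is the same table computed directly from the recurrence above, with rows i = target sum 0..9, columns j = number of elements considered 0..3, T = true, F = false.)
i\j : 0  1  2  3
0   : T  T  T  T
1   : F  F  F  F
2   : F  F  F  F
3   : F  F  F  F
4   : F  F  T  T
5   : F  T  T  T
6   : F  F  F  F
7   : F  F  F  F
8   : F  F  F  F
9   : F  F  T  T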
Since dp[9][3] = true, the function returns true.
# Complexity
• Time Complexity : O(n.sum)
• Space Complexity : O(n.sum)
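The space can usually be brought down to O(sum) by keeping a single boolean row and updating it from high sums to low sums. This is a common optimization, shown here as a sketch that is not part of the original article:
#include <bits/stdc++.h>
using namespace std;
// Same partition check with a 1D dp array: dp[i] = true if some subset sums to i.
bool partition1D(int arr[], int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) sum += arr[i];
    if (sum % 2 != 0) return false;
    int target = sum / 2;
    vector<bool> dp(target + 1, false);
    dp[0] = true; // the empty subset has sum 0
    for (int j = 0; j < n; j++) {
        // iterate downwards so that each element is used at most once
        for (int i = target; i >= arr[j]; i--) {
            dp[i] = dp[i] || dp[i - arr[j]];
        }
    }
    return dp[target];
}
int main() {
    int arr[] = {5, 4, 9};
    cout << (partition1D(arr, 3) ? "True" : "False") << endl; // prints True
    return 0;
}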
# Question
1. Can we partition the array arr[]={3,6,5,3,7,9} into two subsets of equal sum?
• Yes
• No | 2021-08-02T15:11:00 | {
"domain": "opengenus.org",
"url": "https://iq.opengenus.org/set-partition-problem/",
"openwebmath_score": 0.5759615898132324,
"openwebmath_perplexity": 2115.1835810166467,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808684361289,
"lm_q2_score": 0.8615382058759129,
"lm_q1q2_score": 0.8445494206469444
} |
http://math.stackexchange.com/questions/64261/good-upper-bound-for-sum-limits-i-1kn-choose-i | Good upper bound for $\sum\limits_{i=1}^{k}{n \choose i}$?
I want an upper bound on $$\sum_{i=1}^k \binom{n}{i}.$$
$O(n^k)$ seems to be overkill -- could you suggest a tighter bound?
-
Do you mean $\sum_{i=1}^k {n \choose i}$? – Robert Israel Sep 13 '11 at 20:13
Assuming $n$ grows while $k$ is a fixed constant, the given sum is $\frac{n^k}{k!}+ O(n^{k-1})$. This is tight up to the constant in the $O$ notation. – Srivatsan Sep 13 '11 at 20:14
@Srivatsan, if $k$ is a fixed number, then the constant factor $\frac{1}{k!}$ is irrelevant for the growth rate. – Henning Makholm Sep 13 '11 at 20:16
@Hen Thanks. Fixed now :-). – Srivatsan Sep 13 '11 at 20:17
@Robert - yes, of course, sorry. Fixed. – Anton Belov Sep 13 '11 at 20:19
We are interested in the sum $$V(n,k) := \sum\limits_{i=0}^k \binom{n}{i}.$$ (Notice that I have added $i=0$ for convenience. This affects the sum only by an additive $1$.) Indeed, this quantity has a combinatorial significance: it is the volume of a "Hamming ball" of radius $k$ in the Hamming space (the $n$-dimensional hypercube). It is quite difficult to say anything precise without information about the relative sizes of $k$ and $n$. I will therefore address some common regimes of interest in the answer. Without loss of generality, we will take $k \leq n/2$, since for $k > n/2$, we can write $V(n, k)$ as $2^n - V(n, n-k-1)$.
First, suppose that $n$ grows while $k$ is a fixed constant independent of $n$. Then the lower order terms (terms corresponding to $i < k$) are all $O(n^{k-1})$. Since there are just $k$ of them, we can absorb all of them into a single $O(n^{k-1})$. The remaining term $\binom{n}{k}$ is a polynomial in $n$ of degree $k$ and leading coefficient $\frac{1}{k!}$. Therefore, this term is $\frac{n^{k}}{k!}+O(n^{k-1})$. Combining these two observations, $$V(n,k) = \frac{n^{k}}{k!} + O(n^{k-1}).$$
Another regime of interest is when $k$ is large. Specifically, suppose $k = \alpha n$ where $\alpha \in (0, 1/2)$. In this case, by Stirling approximation and a lot of calculations, we can show that $$V(n, \alpha n) = 2^{n H(\alpha) - \frac{1}{2} \log_2 n + O(1)},$$ where the exponent $H(\alpha)$ is the Shannon entropy or the binary entropy given by: $$H(\alpha) = - \alpha \log_2 \alpha - (1-\alpha) \log_2 (1-\alpha).$$ For $\alpha \in (0, 1/2)$, $H(\alpha)$ is strictly increasing and is strictly positive. I want to point out that when $k$ is so large, the $n^k$ upper bound is quite crummy: $n^k = 2^{k \log n} = 2^{\alpha n \log n}$. This is a useless upper bound because we already know a much better upper bound of $2^n$. (For a reference to this estimate of $V(n, \alpha n)$, check Problem 9.42 in Concrete Mathematics. In many common cases, one is however content with a less precise form, such as $2^{n H(\alpha)+o(n)}$.)
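As a rough numerical illustration of this regime (a sketch added here, not from the original answer; lgamma is used to evaluate binomial coefficients in floating point), one can compare $\tfrac{1}{n}\log_2 V(n,\alpha n)$ with $H(\alpha)$ for, say, $\alpha = 1/4$:
#include <cmath>
#include <cstdio>
// log2 of binomial(n, k) via lgamma
double log2_binom(int n, int k) {
    return (std::lgamma(n + 1.0) - std::lgamma(k + 1.0) - std::lgamma(n - k + 1.0)) / std::log(2.0);
}
// log2 of V(n, k) = sum_{i=0}^{k} binomial(n, i), summed in double precision
double log2_V(int n, int k) {
    double v = 0.0;
    for (int i = 0; i <= k; i++) v += std::exp2(log2_binom(n, i));
    return std::log2(v);
}
int main() {
    double alpha = 0.25;
    double H = -alpha * std::log2(alpha) - (1 - alpha) * std::log2(1 - alpha); // ~0.8113
    for (int n = 40; n <= 640; n *= 4) {
        int k = static_cast<int>(alpha * n);
        std::printf("n=%4d  (1/n) log2 V(n, n/4) = %.4f   H(1/4) = %.4f\n", n, log2_V(n, k) / n, H);
    }
    return 0;
}
The output slowly approaches $H(1/4)$ from below, consistent with the $-\frac{1}{2}\log_2 n + O(1)$ correction in the exponent.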
There are, of course, more things that one can ask: generalization to the $q$-ary hypercube, intermediate values of $k$ (e.g., $k = O(\log n)$), values of $k$ close to $n/2$ (e.g., $\frac{n}{2} - c \sqrt{n}$) and so on. Each of these is a fruitful direction of study, but I cannot touch upon any of them here.
EDIT: The precise estimate is taken from Mike's comment. (Thanks, Mike!)
I was about to post this as an answer, but it's so close to yours that I'll leave it as a comment: Problem 9.42 in Concrete Mathematics has a slightly more precise expression for $V(n,\alpha n): 2^{n H(\alpha) - \frac{1}{2}\log_2 n + O(1)}$. There's an outline of their derivation in the answer section of the book. – Mike Spivey Sep 13 '11 at 20:37
Thank you for such an informative answer -- I was just about to ask for $k = \alpha n$ ;-) – Anton Belov Sep 13 '11 at 20:38
@Mike I will edit my answer (sometime later) to include this reference. Hope it's ok with you! :-) Thanks. – Srivatsan Sep 13 '11 at 20:39
Certainly. I think it's fairly common practice to incorporate others' comments into our answers if we find them sufficiently helpful. – Mike Spivey Sep 13 '11 at 20:46
Let $k$ be fixed. Then, in the "big Oh" sense, the bound is tight.
For example, let $\epsilon$ be a very small positive number. It is easy to verify that $$\lim_{n \to \infty} \frac{n^{k-\epsilon}}{\binom{n}{k}}=0.$$ So we cannot replace $n^{k}$ by $n^{k-\epsilon}$. We cannot even replace $n^k$ by something that grows almost as fast as $n^k$, such as $n^k/(\log(\log n))$.
So for fixed $k$, we will not find a "cheaper" upper bound in the "big Oh" sense. Of course, we can improve the implicit constant, by dividing by $k!$ (the terms $\binom{n}{i}$ with $i<k$ make negligible relative contribution when $n$ is large).
great point; i would up-vote if i could; thank you – Anton Belov Sep 13 '11 at 20:40 | 2015-01-27T09:06:35 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/64261/good-upper-bound-for-sum-limits-i-1kn-choose-i",
"openwebmath_score": 0.9786620736122131,
"openwebmath_perplexity": 238.31041970672703,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808678600415,
"lm_q2_score": 0.8615382058759129,
"lm_q1q2_score": 0.8445494201506231
} |
http://math.stackexchange.com/questions/3320/does-the-number-pi-have-any-significance-besides-being-the-ratio-of-a-circles-d/17436 | # Does the number pi have any significance besides being the ratio of a circle's diameter to its circumference?
Pi appears a LOT in trigonometry, but only because of its 'circle-significance'. Does pi ever matter in things not concerned with circles? Is its only claim to fame the fact that its irrational and an important ratio?
I'd rather this be a comment instead, so: $\pi$ turns up in the expression for the so-called "probability integral" (a.k.a. the "error function") among other things. How circles relate to this is a bit of a long-winded explanation though. – J. M. Aug 26 '10 at 0:11
Also, let's get one thing straight here: circles are eerily important. You will never stop running into circles in mathematics. – Qiaochu Yuan Aug 26 '10 at 0:26
(For example, although the Fourier transform is "concerned with circles" (functions on the circle being the same thing as periodic functions) it penetrates into the deepest parts of modern mathematics. Many appearances of pi are because of a Fourier transform lurking somewhere in the background. You might also want to read this MO thread where I asked a similar question: mathoverflow.net/questions/18180/…) – Qiaochu Yuan Aug 26 '10 at 0:52
Fundamental source of $\pi$ is circle nothing else. It may be difficult to find it but it is always there. – Pratik Deoghare Aug 26 '10 at 12:43
It is difficult to know if a circle is not lurking somewhere, whenever there is $\pi$, but the values of the Riemann zeta function at the positive even integers have a lot to do with powers of $\pi$: see here for the values.
For instance, you can prove that the probability that two "randomly chosen" positive integers are coprime is $\frac{1}{\zeta(2)} = \frac{6}{\pi^2}$.
You had my upvote at "it is difficult to know if a circle is not lurking somewhere..." – J. M. Aug 26 '10 at 0:22
Maybe that has something to do with angles and the 2D lattice. – asmeurer Dec 20 '12 at 20:03
@asmeurer: Actually it has to do with the fact that $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$. – ShreevatsaR Jun 9 at 16:29
@ShreevatsaR you're just restating what $\zeta(2)$ is. That doesn't explain where the $\pi$ comes from, fundamentally. – asmeurer Jun 9 at 19:18
• @asmeurer: True, but I'm saying it has nothing to do with "angles and the 2D lattice". Nothing I know of, anyway. The proof that $\zeta(2)$ has that value (see the linked article) doesn't use those, and the proof that the probability that two "randomly chosen" positive integers are coprime is $\frac{1}{\zeta(2)}$ is straightforward. – ShreevatsaR Jun 10 at 2:01
$\pi$ appears in Stirling's approximation, which is not obviously related to circles. This means that $\pi$ appears in asymptotics related to binomial coefficients, such as
$$\displaystyle {2n \choose n} \approx \frac{4^n}{\sqrt{\pi n}}.$$
In other words, the probability of flipping exactly $n$ heads and $n$ tails after flipping a coin $2n$ times is about $\frac{1}{\sqrt{\pi n}}$. This asymptotic also suggests that on average you should flip between $n + \sqrt{\pi n}$ and $n - \sqrt{\pi n}$ heads.
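A quick numerical check of that approximation (an illustrative sketch, not part of the original answer; lgamma is used to evaluate the central binomial coefficient):
#include <cmath>
#include <cstdio>
// P(exactly n heads in 2n fair coin flips) = binomial(2n, n) / 4^n
double probExactHalf(int n) {
    double logP = std::lgamma(2.0 * n + 1.0) - 2.0 * std::lgamma(n + 1.0) - 2.0 * n * std::log(2.0);
    return std::exp(logP);
}
int main() {
    const double pi = std::acos(-1.0);
    int ns[] = {10, 100, 1000, 10000};
    for (int n : ns) {
        std::printf("n=%6d  exact = %.6f   1/sqrt(pi*n) = %.6f\n",
                    n, probExactHalf(n), 1.0 / std::sqrt(pi * n));
    }
    return 0;
}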
This is closely related to J. Mangaldan's comment about the probability integral. Somehow I think it all ties back to the fact that e^{-x^2} is its own Fourier transform. – Qiaochu Yuan Aug 26 '10 at 0:31
Yes. Yes it does. :) – J. M. Aug 26 '10 at 0:47
I guess that comment is worth explaining: the relationship is that the constant in Stirling's approximation can be computed from the central limit theorem. This is explained at terrytao.wordpress.com/2010/01/02/… . – Qiaochu Yuan Jan 14 '11 at 0:53
@QiaochuYuan You might be interested in Kunth's "Why Pi?" Lecture. He shows how this is related to cirlces! – Pedro Tamaroff Feb 27 '12 at 5:19
Because of the formula $e^{i\pi}+1=0$ you will find $\pi$ appearing in lots of places where it's not clear there is a circle, e.g in the normal distribution formulae.
Yes, the ratio $\pi$ of a circle's circumference to its diameter shows up in many, many places where one might not expect it!
One partial explanation (similar in spirit to "circles lurk everywhere") is that the equation for a circle is a $quadratic$ (eg. $x^2+y^2 = r^2$.) After nice linear functions, the next most commonly used functions are quadratic functions and everywhere one runs into a quadratic function, a trig substitution (e.g. $x = r \cos \theta; y=r\sin \theta$) may be useful, turning the quadratic function into something involving $\pi.$ This explains the antiderivative $\int \frac{1}{1+x^2} dx$ involving $\pi$, the sum of reciprocals of squares $\sum^\infty\frac{1}{k^2}$ involving $\pi$ and the area under the Gaussian distribution involving $\pi$. And so on....
How does it explain the sums of reciprocals of squares involving pi? – George Lowther Jan 13 '11 at 23:02
@George: there are a few elementary proofs of sum 1/k^2 = pi^2/6 where pi creeps in for reasons at least analogous to a trig substitution: math.stackexchange.com/questions/8337/… – Qiaochu Yuan Jan 14 '11 at 0:52
@Qiaochu: Most of the proofs I know apply equally well to evaluating $\sum 1/n^d$ (for d even) and even $\sum (-1)^d/n^d$ (for d odd), which also involve $\pi$. So, the fact that the terms are squares doesn't seem particularly significant to the appearance of $\pi$. – George Lowther Jan 14 '11 at 1:41
I'll have a look through the alternative proofs in that link though. – George Lowther Jan 14 '11 at 1:42
(I meant $\sum(-1)^d/(2n+1)^d$ above). I always thought of these sums involving $\pi$ for similar reasons, and not just the $d=2$ case in isolation. – George Lowther Jan 14 '11 at 1:56 | 2013-12-05T02:39:52 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/3320/does-the-number-pi-have-any-significance-besides-being-the-ratio-of-a-circles-d/17436",
"openwebmath_score": 0.9059469103813171,
"openwebmath_perplexity": 638.4589711344356,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808678600414,
"lm_q2_score": 0.8615382040983515,
"lm_q1q2_score": 0.8445494184081135
} |
https://math.stackexchange.com/questions/2246475/why-is-int-frac1-sqrty-sqrt1-y-dy-frac2-sqrty-1-sqrty-lo/2246503 | # Why is $\int \frac{1}{\sqrt{y}\sqrt{1 - y}} dy = \frac{2\sqrt{y - 1}\sqrt{y} \log(\sqrt{y - 1} + \sqrt{y})}{\sqrt{(-(y - 1) y)}}$?
Fairly self-explanatory question title. Why is $$\int \frac{1}{\sqrt{y}\sqrt{1 - y}} dy = \frac{2\sqrt{y - 1}\sqrt{y} \log(\sqrt{y - 1} + \sqrt{y})}{\sqrt{-(y - 1)}\sqrt{y}}\ ?$$
I'm assuming you have to use substition, but I'm not sure how.
edit: $$y \in (0,1)$$
• try using substitution $y=t^2$ – Abhash Jha Apr 22 '17 at 13:46
• Substitute $y = \sin^2 \theta$. – user384138 Apr 22 '17 at 13:47
• constrain your allowed values for $y$ to $(0,1)$ and everything should become much clearer – tired Apr 22 '17 at 13:47
• The term $\frac{2\sqrt{y-1}\sqrt{y}}{\sqrt{-(y-1)y}}$ can be greatly simplified and I suggest to do it. – Jack D'Aurizio Apr 22 '17 at 13:55
• Please check the signs in your expression. The denominator on the LHS is defined for $0 < y < 1$, but the first term in the numerator on the RHS is defined for $y>1$. – mlc Apr 22 '17 at 14:03
Is the answer correct? Notice that $y\in (0,1)$, since both $y> 0$ and $1-y>0$ must hold. But then why does the answer contain $\sqrt{y-1}$?
Let $y=\sin ^2x$, $x\in (0,\frac{\pi}{2})$ $$\int \frac{1}{\sqrt{y}\sqrt{1 - y}} dy = \int \frac{2\sin x \cos x}{\sin x\cos x} dx =2x +C=2\arcsin \sqrt{y}+C$$
• The two different representations follow from $\arcsin(x) = i\log(\sqrt{1-x^2}+ix)$ – Hyperplane Apr 22 '17 at 14:20
• @Hyperplane yes, it we allow complex number, we could certainly find the relationship between them. But I did not go further steps there, because the question's tag does not involve complex analysis. – Yujie Zha Apr 22 '17 at 15:11
• You're right. The problem is indeed supposed to happen within the real numbers. Which means my setup isn't quite consistent. – ghthorpe Apr 22 '17 at 15:51
Hint:
As $$4y(1-y)=1-(2y-1)^2$$
Set $2y-1=\sin t$
Another possibility is to write it as $$\int\frac{\sqrt{y}}{y\sqrt{1-y}}dy$$and substitute $$u=\sqrt{\frac{y}{1-y}}$$ this avoids trigonometry. | 2019-11-18T23:43:07 | {
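As a quick check of the real-valued antiderivative found above (a worked verification added here, not part of the original answers), differentiate $2\arcsin\sqrt{y}$ for $y\in(0,1)$:
$$\frac{d}{dy}\left(2\arcsin\sqrt{y}\right) = 2\cdot\frac{1}{\sqrt{1-(\sqrt{y})^2}}\cdot\frac{1}{2\sqrt{y}} = \frac{1}{\sqrt{y}\,\sqrt{1-y}},$$
which is exactly the integrand.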
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2246475/why-is-int-frac1-sqrty-sqrt1-y-dy-frac2-sqrty-1-sqrty-lo/2246503",
"openwebmath_score": 0.839572012424469,
"openwebmath_perplexity": 554.824990263956,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9643214450208032,
"lm_q2_score": 0.8757870029950159,
"lm_q1q2_score": 0.8445401882585922
} |
https://mathematica.stackexchange.com/questions/104305/approximating-exponentials-in-a-nice-to-read-format | # Approximating exponentials in a nice to read format
I need to make some approximations, basically I have something like
$$e^{i*a} = -0.735145 + 0.67791*I$$
and I want to approximate this to something that is easily readable, like
$$e^{\frac{3*\pi}{2}*i}$$
Does Mathematica have a simplify function that can do this, while I specify the form of the equation I want to end up with (like the second one)? Is there another way, besides just taking a wild guess?
Read the documentation about Rationalize
Now starting from the complex form you can build you own function
nicef = Exp[Rationalize[Im[Chop[Log[#]]], 1/16] I] &
nicef[-0.735145 + 0.67791 I]
E^((12 I)/5)
or if the input is $a$ then just Rationalize[a,1/16]
• Yes, I did try Rationalize, however my algorithm is so sensitive that not even with 0.000001 instead of 1/16, I won't get what I want. Anyway, your post does answer my question, so I will accept it. Thank you. Jan 18 '16 at 10:57 | 2021-09-18T17:36:52 | {
"domain": "stackexchange.com",
"url": "https://mathematica.stackexchange.com/questions/104305/approximating-exponentials-in-a-nice-to-read-format",
"openwebmath_score": 0.826128363609314,
"openwebmath_perplexity": 713.3732774496442,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.9643214491222695,
"lm_q2_score": 0.8757869932689565,
"lm_q1q2_score": 0.8445401824715554
} |
https://mathematica.stackexchange.com/questions/140516/projection-of-a-3d-ode-solution-on-a-parametric-2d-streamplot | # Projection of a 3D ODE solution on a parametric 2D streamplot
### Problem
I have a third-order dynamical system of which I'd like to plot the solutions on the streamplot defined by the same dynamical system, as a function of one of the three variables.
These are the equations:
\begin{align*} x^\prime &= (1 - z) (A (1 - x) - x)\\ y^\prime &= B - (1 - z) y\\ z^\prime &= z (1 - z) \end{align*} I would like to plot a projection of the trajectory on the (x, y) plane, as a function of the value the variable z assumes. I can treat z almost as it were a parameter, its solution being an invertible function of time.
### What I've done so far
Taking inspiration from here and here, I was able to plot the parametic streamplot (corresponding to stacks of the ideal 3D phase diagram as a function of z) and the solution of 3D dynamical system separately:
splot = Manipulate[StreamPlot[
{
(1 - z) (A (1 - x) - x),
B - (1 - z) y
},
{x, 0, 1}, {y, 0, 5}, StreamColorFunction -> "Rainbow",
StreamScale -> Large, StreamPoints ->Fine], {z, 0, 1}];
pplot = ParametricPlot3D[
Evaluate[
First[{x[t], y[t], z[t]} /.
NDSolve[
{
x'[t] == (1 - z[t]) (A (1 - x[t]) - x[t]),
y'[t] == B - (1 - z[t]) y[t],
z'[t] == z[t] (1 - z[t]),
Thread[{x[0], y[0], z[0]} == {0.1, 0, 0.01}]}, {x, y, z}, {t, 0, 10}]]],
{t, 0, 10}, PlotStyle -> Red];
How to superimpose splot and pplot, always being able to vary the value of z? Of course, Show[splot, pplot] does not work...
### What I would like to do
In summary, I'd like to obtain a 2D projection of the 3D solution of the dynamical system, and plot it onto a streamplot defined by the (x, y) field, as a function of the value the variable z assumes.
Thanks in advance for you help.
• Just try ParametricPlot3D and see what happens. – zhk Mar 20 '17 at 13:22
• Yes, with ParametricPlot3D I can get the 3D trajectory. Thanks! But still how to project it on the 2D streamplot? I still obtain "Could not combine the graphics objects in Show" – Orso Mar 20 '17 at 14:55
• How is this possible? Your 3D has x, y and z but in StreamPlot you have just x and y. – zhk Mar 20 '17 at 15:43
• Not sure if this is what you mean, but I had to remove any dependence from z in the streamplot if I wanted to obtain the plot. Ideally, I'd like to have the complete form that you can see in the NDSolve part in the streamplot, too. – Orso Mar 20 '17 at 15:58
Perhaps, this can motivate desired answer:
f[a_, b_, x_, y_, z_] := {(1 - z) (a (1 - x) - x), b - (1 - z) y,
z (1 - z)}
sol = ParametricNDSolve[{{x'[t], y'[t], z'[t]} ==
f[a, b, x[t], y[t], z[t]],
x[0] == x0, y[0] == y0, z[0] == z0}, {x, y, z}, {t, 0, 10}, {a, b,
x0, y0, z0}];
s1 = Show[
ParametricPlot[
Evaluate[{x[1, 1, 0.3, 0.2, 0.1][t],
y[1, 1, 0.3, 0.2, 0.1][t]} /. sol], {t, 0, 10},
PlotRange -> {0, 10}, PlotStyle -> Red],
StreamPlot[f[1, 1, x, y, 0.1][[;; 2]], {x, 0, 10}, {y, 0, 10}],
Frame -> True, PlotLabel -> "z=0.1", ImageSize -> 300];
s2 = Show[
ParametricPlot[
Evaluate[{x[1, 1, 0.3, 0.2, 0.1][t],
y[1, 1, 0.3, 0.2, 0.1][t]} /. sol], {t, 0, 10},
PlotRange -> {0, 10}, PlotStyle -> Red],
StreamPlot[f[1, 1, x, y, 0.2][[;; 2]], {x, 0, 10}, {y, 0, 10}],
Frame -> True, PlotLabel -> "z=0.2", ImageSize -> 300];
p = ParametricPlot3D[
Evaluate[{x[1, 1, 0.3, 0.2, 0.1][t], y[1, 1, 0.3, 0.2, 0.1][t],
z[1, 1, 0.3, 0.2, 0.1][t]} /. sol], {t, 0, 10},
PlotRange -> {0, 10}, ImageSize -> 300];
Row[{s1, s2, p}] | 2019-11-14T12:11:54 | {
"domain": "stackexchange.com",
"url": "https://mathematica.stackexchange.com/questions/140516/projection-of-a-3d-ode-solution-on-a-parametric-2d-streamplot",
"openwebmath_score": 0.39926618337631226,
"openwebmath_perplexity": 2002.6618984166596,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.964321450147636,
"lm_q2_score": 0.8757869916479466,
"lm_q1q2_score": 0.8445401818063835
} |
http://fluencyuniversity.com/m9o4am/b437h.php?cc98e9=eigenvalues-of-hermitian-matrix | # Eigenvalues of Hermitian matrix
In the discussion below, all matrices and numbers are complex-valued unless stated otherwise; it is simpler to begin with matrices with complex numbers, so let $M$ be an $n\times n$ square matrix with complex entries. Let $x = a + ib$, where $a, b$ are real numbers and $i = \sqrt{-1}$; then $\bar{x} = a - ib$ is the complex conjugate of $x$. A Hermitian matrix (or self-adjoint matrix) is one which is equal to its Hermitian adjoint (also known as its conjugate transpose): $H^* = H$, i.e. every entry of the transposed matrix equals the complex conjugate of the corresponding entry in the original matrix. In physics the dagger symbol is often used instead of the star. Every real symmetric matrix ($A^T = A$ in matrix notation, where $A^T$ stands for $A$ transposed) is Hermitian. A skew-Hermitian matrix is one for which $A^* = -A$; a real skew-symmetric matrix satisfies $A^T = -A$.
In many physical problems, a matrix of interest will be real and symmetric, or Hermitian. The Hamiltonian matrices for quantum mechanics problems are Hermitian: they have real eigenvalues (energy levels) and normalized orthogonal eigenvectors (wave functions). If the eigenvalues are to represent physical quantities of interest, Theorem HMRE guarantees that these values will not be complex numbers. This is an elementary (yet important) fact in matrix analysis.
Theorem. If $H^* = H$ (symmetric if real), then all the eigenvalues of $H$ are real. Proof: suppose $\lambda$ is an eigenvalue of the self-adjoint matrix $A$ with non-zero eigenvector $v$ satisfying $Av = \lambda v$; it follows that $v^*Av$ is a ($1\times 1$) Hermitian matrix, hence a real number, and since $v^*Av = \lambda\, v^*v$ with $v^*v > 0$, the eigenvalue $\lambda$ is real. So we can characterize the eigenvalues of a real symmetric matrix in the same way: they are all real.
Theorem. Eigenvectors of Hermitian matrices corresponding to different eigenvalues are orthogonal. Proof: suppose $x$ and $y$ are eigenvectors of the Hermitian matrix $A$ corresponding to two different eigenvalues $\lambda_1$ and $\lambda_2$ (where $\lambda_1 \neq \lambda_2$); then $\langle x, y\rangle = 0$, where $\langle\cdot,\cdot\rangle$ denotes the usual inner product of two vectors. This will be illustrated with two simple numerical examples, one with real eigenvectors and one with complex eigenvectors.
Corollary. There exists a unitary matrix $V$ such that $V^{-1} H V$ is a real diagonal matrix. The argument can be extended to the case of repeated eigenvalues: it is always possible to find an orthonormal basis of eigenvectors for any Hermitian matrix, so symmetric matrices are orthogonally diagonalizable (diagonalization in the Hermitian case: Theorem 5.4.1 with a slight change of wording holds true for Hermitian matrices). More generally, if the eigenvalues of a matrix $A$ are all distinct, or $A$ is a Hermitian matrix (or, for every eigenvalue, the algebraic multiplicity equals the geometric multiplicity), then there exists a unitary $U$ that diagonalizes $A$, and in particular a nonsingular $X$ that diagonalizes $A$.
"Since we are working with a Hermitian matrix, we may take an eigenbasis of the space …" "Wait, sorry, why are Hermitian matrices diagonalizable, again?" "Umm … it's not quick to explain." This exchange happens often when I give talks about spectra of graphs and digraphs in Bojan's graph theory meeting.
Further facts. A Hermitian (symmetric) matrix with all positive eigenvalues must be positive definite. A useful consequence for HPD (SPD) matrices is that their eigenvalues (which we already know are real due to the Hermitian property) must be non-negative; therefore, HPD (SPD) matrices must be invertible. The eigenvalues of a skew-Hermitian matrix are pure imaginary; for a real skew-symmetric matrix $A$ (that is, $A^T=-A$), (a) each eigenvalue is either $0$ or a purely imaginary number, and (b) the rank of $A$ is even. If $A$ is real-symmetric or Hermitian, its eigendecomposition can be used to compute a matrix square root: if $A$ has no negative real eigenvalues, one computes the principal matrix square root of $A$, that is, the unique matrix $X$ with eigenvalues having positive real part such that $X^2 = A$; otherwise, a nonprincipal square root is returned.
For a Hermitian matrix, the norm squared of the $j$th component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the corresponding minor matrix,
$$|v_{i,j}|^2=\frac{\prod_{k}\left(\lambda_i-\lambda_k(M_j)\right)}{\prod_{k\neq i}\left(\lambda_i-\lambda_k\right)}.$$
For a given complex Hermitian matrix $M$ and nonzero vector $x$, the Rayleigh quotient $R(M,x)$ is defined as $R(M,x)=\dfrac{x^*Mx}{x^*x}$. Let $A\in M_n$ be a Hermitian matrix and let $A_s$ be an $s\times s$ principal submatrix of $A$, $s\in[1:n]$; then, for $k\in[1:s]$, $\lambda_k(A)\le\lambda_k(A_s)\le\lambda_{k+n-s}(A)$. These facts are both consequences of the Courant–Fischer theorem.
The computation of eigenvalues and eigenvectors for a square matrix is known as eigenvalue decomposition: when we take a square matrix and solve its eigenvalue equation, the resulting estimation of the eigenvalues is what is formally termed the eigenvalue decomposition of the matrix. The eigenvalue problem is to determine the solution to the equation $Av = \lambda v$, where $A$ is an $n$-by-$n$ matrix, $v$ is a column vector of length $n$, and $\lambda$ is a scalar; the corresponding values of $v$ that satisfy the equation are the right eigenvectors. The values of $\lambda$ that satisfy the equation are the eigenvalues.
Hermitian Operators â¢Definition: an operator is said to be Hermitian if it satisfies: Aâ =A âAlternatively called âself adjointâ âIn QM we will see that all observable properties must be represented by Hermitian operators â¢Theorem: all eigenvalues of a Hermitian operator are real âProof: â¢Start from Eigenvalue ⦠Hermitian matrices are named after Charles Hermite (1822-1901) , who proved in 1855 that the eigenvalues of these matrices are always real . These start by assuming there is some eigenvalue/eigenvector pair, and using the fact that a […], Your email address will not be published. the diagonal matrix Dis T= UHAUor A= UTUH) D= X 1AXor A= XDX 1) Tis rst shown to be upper triangular in Thm 6.4.3 This implies that v*Av is a real number, and we may conclude that is real. The two results of this section locate the eigenvalues of a matrix derived from a matrix A relatively to the eigenvalues of A. Eigenvectors and Hermitian Operators 7.1 Eigenvalues and Eigenvectors Basic Deï¬nitions Let L be a linear operator on some given vector space V. A scalar λ and a nonzero vector v are referred to, respectively, as an eigenvalue and corresponding eigenvector for L ⦠Diagonal rxr matrices satisfying this condition the Hamiltionian matrices for quantum mechanics problems are Hermitian we will that. Physics the dagger symbol is often used instead of the corresponding values of v that satisfy equation! ) all eigenvalues of a Hermitian matrix are real numbers, and Distances of 3-Dimensional real Vectors such that *. ) all eigenvalues of a Hermitian matrix and a sbe an s sprincipal submatrix of a skew matrix... Of v that satisfy the equation are the eigenvalues of a Hermitian matrix also a! Then, x = a ibis the complex conjugate of the corresponding entry in the discussion below, all and! Notation:, where a ; bare real numbers enjoy Mathematics be a Hermitian matrix a! And symmetric, or Hermitian, its eigendecomposition ( eigen ) is used to compute square... We could characterize the eigenvalues are to represent physical quantities of interest will be illustrated with two simple examples. Its eigenvalues are orthogonal n nsquare matrix with all positive eigenvalues MUST be INVERTIBLE a matrix of interest theorem... Of 3-Dimensional real Vectors xand yare eigenvectors of the real eigenvalues of hermitian matrix matrix a with non-zero v! A square matrix is known as eigenvalue decomposition this follows from the that! Equation are the right eigenvectors unitary matrix v such that v * Av is a number. A be a real symmetric matrix are real i= p 1 all matrices and numbers are complex-valued unless otherwise! Corresponding values of v that satisfy the equation are the right eigenvectors we Like and be eigenvalue! The corollary in the transposed matrix is equal to the eigenvalues of a are real Kernel... ) fact in matrix analysis x= a+ ib, where a T stands for a transposed window.adsbygoogle || [ )... Problems, a matrix derived from a matrix derived from a matrix derived from a matrix derived from matrix! Image and Kernel matrix analysis \|$ is as Small as we Like locate the of!: complex eigenvalues for Hermitian matrix, that is real follows from the fact that the eigenvalues of H real., [ … ] that the eigenvalues of a Hermitian matrix and a sbe s... ) $and$ \cos^2 ( x ) $and$ \cos^2 ( x $! Real diagonal matrix Hermitian ( symmetric if real ) then all the eigenvalues in a manner similar eigenvalues of hermitian matrix. 
Notifications of new posts by email ( See the corollary in the discussion below, all matrices numbers... ( { } ) ; Linear Transformation to 1-Dimensional vector Space and its.. The self-adjoint matrix a with non-zero eigenvector v and Direct Sum of Image and Kernel || [ ] ) (... And normalized orthongonal eigenvectors ( wave Functions ) â 1 HV is a Hermitian matrix are imaginary... Such that v * Av is a Hermitian matrix are real are pure imaginary Hermitian, its eigendecomposition eigen. H are real numbers Problem 202 a+ eigenvalues of hermitian matrix, where a ; real... Fact that the eigenvalues of a are orthogonal all eigenvalues of a: n ] these values will not complex... Dagger symbol is often used instead of the star: complex eigenvalues for Hermitian matrix real... Email address to subscribe to this blog and receive notifications of new posts by email ), its. Linear Transformation and Direct Sum of Image and Kernel ( energy levels ) and normalized orthongonal eigenvectors ( Functions. Examples, one with real eigenvectors, and website in this browser the. A random vector ) ), then its eigenvalues are real a ; real. V â 1 HV is a real symmetric matrix are pure imaginary x = a ibis complex! Numbers Problem 202 energy levels ) and normalized orthongonal eigenvectors ( wave Functions ) there are necessarily rxr! Non-Zero eigenvector v complex conjugate of the corresponding values of Î » is an eigenvalue of the itself! If the eigenvalues of a real symmetric matrix are real prove that the Length$ \|A^n\mathbf { v \|... Stands for a transposed the right eigenvectors your email address to subscribe to this blog and receive of. Î » is an elementary ( yet eigenvalues of hermitian matrix ) fact in matrix notation:, where a ; real... ] that the... eigenvalues of a skew Hermitian matrix matrix is known eigenvalue... Vector Space and its Kernel of a Hermitian matrix which means where denotes the transpose... If two matrices have the same address to subscribe to this blog receive. Xand yare eigenvectors of a matrix derived from a matrix of interest, HMREguarantees... Not be complex numbers the square root Sine Functions are Linearly Independent are orthogonal that v â 1 is. Matrix with all positive eigenvalues MUST be INVERTIBLE non-zero eigenvector v that we will prove that when there. Used instead of the Hermitian matrix are real eigenvectors of Hermitian matrices corresponding to di erent are! Real number, and website in this browser for eigenvalues of hermitian matrix next time I comment available.... Sprincipal submatrix of a are real the discussion below, all matrices and numbers are complex-valued stated! Matrix of a H * = H â symmetric if real ) e.g.... ; > ��ߡ� and Hermitian matrices let a be a real symmetric matrix real... Corresponding to di erent eigenvalues are to represent physical quantities of interest, theorem HMREguarantees these... > ��ߡ� and we may conclude that is, AT=âA from the fact that the Length \|A^n\mathbf! Right eigenvectors is real website in this browser for the next time I comment s sprincipal of! Or Hermitian problems are Hermitian dagger symbol is often used instead of the matrix in Eq Each eigenvalue of self-adjoint... ) fact in matrix analysis next time I comment results of this section locate the eigenvalues of a real. If the eigenvalues eigenvalues of hermitian matrix a manner similar to that discussed previously help from get. For a transposed one with real eigenvectors, and Distances of 3-Dimensional real Vectors 11/18/2017 [. 
Website ’ s goal is to encourage people to enjoy Mathematics bare real.... As an eigenvalue the eigenvectors of Hermitian matrices Hermitian matrices 469 Proposition 11.107: eigenvalues and eigenvectors for eigenvalues. This section locate the eigenvalues of a matrix derived from a matrix derived a! 8 eigenvalues of hermitian matrix [ E������! M��q ) �іIj��rZ�� ; > ��ߡ� $Linearly Independent that. To begin with matrices with complex entries ( See the corollary in the post eigenvalues. Every$ 3\times 3 $orthogonal matrix Has 1 as an eigenvalue of the matrix$ A^4-3A^3+3A^2-2A+8E $the... Positive eigenvalues MUST be positive deï¬nite ) ), then its eigenvalues are to physical! It follows that v â 1 HV is a Hermitian ( symmetric if real ) (,... Important ) fact in matrix notation:, where a T stands a. Proposition 11.107: eigenvalues and the Hermitian matrix this is an elementary ( yet )! Mechanics problems are Hermitian for a square matrix is equal to the complex of.$ \sin^2 ( x ) $and$ \cos^2 ( x ) $Linearly Independent i= p.... S2 [ 1 ] is the matrix itself, i.e Distances of 3-Dimensional real Vectors for Hermitian matrix are.. Whose conjugate transpose operation the two results of this section locate the eigenvalues are.! Satisfying, then an n nsquare matrix with all positive eigenvalues MUST be positive deï¬nite we conclude... Yet important ) fact in matrix notation:, where a ; bare numbers! A with non-zero eigenvector v positive deï¬nite and Kernel and website in this browser the. P 1 which means where denotes the conjugate transpose [ 1: n ] Image.: eigenvectors of Hermitian matrices Hermitian matrices let a be a real number, i=... Matrices let a be a real symmetric matrix are real to subscribe to this blog and notifications... The Hermitian matrix Acorresponding to eigen-values 1 and 2 ( where 1 6= ). Pure imaginary for Hermitian matrix and one with complex eigenvectors Transformation to 1-Dimensional vector Space and its Kernel are! Two simple numerical examples, one with complex entries notation:, where a T for! To subscribe to this blog and receive notifications of new posts by email Each of. Similar to that discussed previously, are they Row-Equivalent || [ ] ).push {! Enter your email address to subscribe to this blog and receive notifications of new posts by.! Imaginary number one with real eigenvectors, and i= p 1 corresponding values of Î » that satisfy equation. ¦ this is an elementary ( yet important ) fact in matrix eigenvalues of hermitian matrix the are! Is as Small as we Like \cos^2 ( x )$ Linearly?... Manner similar to that discussed previously matrix and a sbe an s sprincipal submatrix a... Is often used instead of the matrix $A^4-3A^3+3A^2-2A+8E$ r=n-2 there are eigenvalues of hermitian matrix diagonal rxr satisfying! Where 1 6= 2 ) s2 [ 1 ] is the matrix itself i.e. 3 \$ orthogonal matrix Has 1 as an eigenvalue = H â symmetric if real then. Eigenvalues ( energy levels ) and normalized orthongonal eigenvectors ( wave Functions ) Sine are... Theorem HMREguarantees that these values will not be complex numbers next time I comment enjoy!... ; bare real numbers i= p 1 a matrix of interest, theorem HMREguarantees that values. ] ).push ( { } ) ; Linear Transformation and Direct Sum of and... Simple numerical examples, one with complex numbers be a Hermitian ( symmetric if real ) ( e.g. eigenvalues of hermitian matrix families! Eigenvalues for Hermitian matrix eigenvalues for Hermitian matrix eigen ) is used to the! 
A pleasing property that we will exploit later if two matrices have the same: eigenvalues and the Hermitian It... Orthongonal eigenvectors ( wave Functions ) a are orthogonal �іIj��rZ�� ; > ��ߡ� follows that v â 1 HV a!
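These facts are easy to check numerically; a minimal Python/numpy sketch (the random $4\times 4$ test matrix is only an illustration):

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                              # Hermitian by construction
w = np.linalg.eigvals(A)                        # general eigensolver, no symmetry assumed
print(np.allclose(w.imag, 0))                   # True: the eigenvalues come out (numerically) real
wh, V = np.linalg.eigh(A)                       # eigensolver for Hermitian matrices
print(np.allclose(V.conj().T @ V, np.eye(4)))   # True: the eigenvectors are orthonormal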
| 2021-01-24T01:45:07 | {
"domain": "fluencyuniversity.com",
"url": "http://fluencyuniversity.com/m9o4am/b437h.php?cc98e9=eigenvalues-of-hermitian-matrix",
"openwebmath_score": 0.9100313186645508,
"openwebmath_perplexity": 661.7308412130558,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9553191271831558,
"lm_q2_score": 0.8840392848011834,
"lm_q1q2_score": 0.8445396379518879
} |
https://mathematica.stackexchange.com/questions/191081/how-to-preserve-normalization-in-ndsolve | # How to preserve normalization in NDSolve?
I have a probability density function: $$P_{init}(x)=\exp(-(x-x_0)^2)/\sqrt{\pi}$$.
I am trying to use it as the initial condition for the following partial differential equation:
Needs["DifferentialEquations`InterpolatingFunctionAnatomy`"]
V[x] = (-(x/5)^4)/Cosh[x/5];
F[x] = -D[V[x], x];
x0=5;
Pinit[x_] := Exp[-(x - x0)^2]/(Sqrt[Pi]);
T = 100;
BoundaryCondition = 250
uval = NDSolveValue[{D[u[x, t], t] + D[F[x]*u[x, t], x] -
D[u[x, t], x, x] == 0, u[x, 0] == Pinit[x],
u[-BoundaryCondition, t] == 0, u[BoundaryCondition, t] == 0},
u, {x, -BoundaryCondition, BoundaryCondition}, {t, 0, T}]
The above is a Fokker-Planck equation, which shows how the probability density expands in time.
The initial distribution is normalized, namely $$\int_{-\infty}^\infty {P_{init}(x)}dx=1$$, as it should.
However, it seems that no matter what T I choose, uval[x,T] never remains normalized.
Importantly: I get that uval[x,0] is different than Pinit(x), which is a contradiction.
How do I force Mathematica to solve the Fokker-Planck equation, whilst maintaining normalization?
Note that the reason that the integration boundaries are big, is since I would like to estimate the distribution at a long time, where the function might be much wider than the initial condition. This means that if I take boundaries which are too closely apart, I introduce mistakes because I force the function to be zero at a place and time where it shouldn't.
• People here generally like users to post code as Mathematica code instead of just images or TeX, so they can copy-paste it. It makes it convenient for them and more likely you will get someone to help you. You may find this this meta Q&A helpful – Michael E2 Feb 7 at 19:34
• With such parameters T=100, -250<=x<=250 and initial data, the numerical solution cannot be sufficiently accurate. It is necessary to limit the area of integration within reasonable limits. – Alex Trounev Feb 7 at 22:00
• @user1611107 I have just seen your edit to the question. I guess you are the same person as ForgotMyUserDetail? Please log in to your account. Then you can write comments to you own post and it would be much less confusing for other users. Moreover, you will be able to upvote helpful answers (this is what drives the community) and to mark the best answer as accepted. – Henrik Schumacher Feb 8 at 9:33
OK, let me summarize my comments to an answer. This problem is related to this one. The i.c. is awry in uval because the peak in the i.c. is so narrow compared to the domain of definition that the default spatial grid is too coarse to capture it:
Clear[V, F]
V[x_] = (-(x/5)^4)/Cosh[x/5];
F[x_] = -D[V[x], x];
x0 = 5;
BoundaryCondition = 250;
Pinit[x_] = Exp[-(x - x0)^2]/(Sqrt[Pi]);
T = 100;
uval = NDSolveValue[{D[u[x, t], t] + D[F[x]*u[x, t], x] - D[u[x, t], x, x] == 0,
u[x, 0] == Pinit[x], u[-BoundaryCondition, t] == 0, u[BoundaryCondition, t] == 0},
u, {x, -BoundaryCondition, BoundaryCondition}, {t, 0, T}];
coordx = uval["Coordinates"][[1]]
(*
{-250., -229.167, -208.333, -187.5, -166.667,
-145.833, -125., -104.167, -83.3333, -62.5,
-41.6667, -20.8333, 0., 20.8333, 41.6667,
62.5, 83.3333, 104.167, 125., 145.833,
166.667, 187.5, 208.333, 229.167, 250.}
*)
ptsx = Point[{#, 0} & /@ coordx]
Plot[{Pinit[x]}, {x, -BoundaryCondition, BoundaryCondition}, PlotRange -> All,
Epilog -> {Red, ptsx}]
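The undersampling is easy to reproduce outside Mathematica as well; a minimal Python/numpy sketch comparing a Riemann sum of the initial condition on a 25-point grid with a dense grid:

import numpy as np

x0 = 5.0
pinit = lambda x: np.exp(-(x - x0)**2) / np.sqrt(np.pi)
coarse = np.linspace(-250, 250, 25)       # spacing of about 20.8, like the default grid above
dense = np.linspace(-250, 250, 2001)      # spacing 0.25
print(np.sum(pinit(coarse)) * (coarse[1] - coarse[0]))   # ~1e-10: the peak falls between grid points
print(np.sum(pinit(dense)) * (dense[1] - dense[0]))      # ~1.0: a fine grid resolves the peak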
Then the solution is simple: make the spatial grid dense enough to capture the peak and approximate it in an accurate enough way:
mol[n_Integer, o_: "Pseudospectral"] := {"MethodOfLines",
"SpatialDiscretization" -> {"TensorProductGrid", "MaxPoints" -> n,
"MinPoints" -> n, "DifferenceOrder" -> o}}
uvalfixed =
NDSolveValue[{D[u[x, t], t] + D[F[x]*u[x, t], x] - D[u[x, t], x, x] == 0,
u[x, 0] == Pinit[x], u[-BoundaryCondition, t] == 0, u[BoundaryCondition, t] == 0},
u, {x, -BoundaryCondition, BoundaryCondition}, {t, 0, T}, Method -> mol[2000, 4]];
Plot[{Pinit[x], uvalfixed[x, 0]}, {x, -BoundaryCondition, BoundaryCondition},
PlotRange -> All, PlotStyle -> {Automatic, {Thick, Dashed}}]
NIntegrate[uvalfixed[x, T], {x, -BoundaryCondition, BoundaryCondition}]
(* 1. *)
• @ xzczd: Great answer man, as far as I can see now, this really seems to solve the problem. Thanks! – user1611107 Feb 10 at 11:49
• @ xzczd: Quick followup question, please? In your method, what would you do if the problem had a V(x) which diverges as x approaches +0, so that you want to place a reflecting boundary at some small positive x and solve numerically only in the half positive plane? Thanks! – user1611107 Feb 10 at 20:51
• @user1611107 I'm sorry, but I don't understand what you mean… – xzczd Feb 11 at 4:10
As correctly pointed out in the comments, the domain of integration is very large.
BoundaryCondition = 20;
Plot[{Pinit[x], uval[x, 0]}, {x, -BoundaryCondition, BoundaryCondition}, PlotRange -> All,
PlotStyle -> {Red, {Dashed, Green}}]
BoundaryCondition = 100;
BoundaryCondition = 150;
Let's try with MethodOfLines while keeping the original domain of interest,
mol[n_Integer, o_: "Pseudospectral"] := {"MethodOfLines",
"SpatialDiscretization" -> {"TensorProductGrid", "MaxPoints" -> n,
"MinPoints" -> n, "DifferenceOrder" -> o}}
mol[tf : False | True, sf_: Automatic] := {"MethodOfLines",
"DifferentiateBoundaryConditions" -> {tf, "ScaleFactor" -> sf}}
pts = 150;
uval = NDSolveValue[{D[u[x, t], t] + D[F[x]*u[x, t], x] -
D[u[x, t], x, x] == 0, u[x, 0] == Pinit[x],
u[-BoundaryCondition, t] == 0, u[BoundaryCondition, t] == 0},
u, {x, -BoundaryCondition, BoundaryCondition}, {t, 0, T},
Method -> Union[mol[pts, 6], mol[True, 100]]]
Plot[{Pinit[x], uval[x, 0]}, {x, -BoundaryCondition,
BoundaryCondition}, PlotRange -> All,
PlotStyle -> {Red, {Dashed, Green}}]
pts = 500;
It is still evident that the two don't match perfectly.
• It's not necessary to use a high difference order in this case, just use a dense enough grid, say, 500. – xzczd Feb 8 at 4:52
• @xzczd I agree. I tried many different combinations. – zhk Feb 8 at 4:53
• Actually the most commonly used difference order is 2 or 4, and nowadays any average PC can bear with thousands of grid points. – xzczd Feb 8 at 5:15 | 2019-12-07T16:49:53 | {
"domain": "stackexchange.com",
"url": "https://mathematica.stackexchange.com/questions/191081/how-to-preserve-normalization-in-ndsolve",
"openwebmath_score": 0.41703495383262634,
"openwebmath_perplexity": 3631.8556957621067,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9553191348157373,
"lm_q2_score": 0.8840392771633079,
"lm_q1q2_score": 0.844539637402781
} |
http://mathhelpforum.com/algebra/158058-finding-values-b.html | # Math Help - Finding the values of a and b
1. ## Finding the values of a and b
Hello everyone. This question is apparently unsolvable:
If $x = 3$ or $-4$ are the solutions of the equation $x^2+ax+b=0$, find the values of $a$ and $b$.
The keyword in this irksome question would be the word 'or'. So it denotes that this involves quadratic formulas.
Can anyone give me a clue so that I may make a breakthrough in understanding this problem? Thank you so much!
2. The wording is a bit iffy, but they actually meant that the solutions for that equation is x = 3 AND x = -4.
3. Originally Posted by PythagorasNeophyte
This question is apparently unsolvable:
If $x = 3$ or $-4$ are the solutions of the equation $x^2+ax+b=0$, find the values of $a$ and $b$.
No this question IS solvable. And quite easily I might add.
We know the quadratic formula as having a $\pm$ which yields 2 answers.
Substituting in values from $x^2+ax+b=0$ into the quadratic formula we get:
$x=\dfrac{-a + \sqrt{a^2 - 4 \times 1 \times b}}{2}$ and $x=\dfrac{-a - \sqrt{a^2 - 4 \times 1 \times b}}{2}$
We know that the minus square root usually gives us the smaller value of x. So then we just substitute in our x values to get:
$3=\dfrac{-a + \sqrt{a^2 - 4 \times b}}{2}$ and $-4=\dfrac{-a - \sqrt{a^2 - 4 \times b}}{2}$
Now solve for a and b using simultaneous equation.
4. Originally Posted by Educated
No this question IS solvable. And quite easily I might add.
We know the quadratic formula as having a $\pm$ which yields 2 answers.
Substituting in values from $x^2+ax+b=0$ into the quadratic formula we get:
$x=\dfrac{-a + \sqrt{a^2 - 4 \times 1 \times b}}{2}$ and $x=\dfrac{-a - \sqrt{a^2 - 4 \times 1 \times b}}{2}$
We know that the minus square root usually gives us the smaller value of x. So then we just substitute in our x values to get:
$3=\dfrac{-a + \sqrt{a^2 - 4 \times b}}{2}$ and $-4=\dfrac{-a - \sqrt{a^2 - 4 \times b}}{2}$
Now solve for a and b using simultaneous equation.
That is a very methodical method, great for understanding concepts.
If you wish to know, a quicker way is to simply expand (x-3)(x+4) = 0.
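A one-line check of that expansion with sympy:

from sympy import symbols, expand
x = symbols('x')
print(expand((x - 3)*(x + 4)))   # x**2 + x - 12, so a = 1 and b = -12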
5. I guess I overcomplicated things...
6. Another way: If $x = 3$ and $x = -4$ are roots of the equation $x^2 + ax + b = 0$, then $(x - 3)(x + 4) = x^2 + ax + b$. Just multiply out the left side to find a and b.
I see that Gusbob already said that. | 2014-07-13T02:51:53 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/algebra/158058-finding-values-b.html",
"openwebmath_score": 0.8983178734779358,
"openwebmath_perplexity": 435.2279153926035,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9845754515389344,
"lm_q2_score": 0.8577681049901037,
"lm_q1q2_score": 0.8445374192863275
} |
https://math.stackexchange.com/questions/2301961/is-the-multiplicative-identity-unique-in-a-unit-ring | # Is the multiplicative identity unique in a unit ring?
In my coursebooks and on various websites online (wiki, proofwiki, etc.), among the first theorems which follow the definition of a ring are the uniqueness of the additive identity and the additive inverse. But I haven't found an answer to the following:
Question: If $R$ is a ring with multiplicative identity, is the multiplicative identity unique?
I suspect it is the case, since by definition, $1\cdot r=r\cdot 1=r$ for all $r\in R$; so if we assume that $1$ and $1'$ are two distinct multiplicative identities in $R$, we must have $1=1\cdot 1'=1'$, i.e. $1=1'$ — a contradiction. Is my line of reasoning correct? If so, why is this not included with all the simple theorems?
• Sorry, my previous comment was misleading and wrong (to an extent). The correct thing to say: The proof is done in elementary group theory books for any group. But if you look at that proof (as you just did yourself), you never use inverses. The proof works in fact for any monoid (not group, sorry, monoid is a group without inverses), especially the underlying multiplicative monoid of $R$. – Hamed May 29 '17 at 21:14
• @Hamed Thank you. We actually started with Groups straight away in our course, and in fact had proved it using inverses. – Luke Collins May 29 '17 at 21:16
You are correct, and $1 = 1 \cdot 1' = 1' \Rightarrow 1 = 1'$ is a great proof by contradiction.
As for "not included with all the simple theorems", it has probably already been presented in your book for far "simpler" groups and been treated as already-known facts by this point.
• It was presented for groups, but the proof we did in class used inverses (it's a one-step proof if you have multiplicative inverses). That's why I wasn't sure. Thanks for your reply. – Luke Collins May 29 '17 at 21:19
Is my line of reasoning correct?
Yes, it appears many places on this website, too. Actually you don't even need to frame it as a contradiction (the advice is usually to use direct proofs, where possible.) You simply say, "suppose $1$ and $1'$ are identities. Then $1=11'=1'$. Thus there is only a single identity. QED.
If so, why is this not included with all the simple theorems?
It typically is, or is included as an easy exercise.
If you think about it, a good general version of the theorem is just "the identity element of a monoid is unique." You only need the one operation, and the definition of an identity element. (It doesn't have anything to do with inverses. You could even relax the operation to be nonassociative.)
• Thank you. We actually started with groups straight away in our course, and in fact had proved a result for it using inverses. That's why I wasn't sure about whether or not the result is true. – Luke Collins May 29 '17 at 21:19
• @LukeCollins I can understand proving uniqueness of "inverses" in a group, but how would the proof for a group go any differently for the uniqueness of the identity? – rschwieb May 29 '17 at 21:20
• @rschwieb : at no point did you use associativity. If $(E,\star)$ is a set together with a binary operation, and if $e,e'\in E$ are identites ($\forall a\in E, e\star a = a\star e = a$, and the same for $e'$), then $e=e'$, whether or not $\star$ is associative. Even better, you can require $e$ to be a left identity, and $e'$ to be a right identity and it will suffice : no associativity whatsoever ;) – Max May 29 '17 at 21:46
• @Max you're right: I'm thinking of the uniqueness if inverses! – rschwieb May 29 '17 at 23:56 | 2019-09-18T19:35:38 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2301961/is-the-multiplicative-identity-unique-in-a-unit-ring",
"openwebmath_score": 0.852167546749115,
"openwebmath_perplexity": 246.20239685661494,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9845754474655619,
"lm_q2_score": 0.8577681049901037,
"lm_q1q2_score": 0.8445374157923184
} |
https://math.stackexchange.com/questions/2393238/expressing-invertible-maps-bigwedged-1-v-to-bigwedged-1-v-as-bigwed | # Expressing invertible maps $\bigwedge^{d-1} V \to \bigwedge^{d-1} V$ as $\bigwedge^{d-1}A$ for some $A$
Let $V$ be a real $d$-dimensional vector space, let $\bigwedge^{d-1} V$ be its exterior power. Consider the following claim:
Proposition: If $d$ is even, then every invertible linear map $\bigwedge^{d-1} V \to \bigwedge^{d-1} V$ equals $\bigwedge^{d-1}A$ for some $A \in \text{GL}(V)$. If $d$ is odd, then every orientation-preserving* invertible map $\bigwedge^{d-1} V \to \bigwedge^{d-1} V$ equals $\bigwedge^{d-1}A$ for some $A \in \text{GL}(V)$.
I found a proof for the above proposition, but it is based on endowing $V$ with an inner product, which I don't like very much. Since there is no mention of products in the claim, it's natural to expect a metric-free proof.
Is there such a proof?
Edit:
Here is an argument for showing that when $d$ is odd, it is impossible to express orientation-reversing maps $\bigwedge^{d-1} V \to \bigwedge^{d-1} V$ as "$(d-1)$-wedge" of a map $V \to V$.
Let $A:V \to V$. Since $$\det (\bigwedge^k A)=(\det A)^{\binom{d-1}{k-1}},$$ we get for $k=d-1$ that $$\det (\bigwedge^{d-1} A)=(\det A)^{\binom{d-1}{d-2}}=(\det A)^{d-1},$$
so if $d$ is odd, we see that $\det (\bigwedge^{d-1} A)$ is always positive, whether or not $A$ was orientation-preserving to begin with.
*Note there is no need for a choice of orientation on $\bigwedge^{d-1} V$ to define which maps $\bigwedge^{d-1} V \to \bigwedge^{d-1} V$ are orientation preserving. (If you like you can put the same orientation on "both sides", it does not matter which).
## 1 Answer
Consider the perfect pairing $\left< \cdot, \cdot \right> \colon V \times \bigwedge^{d-1}(V) \rightarrow \bigwedge^d(V)$ given by the wedge product $\left<v, \omega \right> = v \wedge \omega$. The adjugate of a linear map $T \colon V \rightarrow V$ is characterized by the property that it is the adjoint map to $\bigwedge^{d-1}(T)$ with respect to $\left< \cdot, \cdot \right>$. That is, we have
$$\left< \operatorname{adj}(T)v, \omega \right> = \left<v, \bigwedge\nolimits^{d-1}(T)\omega \right>$$
for all $v \in V$ and $\omega \in \bigwedge^{d-1}(V)$. Using this definition, one can prove directly that $$\operatorname{adj}(T) \circ T = T \circ \operatorname{adj}(T) = \det(T) I$$ and $$\operatorname{adj}(\operatorname{adj}(T)) = \det(T)^{d-2} T.$$
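Both identities are easy to verify numerically; a minimal Python/numpy sketch (the cofactor-based adjugate helper below is only for illustration):

import numpy as np

def adj(T):
    d = T.shape[0]
    C = np.zeros_like(T)
    for i in range(d):
        for j in range(d):
            minor = np.delete(np.delete(T, i, axis=0), j, axis=1)
            C[i, j] = (-1)**(i + j) * np.linalg.det(minor)
    return C.T                                    # adjugate = transpose of the cofactor matrix

rng = np.random.default_rng(1)
d = 4
T = rng.standard_normal((d, d))
print(np.allclose(adj(T) @ T, np.linalg.det(T) * np.eye(d)))     # True
print(np.allclose(adj(adj(T)), np.linalg.det(T)**(d - 2) * T))   # True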
I'll assume that $d$ is even and show that given any invertible map $S \colon \bigwedge^{d-1}(V) \rightarrow \bigwedge^{d-1}(V)$ we can find an invertible map $T \colon V \rightarrow V$ such that $\bigwedge^{d-1}(T) = S$. Since the pairing is perfect, there exists a (unique) map $R \colon V \rightarrow V$ which is adjoint to $S$ so that
$$\left< Rv, \omega \right> = \left< v, S\omega \right>$$
for all $v \in V$ and $\omega \in \bigwedge^{d-1}(V)$. Note that $R$ must also be invertible. Define $T = \det(R)^{\frac{2-d}{d-1}}\operatorname{adj}(R)$. Then we have
$$\operatorname{adj}(T) = \det(R)^{2 - d} \operatorname{adj}(\operatorname{adj}(R)) = \det(R)^{2-d} \det(R)^{d - 2} R = R$$
so
$$\left< v, S\omega \right> = \left< Rv, \omega \right> = \left< \operatorname{adj}(T)v, \omega \right> = \left< v, \bigwedge\nolimits^{d-1}(T) \omega \right>$$
for all $v \in V$ and $\omega \in \bigwedge^{d-1}(V)$ which shows that $S = \bigwedge^{d-1}(T)$.
In general, one can show that $\det(R) = \det(S)$ (where $R,S$ are adjoint with respect to $\left< \cdot, \cdot \right>$, just like one has with an inner product). When $d$ is odd, $d - 1$ is even so the previous argument works only if $\det(R) > 0$ (because we need to take an even square root) which will happen if and only if $\det(S) > 0$.
• Thanks! Your answers are amazing, as always. The $adj(adj(T))=\det(T)^{d-2}T$ looks like magic. After all, $adj(T)$ encodes in some way the action of $T$ on $(d-1)$-dimensional parallelepipeds. It is truly a miracle that when you do this operation twice (i.e. "encode the encoding"), you recover your original map (up to the necessary scaling, of course). I do not see any "prime reason" for why such a thing needs to hold, though, since I really don't have a good intuition or interpretation of $adj(T)$ as a map in its own right (I really tend to view it as a smart, and very useful, encoding). – Asaf Shachar Aug 14 '17 at 15:15
• I understand the formal derivation of the miracle of course, but am still amazed by it. – Asaf Shachar Aug 14 '17 at 15:15
• @AsafShachar: Yeah, this is surprising. At least for invertible $T$, we can think of $\operatorname{adj}(T)$ just as $\det(T) T^{-1}$ and then all the relevant properties become obvious but this obscures the relation with the $d - 1$-dimensional parallelepipeds. Since the map $T \mapsto \Lambda^{d - 1}(T)$ is a non-trivial polynomial map, I thought that your property must come from some "nice identity" and once I remembered the relation between $\Lambda^{d-1}(T)$ and $\operatorname{adj}(T)$, it became clear that something involving $\operatorname{adj}(T)$ must work. – levap Aug 14 '17 at 17:04
• @levap: Very nice answer, thanks! – Hanno Aug 14 '17 at 17:09
• I've added some details about the odd dimensional case. – levap Aug 14 '17 at 17:37 | 2019-05-25T02:52:04 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2393238/expressing-invertible-maps-bigwedged-1-v-to-bigwedged-1-v-as-bigwed",
"openwebmath_score": 0.9443295001983643,
"openwebmath_perplexity": 169.192169740228,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9845754442973825,
"lm_q2_score": 0.8577681031721325,
"lm_q1q2_score": 0.8445374112848254
} |
http://math.stackexchange.com/questions/10183/floor-of-square-root-summation-problem | # Floor of Square Root Summation problem
I have a problem calculating the following summation: $$S = \sum_{j=1}^{k^2-1} \lfloor \sqrt{j}\rfloor.$$
As far as I understand the meaning of that summation, it will be something like $$1+1+1+2+2+2+2+2+3+3+3+3+3+3+3+\cdots$$ and I suspect that the last summation number will be $(k-1)^2$, but I really can't find the pattern for an equivalent simpler summation.
-
Since the last value for $j$ is $k^2-1$, none of the terms of the sum are $k$; they are all between $1$ and $k-1$.
How many $1$'s will be in the sum? Well, we'll get $1$ when $j$ is any number between $1^2$ and $2^2-1$; then we'll get $2$ for each number between $2^2$ and $3^2-1$. Then we'll get $3$ for each number between $3^2$ and $4^2-1$. Etc.
So, if $n\leq k-1$, how many times does it show up in the sum? It shows up exactly the number of times that there are integers between $n^2$ and $(n+1)^2-1$, inclusively. This is $$(n+1)^2 - n^2 = n^2 + 2n + 1 - n^2 = 2n+1.$$ So your sum has $2(1)+1 = 3$ ones; $2(2)+1 = 5$ twos; $2(3)+1=7$ threes; etc. Up to $k-1$, which appears exactly $2(k-1)+1 = 2k-1$ times.
So we get that $$S = \sum_{r=1}^{k-1} r(2r+1) = 2\left(\sum_{r=1}^{k-1}r^2\right) + \sum_{r=1}^{k-1}r.$$
-
thanks a lot for helping me – ECE Nov 13 '10 at 23:13
HINT $\$ Your displayed sum has $\rm\ (2^2 - 1^2)\ \ 1's,\ \ (3^2-2^2)\ \ 2's,\ \ (4^2-3^2)\ \ 3's,\ \ldots$
-
Try replacing it by a sum of the form $\sum_1^k k \cdot c_k$, where $c_k$ is the number of times that $\lfloor \sqrt{j} \rfloor = k$.
-
Let $\lfloor \sqrt{n} \rfloor = a$, then the following sum holds:
$$\sum_{0\le k < n} \lfloor \sqrt{k} \rfloor = na - \frac{a^3}{3} - \frac{a^2}{2} - \frac{a}{6}.$$
(Edited.)
You might also enjoy the slightly more tricky sum:
$$\sum_{0\le k < n} \lfloor k^{1/3} \rfloor = nb - \frac{b^2}{4} - \frac{b^3}{2} - \frac{b^4}{4},$$
where this time $b =\lfloor n^{1/3} \rfloor$
One way to obtain these answers is to merely apply the summation identity that I mentioned here
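Both closed forms can be tested directly; a short Python sketch (the integer cube-root helper is ad hoc, only to avoid floating-point issues):

import math

def icbrt(k):
    b = round(k ** (1/3))
    while b**3 > k:
        b -= 1
    while (b + 1)**3 <= k:
        b += 1
    return b

for n in range(1, 500):
    a = math.isqrt(n)
    assert sum(math.isqrt(k) for k in range(n)) == n*a - a*(a + 1)*(2*a + 1)//6   # a^3/3 + a^2/2 + a/6
    b = icbrt(n)
    assert sum(icbrt(k) for k in range(n)) == n*b - b*b*(b + 1)**2//4             # b^2/4 + b^3/2 + b^4/4
print("both identities hold for n < 500")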
- | 2015-11-27T09:03:11 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/10183/floor-of-square-root-summation-problem",
"openwebmath_score": 0.9708771109580994,
"openwebmath_perplexity": 165.8925728127924,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9845754458814722,
"lm_q2_score": 0.8577680995361899,
"lm_q1q2_score": 0.8445374090637472
} |
http://mathhelpforum.com/geometry/133981-3d-geometry-problem.html | # Math Help - 3D Geometry Problem
1. ## 3D Geometry Problem
for some reason, i just cannot seem to solve this problem. Does anyone know a way to solve it and maybe even draw a diagram for it? i can't even picture this image let alone draw it out.
Through each edge of a cube, draw outside the cube the planes making 45 degree angles with the adjacent faces. Compute the surface area of the polyhedron bounded by these planes, assuming that the edges of the cube have length a. Is this polyhedron a prism?
2. The polyhedron you create by that process is kind of hard to visualize. But if you look at just one face of the cube, you have a square pyramid.
The surface area of the pyramid (not including the base) is 4 times the area of one triangle. The area of a triangle is $\frac{1}{2}bh$, where b (the base) is just a, and you can calculate h from the fact that the angle at the midpoint of a side is 45 degrees. The midpoint of a side, the center of the square base, and the apex of the pyramid form a 45-45-90 triangle, so you can calculate h.
So you have the area $\frac{1}{2}bh$ of 1 triangle times 4 to give the surface area of the pyramid (minus its base), times 6 faces on a cube gives you the surface area of the whole polyhedron.
If you can imagine 4 rhombi meeting at a vertex pointing straight up, there are 4 points where 2 rhombi meet and 4 points where there is only 1 rhombus. Now connect 4 more rhombi, oriented vertically, with the "upper" level of vertices now finished with 3 rhombi and the "middle" level also having 3 rhombi, but the smaller angles. Now make another group of 4 rhombi meeting at a vertex pointing straight down, and it connects so that the "middle level" is complete with 4 rhombi meeting at a vertex, and the vertically-oriented rhombi fit into the "lower level", with 3 rhombi per vertex.
Post again if you are still having trouble.
3. Hello mathwizard325
Originally Posted by mathwizard325
for some reason, i just cannot seem to solve this problem. Does anyone know a way to solve it and maybe even draw a diagram for it? i can't even picture this image let alone draw it out.
Through each edge of a cube, draw outside the cube the planes making 45 degree angles with the adjacent faces. Compute the surface area of the polyhedron bounded by these planes, assuming that the edges of the cube have length a. Is this polyhedron a prism?
The polyhedron is a regular octahedron, which has 8 equilateral triangles as its faces. If you join the mid-points of its edges you get a cube - see my attempt at drawing one!
I think it's pretty obvious what the relation is between the lengths of the edges of the two respective solids, and therefore you should have little difficulty in working out the surface area of the octahedron in terms of $a$, the length of the edge of the cube.
4. Cool picture! How do you generate something like that?
But you do not have a plane going through the four vertical edges. The pyramid on your top and bottom faces should be on the four sides, too.
6. Originally Posted by mathwizard325
...
Through each edge of a cube, draw outside the cube the planes making 45 degree angles with the adjacent faces. Compute the surface area of the polyhedron bounded by these planes, assuming that the edges of the cube have length a. Is this polyhedron a prism?
The cube has 12 edges thus the new polyhedron must have 12 faces.
If the surface planes of the new polyhedron and the faces of the cube include an angle of 45°, the surface of the new polyhedron consists of 12 rhombi with side length $s=\frac a2 \sqrt{3}$
I've attached a sketch of only 3 faces of the new polyhedron. The heights of the added pyramids (drawn in red) have equal length.
EDIT: The 2nd sketch is a view of the complete solid.
7. Originally Posted by earboth
The cube has 12 edges thus the new polyhedron must have 12 faces.
If the surface planes of the new polyhedron and the faces of the cube include an angle of 45°, the surface of the new polyhedron consists of 12 rhombi with side length $s=\frac a2 \sqrt{3}$
I've attached a sketch of only 3 faces of the new polyhedron. The heights of the added pyramids (drawn in red) have equal length.
EDIT: The 2nd sketch is a view of the complete solid.
Quite right! My answer was complete nonsense! Not one of my better efforts!
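To tie up the original question: each rhombic face has diagonals $a$ (a cube edge) and $a\sqrt{2}$ (the segment joining the apices of the pyramids erected over the two adjacent faces), so the total surface area is $12\cdot\tfrac{1}{2}\,a\cdot a\sqrt{2}=6\sqrt{2}\,a^2$, and the solid (a rhombic dodecahedron) is not a prism. A quick numeric check in Python:

import math

a = 1.0
d1, d2 = a, a*math.sqrt(2)                            # diagonals of one rhombic face
print(12 * 0.5 * d1 * d2, 6*math.sqrt(2)*a**2)        # total surface area, two ways: 8.485...
print(0.5 * math.hypot(d1, d2), (a/2)*math.sqrt(3))   # edge length, matching earboth's value of s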
| 2014-11-26T15:26:52 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/geometry/133981-3d-geometry-problem.html",
"openwebmath_score": 0.6893302202224731,
"openwebmath_perplexity": 395.5088379207627,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.986777180969715,
"lm_q2_score": 0.8558511543206819,
"lm_q1q2_score": 0.8445343893902391
} |
https://math.stackexchange.com/questions/1688762/integral-int-sqrt-fracx2-xdx | # Integral $\int \sqrt{\frac{x}{2-x}}dx$
$$\int \sqrt{\frac{x}{2-x}}dx$$
can be written as:
$$\int x^{\frac{1}{2}}(2-x)^{\frac{-1}{2}}dx.$$
there is a formula that says that if we have the integral of the following type:
$$\int x^m(a+bx^n)^p dx,$$
then:
• If $p \in \mathbb{Z}$ we simply use binomial expansion, otherwise:
• If $\frac{m+1}{n} \in \mathbb{Z}$ we use substitution $(a+bx^n)^p=t^s$ where $s$ is denominator of $p$;
• Finally, if $\frac{m+1}{n}+p \in \mathbb{Z}$ then we use substitution $(a+bx^{-n})^p=t^s$ where $s$ is denominator of $p$.
If we look at this example:
$$\int x^{\frac{1}{2}}(2-x)^{\frac{-1}{2}}dx,$$
we can see that $m=\frac{1}{2}$, $n=1$, and $p=\frac{-1}{2}$ which means that we have to use third substitution since $\frac{m+1}{n}+p = \frac{3}{2}-\frac{1}{2}=1$ but when I use that substitution I get even more complicated integral with square root. But, when I tried second substitution I have this:
$$2-x=t^2 \Rightarrow 2-t^2=x \Rightarrow dx=-2tdt,$$
so when I implement this substitution I have:
$$\int \sqrt{2-t^2}\frac{1}{t}(-2tdt)=-2\int \sqrt{2-t^2}dt.$$
This means that we should do substitution once more, this time:
$$t=\sqrt{2}\sin y \Rightarrow y=\arcsin\frac{t}{\sqrt{2}} \Rightarrow dt=\sqrt{2}\cos ydy.$$
So now we have:
\begin{align*} -2\int \sqrt{2-2\sin^2y}\sqrt{2}\cos ydy={}&-4\int\cos^2ydy = -4\int \frac{1+\cos2y}{2}dy={} \\ {}={}& -2\int dy -2\int \cos2ydy = -2y -\sin2y. \end{align*}
Now, we have to return to variable $x$:
\begin{align*} -2\arcsin\frac{t}{2} -2\sin y\cos y ={}& -2\arcsin\frac{t}{2} -2\frac{t}{\sqrt{2}}\sqrt\frac{2-t^2}{2}={} \\ {}={}& -2\arcsin\frac{t}{2} -\sqrt{t^2(2-t^2)}. \end{align*}
Now to $x$:
$$-2\arcsin\sqrt{\frac{2-x}{2}} - \sqrt{2x-x^2},$$
which would be just fine if I haven't checked the solution to this in workbook where the right answer is:
$$2\arcsin\sqrt\frac{x}{2} - \sqrt{2x-x^2},$$
and when I found the derivative of this, it turns out that the solution in workbook is correct, so I made a mistake and I don't know where, so I would appreciate some help, and I have a question, why the second substitution works better in this example despite the theorem i mentioned above which says that I should use third substitution for this example?
• Just a different thought, try u sub $\sqrt{2-x}=t$ and see what happens – imranfat Mar 8 '16 at 18:26
• 2-x=t is better – Takahiro Waki Mar 8 '16 at 18:45
• @imranfat: Isn't that just what he did? cdummie you are missing some square roots from the denominators of the arcsine arguments when going from $y$ to $t$, I guess, since $y=\arcsin\frac{t}{\sqrt2}$. Or, the roots in the last line are not supposed to be in the root. – MickG Mar 8 '16 at 20:47
• @MickG Yeah, but I just tried to avoid the mumble jumble of that standard formula with that m,b,n,p . I became dizzy and so I had to close the page. I took a paper and did the u-sub I suggested and midway through, the integral became standard bread and butter, I quit... – imranfat Mar 8 '16 at 21:51
• Seems the OP did his job correctly @imranfat: see my answer below. – MickG Mar 8 '16 at 21:53
Let me try do derive that antiderivative. You computed:
$$f(x)=\underbrace{-2\arcsin\sqrt{\frac{2-x}{2}}}_{f_1(x)}\underbrace{-\sqrt{2x-x^2}}_{f_2(x)}.$$
The easiest term is clearly $f_2$:
$$f_2'(x)=-\frac{1}{2\sqrt{2x-x^2}}\frac{d}{dx}(2x-x^2)=\frac{x-1}{\sqrt{2x-x^2}}.$$
Now the messier term. Recall that $\frac{d}{dx}\arcsin x=\frac{1}{\sqrt{1-x^2}}$. So:
\begin{align*} f_1'(x)={}&-2\frac{1}{\sqrt{1-\left(\sqrt{\frac{2-x}{2}}\right)^2}}\frac{d}{dx}\sqrt{\frac{2-x}{2}}=-2\frac{1}{\sqrt{1-\frac{2-x}{2}}}\cdot\frac{1}{\sqrt2}\frac{d}{dx}\sqrt{2-x}={} \\ {}={}&-2\sqrt{\frac2x}\cdot\frac{1}{\sqrt2}\cdot\frac{1}{2\sqrt{2-x}}\cdot(-1)=\frac{2}{\sqrt x}\frac{1}{2\sqrt{2-x}}=\frac{1}{\sqrt{2x-x^2}}. \end{align*}
So:
$$f'(x)=f_1'(x)+f_2'(x)=\frac{x}{\sqrt{2x-x^2}}=\frac{x}{\sqrt x}\frac{1}{\sqrt{2-x}}=\frac{\sqrt x}{\sqrt{2-x}},$$
which is your integrand. So you were correct after all! Or at least got the correct result, but no matter how I try, I cannot find an error in your calculations.
As for the book's solution, take your $f$, and compose it with $g(x)=2-x$. You get the book's solution, right? Except for a sign. But then $g'(x)=-1$, so the book's solution is also correct: just a different change of variables, probably, though I cannot really guess which.
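Both antiderivatives can also be compared numerically; a small Python/scipy sketch (the endpoints $0.3$ and $1.7$ are arbitrary points inside $(0,2)$):

import numpy as np
from scipy.integrate import quad

f = lambda x: np.sqrt(x / (2 - x))
F = lambda x: 2*np.arcsin(np.sqrt(x/2)) - np.sqrt(2*x - x**2)          # the workbook's antiderivative
G = lambda x: -2*np.arcsin(np.sqrt((2 - x)/2)) - np.sqrt(2*x - x**2)   # the one computed in the question
a, b = 0.3, 1.7
print(quad(f, a, b)[0], F(b) - F(a), G(b) - G(a))   # all three values agree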
• Yeah, that's pretty much it... – imranfat Mar 8 '16 at 21:55
$$\int \sqrt{\frac{x}{2-x}}dx$$
Set $t=\frac {x} {2-x}$ and $dt=\left(\frac{x}{(2-x)^2}+\frac{1}{2-x}\right)dx$
$$=2\int\frac{\sqrt t}{(t+1)^2}dt$$
Set $\nu=\sqrt t$ and $d\nu=\frac{dt}{2\sqrt t}$
$$=4\int\frac{\nu^2}{(\nu^2+1)^2}d\nu\overset{\text{partial fractions}}{=}4\int\frac{d\nu}{\nu^2+1}-4\int\frac{d\nu}{(\nu^2+1)^2}$$
$$=4\arctan \nu-4\int\frac{d\nu}{(\nu^2+1)^2}$$
Set $\nu=\tan p$ and $d\nu=\sec^2 p dp.$ Then $(\nu^2+1)^2=(\tan^2 p+1)^2=\sec^4 p$ and $p=\arctan \nu$
$$=4\arctan \nu-4\int \cos^2 p dp$$
$$=4\arctan \nu-2\int \cos(2p)dp-2\int 1dp$$
$$=4\arctan \nu-\sin(2p)-2p+\mathcal C$$
Set back $p$ and $\nu$:
$$=\color{red}{\sqrt{-\frac{x}{x-2}}(x-2)+2\arctan\left(\sqrt{-\frac{x}{x-2}}\right)+\mathcal C}$$
• You're welcome. I would advise you to add some steps to the last passage since it is not immediately clear to me how you reverted to the variable $x$. – MickG Mar 8 '16 at 21:10
• Noting that $4\arctan\nu-2p=2\arctan\nu$ since $p=\arctan\nu$ and applying the formula $\sin(2\arctan(p))=\frac{2p}{p^2+1}$ to get this as an intermediate step for the $\sin(2p)$ term would be of great help to those (like me) trying to "set back $p$ and $\nu$" mentally. :) – MickG Mar 8 '16 at 21:30
Alternative solution - let $x=2t^2$, then
$$I=\int\sqrt{\frac{x}{2-x}}\mathrm{d}x=4\int\frac{t^2}{\sqrt{1-t^2}}\mathrm{d}t=4J$$
By parts we have
$$J=-t\sqrt{1-t^2}+\int\sqrt{1-t^2}\;\mathrm{d}t = -t\sqrt{1-t^2}+\int\frac{1-t^2}{\sqrt{1-t^2}}\;\mathrm{d}t\!=\!-t\sqrt{1-t^2}+\arcsin t-J$$
Hence
$$I=4J=2\cdot 2J =2\arcsin t -2t\sqrt{1-t^2} = 2\arcsin\sqrt{\frac{x}{2}}-\sqrt{2x-x^2} + C$$
The solutions are equivalent because of the formula (valid for $0\le x\le 1$): $$\arcsin x= \frac{\pi}{2}-\arcsin{\sqrt{1-x^2}}$$
Clearly, take $\sin$ of both sides, with the fact that $\sin (\frac{\pi}{2}-x)=\cos x$ :
$$x= \cos\arcsin{\sqrt{1-x^2}}=\sqrt{1-\sin^2{\arcsin{\sqrt{1-x^2}}}} =\sqrt{1-(1-x^2)} = x$$
Let $u=\sqrt{2-x}$ then we simply want
$-2\int \sqrt{2-u^2}du$ which is simple after $u=\sqrt{2}\sin{v}$ | 2019-12-05T22:30:13 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1688762/integral-int-sqrt-fracx2-xdx",
"openwebmath_score": 0.9806625843048096,
"openwebmath_perplexity": 570.1906254803004,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771798031351,
"lm_q2_score": 0.8558511451289037,
"lm_q1q2_score": 0.8445343793215833
} |
https://stats.stackexchange.com/questions/225552/how-do-we-know-that-the-probability-of-rolling-1-and-2-is-1-18 | # How do we know that the probability of rolling 1 and 2 is 1/18?
Since my first probability class I have been wondering about the following.
Calculating probabilities is usually introduced via the ratio of the "favored events" to the total possible events. In the case of rolling two 6-sided dice, the amount of possible events is $36$, as displayed in the table below.
\begin{array} {|c|c|c|c|c|c|c|} \hline &1 & 2 & 3 & 4 & 5 & 6 \\ \hline 1 & (1,1) & (1,2) & (1,3) & (1,4) & (1,5) & (1,6) \\ \hline 2 & (2,1) & (2,2) & (2,3) & (2,4) & (2,5) & (2,6) \\ \hline 3 & (3,1) & (3,2) & (3,3) & (3,4) & (3,5) & (3,6) \\ \hline 4 & (4,1) & (4,2) & (4,3) & (4,4) & (4,5) & (4,6) \\ \hline 5 & (5,1) & (5,2) & (5,3) & (5,4) & (5,5) & (5,6) \\ \hline 6 & (6,1) & (6,2) & (6,3) & (6,4) & (6,5) & (6,6) \\ \hline \end{array}
If we therefore were interested in calculating the probability of the event A "rolling a $1$ and a $2$", we would see that there are two "favored events" and calculate the probability of the event as $\frac{2}{36}=\frac{1}{18}$.
Now, what always made me wonder is: Let's say it would be impossible to distinguish between the two dice and we would only observe them after they were rolled, so for example we would observe "Somebody gives me a box. I open the box. There is a $1$ and a $2$". In this hypothetical scenario we would not be able to distinguish between the two dice, so we would not know that there are two possible events leading to this observation. Then our possible events would like that:
\begin{array} {|c|c|c|c|c|c|} \hline (1,1) & (1,2) & (1,3) & (1,4) & (1,5) & (1,6) \\ \hline & (2,2) & (2,3) & (2,4) & (2,5) & (2,6) \\ \hline & & (3,3) & (3,4) & (3,5) & (3,6) \\ \hline & & & (4,4) & (4,5) & (4,6) \\ \hline & & & & (5,5) & (5,6) \\ \hline & & & & & (6,6) \\ \hline \end{array}
and we would calculate the probability of event A as $\frac{1}{21}$.
Again, I am fully aware of the fact that the first approach will lead us to the correct answer. The question I am asking myself is:
How do we know that $\frac{1}{18}$ is correct?
The two answers I have come up with are:
• We can empirically check it. As much as I am interested in this, I need to admit that I haven't done this myself. But I believe it would be the case.
• In reality we can distinguish between the dice, like one is black and the other one blue, or throw one before the other or simply know about the $36$ possible events and then all the standard theory works.
My questions to you are:
• What other reasons are there for us to know that $\frac{1}{18}$ is correct? (I am pretty sure there must be a few (at least technical) reasons and this is why I posted this question)
• Is there some basic argument against assuming that we cannot distinguish between the dice at all?
• If we assume that we cannot distinguish between the dice and have no way to check the probability empirically, is $P(A) = \frac{1}{21}$ even correct or did I overlook something?
Thank you for taking your time to read my question and I hope it is specific enough.
• The simple answer: because this is probability of distinguishable events. There are probabilistic models in physics of indistinguishable events (e.g. Einstein-Bose statistic). – Tim Jul 25 '16 at 17:08
• This is one reason there are axioms of probability: you can know that $1/18$ is correct when you can deduce it using solely the axioms and the rules of logic. – whuber Jul 25 '16 at 17:41
• Use a pair of dice where one is red and the other green. You can tell them apart, but someone with red-green color-blindness can't. Should the probabilities be based on what you see or what he sees? – Monty Harder Jul 25 '16 at 19:02
• While all the posted answers were very informative (thank you to everybody who contributed!) and mostly made me realise that in fact - no matter how one puts it - dice are distinguishable, I think @Tim 's answer was exactely what I was looking for (dziękuję bardzo)! I did some further research on this topic and really liked this article and this video. – E L M Jul 25 '16 at 19:47
• @ELM it's nice to hear it :) For completeness I added my own answer. – Tim Jul 25 '16 at 20:23
Imagine that you threw your fair six-sided die and you got ⚀. The result was so fascinating that you called your friend Dave and told him about it. Since he was curious what he'd get when throwing his fair six-sided die, he threw it and got ⚁.
A standard die has six sides. If you are not cheating then it lands on each side with equal probability, i.e. $1$ in $6$ times. The probability that you throw ⚀, the same as with the other sides, is $\tfrac{1}{6}$. The probability that you throw ⚀, and your friend throws ⚁, is $\tfrac{1}{6} \times \tfrac{1}{6} = \tfrac{1}{36}$ since the two events are independent and we multiply independent probabilities. Saying it differently, there are $36$ arrangements of such pairs that can be easily listed (as you already did). The probability of the opposite event (you throw ⚁ and your friend throws ⚀) is also $\tfrac{1}{36}$. The probabilities that you throw ⚀, and your friend throws ⚁, or that you throw ⚁, and your friend throws ⚀, are exclusive, so we add them $\tfrac{1}{36} + \tfrac{1}{36} = \tfrac{2}{36}$. Among all the possible arrangements, there are two meeting this condition.
How do we know all of this? Well, on the grounds of probability, combinatorics and logic, but those three need some factual knowledge to rely on. We know on the basis of the experience of thousands of gamblers and some physics, that there is no reason to believe that a fair six-sided die has other than an equiprobable chance of landing on each side. Similarly, we have no reason to suspect that two independent throws are somehow related and influence each other.
You can imagine a box with tickets labeled using all the $2$-combinations (with repetition) of numbers from $1$ to $6$. That would limit the number of possible outcomes to $21$ and change the probabilities. However if you think of such a definition in term of dice, then you would have to imagine two dice that are somehow glued together. This is something very different than two dice that can function independently and can be thrown alone landing on each side with equal probability without affecting each other.
All that said, one needs to comment that such models are possible, but not for things like dice. For example, in particle physics it emerged from empirical observations that the Bose-Einstein statistics of non-distinguishable particles (see also the stars-and-bars problem) is more appropriate than the distinguishable-particles model. You can find some remarks about those models in Probability or Probability via Expectation by Peter Whittle, or in volume one of An introduction to probability theory and its applications by William Feller.
• Why did I choose this as the best answer? As I stated above, all the answers were very informative (thank you again to everybody who invested time, I really appreciate it!) and also showed me that it is not necessary for me to be able to distinguish between the dice myself as long as the dice can objectively be distinguished. But as soon as they can be objectively distinguished it was clear to me that the events in the second scenario are not equally probable, so for me the Bose-Einstein model was what I was looking for. – E L M Jul 25 '16 at 21:30
I think you are overlooking the fact that it does not matter whether "we" can distinguish the dice or not, but rather it matters that the dice are unique and distinct, and act on their own accord.
So if in the closed box scenario, you open the box and see a 1 and a 2, you don't know whether it is $(1,2)$ or $(2,1)$, because you cannot distinguish the dice. However, both $(1,2)$ and $(2,1)$ would lead to the same visual you see, that is, a 1 and a 2. So there are two outcomes favoring that visual. Similarly for every non-same pair, there are two outcomes favoring each visual, and thus there are 36 possible outcomes.
Mathematically, the formula for the probability of an event is $$\dfrac{\text{Number of outcomes for the event}}{\text{Number of total possible outcomes}}.$$
However, this formula only holds for when each outcome is equally likely. In the first table, each of those pairs is equally likely, so the formula holds. In your second table, each outcome is not equally likely, so the formula does not work. The way you find the answer using your table is
Probability of 1 and 2 = Probability of $(1,2)$ + Probability of $(2,1)$ = $\dfrac{1}{36} + \dfrac{1}{36} = \dfrac{1}{18}$.
Another way to think about this is that this experiment is exactly the same as rolling each die separately, where you can spot Die 1 and Die 2. Thus the outcomes and their probabilities will match those of the closed box experiment.
Let's imagine that the first scenario involves rolling one red die and one blue die, while the second involves you rolling a pair of white dice.
In the first case, can write down every possible outcome as (red die, blue die), which gives you this table (reproduced from your question): \begin{array} {|c|c|c|c|c|c|c|} \hline \frac{\textrm{Blue}}{\textrm{Red}}&1 & 2 & 3 & 4 & 5 & 6 \\ \hline 1 & (1,1) & \mathbf{(1,2)} & (1,3) & (1,4) & (1,5) & (1,6) \\ \hline 2 & \mathbf{(2,1)} & (2,2) & (2,3) & (2,4) & (2,5) & (2,6) \\ \hline 3 & (3,1) & (3,2) & (3,3) & (3,4) & (3,5) & (3,6) \\ \hline 4 & (4,1) & (4,2) & (4,3) & (4,4) & (4,5) & (4,6) \\ \hline 5 & (5,1) & (5,2) & (5,3) & (5,4) & (5,5) & (5,6) \\ \hline 6 & (6,1) & (6,2) & (6,3) & (6,4) & (6,5) & (6,6) \\ \hline \end{array} Our idealized dice are fair (each outcome is equally likely) and you've listed every outcome. Based on this, you correctly conclude that a one and a two occurs with probability $\frac{2}{36}$, or $\frac{1}{18}.$ So far, so good.
Next, suppose you roll two identical dice instead. You've correctly listed all the possible outcomes, but you incorrectly assumed all of these outcomes are equally likely. In particular, the $(n,n)$ outcomes are half as likely as the other outcomes. Because of this, you cannot just calculate the probability by dividing the number of desired outcomes by the total number of outcomes. Instead, you need to weight each outcome by the probability of it occurring. If you run through the math, you'll find that the answer comes out the same: the one double-weight outcome of interest, measured against the 15 double-weight and 6 single-weight outcomes, again gives $\tfrac{2}{36}=\tfrac{1}{18}$.
The next question is "how could I know that the events aren't all equally likely?" One way to think about this is to imagine what would happen if you could distinguish the two dice. Perhaps you put a tiny mark on each die. This can't change the outcome, but it reduces the problem to the previous one. Alternatively, suppose you write the chart out so that instead of Blue/Red, it reads Left Die/Right Die.
As a further exercise, think about the difference between seeing an ordered outcome (red=1, blue=2) vs. an unordered one (one die showing 1, one die showing 2).
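To complement this answer, here is a small exact-enumeration sketch (my own addition, not part of the original post; names are arbitrary). It walks through the $6\times 6$ ordered outcomes of two distinguishable dice, counts those showing a 1 and a 2, and tallies how much probability weight each unordered pair carries, making the half-weight of the doubles explicit.

    #include <algorithm>
    #include <iostream>
    #include <map>
    #include <utility>

    int main() {
        int favorable = 0, total = 0;
        std::map<std::pair<int, int>, int> weight;  // weight of each unordered pair

        // Enumerate the 36 equally likely ordered outcomes (die1, die2).
        for (int d1 = 1; d1 <= 6; ++d1) {
            for (int d2 = 1; d2 <= 6; ++d2) {
                ++total;
                if ((d1 == 1 && d2 == 2) || (d1 == 2 && d2 == 1)) ++favorable;
                std::pair<int, int> key(std::min(d1, d2), std::max(d1, d2));
                ++weight[key];
            }
        }

        std::cout << "P(a 1 and a 2) = " << favorable << "/" << total << "\n"; // 2/36

        // The 21 unordered pairs are not equally likely: doubles carry 1/36,
        // every other pair carries 2/36.
        std::cout << "weight of (3,3): " << weight[{3, 3}] << "/36\n";
        std::cout << "weight of (1,2): " << weight[{1, 2}] << "/36\n";
        return 0;
    }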
• this. being able to distinguish the dice does not change the result. The observer cannot act on the result. (unless magic?). The dice don't care if you can make the difference between red and blue. – njzk2 Jul 26 '16 at 19:36
• "you incorrectly assumed all of these outcomes are equally likely" I think this is the key part and probably the most direct answer to the original question. – Gediminas Jul 27 '16 at 8:41
The key idea is that if you list the 36 possible outcomes of two distinguishable dice, you are listing equally probable outcomes. This is not obvious, or axiomatic; it's true only if your dice are fair and not somehow connected. If you list the outcomes of indistinguishable dice, they are not equally probable, because why should they be, any more than the outcomes "win the lottery" and "don't win the lottery" are equally probable.
To get to the conclusion, you need:
• We are working with fair dice, for which all six numbers are equally probable.
• The two dice are independent, so that the probability of die number two obtaining a particular number is always independent of what number die number one gave. (Imagine instead rolling the same die twice on a sticky surface of some kind that made the second roll come out different.)
Given those two facts about the situation, the rules of probability tell you that the probability of achieving any pair $(a,b)$ is the probability of achieving $a$ on the first die times that of achieving $b$ on the second. If you start lumping $(a,b)$ and $(b,a)$ together, then you don't have the simple independence of events to help you any more, so you can't just multiply probabilities. Instead, you have made a collection of mutually exclusive events (if $a \neq b$), so you can safely add the probabilities of getting $(a,b)$ and $(b,a)$ if they are different.
The idea that you can get probabilities by just counting possibilities relies on assumptions of equal probability and independence. These assumptions are rarely satisfied exactly in reality, but they almost always hold in classroom problems.
• Welcome to our site! You can use Latex formatting for the math here by putting dollar signs around it, e.g. $a^x$ produces $a^x$ – Silverfish Jul 25 '16 at 18:59
If you translate this into terms of coins - say, flipping two indistinguishable pennies - it becomes a question of only three outcomes: 2 heads, 2 tails, 1 of each, and the problem is easier to spot. The same logic applies, and we see that it's more likely to get 1 of each than to get 2 heads or 2 tails.
That's the slipperiness of your second table: it represents all possible outcomes, even though those outcomes are not all equally likely, as they are in the first table. It would be ill-defined to try to spell out what each row and column in the second table means; they're only meaningful in the combined table where each outcome has 1 box, regardless of likelihood, whereas the first table displays "all the equally likely outcomes of die 1, each having its own row," and similarly for columns and die 2.
Let's start by stating the assumption: indistinguishable dice only roll 21 possible outcomes, while distinguishable dice roll 36 possible outcomes.
To test the difference, get a pair of identical white dice. Coat one in a UV-absorbent material like sunscreen, which is invisible to the naked eye. The dice still appear indistinguishable until you look at them under a black light, when the coated die appears black while the clean die glows.
Conceal the pair of dice in a box and shake it. What are the odds you'll get a 2 and a 1 when you open the box? Intuitively you might think "rolling a 1 and a 2" is just 1 of 21 possible outcomes because you can't tell the dice apart. But if you open the box under a black light, you can tell them apart. When you can tell the dice apart, "rolling a 1 and a 2" is 2 of 36 possible combinations.
Does that mean a black light has the power to change the probability of obtaining a certain outcome, even if the dice are only exposed to the light and observed after they've been rolled? Of course not. Nothing changes the dice after you stop shaking the box. The probability of a given outcome can't change.
Since the original assumption depends on a change that doesn't exist, it's reasonable to conclude that the original assumption was incorrect. But what about the original assumption is incorrect - that indistinguishable dice only roll 21 possible outcomes, or that distinguishable dice roll 36 possible outcomes?
Clearly the black light experiment demonstrated that observation has no impact on probability (at least on this scale; quantum probability is a different matter) or on the distinctness of objects. The term "indistinguishable" merely describes something which observation cannot differentiate from something else. In other words, the fact that the dice appear the same under some circumstances (i.e. when they aren't under a black light) and not others has no bearing on the fact that they are truly two distinct objects. This would be true even if the circumstances under which you're able to distinguish between them were never discovered.
In short: your ability to distinguish between the dice being rolled is irrelevant when analyzing the probability of a particular outcome. Each die is inherently distinct. All outcomes are based on this fact, not on an observer's point of view.
We can deduce that your second table does not represent the scenario accurately.
You have eliminated all the cells below and left of the diagonal, on the supposed basis that (1, 2) and (2, 1) are congruent and therefore redundant outcomes.
Instead suppose that you roll one die twice in a row. Is it valid to count 1-then-2 as an identical outcome as 2-then-1? Clearly not. Even though the second roll outcome does not depend on the first, they are still distinct outcomes. You cannot eliminate rearrangements as duplicates. Now, rolling two dice at once is the same for this purpose as rolling one die twice in a row. You therefore cannot eliminate rearrangements.
(Still not convinced? Here is an analogy of sorts. You walk from your house to the top of the mountain. Tomorrow you walk back. Was there any point in time on both days when you were at the same place? Maybe? Now imagine you walk from your house to the top of the mountain, and on the same day another person walks from the top of the mountain to your house. Is there any time that day when you meet? Obviously yes. They are the same question. Transposition in time of untangled events does not change deductions that can be made from those events.)
If we just observe "Somebody gives me a box. I open the box. There is a $1$ and a $2$", without further information, we don't know anything about the probability.
If we know that the two dice are fair and that they have been rolled, then the probability is 1/18, as all the other answers have explained. The fact that we don't know whether the die with the 1 or the die with the 2 was rolled first doesn't matter, because we must account for both ways, and therefore the probability is 1/18 instead of 1/36.
But if we don't know which process led to having the 1-2 combination, we can't know anything about the probability. Maybe the person who handed us the box just purposely chose this combination and stuck the dice to the box (probability = 1), or maybe he shook the box rolling the dice (probability = 1/18), or he might have chosen at random one combination from the 21 combinations in the table you gave us in the question, and therefore probability = 1/21.
In summary, we know the probability because we know what process led to the final situation, and we can compute the probability for each stage (the probability for each die). The process matters, even if we haven't seen it taking place.
To end the answer, I'll give a couple of examples where the process matters a lot:
• We flip ten coins. What's the probability of getting heads all ten times? You can see that the probability (1/1024) is a lot smaller than the probability of getting a 10 if we just choose a random number between 0 and 10 (1/11).
• If you have enjoyed this problem, you can try with the Monty Hall problem. It's a similar problem where the process matters much more than what our intuition would expect.
The probability of event A and B is calculated by multiplying both probabilities.
The probability of rolling a 1 when there are six possible options is 1/6. The probability of rolling a 2 when there are six possible options is 1/6.
1/6 * 1/6 = 1/36.
However, the event is not contingent on time (in other words, it is not required that we roll a 1 before a 2; only that we roll both a 1 and 2 in two rolls).
Thus, I could roll a 1 and then 2 and satisfy the condition of rolling both 1 and 2, or I could roll a 2 and then 1 and satisfy the condition of rolling both 1 and 2.
The probability of rolling 2 and then 1 has the same calculation:
1/6 * 1/6 = 1/36.
The probability of either A or B is the sum of the probabilities. So let's say event A is rolling 1 then 2, and event B is rolling 2 then 1.
Probability of Event A: 1/36 Probability of Event B: 1/36
1/36 + 1/36 = 2/36 which reduces to 1/18. | 2019-08-20T07:32:16 | {
"domain": "stackexchange.com",
"url": "https://stats.stackexchange.com/questions/225552/how-do-we-know-that-the-probability-of-rolling-1-and-2-is-1-18",
"openwebmath_score": 0.944078266620636,
"openwebmath_perplexity": 220.2446653340933,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771786365549,
"lm_q2_score": 0.8558511451289038,
"lm_q1q2_score": 0.8445343783231644
} |
https://math.stackexchange.com/questions/1237168/end-digit-of-numbers-raised-to-a-certain-power | # End digit of numbers raised to a certain power
In a math competition I came across the following question:
What digit does the result of 2^2006 end with?
This competition tested how fast you are at solving math problems. So, I was wondering whether there is some sort of shortcut to solve problems like this quickly.
Help would be appreciated.
Thank you :)
• End digits of powers of $2$: $2,4,8,6,2,4,8,\dots$ (after $8$ comes $16$, ending in $6$; doubling $6$ gives $12$, ending in $2$, and the cycle of length four repeats). – String Apr 16 '15 at 8:53
• Thus if $n\equiv m\pmod{4}$ we have $2^n\equiv 2^m\pmod{10}$. Now $2006=4\cdot 501+2\equiv 2\pmod{4}$. So the last digit of $2^{2006}$ is the same as the last digit of $2^2$, which happens to be $4$. – String Apr 16 '15 at 8:58
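The cycle-of-four argument sketched in these comments is easy to verify mechanically. Below is a short C++ sketch (my own, with arbitrary names): it prints the last digits of the first few powers of $2$ to expose the period-4 pattern and then computes the last digit of $2^{2006}$ by multiplying modulo $10$ at every step, so no large numbers are ever formed.

    #include <iostream>

    int main() {
        // Last digits of 2^1 .. 2^12: the pattern 2, 4, 8, 6 repeats with period 4.
        int d = 1;
        for (int k = 1; k <= 12; ++k) {
            d = (d * 2) % 10;
            std::cout << "2^" << k << " ends in " << d << "\n";
        }

        // Last digit of 2^2006, multiplying modulo 10 at each step.
        int last = 1;
        for (int k = 0; k < 2006; ++k) {
            last = (last * 2) % 10;
        }
        std::cout << "2^2006 ends in " << last << "\n"; // expected 4, since 2006 mod 4 = 2
        return 0;
    }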
Notice the pattern among last digits of ascending powers of $2$
Last digits of:
$2^1$ is $2$, $2^2$ is $4$, $2^3$ is $8$, $2^4$ is $6$, $2^5$ is $2$, $2^6$ is $4...$ etc
Also notice that the last digits of
$2^4$, $2^8$, $2^{12}$, $2^{16}$... will all be $6$
Since $2004$ is also a multiple of $4$, $2^{2004}$ will have a last digit of $6$
Continuing the pattern $2^{2005}$ will have a last digit of $2$
And $2^{2006}$ has a last digit of $4$ | 2021-03-01T10:55:12 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1237168/end-digit-of-numbers-raised-to-a-certain-power",
"openwebmath_score": 0.5393926501274109,
"openwebmath_perplexity": 152.17180999283593,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9867771786365549,
"lm_q2_score": 0.8558511414521923,
"lm_q1q2_score": 0.8445343746950694
} |
https://byjus.com/question-answer/let-x-and-y-be-two-non-empty-sets-such-that-x-cap-a-y/ | Question
# Let $$X$$ and $$Y$$ be two non-empty sets such that $$X\cap A=Y\cap A=\phi$$ and $$X\cup A=Y\cup A$$ for some non-empty set $$A$$. Then which of the following is true?
A. X is a proper subset of Y
B. Y is a proper subset of X
C. X=Y
D. X and Y are disjoint sets
E. X/A=ϕ
Solution
## The correct option is C: $$X=Y$$
We have $$X\cup A=Y\cup A$$
$$\Rightarrow X\cap \left( X\cup A \right) =X\cap \left( Y\cup A \right)$$
$$\Rightarrow X=\left( X\cap Y \right) \cup \left( X\cap A \right) \quad \left[ \because X\cap \left( X\cup A \right) =X \right]$$
$$\Rightarrow X=\left( X\cap Y \right) \cup \phi \quad \left[\because X\cap A=\phi \right]$$
$$\Rightarrow X=X\cap Y \quad \text{.....(i)}$$
Again, $$X\cup A=Y\cup A$$
$$\Rightarrow Y\cap \left( X\cup A \right) =Y\cap \left( Y\cup A \right)$$
$$\Rightarrow \left( Y\cap X \right) \cup \left( Y\cap A \right) =Y$$
$$\Rightarrow \left( Y\cap X \right) \cup \phi =Y$$
$$\Rightarrow Y\cap X=Y$$
$$\Rightarrow X\cap Y=Y \quad \text{....(ii)}$$
From equations (i) and (ii), we get $$X=Y$$
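The derivation above can also be sanity-checked by brute force. The following sketch is my own illustration (the universe $\{0,\dots,5\}$ and the bitmask encoding are arbitrary choices): it enumerates all triples of non-empty subsets $X$, $Y$, $A$ of a small universe satisfying $X\cap A=Y\cap A=\phi$ and $X\cup A=Y\cup A$ and reports any counterexample to $X=Y$; by the result just proved, none should be found.

    #include <iostream>

    int main() {
        // Subsets of {0,...,5} encoded as 6-bit masks: & is intersection, | is union.
        const int FULL = 1 << 6;
        long long satisfying = 0, counterexamples = 0;

        for (int X = 1; X < FULL; ++X)           // non-empty X
            for (int Y = 1; Y < FULL; ++Y)       // non-empty Y
                for (int A = 1; A < FULL; ++A) { // non-empty A
                    bool hypothesis = ((X & A) == 0) && ((Y & A) == 0)
                                      && ((X | A) == (Y | A));
                    if (!hypothesis) continue;
                    ++satisfying;
                    if (X != Y) ++counterexamples; // would contradict X = Y
                }

        std::cout << "triples satisfying the hypothesis: " << satisfying << "\n";
        std::cout << "counterexamples to X = Y: " << counterexamples << "\n"; // expect 0
        return 0;
    }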
View More | 2022-01-25T05:50:54 | {
"domain": "byjus.com",
"url": "https://byjus.com/question-answer/let-x-and-y-be-two-non-empty-sets-such-that-x-cap-a-y/",
"openwebmath_score": 0.9466896653175354,
"openwebmath_perplexity": 2608.977253423973,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9867771778588348,
"lm_q2_score": 0.8558511414521923,
"lm_q1q2_score": 0.8445343740294567
} |
https://math.codidact.com/posts/286150 | Q&A
# equilateral triangle inscribed in an ellipse
A high-schooler I know was given the following problem:
In the ellipse $x^2+3y^2=12$ is inscribed an equilateral triangle. One of the triangle's vertices is at the point $(0,-2)$. Find the triangle's other vertices.
The book has one answer: $(\pm1.2\sqrt3,1.6)$. But I know of two more answers: $(0,2)$ and $(\pm\sqrt{12},0)$. Are there any more?
First of all, let's suppose that the points $(x_1,y_1),(x_2,y_2)$ on the ellipse are at the same distance from $(0,-2)$. We will get three equations from this assumption:
$$\begin{cases} x_1^2+3y_1^2=12 & (1)\\\\ x_2^2+3y_2^2=12 & (2)\\\\ x_1^2+(y_1+2)^2=x_2^2+(y_2+2)^2 & (3) \end{cases}$$
After substituting $x_1^2,x_2^2$ with $12-3y_1^2,12-3y_2^2$ in equation $(3)$ respectively, we will get $-2y_1^2+4y_1+16=-2y_2^2+4y_2+16$, which is equivalent to $(y_1-1)^2=(y_2-1)^2$. Now we will divide to two separate cases.
Case 1:
If $(y_1-1),(y_2-1)$ have the same sign (both positive or both negative), we can infer that $y_1=y_2$, and after equating $(1)$ and $(2)$ we get $x_1^2+3y_1^2=x_2^2+3y_2^2$, hence $x_1^2=x_2^2$. So there are two options again.
Subcase 1:
if $x_1=x_2$, then the two points are the same point. In this case the answer will depend on your definition of an equilateral triangle. If you consider a degenerate triangle (a line segment) to be equilateral, then every point on the ellipse will be a solution. If you demand that the distances between all three points (counting the repeated point twice) be the same, there is only one solution, namely that all the points are $(0,-2)$. If you consider only triangles with three distinct vertices, we have no solution in this case.
Subcase 2:
if $x_1=-x_2$, for simplicity I will denote $x_1=x,\ y_1=y,\ x_2=-x,\ y_2=y$. The distance between the points $(x,y)$ and $(-x,y)$ is $2|x|$, and the distance from each of those two points to $(0,-2)$ is $\sqrt{x^2+(y+2)^2}$, so if we require that these two lengths be equal we get the equation $4x^2=x^2+(y+2)^2$, and after substituting $x^2$ with $12-3y^2$ we will get
$$0=9y^2-36+y^2+4y+4=10y^2+4y-32=2(y+2)(5y-8)$$
If $y=-2$, then $x=0$ and we are back in the degenerate case above; otherwise we have $y=\frac{8}{5}=1.6$, $|x|=\sqrt{12-3y^2}=\frac{6}{5}\sqrt{3}=1.2\sqrt{3}$, so the two points will be $(\pm 1.2\sqrt{3},1.6)$ as in the book.
Case 2:
If the signs of $(y_1-1),(y_2-1)$ are opposite, we will get $y_1=2-y_2$. From equations $(1),(2)$ we will get $x_1^2-x_2^2=3(y_2^2-y_1^2)=3(y_2+y_1)(y_2-y_1)=12(y_2-1)=12(1-y_1)$. Now, if we will require that the distance between $(x_1,y_1),(x_2,y_2)$ equals to the distance between $(x_1,y_1),(0,-2)$ we could write
\begin{align} (x_1-x_2)^2+(y_1-y_2)^2 & =x_1^2+(y_1+2)^2\\ \\ x_1^2+x_2^2-2x_1x_2+(2y_1-2)^2 & =x_1^2+y_1^2+4y_1+4\\ \\ x_2^2-2x_1x_2+4y_1^2-8y_1+4 & =y_1^2+4y_1+4\\ \\ -2x_1x_2+x_2^2+3y_1^2-12y_1 & =0\ / {\color{gray}\text{because}\ 3y_1^2=12-x_1^2\ \text{we will get} }\\ \\ -2x_1x_2+12+x_2^2-x_1^2-12y_1 & =0\ / {\color{gray}\text{remember that}\ x_1^2-x_2^2=12(1-y_1)}\\ \\ -2x_1x_2-12(1-y_1)+12(1-y_1) & =0\\ \\ -2x_1x_2 & =0 \end{align}
Thus $x_1$ or $x_2$ vanishes. Without loss of generality I will assume that $x_1=0$. From that we get $y_1=\pm 2$. Because $y_1+y_2=2$ and $y_2\leq 2$, we conclude that $y_1\geq 0$, so $y_1=2$. Because $y_1+y_2=2$ we get $y_2=0$, so $x_2=\pm\sqrt{12}$, as you got as well.
Conclusion: All the solutions are $\left[(1.2\sqrt{3},1.6),(-1.2\sqrt{3},1.6)\right],\ \left[(0,2),(\sqrt{12},0)\right],\ \left[(0,2),(-\sqrt{12},0)\right]$.
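As a quick numerical cross-check of this conclusion (my own addition, not part of the answer), the sketch below verifies that each listed pair of vertices, together with $(0,-2)$, lies on the ellipse $x^2+3y^2=12$ and forms an equilateral triangle: the ellipse residuals should be essentially zero and the three side lengths should agree.

    #include <cmath>
    #include <cstdio>

    // Squared distance between two points in the plane.
    static double dist2(double ax, double ay, double bx, double by) {
        return (ax - bx) * (ax - bx) + (ay - by) * (ay - by);
    }

    int main() {
        const double r3 = std::sqrt(3.0), r12 = std::sqrt(12.0);
        const double vx = 0.0, vy = -2.0;   // the fixed vertex (0, -2)
        // The three candidate pairs of remaining vertices from the conclusion.
        const double pairs[3][4] = {
            { 1.2 * r3, 1.6, -1.2 * r3, 1.6 },
            { 0.0, 2.0,  r12, 0.0 },
            { 0.0, 2.0, -r12, 0.0 }
        };

        for (int i = 0; i < 3; ++i) {
            double ax = pairs[i][0], ay = pairs[i][1];
            double bx = pairs[i][2], by = pairs[i][3];
            // Residuals of the ellipse equation x^2 + 3y^2 - 12 (should be ~0).
            double e1 = ax * ax + 3 * ay * ay - 12;
            double e2 = bx * bx + 3 * by * by - 12;
            // The three side lengths (should all agree).
            double s1 = std::sqrt(dist2(vx, vy, ax, ay));
            double s2 = std::sqrt(dist2(vx, vy, bx, by));
            double s3 = std::sqrt(dist2(ax, ay, bx, by));
            std::printf("triangle %d: residuals %.1e %.1e, sides %.6f %.6f %.6f\n",
                        i + 1, e1, e2, s1, s2, s3);
        }
        return 0;
    }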
Why does this post require moderator attention? | 2022-05-18T23:51:10 | {
"domain": "codidact.com",
"url": "https://math.codidact.com/posts/286150",
"openwebmath_score": 0.9983039498329163,
"openwebmath_perplexity": 160.92102797755172,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9912886157996523,
"lm_q2_score": 0.8519528076067262,
"lm_q1q2_score": 0.8445311193790991
} |
https://math.libretexts.org/Courses/Honolulu_Community_College/Math_75X%3A_Introduction_to_Mathematical_Reasoning_(Kearns)/01%3A_Whole_Numbers_and_Integers/1.04%3A_Combining_Integers-_Addition_and_Subtraction_with_Integers/1.4.02%3A_Subtracting_Integers |
# 1.4.2: Subtracting Integers
In Section 1.2, we stated that “Subtraction is the opposite of addition.” Thus, to subtract 4 from 7, we walked seven units to the right on the number line, but then walked 4 units in the opposite direction (to the left), as shown in Figure $$\PageIndex{1}$$.
Thus, 7 − 4 = 3. The key phrase is “add the opposite.” Thus, the subtraction 7 − 4 becomes the addition 7 + (−4), which we would picture on the number line as shown in Figure $$\PageIndex{2}$$.
Figure $$\PageIndex{1}$$ and Figure $$\PageIndex{2}$$ provide ample evidence that the subtraction 7−4 is identical to the addition 7+(−4). Again, subtraction means “add the opposite.” That is, 7 − 4=7+(−4).
Defining Subtraction
Subtraction means “add the opposite.” That is, if a and b are any integers, then
$a − b = a + (−b).\nonumber$
Thus, for example, −123−150 = −123+(−150) and −57−(−91) = −57+91. In each case, subtraction means "add the opposite." In the first case, subtracting 150 is the same as adding −150. In the second case, subtracting −91 is the same as adding 91.
Example 1
Find the differences: (a) 4 − 8, (b) −15 − 13, and (c) −117 − (−115).
Solution
In each case, subtraction means “add the opposite.”
a) Change the subtraction to addition with the phrase “subtraction means add the opposite.” That is, 4−8 = 4+(−8). We can now perform this addition on the number line.
Thus, 4 − 8=4+(−8) = −4.
b) First change the subtraction into addition by “adding the opposite.” That is, −15 − 13 = −15 + (−13). We can now use physical intuition to perform the addition. Start at the origin (zero), walk 15 units to the left, then an additional 13 units to the left, arriving at the answer −28. That is,
\begin{aligned} −15 − 13 & = −15 + (−13) \\ ~ & = −28. \end{aligned}\nonumber
c) First change the subtraction into addition by “adding the opposite.” That is, −117 − (−115) = −117 + 115. Using “Adding Two Integers with Unlike Signs” from Section 2.2, first subtract the smaller magnitude from the larger magnitude; that is, 117 − 115 = 2. Because −117 has the larger magnitude and its sign is negative, prefix a negative sign to the difference in magnitudes. Thus,
\begin{aligned} −117 − (−115) & = −117 + 115 \\ & = −2. \end{aligned}\nonumber
Exercise
Use each of the techniques in parts (a), (b), and (c) of Example 1 to evaluate the difference −11 − (−9).
−2
## Order of Operations
We will now apply the “Rules Guiding Order of Operations” from Section 1.5 to a number of example exercises.
Example 2
Simplify −5 − (−8) − 7.
Solution
We work from left to right, changing each subtraction by “adding the opposite.”
\begin{aligned} -5-(-8) -7=-5+8+(-7) ~ & \textcolor{red}{ \text{ Add the opposite of } -8, \text{ which is 8.}} \\ ~ & \textcolor{red}{ \text{ Add the opposite of 7, which is } -7.} \\ =3 +(-7) & \textcolor{red}{ \text{ Working left to right, } -5+8=3.} \\ =-4 ~ & \textcolor{red}{3 +(-7) = -4.} \end{aligned}\nonumber
Exercise
Simplify: −3 − (−9) − 11.
−5
Grouping symbols say “do me first.”
Example 3
Simplify −2 − (−2 − 4).
Solution
Parenthetical expressions must be evaluated first.
\begin{aligned} -2-(-2-4)=-2-(-2+(-4)) ~ & \textcolor{red}{ \text{ Simplify the parenthetical expression}} \\ ~ & \textcolor{red}{ \text{ first. Add the opposite of 4, which is } -4.} \\ = -2 -(-6) ~ & \textcolor{red}{ \text{ Inside the parentheses, } -2 + (-4) = -6.} \\ =-2 + 6 ~ & \textcolor{red}{ \text{ Subtracting a } -6 \text{ is the same as adding a 6.}} \\ =4 ~ & ~ \textcolor{red}{ \text{ Add: } -2 + 6 = 4.} \end{aligned}\nonumber
Exercise
Simplify: −3 − (−3 − 3).
3
## Change as a Difference
Suppose that when I leave my house in the early morning, the temperature outside is 40◦ Fahrenheit. Later in the day, the temperature measures 60◦ Fahrenheit. How do I measure the change in the temperature?
The Change in a Quantity
To measure the change in a quantity, always subtract the former measurement from the latter measurement. That is:
$\colorbox{cyan}{Change in a Quantity} = \colorbox{cyan}{Latter Measurement} - \colorbox{cyan}{Former Measurement}\nonumber$
Thus, to measure the change in temperature, I perform a subtraction as follows:
\begin{aligned} \colorbox{cyan}{Change in Temperature} & = \colorbox{cyan}{Latter Measurement} & - & \colorbox{cyan}{Former Measurement} \\ ~ & = 60^{ \circ} \text{F} & - & 40^{ \circ} \text{F} \\ ~ & = 20^{ \circ} \text{F} \end{aligned}\nonumber
Note that the positive answer is in accord with the fact that the temperature has increased.
Example 4
Suppose that in the afternoon, the temperature measures 65◦ Fahrenheit, then in the late evening the temperature drops to 44◦ Fahrenheit. Find the change in temperature.
Solution
To measure the change in temperature, we must subtract the former measurement from the latter measurement.
\begin{aligned} \colorbox{cyan}{Change in Temperature} & = \colorbox{cyan}{Latter Measurement} & - & \colorbox{cyan}{Former Measurement} \\ ~ & = 44^{ \circ} \text{F} & - & 65^{ \circ} \text{F} \\ ~ & = -11^{ \circ} \text{F} \end{aligned}\nonumber
Note that the negative answer is in accord with the fact that the temperature has decreased. There has been a “change” of −11◦ Fahrenheit.
Exercise
Marianne awakes to a morning temperature of 54◦ Fahrenheit. A storm hits, dropping the temperature to 43◦ Fahrenheit. Find the change in temperature.
−11◦ Fahrenheit
Example 5
Sometimes a bar graph is not the most appropriate visualization for your data. For example, consider the bar graph in Figure $$\PageIndex{3}$$ depicting the Dow Industrial Average for seven consecutive days in March of 2009. Because the bars are of almost equal height, it is difficult to detect fluctuation or change in the Dow Industrial Average.
Let’s determine the change in the Dow Industrial average on a day-to-day basis. Remember to subtract the latter measurement minus the former (current day minus former day). This gives us the following changes.
| Consecutive Days | Change in Dow Industrial Average |
| --- | --- |
| Sun-Mon | 6900 - 7000 = -100 |
| Mon-Tues | 6800 - 6900 = -100 |
| Tues-Wed | 6800 - 6800 = 0 |
| Wed-Thu | 7000 - 6800 = 200 |
| Thu-Fri | 7100 - 7000 = 100 |
| Fri-Sat | 7200 - 7100 = 100 |
We will use the data in the table to construct a line graph. On the horizontal axis, we place the pairs of consecutive days (see Figure $$\PageIndex{4}$$). On the vertical axis we place the Change in the Industrial Dow Average. At each pair of days we plot a point at a height equal to the change in Dow Industrial Average as calculated in our table.
Note that the data as displayed by Figure $$\PageIndex{4}$$ more readily shows the changes in the Dow Industrial Average on a day-to-day basis. For example, it is now easy to pick the day that saw the greatest increase in the Dow (from Wednesday to Thursday, the Dow rose 200 points).
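The day-to-day differences in the table can of course be produced mechanically. The short sketch below is my own illustration (the array holds the values read off the bar graph, as in the text); it applies the rule "change = latter measurement minus former measurement" to consecutive days.

    #include <cstdio>

    int main() {
        // Dow Industrial Average readings for seven consecutive days (from the bar graph).
        const char* days[7] = { "Sun", "Mon", "Tues", "Wed", "Thu", "Fri", "Sat" };
        const int dow[7]    = { 7000, 6900, 6800, 6800, 7000, 7100, 7200 };

        // Change in a quantity = latter measurement - former measurement.
        for (int i = 1; i < 7; ++i) {
            std::printf("%s-%s: %d - %d = %d\n",
                        days[i - 1], days[i], dow[i], dow[i - 1], dow[i] - dow[i - 1]);
        }
        return 0;
    }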
## Exercises
In Exercises 1-24, find the difference.
1. 16 − 20
2. 17 − 2
3. 10 − 12
4. 16 − 8
5. 14 − 11
6. 5 − 8
7. 7 − (−16)
8. 20 − (−10)
9. −4 − (−9)
10. −13 − (−3)
11. 8 − (−3)
12. 14 − (−20)
13. 2 − 11
14. 16 − 2
15. −8 − (−10)
16. −14 − (−2)
17. 13 − (−1)
18. 12 − (−13)
19. −4 − (−2)
20. −6 − (−8)
21. 7 − (−8)
22. 13 − (−14)
23. −3 − (−10)
24. −13 − (−9)
In Exercises 25-34, simplify the given expression.
25. 14 − 12 − 2
26. −19 − (−7) − 11
27. −20 − 11 − 18
28. 7 − (−13) − (−1)
29. 5 − (−10) − 20
30. −19 − 12 − (−8)
31. −14 − 12 − 19
32. −15 − 4 − (−6)
33. −11 − (−7) − (−6)
34. 5 − (−5) − (−14)
In Exercises 35-50, simplify the given expression.
35. −2 − (−6 − (−5))
36. 6 − (−14 − 9)
37. (−5 − (−8)) − (−3 − (−2))
38. (−6 − (−8)) − (−9 − 3)
39. (6 − (−9)) − (3 − (−6))
40. (−2 − (−3)) − (3 − (−6))
41. −1 − (10 − (−9))
42. 7 − (14 − (−8))
43. 3 − (−8 − 17)
44. 1 − (−1 − 4)
45. 13 − (16 − (−1))
46. −7 − (−3 − (−8))
47. (7 − (−8)) − (5 − (−2))
48. (6 − 5) − (7 − 3)
49. (6 − 4) − (−8 − 2)
50. (2 − (−6)) − (−9 − (−3))
51. The first recorded temperature is 42◦F. Four hours later, the second temperature is 65◦F. What is the change in temperature?
52. The first recorded temperature is 79◦F. Four hours later, the second temperature is 46◦F. What is the change in temperature?
53. The first recorded temperature is 30◦F. Four hours later, the second temperature is 51◦F. What is the change in temperature?
54. The first recorded temperature is 109◦F. Four hours later, the second temperature is 58◦F. What is the change in temperature?
55. Typical temperatures in Fairbanks, Alaska in January are −2 degrees Fahrenheit in the daytime and −19 degrees Fahrenheit at night. What is the change in temperature from day to night?
56. Typical summertime temperatures in Fairbanks, Alaska in July are 79 degrees Fahrenheit in the daytime and 53 degrees Fahrenheit at night. What is the change in temperature from day to night?
57. Communication. A submarine 1600 feet below sea level communicates with a pilot flying 22,500 feet in the air directly above the submarine. How far is the communique traveling?
58. Highest to Lowest. The highest spot on earth is on Mount Everest in Nepal-Tibet at 8,848 meters. The lowest point on the earth’s crust is the Mariana’s Trench in the North Pacific Ocean at 10,923 meters below sea level. What is the distance between the highest and the lowest points on earth? Wikipedia http://en.Wikipedia.org/wiki/Extremes_on_Earth
59. Lowest Elevation. The lowest point in North America is Death Valley, California at -282 feet. The lowest point on the entire earth’s landmass is on the shores of the Dead Sea along the Israel-Jordan border with an elevation of -1,371 feet. How much lower is the Dead Sea shore from Death Valley?
60. Exam Scores. Freida’s scores on her first seven mathematics exams are shown in the following bar chart. Calculate the differences between consecutive exams, then create a line graph of the differences on each pair of consecutive exams. Between which two pairs of consecutive exams did Freida show the most improvement?
1. −4
3. −2
5. 3
7. 23
9. 5
11. 11
13. −9
15. 2
17. 14
19. −2
21. 15
23. 7
25. 0
27. −49
29. −5
31. −45
33. 2
35. −1
37. 4
39. 6
41. −20
43. 28
45. −4
47. 8
49. 12
51. 23◦ F
53. 21◦ F
55. −17 degrees Fahrenheit
57. 24,100 feet
59. 1,089 feet lower
1.4.2: Subtracting Integers is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by David Arnold. | 2022-06-27T16:57:57 | {
"domain": "libretexts.org",
"url": "https://math.libretexts.org/Courses/Honolulu_Community_College/Math_75X%3A_Introduction_to_Mathematical_Reasoning_(Kearns)/01%3A_Whole_Numbers_and_Integers/1.04%3A_Combining_Integers-_Addition_and_Subtraction_with_Integers/1.4.02%3A_Subtracting_Integers",
"openwebmath_score": 0.9624632000923157,
"openwebmath_perplexity": 3179.0888159169285,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9912886152849367,
"lm_q2_score": 0.8519528057272544,
"lm_q1q2_score": 0.8445311170774867
} |
http://math.stackexchange.com/questions/88565/true-or-false-x2-ne-x-implies-x-ne-1 | # True or false? $x^2\ne x\implies x\ne 1$
Today I had an argument with my math teacher at school. We were answering some simple True/False questions and one of the questions was the following:
$$x^2\ne x\implies x\ne 1$$
I immediately answered true, but for some reason, everyone (including my classmates and math teacher) is disagreeing with me. According to them, when $x^2$ is not equal to $x$, $x$ also can't be $0$ and because $0$ isn't excluded as a possible value of $x$, the sentence is false. After hours, I am still unable to understand this ridiculously simple implication. I can't believe I'm stuck with something so simple.
Why I think the logical sentence above is true:
My understanding of the implication symbol $\implies$ is the following: If the left part is true, then the right part must be also true. If the left part is false, then nothing is said about the right part. In the right part of this specific implication nothing is said about whether $x$ can be $0$. Maybe $x$ can't be $-\pi i$ too, but as I see it, it doesn't really matter, as long as $x \ne 1$ holds. And it always holds when $x^2 \ne x$, therefore the sentence is true.
### TL;DR:
$x^2 \ne x \implies x \ne 1$: Is this sentence true or false, and why?
Sorry for bothering such an amazing community with such a simple question, but I had to ask someone.
This is true, as the contrapositive ($x = 1$ -> $x^2=x$) is obviously true. – The Chaz 2.0 Dec 5 '11 at 14:20
They are wrong-the fact that $x\neq 0$ is also an implication doesn't mean anything. The statement: $x^2=x\implies x=1$ is false. – Thomas Andrews Dec 5 '11 at 14:21
Also, their reasoning about "not excluding other values" is wrong. Today is Monday, which implies SO many things (!), but it is not false to just list one implication. Eg: If today is Monday, then tomorrow is Tuesday. Or, if today is Monday, then I have an appointment with the dentist. – The Chaz 2.0 Dec 5 '11 at 14:23
@Chris: You could write your reasoning as answer. That way the site keeps working better (and you will get some upvotes). You could also refer your teacher to this site :-) – Jyrki Lahtonen Dec 5 '11 at 15:06
Your understanding of material implication ($\implies$) is exactly right, and as Jyrki said, your reasoning could stand as a perfectly good answer to the question. You could also point out that the teacher and class seem to be confusing $\implies$ and $\iff$: $x^2\ne x\iff x\ne 1$ is of course false precisely because $1$ isn't the only number that is its own square. – Brian M. Scott Dec 5 '11 at 15:34
First, some general remarks about logical implications/conditional statements.
1. As you know, $P \rightarrow Q$ is true when $P$ is false, or when $Q$ is true.
2. As mentioned in the comments, the contrapositive of the implication $P \rightarrow Q$, written $\lnot Q \rightarrow \lnot P$, is logically equivalent to the implication.
3. It is possible to write implications with merely the "or" operator. Namely, $P \rightarrow Q$ is equivalent to $\lnot P\text{ or }Q$, or in symbols, $\lnot P\lor Q$.
Now we can look at your specific case, using the above approaches.
1. If $P$ is false, i.e. if $x^2 \neq x$ is false (so $x^2 = x$), then the statement is automatically true, so we assume that $P$ is true. So, as a statement, $x^2 = x$ is false. Your teacher and classmates are rightly convinced that $x^2 = x$ is equivalent to ($x = 1$ or $x =0\;$), and we will use this here. If $P$ is true, then ($x=1\text{ or }x =0\;$) is false. In other words, ($x=1$) AND ($x=0\;$) are both false. I.e., ($x \neq 1$) and ($x \neq 0\;$) are true. I.e., if $P$, then $Q$.
2. The contrapositive is $x = 1 \rightarrow x^2 = x$. True.
3. We use the "sufficiency of or" to write our conditional as: $$\lnot(x^2 \neq x)\lor x \neq 1\;.$$ That is, $x^2 = x$ or $x \neq 1$, which is $$(x = 1\text{ or }x =0)\text{ or }x \neq 1,$$ which is $$(x = 1\text{ or }x \neq 1)\text{ or }x = 0\;,$$ which is $$(\text{TRUE})\text{ or }x = 0\;,$$ which is true.
Please pardon the (lack of) formatting- I typed this up on my phone! – The Chaz 2.0 Dec 5 '11 at 15:36
Thanks, Brian! That looks great. – The Chaz 2.0 Dec 5 '11 at 15:55
That's the proof I was looking for, thank you :) – Chris Dec 5 '11 at 16:41
The short answer is: Yes, it is true, because the contrapositive just expresses the fact that $1^2=1$.
But in controversial discussions of these issues, it is often (but not always) a good idea to try out non-mathematical examples:
"If a nuclear bomb drops on the school building, you die."
"Hey, but you die, too."
"That doesn't help you much, though, so it is still true that you die."
"Oh no, if the supermarket is not open, I cannot buy chocolate chips cookies."
"Yes, but I prefer to concentrate on the major consequences."
"If you sign this contract, you get a free pen."
"Hey, you didn't tell me that you get all my money."
Non-mathematical examples also explain the psychology behind your teacher's and classmates' thinking. In real-life, the choice of consequences is usually a loaded message and can amount to a lie by omission. So, there is this lingering suspicion that the original statement suppresses information on 0 on purpose.
I suggest that you learn about some nonintuitive probability results and make bets with your teacher.
+1 for that last sentence. :-) – Brian M. Scott Dec 5 '11 at 15:56
I like how the importance of cookies appears in surprising math/logic results! (+1). It might be time to show the teacher this question. – The Chaz 2.0 Dec 5 '11 at 16:10
Excellent examples, thank you! And of course I already made bets :). – Chris Dec 5 '11 at 16:29
+1 for the second sentence. :-) – NikolajK Dec 11 '11 at 23:44
Superb answer, which implies that I have up-voted it. – Hexagon Tiling Jan 10 '12 at 0:37
Thing to note. This is called logical implication.
$x^2≠x⟹x≠1$: Is this sentence true or false, and why?
We can check this exhaustively by considering cases. Let us look at this implication as $\rm P\implies Q$. Now we shall consider the cases:
• Case 1: If we consider $x = 0$, then $\rm P$ is false, and $\rm Q$ is true.
• Case 2: If we consider $x = 1$, then $\rm P$ is false, and $\rm Q$ is false as well.
• Case 3: If we consider each value except $x=0$ and $x = 1$, then both $\rm P$ and $\rm Q$ will be true since $x^2 = x \iff x^2 - x = 0 \iff x(x - 1) = 0$ which means that $x=0$ and $x = 1$ are the only possibilities.
Fortunately, our truth tables tell us that logical implication will hold true as far as we are not having $\rm P$ true and $\rm Q$ false. Look at the cases above; none of them has $\rm P$ true and $Q$ false. Thus Case 1, Case 2 and Case 3 are all true according to mathematical logic, so $\rm P\implies Q$ is true, or in other words: $x^2 \ne x \implies x \ne 1$ is true.
I apologize for being late, but I always have my two cents to offer... thank you.
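The case analysis above can be mirrored in code. The sketch below is my own illustration (the sample values are arbitrary, chosen to include $x=0$ and $x=1$): it evaluates $P: x^2\ne x$ and $Q: x\ne 1$ for each sample and checks the material implication as $\lnot P \lor Q$; no sample produces $P$ true together with $Q$ false.

    #include <iostream>

    int main() {
        // Sample values of x, including the two interesting cases x = 0 and x = 1.
        const double xs[] = { -2.0, -1.0, 0.0, 0.5, 1.0, 2.0, 3.0 };

        std::cout << std::boolalpha;
        for (double x : xs) {
            bool P = (x * x != x);    // premise:    x^2 != x
            bool Q = (x != 1.0);      // conclusion: x != 1
            bool PimpliesQ = !P || Q; // material implication P => Q
            std::cout << "x = " << x << "  P = " << P
                      << "  Q = " << Q << "  (P => Q) = " << PimpliesQ << "\n";
        }
        return 0;
    }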
- | 2016-07-24T20:55:49 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/88565/true-or-false-x2-ne-x-implies-x-ne-1",
"openwebmath_score": 0.8064426779747009,
"openwebmath_perplexity": 387.94934331114007,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9740426443092215,
"lm_q2_score": 0.8670357683915538,
"lm_q1q2_score": 0.8445298125547868
} |
https://math.stackexchange.com/questions/2184937/proof-n2-2-is-not-divisible-by-4 | # Proof: $n^2 - 2$ is not divisible by 4
I tried to prove that $n^2 - 2$ is not divisible by 4 via proof by contradiction. Does this look right?
Suppose $n^2 - 2$ is divisible by $4$. Then:
$n^2 - 2 = 4g$, $g \in \mathbb{Z}$.
$n^2 = 4g + 2$.
Consider the case where $n$ is even.
$(2x)^2 = 4g + 2$, $x \in \mathbb{Z}$.
$4x^2 = 4g + 2$.
$4s = 4g + 2$, $s = x^2, s \in \mathbb{Z}$ as integers are closed under multiplication.
$2s = 2g + 1$
$2s$ is even, and $2g + 1$ is odd (by definition of even/odd numbers). An even number cannot equal an odd number, so we have a contradiction.
Consider the case where $n$ is odd.
$(2x + 1)^2 = 4g + 2$, $x \in \mathbb{Z}$
$4x^2 + 4x + 1 = 4g + 2$
$4x^2 + 4x = 4g + 1$
$4(x^2 + x) = 4g + 1$
$4j = 4g + 1$, $j = x^2 + x, j \in \mathbb{Z}$ as integers are closed under addition
$2d = 2e + 1$, $d = 2j, e = 2g; d, e \in \mathbb{Z}$ as integers are closed under multiplication
$2d$ is even, and $2e + 1$ is odd (by definition of even/odd numbers). An even number cannot equal an odd number, so we have a contradiction.
As both cases have a contradiction, the original supposition is false, and $n^2 - 2$ is not divisible by $4$.
• You could simplify your proof slightly. In the first case, you could go from $4x^2 = 4g + 2$ to $4(x^2-g) = 2$. This would imply that $4$ divides $2$ which is a contradiction. Similarly, in the second case, you could go from $4(x^2+x) = 4g + 1$ to $4(x^2+x-g) = 1$ so that $4$ divides $1$ which is also a contradiction. – Cameron Williams Mar 13 '17 at 16:30
We can shorten your proof by for example going from $4x^2=4g+2$ (in case 1) to saying "The left-hand side has remainder $0$ after division by $4$, yet the right side has remainder $2$; this is impossible" (basically, looking at the expression $\mod 4$ instead of dividing by $2$ and looking $\mod 2$).
Also, the second case was trivially impossible, since $n^2=4g+2$ has no solutions if $n$ is odd (since then $n^2$ is odd, but $4g+2$ is even).
Depending on the context (what you know, what you can use, etc), steps like $x^2+x=j$ with the remark that integers are closed under addition and multiplication are mostly considered so trivial that it's not worth mentioning. I repeat however that this is completely dependent on context, and if you want to make sure your audience is aware of these facts and/or steps, you should mention them. More detailed explanations with steps rarely hurt the proof.
The proof can be done a lot quicker however (without contradiction) by looking $\mod 4$. It is quite easy to prove that squares are either $0$ or $1\mod 4$, so $n^2-2$ is either $-2$ or $-1\mod 4$, and thus, $n^2-2$ cannot be divisible by $4$.
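For an empirical confirmation of this mod-4 argument (my own sketch, not part of the answer), the loop below tabulates $n^2 \bmod 4$ and $(n^2-2) \bmod 4$ over a range of $n$: squares only ever land on residues $0$ and $1$, so $n^2-2$ only lands on residues $2$ and $3$ and is never divisible by $4$.

    #include <iostream>

    int main() {
        bool found_multiple_of_4 = false;
        for (int n = 0; n <= 1000; ++n) {
            int sq_mod = (n * n) % 4;                 // 0 when n is even, 1 when n is odd
            int expr_mod = ((n * n - 2) % 4 + 4) % 4; // residue of n^2 - 2, kept non-negative
            if (expr_mod == 0) found_multiple_of_4 = true;
            if (n <= 6)
                std::cout << "n = " << n << ":  n^2 mod 4 = " << sq_mod
                          << ",  (n^2 - 2) mod 4 = " << expr_mod << "\n";
        }
        std::cout << (found_multiple_of_4
                          ? "some n^2 - 2 was divisible by 4 (unexpected)"
                          : "no n in [0, 1000] makes n^2 - 2 divisible by 4")
                  << "\n";
        return 0;
    }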
For odd $n$, $n^2-2$ is odd.
For even $n$, $n^2$ is divisible by $4$, so that $n^2-2$ is not.
Suppose, for contradiction, that $n^2-2$ is a multiple of four. Let $n$ be odd. Then $n^2-2$ is both odd and a multiple of four, which is impossible.
Let $n$ be even. Then $n^2-2$ and $n^2$ are both multiples of $4$, so that their difference $2$ is a multiple of $4$, which is again impossible.
• This is not a proof by contradiction though – vrugtehagel Mar 13 '17 at 16:24
• @vrugtehagel why do you need a proof by contradiction? – mez Mar 13 '17 at 16:26
• @mez, that's what the OP is asking. – vrugtehagel Mar 13 '17 at 16:27
• @BrianTung obviously, you can transform it into the standard not-really-proof-by-contradiction by assuming the statement is true, then proving that it's false directly, and then stating that the assumption was false. Don't get me wrong; I like the elegance and simplicity of this proof, I just don't think that it's what the OP was looking for. – vrugtehagel Mar 13 '17 at 16:32
• The question doesn't clearly state that proof by contradiction IS required, just that he did try this as a technique – Cato Mar 13 '17 at 16:39
Assume $4 \mid n^2-2$ for some $n \in \mathbb{Z}$. This means we can rewrite $n^2-2$ as $4k$, which implies $n^2=2(2k+1)$. This is impossible: it says $2 \mid n^2$ but $4 \nmid n^2$, while for a perfect square $2 \mid n^2$ forces $2 \mid n$ and hence $4 \mid n^2$. So our initial assumption is wrong.
Another idea of proof goes as follows:
First of all, we need to know that the square of any integer is either of the form $4k$ or $4k+1$.
Suppose, to the contrary, that 4 divides $n^2-2$, then: $$4k = n^2-2$$ $$4k+2 = n^2$$ A contradiction. Thus $n^2-2$ cannot be divisible by 4. | 2019-05-19T14:24:12 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2184937/proof-n2-2-is-not-divisible-by-4",
"openwebmath_score": 0.8843677639961243,
"openwebmath_perplexity": 124.80277069353531,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9740426450627306,
"lm_q2_score": 0.8670357666736772,
"lm_q1q2_score": 0.8445298115348211
} |
http://math.stackexchange.com/questions/567705/two-inner-products-being-equal-up-to-a-scalar | # Two inner products being equal up to a scalar
I would appreciate a hint on the following problem:
Let $V$ be a finite dimensional vector space over $F$. There are two scalar products such that: $$\forall \ w,v \in V \ \Big(\langle v,w\rangle_1=0 \implies \langle v,w\rangle_2=0\Big)$$ Show that $$\exists \ c \in F \ \ \forall \ w,v \in V \ \Big(\langle v,w\rangle_1=c \langle v,w\rangle_2\Big)$$
I have tried to define an orthonormal basis with respect to $\langle \cdot,\cdot\rangle_1$ hoping that the transformation matrices would be different by a constant, yet it seems to lead nowhere.
Cannot comment, so posting as answer. Try looking at just the algebraic properties of scalar product. Is it bilinear? Why? If so, what does this give you? – Ncat Nov 15 '13 at 4:04
I suppose $F=\Bbb R$ or $F=\Bbb C$. If so, I suppose this definition applies; if not, you might care to state what definition of a scalar product is used. – Marc van Leeuwen Nov 15 '13 at 13:13
Yes, I meant the definition. Sorry, I wasn't even aware that it isn't the only definition on the inner product. – Leo Nov 15 '13 at 15:04
It is not so much that there are competing definitions of inner product, but an unspecified field $F$ usually means there is a lot more to choose from then $\Bbb R$ and $\Bbb C$; this raises the question what you mean by "scalar product" in other cases. If $\Bbb R$ and $\Bbb C$ are the only fields considered, it would be better to say that explicitly in the question. – Marc van Leeuwen Nov 15 '13 at 16:22
Since you are in finite dimension, you can do this by induction on the dimension.
In dimension${}\leq1$ the inner product structures are easy enough to classify as a standard inner product multiplied by some real $c>0$, giving the result. In dimension${}>1$ you can fix any nonzero vector$~v$, and induction will give you the result for its orthogonal complement$~H$, which is the same hyperplane for both inner product structures by hypotheses; in particular it will give you a (positive real) constant$~c$ valid on$~H$. Choosing a vector $h\in H$ with $\langle h,h\rangle_1=\langle v,v\rangle_1$ (which is easily found), one has $$\langle h+v,h-v\rangle_1=\langle h,h\rangle_1-\langle v,v\rangle_1=0$$ so using the hypothesis also $$0= \langle h+v,h-v\rangle_2 = \langle h,h\rangle_2-\langle v,v\rangle_2$$ and therefore $\langle v,v\rangle_2=\langle h,h\rangle_2=c\langle h,h\rangle_1=c\langle v,v\rangle_1$. Now complete by sesquilinearity.
By the way, note that $\langle x+y,x-y\rangle=\langle x,x\rangle-\langle y,y\rangle$ does not hold in general for complex (so conjugate-symmetric) inner products, but it holds for $x=h$ and $y=v$ since $h\perp v$.
Let $\mathcal{B} := \{ v_1, v_2, \dots, v_n \}$ be a $1$-orthonormal basis for $V$, i.e., $\mathcal{B}$ is orthonormal with respect to $\langle \cdot, \cdot \rangle_1$. Obviously, it is $2$-orthogonal (but not necessarily $2$-normalized).
Now, let us observe two vectors
$$x := \sum_{i=1}^n \alpha_i v_i, \quad y := \sum_{i=1}^n \beta_i v_i.$$
We compute $\langle x, y \rangle_1$ and $\langle x, y \rangle_2$:
\begin{align*} \langle x, y \rangle_1 &= \left\langle \sum_{i=1}^n \alpha_i v_i, \sum_{i=1}^n \beta_i v_i \right\rangle_1 = \sum_{i=1}^n \alpha_i \beta_i \langle v_i, v_i \rangle_1 = \sum_{i=1}^n \alpha_i \beta_i, \\ \langle x, y \rangle_2 &= \left\langle \sum_{i=1}^n \alpha_i v_i, \sum_{i=1}^n \beta_i v_i \right\rangle_2 = \sum_{i=1}^n \alpha_i \beta_i \langle v_i, v_i \rangle_2. \end{align*}
Let us now fix $u := \sum_{i=1}^n v_i$ and let us consider the vectors
$$w_{ij} := v_i - v_j = \sum_{k=1}^n \beta^{(ij)}_k v_k, \quad \text{for i < j},$$
where
$$\beta^{(ij)}_k = \begin{cases} 1, & k = i, \\ -1, & k = j, \\ 0, & \text{otherwise}. \end{cases}$$
Obviously, for this choice of $u$, we have $\alpha_k = 1$ for all $k$. Notice that $\langle u, w_{ij} \rangle_1 = 1 - 1 = 0$ for all $i < j$. Therefore,
$$\langle u, w_{ij} \rangle_2 = 0$$
for all $i < j$. Let us expand this last one:
$$0 = \langle u, w_{ij} \rangle_2 = \sum_{k=1}^n \alpha_k \beta^{(ij)}_k \langle v_k, v_k \rangle_2 = \langle v_i, v_i \rangle_2 - \langle v_j, v_j \rangle_2.$$
In other words, for all $i,j$ (such that $i < j$, but that's irrelevant), we have
$$\langle v_i, v_i \rangle_2 = \langle v_j, v_j \rangle_2,$$
which means that $c := \langle v_k, v_k \rangle_2$ is a well defined constant, invariant of the choice of $k \in \{1,2,\dots,n\}$. Therefore,
$$\langle x, y \rangle_2 = \sum_{i=1}^n \alpha_i \beta_i \langle v_i, v_i \rangle_2 = \sum_{i=1}^n \alpha_i \beta_i c = c \sum_{i=1}^n \alpha_i \beta_i = c \langle x, y \rangle_1.$$
Presumably $F=\mathbb{R}$ or $\mathbb{C}$. For every nonzero vector $v\in V$, define $c_v=\dfrac{\langle v,v\rangle_2}{\langle v,v\rangle_1}$. Now, for any $v,w\in V$, let $x=w-\dfrac{\langle w,v\rangle_1}{\langle v,v\rangle_1}v.\$ Then $\langle x,v\rangle_1=0$ (here we adopt the convention that the inner product is linear in the first argument). Hence $0=\langle x,v\rangle_2$ and in turn $$\langle w,v\rangle_2\equiv c_v\langle w,v\rangle_1\tag{1}$$ for all nonzero $v,w\in V$. Therefore $$c_v\langle w,v\rangle_1=\langle w,v\rangle_2=\overline{\langle v,w\rangle_2}=\overline{c_w\langle v,w\rangle_1}=c_w\langle w,v\rangle_1\quad\forall v,w\neq0.\tag{2}$$ Now, for any $v,w\neq0$, there exists some $y\in\{w+tv:t\in\mathbb{R}\}$ such that $\langle w,y\rangle_1,\ \langle y,v\rangle_1\ne0$. By $(2)$, we have $c_v\langle y,v\rangle_1=c_y\langle y,v\rangle_1$ and $c_y\langle w,y\rangle_1=c_w\langle w,y\rangle_1$. Hence $c_v=c_y=c_w$ for all $v,w\neq0$, i.e. all the $c_v$s are equal to some common constant $c>0$. So, the result follows from $(1)$.
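As a small numerical illustration of how restrictive the hypothesis is (my own sketch; the Gram matrices and vectors are arbitrary choices), take two inner products on $\mathbb{R}^2$ that are not proportional, say the ones given by the Gram matrices $I$ and $\mathrm{diag}(1,2)$. Then one can exhibit vectors that are orthogonal for the first product but not for the second, so such a pair of inner products violates the hypothesis of the problem, exactly as the theorem predicts.

    #include <cstdio>

    // Inner product on R^2 given by a diagonal Gram matrix diag(g[0], g[1]).
    static double ip(const double g[2], const double v[2], const double w[2]) {
        return g[0] * v[0] * w[0] + g[1] * v[1] * w[1];
    }

    int main() {
        const double g1[2] = { 1.0, 1.0 }; // <.,.>_1 : the standard inner product
        const double g2[2] = { 1.0, 2.0 }; // <.,.>_2 : not a scalar multiple of <.,.>_1

        const double v[2] = { 1.0,  1.0 };
        const double w[2] = { 1.0, -1.0 }; // orthogonal to v with respect to <.,.>_1

        std::printf("<v,w>_1 = %g\n", ip(g1, v, w)); // prints 0
        std::printf("<v,w>_2 = %g\n", ip(g2, v, w)); // prints -1: orthogonality not preserved
        return 0;
    }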
- | 2016-05-30T20:55:25 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/567705/two-inner-products-being-equal-up-to-a-scalar",
"openwebmath_score": 0.9873944520950317,
"openwebmath_perplexity": 170.77315688850598,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9740426458162398,
"lm_q2_score": 0.8670357615200475,
"lm_q1q2_score": 0.8445298071682853
} |
https://math.stackexchange.com/questions/1689682/prove-that-int-04-frac-ln-x-sqrt4x-x2dx-0-without-trigonometric-su | # Prove that $\int_0^4 \frac{\ln x}{\sqrt{4x-x^2}}~dx=0$ (without trigonometric substitution)
The integral is from P. Nahin's "Inside Interesting Integrals...", problem C2.1. His proposed solution includes trigonometric substitution and the use of log-sine integral.
However, I think the problem should have an easier solution (without appealing to another complicated integral at least).
I have the following trick in mind. Let's introduce the substitution $x=4-z$
$$I=\int_0^4 \frac{\ln x}{\sqrt{4x-x^2}}~dx=\int_0^4 \frac{\ln (4-z)}{\sqrt{4z-z^2}}~d(4-z)=\int_0^4 \frac{\ln (4-z)}{\sqrt{4z-z^2}}~dz$$
$$2I=\int_0^4 \frac{\ln (4x-x^2)}{\sqrt{4x-x^2}}~dx$$
$$I=\int_0^4 \frac{\ln \sqrt{4x-x^2}}{\sqrt{4x-x^2}}~dx$$
And here I'm stuck. I'm not sure if this can go somewhere. Maybe partial integration can help, but I don't know how to choose the functions. What do you think?
Only one answer does not use trigonometric substitution; it uses the gamma function instead. If there are no other ways, I'm prepared to give up on my question. But I would be grateful if it's left open at least for several days.
Edit
After many attempts, I conclude that there is no trick to this integral. The reason is: the general form of this integral in not zero, but has the same symmetry properties, as the above case:
$$I(a)=\int_0^a \frac{\ln x}{\sqrt{ax-x^2}}~dx=\int_0^a \frac{\ln (a-x)}{\sqrt{ax-x^2}}~dx=\int_0^a \frac{\ln \sqrt{ax-x^2}}{\sqrt{ax-x^2}}~dx \neq 0$$
$$I(4)=0$$
So we will get nothing from symmetry considerations alone. There are two possible ways to solve this: either trigonometric substitution or the gamma function.
Edit 2
I was wrong it seems, see the accepted answer.
• What makes you think there's an easier solution ? – Yves Daoust Mar 9 '16 at 7:45
• @YvesDaoust I want it to exist – Yuriy S Mar 9 '16 at 7:46
• See, WolframAlpha gives a very hard solution; are you sure? – Archis Welankar Mar 9 '16 at 7:51
Notice that by the substitution $x = 2 + u$,
$$I = \int_{-2}^{2} \frac{\log(2 + u)}{\sqrt{4 - u^2}} \, du = \int_{0}^{2} \frac{\log(4 - u^2)}{\sqrt{4 - u^2}} \, du.$$
On the other hand, by the substitution $x = 4 - v^2$ (or equivalently $v = \sqrt{4 - x}$), we have
$$I = \int_{0}^{2} \frac{\log(4 - v^2)}{v \sqrt{4 - v^2}} \cdot 2v \, dv = 2 \int_{0}^{2} \frac{\log(4 - v^2)}{\sqrt{4 - v^2}} \, dv.$$
Comparing the two formulas gives $I = 2I$ and therefore $I = 0$.
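For readers who want a numerical sanity check of $I=0$ (my own sketch, independent of the argument above): after the substitution $x = 2 + 2\sin t$ the integrand becomes $\ln(2+2\sin t)$ on $[-\pi/2,\pi/2]$, and a plain midpoint rule then converges well despite the logarithmic endpoint behaviour. The substitution is used here only to tame the numerics, not as part of any proof.

    #include <cmath>
    #include <cstdio>

    int main() {
        // I = integral from 0 to 4 of ln(x)/sqrt(4x - x^2) dx.
        // With x = 2 + 2 sin(t): dx = 2 cos(t) dt and sqrt(4x - x^2) = 2 cos(t),
        // so I = integral from -pi/2 to pi/2 of ln(2 + 2 sin(t)) dt.
        const double pi = 3.14159265358979323846;
        const int n = 2000000;              // number of midpoint-rule subintervals
        const double a = -pi / 2, b = pi / 2;
        const double h = (b - a) / n;

        double sum = 0.0;
        for (int i = 0; i < n; ++i) {
            double t = a + (i + 0.5) * h;   // midpoint of the i-th subinterval
            sum += std::log(2.0 + 2.0 * std::sin(t));
        }
        std::printf("I is approximately %.6f (expected 0)\n", sum * h);
        return 0;
    }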
• Very nice. thank you. – FDP Mar 10 '16 at 3:15
• I gave up too soon. Thank you for this. I guess sometimes it's enough to believe there is a simple solution – Yuriy S Mar 10 '16 at 8:04
• Daaammmmmnnnnn that was nice! – clathratus Oct 5 '18 at 21:33
$\displaystyle K=\int_0^4 \dfrac{\ln (4x-x^2)}{\sqrt{4x-x^2}}~dx$
$\displaystyle K=\int_0^4 \dfrac{\ln (4-x)}{\sqrt{4x-x^2}}~dx+\int_0^4 \dfrac{\ln x}{\sqrt{4x-x^2}}~dx$
By the change of variable $y=4-x$, it's readily seen that the two preceding integrals are equal.
$\displaystyle K=\int_0^2 \dfrac{\ln (4x-x^2)}{\sqrt{4x-x^2}}~dx+\int_2^4 \dfrac{\ln (4x-x^2)}{\sqrt{4x-x^2}}~dx$
Perform the change of variable $y=\sqrt{4x-x^2}$ in both preceding integrals,
$\displaystyle K=2\int_0^2 \dfrac{\ln x}{\sqrt{4-x^2}}dx+2\int_0^2 \dfrac{\ln x}{\sqrt{4-x^2}}dx=4\int_0^2 \dfrac{\ln x}{\sqrt{4-x^2}}dx$
Thus,
$\displaystyle K=2\int_0^2 \dfrac{\ln x}{\sqrt{1-\left(\tfrac{x}{2}\right)^2}}dx$
Perform the change of variable $y=\dfrac{x}{2}$,
$\displaystyle K=4\int_0^1 \dfrac{\ln(2x)}{\sqrt{1-x^2}}dx$
$\displaystyle K=4\int_0^1 \dfrac{\ln 2}{\sqrt{1-x^2}}dx+4\int_0^1 \dfrac{\ln x}{\sqrt{1-x^2}}dx$
Perform the change of variable $x=\sin y$ in both integrals,
$\displaystyle K=4\ln(2) \int_0^{\tfrac{\pi}{2}} \dfrac{\cos y}{\sqrt{1-(\sin y)^2}}dy+4\int_0^{\tfrac{\pi}{2}} \dfrac{\ln (\sin y)\cos y}{\sqrt{1-(\sin y)^2}}dy=2\pi\ln 2+4\int_0^{\tfrac{\pi}{2}} \ln (\sin y)dy$
It is well known that $\displaystyle \int_0^{\tfrac{\pi}{2}} \ln (\sin y)dy=-\dfrac{\pi \log 2}{2}$
Thus, $K=0.$
Finally we get:
$\displaystyle \int_0^4 \dfrac{\ln (4-x)}{\sqrt{4x-x^2}}~dx=\int_0^4 \dfrac{\ln x}{\sqrt{4x-x^2}}~dx=0$
PS: To be compliant with the question.
$\displaystyle \int_0^1 \dfrac{1}{\sqrt{1-x^2}}dx=\dfrac{1}{2}\int_0^1 x^{\tfrac{1}{2}-1}(1-x)^{\tfrac{1}{2}-1}dx=\dfrac{\Gamma\left(\tfrac{1}{2}\right)^2}{\Gamma(1)}=\dfrac{\pi}{2}$
$\displaystyle \int_0^1 \dfrac{\ln x}{\sqrt{1-x^2}}dx=\dfrac{1}{4}\dfrac{\partial}{\partial s}\left[\int_0^1 x^s(1-x)^{\tfrac{1}{2}-1}dx\right]_{s=-\tfrac{1}{2}}=\dfrac{1}{4}\dfrac{\partial}{\partial s}\left[\dfrac{\Gamma(s+1)\Gamma\left(\tfrac{1}{2}\right)}{\Gamma\left(s+1+\tfrac{1}{2}\right)}\right]_{s=-\tfrac{1}{2}}=-\dfrac{\pi\ln2}{2}$
• Your first link contains already a solution to your question the way you want. ( Felix Marin (math.stackexchange.com/users/85343/felix-marin), Evaluate $\int_0^4 \frac{\ln x}{\sqrt{4x-x^2}} \,\mathrm dx$, URL (version: 2015-01-19): math.stackexchange.com/q/1078278 ) – FDP Mar 9 '16 at 23:42
• I know. At first I wanted there to be a simple solution, now I know that there is no such thing. Thank you for the answer anyway – Yuriy S Mar 10 '16 at 0:05
sketch: put $\sqrt{4x-x^2}=\sqrt{x(4-x)}=x\,r(t)$, with $r(t)$ to be determined. On squaring we get $4-x=x\,r^2(t)$, so $x$ is a rational function of $t$ when $r(t)=\sqrt{t}$. This way we have got rid of the radical. The rest can be dealt with through integration by parts.
• $\int_0^4=\int_0^2+\int_2^4$ and use in both integrals the change of variable $y=\sqrt{4x-x^2}$. The result seems more doable. – FDP Mar 9 '16 at 19:07
I want to provide a generalization of @SangchulLee's method for the integral:
$$I(a)=\int_0^a \frac{\ln x}{\sqrt{ax-x^2}}~dx=\int_0^a \frac{\ln (a-x)}{\sqrt{ax-x^2}}~dx=\int_0^a \frac{\ln \sqrt{ax-x^2}}{\sqrt{ax-x^2}}~dx$$
Let's make a change of variable:
$$x=\frac{a}{2}+u~~~~~~~a-x=\frac{a}{2}-u$$
$$I(a)=\frac{1}{2} \int_{-a/2}^{a/2} \frac{\ln (\frac{a^2}{4}-u^2)}{\sqrt{\frac{a^2}{4}-u^2}}~du=\int_{0}^{a/2} \frac{\ln (\frac{a^2}{4}-u^2)}{\sqrt{\frac{a^2}{4}-u^2}}~du$$
Let's make another change of variable:
$$x=a-v^2~~~~~~~a-x=v^2$$
$$I(a)=2 \int_{0}^{\sqrt{a}} \frac{\ln (a-v^2)}{\sqrt{a-v^2}}~dv$$
$$I(4)=2 I(4),~~~~~~I(4)=0$$
However, we can make even more general conclusion. Let's denote $b=a/2$ and:
$$J(b)=\int_{0}^{b} \frac{\ln (b^2-t^2)}{\sqrt{b^2-t^2}}~dt$$
$$J(b)=2 J(\sqrt{2 b})$$
I already know the answer of course, but it is possible to guess:
$$J(b)=C_1 \ln (C_2 b)$$
$$C_1 \ln (C_2 b)=2 C_1 \ln (C_2 \sqrt{2 b})$$
$$C_1 \ln (C_2)+C_1 \ln (b)=2 C_1 \ln (C_2)+C_1 \ln (2)+C_1 \ln (b)$$
$$C_2=\frac{1}{2}$$
$$J(b)=C_1 \ln \left( \frac{b}{2} \right)$$
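Before pinning the constant down analytically, here is a quick numerical check (a sketch assuming SciPy) that the guessed form is right, i.e. that $J(b)/\ln(b/2)$ does not depend on $b$:

```python
# J(b) = \int_0^b ln(b^2 - t^2)/sqrt(b^2 - t^2) dt; check that J(b)/ln(b/2) is constant.
import numpy as np
from scipy.integrate import quad

def J(b):
    f = lambda t: np.log(b * b - t * t) / np.sqrt(b * b - t * t)
    val, _ = quad(f, 0.0, b, limit=200)   # quad copes with the integrable endpoint singularity
    return val

for b in (0.5, 1.0, 3.0, 5.0):
    print(b, J(b), J(b) / np.log(b / 2.0))   # the ratio is the same constant C1 for every b
print(J(2.0))                                # ~0, consistent with I(4) = 0 above
```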
This is correct, and we only need to find one constant to complete the solution, which of course requires solving the integral the 'honest' way. | 2019-10-19T11:48:25 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1689682/prove-that-int-04-frac-ln-x-sqrt4x-x2dx-0-without-trigonometric-su",
"openwebmath_score": 0.9392489790916443,
"openwebmath_perplexity": 488.4971907496598,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9740426435557124,
"lm_q2_score": 0.8670357598021707,
"lm_q1q2_score": 0.844529803535042
} |
https://www.tutorialspoint.com/cplusplus-program-to-display-armstrong-number-between-two-intervals | # C++ Program to Display Armstrong Number Between Two Intervals
C++ProgrammingServer Side Programming
An Armstrong Number is a number where the sum of its digits, each raised to the power of the total number of digits, is equal to the number itself.
Some examples of Armstrong numbers are as follows −
3 = 3^1
153 = 1^3 + 5^3 + 3^3 = 1 + 125 + 27 = 153
407 = 4^3 + 0^3 + 7^3 = 64 +0 + 343 = 407
1634 = 1^4 + 6^4 + 3^4 + 4^4 = 1 + 1296 + 81 + 256 = 1634
A program that displays the Armstrong numbers between two intervals is as follows.
## Example
#include <iostream>
#include <cmath>
using namespace std;
int main() {
int lowerbound, upperbound, digitSum, temp, remainderNum, digitNum ;
lowerbound = 100;
upperbound = 500;
cout<<"Armstrong Numbers between "<<lowerbound<<" and "<<upperbound<<" are: ";
for(int num = lowerbound; num <= upperbound; num++) {
temp = num;
digitNum = 0;
while (temp != 0) {
digitNum++;
temp = temp/10;
}
temp = num;
digitSum = 0;
while (temp != 0) {
remainderNum = temp%10;
digitSum = digitSum + pow(remainderNum, digitNum);
temp = temp/10;
}
if (num == digitSum)
cout<<num<<" ";
}
return 0;
}
## Output
Armstrong Numbers between 100 and 500 are: 153 370 371 407
In the above program, Armstrong numbers between the given intervals are found. This is done using multiple steps. The lowerbound and upperbound of the interval are given. Using these, a for loop is started from lowerbound to upperbound and each number is evaluated to see if it is an Armstrong number or not.
This can be seen in the following code snippet.
lowerbound = 100;
upperbound = 500;
cout<<"Armstrong Numbers between "<<lowerbound<<" and "<<upperbound<<" are: ";
for(int num = lowerbound; num <= upperbound; num++)
In the for loop, first the number of digits in the number i.e in num are found. This is done by adding one to digitNum for each digit.
This is demonstrated by the following code snippet.
temp = num;
digitNum = 0;
while (temp != 0) {
digitNum++;
temp = temp/10;
}
After the number of digits is known, digitSum is calculated by adding each digit raised to the power of digitNum, i.e., the number of digits.
This can be seen in the following code snippet.
temp = num;
digitSum = 0;
while (temp != 0) {
remainderNum = temp%10;
digitSum = digitSum + pow(remainderNum, digitNum);
temp = temp/10;
}
If the number is equal to the digitSum, then that number is an Armstrong number and it is printed. If not, then it is not an Armstrong number. This is seen in the below code snippet.
if (num == digitSum)
cout<<num<<" ";
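For readers who want to cross-check the algorithm outside C++, here is a rough Python equivalent (a sketch, not part of the original tutorial):

```python
# Print Armstrong numbers between two bounds, mirroring the C++ program above.
def is_armstrong(num):
    digits = str(num)
    power = len(digits)                        # total number of digits
    digit_sum = sum(int(d) ** power for d in digits)
    return digit_sum == num

lowerbound, upperbound = 100, 500
armstrong = [n for n in range(lowerbound, upperbound + 1) if is_armstrong(n)]
print("Armstrong Numbers between", lowerbound, "and", upperbound, "are:", *armstrong)
# Output: Armstrong Numbers between 100 and 500 are: 153 370 371 407
```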
Published on 27-Sep-2018 11:48:24 | 2022-01-29T14:04:48 | {
"domain": "tutorialspoint.com",
"url": "https://www.tutorialspoint.com/cplusplus-program-to-display-armstrong-number-between-two-intervals",
"openwebmath_score": 0.2000410258769989,
"openwebmath_perplexity": 1932.8771551960274,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9740426375276383,
"lm_q2_score": 0.8670357460591569,
"lm_q1q2_score": 0.8445297849222049
} |
http://math.stackexchange.com/questions/281531/two-different-solutions-to-integral/281540 | # Two different solutions to integral
Given the very simple integral
$$\int -\frac{1}{2x} dx$$
The obvious solution is
$$\int -\frac{1}{2x} dx = -\frac{1}{2} \int \frac{1}{x} dx = -\frac{1}{2} \ln{|x|} + C$$
However, by the following integration rule $$\int \frac{1}{ax + b} dx = \frac{1}{a} \ln{|ax + b|} + C$$
the following solution is obtained $$\int -\frac{1}{2x} dx = -\frac{1}{2}\ln{|-2x|} + C$$
Why are these solutions different? Which is correct?
The second solution can be simplified $$-\frac{1}{2}\ln{|-2x|} + C = -\frac{1}{2}\ln{|-2|} -\frac{1}{2}\ln{|x|} + C= -\ln{\frac{1}{\sqrt{2}}} - \frac{1}{2}\ln{|x|} + C$$ but they still differ.
Hint: $\ln(xy)=\ln x+ \ln y$ – Hanul Jeon Jan 18 '13 at 17:52
$|a|=|-a|$ for any $a\in C$ – lab bhattacharjee Jan 18 '13 at 17:53
They are both correct: they differ only in terms of their constants of integration.
\begin{align} -\frac{1}{2}\ln|-2x| + \color{red}{C} &= -\frac{1}{2}(\ln 2 + \ln |x|) + \color{red}{ C} \\&= -\frac{1}{2}\ln |x| + \color{red}{(C - \frac{1}{2} \ln 2)}\\ &= -\frac{1}{2}\ln |x| + \color{red}{K} \end{align}
$C \neq K\;$ but $\;C, K\;$ are constants nonetheless!
TIP: One can always check two apparently different solutions to an integral by differentiating each of them; if the respective derivatives are equal to the original integrand, then you can conclude that the two apparently different solutions are, in fact, solutions that differ only in their constants of integration.
EDIT: You were almost there (in obtaining the adjusted constant of integration), but : $$-\frac{1}{2}\ln{|-2|} + C\; \ne \;-\ln{\frac{1}{\sqrt{2}}} + C$$ Rather, since $\;|-2| \;= \;2,\;$ we have $$-\frac{1}{2}\ln{|-2|}+C \; =\; -\frac{1}{2}\ln 2 + C\;=\; +\ln{\frac{1}{\sqrt{2}}} + C = K$$
The important thing to note is that all of $K$ is a constant term.
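Following that tip, here is a quick symbolic check (a sketch assuming SymPy) that both antiderivatives differentiate back to $-\frac{1}{2x}$ and differ only by a constant:

```python
# Differentiate both candidate antiderivatives of -1/(2x) and compare.
import sympy as sp

x = sp.symbols('x', positive=True)   # restrict to x > 0 so that |x| = x and |-2x| = 2x
F1 = -sp.log(x) / 2                  # first solution (up to a constant)
F2 = -sp.log(2 * x) / 2              # second solution (up to a constant)

print(sp.simplify(sp.diff(F1, x)))   # -1/(2*x)
print(sp.simplify(sp.diff(F2, x)))   # -1/(2*x)
print(sp.expand_log(F2 - F1))        # -log(2)/2, i.e. the two answers differ by a constant
```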
Glad to see you here online , amWhy. ;-) – Babak S. Jan 18 '13 at 18:10
Wooow! You made it colored. ;-) – Babak S. Jan 19 '13 at 17:18
They are both correct!
$$-\frac{1}{2}\ln|-2x| + C = -\frac{1}{2}(\ln 2 + \ln |x|) + C = -\frac{1}{2}\ln |x| + (C - \frac{1}{2} \ln 2)$$
The constant of integration is what "differs" here.
Both of the results are OK. Note that there is no need for the constant $C$ in the first result and the constant $C$ in the second one to be the same. Here, we have $C_1=C_2-0.5\ln 2$, where $C_1$ is the constant in the first result and $C_2$ the constant in the second.
+1 Nice to see you too, BabaK! – amWhy Jan 18 '13 at 18:12 | 2016-05-25T05:26:04 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/281531/two-different-solutions-to-integral/281540",
"openwebmath_score": 0.9920728802680969,
"openwebmath_perplexity": 975.3672612286377,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9719924793940119,
"lm_q2_score": 0.8688267847293731,
"lm_q1q2_score": 0.8444931006530308
} |
https://math.stackexchange.com/questions/735448/1992-imo-functional-equation-problem | # $1992$ IMO Functional Equation problem
The problem states:
Let $\Bbb R$ denote the set of all real numbers. Find all functions $f : \Bbb R \rightarrow \Bbb R$ such that $$f(x^{2}+f(y))=y+(f(x))^{2} \space \space \space \forall x, y \in \Bbb R.$$
My progress:
1. If we substitute $x=y=0$ in the given equation, then we get $$f(f(0))=(f(0))^{2}.$$ 2. We then substitute $x=0$ in the given equation and find out that $\forall y \in \Bbb R$, $$f(f(y))=y+(f(0))^{2}.$$ 3. Now, we observe that, for all $x,y \in \Bbb R$, $$y+(f(x))^{2}=f(x^{2}+f(y))=f((-x)^{2}+f(y))=y+(f(-x))^{2}.$$ Hence $\forall x \in \Bbb R,$ $$f(-x)=f(x) \space \space or \space \space f(-x)=-f(x).$$ 4. But, if for some $x \in \Bbb R$, $f(-x)=f(x)$, then we would have, $$x+(f(y))^{2}=f(y^{2}+f(x))=f(y^{2}+f(-x))=-x+(f(y))^{2}$$ for any $y \in \Bbb R$ implying $x=0$. So, for any $x \neq 0$, $f(-x)=-f(x).$
5. Now if there exists any $y \neq 0$ such that $f(y) \neq 0$ then, $$y+(f(0))^{2}=f(f(y))=f(-f(-y))=-f(f(-y))=y-(f(0))^{2}$$ which implies $f(0)=0$. So, if $f(0) \neq 0$, then $f(x)=0$ whenever $x \neq 0$. But then $f(0) \neq 0$ would imply $(f(0))^{2}=f(f(0))=0$ which cannot happen and hence $f(0)=0$.
6. So summing up all that we have got so far, we see that $f$ has the following properties:
(i)$f$ is an odd function.
(ii)$f(x^{2})=(f(x))^{2}.$
(iii)$f(f(x))=x.$
This is where I am stuck. I have observed that using the above mentioned three properties, we can write the given equation as $$f(x^{2}+f(y))=f(x^{2})+f(f(y))$$ which is almost $f(a+b)=f(a)+f(b)$, but that is of no help. Any hints would be welcome.
• Added (reference-request) in case the question is whether there exists a solution using the stated properties. – zyx Apr 1 '14 at 18:07
Well, properties (i)-(iii) are a good start. The only other things you could need are:
(iv) $f(x)> 0$ for $x>0$.
(v) $f$ is increasing.
These are enough to completely determine the functions satisfying the given condition. More details follow below the fold.
Note that $f(x)\neq 0$ if $x\neq 0$. Indeed, if $f(x)=0$, then $$0=f(0)=f(f(x))=x.$$ Furthermore, if $x>0$, then $f(x)>0$. Indeed, $f(x)=f(\sqrt{x}^2)=f(\sqrt{x})^2>0$.
Then in fact, $f$ is increasing. Since $f$ is odd, it suffices to show it is increasing on $(0,\infty)$. Well, if $x>y>0$, $$f(x)-f(y)=f(\sqrt{x}^2)+f(-y)=f(\sqrt x)^2+f(-y)=f(x+f(f(-y)))=f(x-y)>0.$$
But then, if $f(x) >x$, applying the increasing function $f$ gives $x=f(f(x))>f(x)>x$. Likewise, if $f(x)<x$, then $x=f(f(x))<f(x)<x$. As these are impossible, we must have $f(x)=x$ for all $x$.
• Are you claiming that properties (iv) and (v) hold in this case? – TonyK Apr 2 '14 at 8:34
• I am claiming that if $f$ satisfies the original functional equation, then it has the properties (iv) and (v). – Sean Clark Apr 2 '14 at 13:44
Many references, including solutions and instructional booklets, can be seen at
The first search hit for MSE is this older question, which includes an answer I forgot I had written, but can be used as a long series of hints on how to solve a very similar problem. (I found the 1992 IMO problem when looking for the source of that question, and a comment on the similarity of the two problems is what caught the attention of the search engine.)
Functions satisfying $f\left( f(x)^2+f(y) \right)=xf(x)+y$
The comments under that question and the discussion of injectivity are particularly relevant since here the same arguments show that $f$ is injective and surjective, and the same method of piling up small observations often solves these things.
• For example, surjectivity converts your last equation to $f(A + B)=f(A)+f(B)$ where $B$ is any real number and $A \geq 0$, by setting $x = \sqrt{A}$ and $y$ the solution of $f(y)=B$. From this we "know", because a solution can be determined (it is an IMO problem), that $f$ has to be linear, and $f(x)=x$ is the only such possibility. We also know that some nonlinear argument will be necessary because $f(a)+f(b)=f(a+b)$ is not enough to prove linearity without some additional assumption like continuity or monotonicity (which I think can be proved here). – zyx Apr 1 '14 at 19:14
This is really a comment to Sabyasachi's answer (update: now deleted), but I am posting it as an answer, because it is too long for a comment.
Sabyasachi claims that if $f(f(x)) = x$ for all $x$ (so that the graph of $f$ is symmetrical about the line $y=x$), and if $f$ is odd, then either $f(x)=x$ for all $x$ or $f(x)=-x$ for all $x$.
This is not the case, as the following counterexample shows:
\begin{align} f(0) &= 0 \\ \\ \textrm{If } x > 0: f(x) &= x+1 \textrm{ if } \lfloor x \rfloor \textrm{ is odd} \\ &= x-1 \textrm{ if } \lfloor x \rfloor \textrm{ is even}\\ \\ \textrm{If } x < 0: f(x) &= -f(-x) \end{align}
The graph looks like lane markings on a highway (the highway $y=x$).
• will Sabyasachi's claim hold true if we add the condition that $f$ is continuous? Indeed a simpler counterexample is $f(0)=0$ and $f(x)=\frac {1}{x}$ when $x \neq 0$, which is again discontinuous at $0$. – Indrayudh Roy Apr 3 '14 at 14:13 | 2019-08-23T04:33:35 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/735448/1992-imo-functional-equation-problem",
"openwebmath_score": 0.996735155582428,
"openwebmath_perplexity": 176.66975477176018,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924777713886,
"lm_q2_score": 0.8688267796346599,
"lm_q1q2_score": 0.8444930942912293
} |
http://www.ferriterscobbo.com/misc/6o7rm4/f9476f-exponential-reliability-function | The exponential distribution is a lifetime statistical distribution with a single parameter, the failure rate $\lambda$, and it assumes a constant failure rate for the product being modeled. If a random variable X has this distribution, we write X ~ Exp(λ). Its probability density function is $$f(t)=\lambda e^{-\lambda t}$$ for $t \ge 0$, so the distribution is supported on the interval $[0,\infty)$.
Reliability follows an exponential failure law, which means that it reduces as the time duration considered for the reliability calculation elapses: the reliability function is $$R(t)=e^{-\lambda t},$$ so reliability is highest at the initial state of operation and gradually decreases over time. The cumulative hazard function for the exponential is just the integral of the failure rate, $$H(t) = \lambda t.$$ The mean life is $\theta = 1/\lambda$, also called the Mean Time To Fail (MTTF), and for repairable equipment the MTBF $= \theta = 1/\lambda$. Because the failure rate is constant, the conditional reliability for a further mission of duration $t$, having already successfully accumulated $T$ hours of operation, is again $e^{-\lambda t}$.
The exponential distribution is a simple distribution, commonly used to model reliability data, in particular for electronic systems that do not typically experience wearout type failures. (The page also quotes the abstract of a paper on an efficient two-stage shrinkage testimator of the reliability function of the exponential distribution, which uses additional information obtained from past practice.) | 2022-12-06T10:32:34 | {
"domain": "ferriterscobbo.com",
"url": "http://www.ferriterscobbo.com/misc/6o7rm4/f9476f-exponential-reliability-function",
"openwebmath_score": 0.7518762350082397,
"openwebmath_perplexity": 1278.3672240253325,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924769600771,
"lm_q2_score": 0.8688267796346598,
"lm_q1q2_score": 0.8444930935863401
} |
https://mathhelpboards.com/threads/logarithm.3159/ | # logarithm
#### Lisa91
##### New member
How to prove that $$n^{\alpha} > \ln(n)$$ for $$\alpha>0$$?
#### MarkFL
Staff member
How are the variables defined?
For instance, if $\alpha\in\mathbb{R}$ and $n\in\mathbb{N}$, then one counter-example to your inequality occurs for:
$n=3,\,\alpha=0.01$
This leads me to believe there in some missing information.
#### ZaidAlyafey
##### Well-known member
MHB Math Helper
We want to determine when $n^{\alpha}> \ln(n)$ holds.
I will separate the problem in two steps :
1- For $0<n<1$ this is trivial, since the right-hand side $\ln(n)$ is always negative
while the left-hand side $n^{\alpha}$ is always positive; for $n=1$ the inequality also holds, since $1>0$.
2-Now for n>1 :
${\alpha}\ln(n)> \ln(\ln(n)) \Rightarrow \,\, \alpha> \frac{\ln(\ln(n))}{\ln(n) }$
Now this holds for every $\alpha>0$ iff $\frac{\ln(\ln(n))}{\ln(n) } \leq 0$
which holds iff $0<\ln(n)\leq 1\,\, \Rightarrow \,\, 1< n \leq e$
The inequality is true for all $\alpha$ iff $0<n \leq e$
#### Klaas van Aarsen
##### MHB Seeker
Staff member
2-Now for n>1 :
${\alpha}\ln(n)> \ln(\ln(n)) \Rightarrow \,\, \alpha> \frac{\ln(\ln(n))}{\ln(n) }$
Since the right hand side approaches zero for large n, this means that for any $\alpha>0$ there is a number N such that the inequality is true for any n > N.
Hey Lisa91!
Can it be there is a condition missing from your problem?
The extra condition that it holds for any n > N for some N?
Last edited:
#### ZaidAlyafey
##### Well-known member
MHB Math Helper
Since the right hand side approaches zero for large n, this means that for any $\alpha>0$ there is a number N such that the inequality is true for any n > N.
since $\alpha$ is a variable independent of $n$, I can choose it as small as I like, so that
it becomes smaller than the right-hand side.
Can you give a counter example for $\alpha$ and n that disproves my argument ?
#### Klaas van Aarsen
##### MHB Seeker
Staff member
since $\alpha$ is an independent variable of n I can choose it as small as possible so that
it becomes lesser than the right-hand side .
Can you give a counter example for $\alpha$ and n that disproves my argument ?
It's just that you have assumed that the inequality should hold for specific n and all $\alpha$'s.
Whereas I have assumed it's not for all n.
In other words, you have solved:
Find n such that $n^\alpha > \ln n$ for all $\alpha > 0$.
Whereas I have use the first half of your argument to follow up with:
Prove that $n^\alpha > \ln n$ for $\alpha > 0$ if n is big enough given a certain $\alpha$.
That's why we need clarification on what the actual problem is.
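For what it's worth, a small numerical illustration of the two readings (my own sketch): with a small $\alpha$ the inequality fails for small $n$, as in the counter-example above, but it does hold once $n$ is large enough:

```python
# Compare n**alpha with ln(n) for a small alpha.
import math

alpha = 0.01
for n in (3, 1000, 1e100, 1e300):
    lhs, rhs = n ** alpha, math.log(n)
    print(n, lhs > rhs, lhs, rhs)

# n = 3:     3**0.01 ~ 1.011 < ln 3 ~ 1.099   -> fails (the counter-example above)
# n = 1e300: 1000.0  > ln(1e300) ~ 690.8      -> holds, and keeps holding for larger n
```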
Last edited: | 2022-07-03T18:12:14 | {
"domain": "mathhelpboards.com",
"url": "https://mathhelpboards.com/threads/logarithm.3159/",
"openwebmath_score": 0.917023777961731,
"openwebmath_perplexity": 506.0493434877316,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924802053235,
"lm_q2_score": 0.8688267762381844,
"lm_q1q2_score": 0.8444930931045485
} |
https://www.physicsforums.com/threads/is-this-the-correct-way-to-compute-the-row-echelon-form.835097/ | # Is this the correct way to compute the row echelon form?
#### kostoglotov
This is actually a pretty simple thing, but the ref(A) that I compute on paper is different from the ref(A) that my TI-89 gives me.
Compute ref(A) where A = $\begin{bmatrix} 1 & 2\\ 3 & 8 \end{bmatrix}$
$$\\ \begin{bmatrix}1 & 2\\ 3 & 8\end{bmatrix} \ r_2 \rightarrow r_2 - 3 \times r_1 \\ \\ \begin{bmatrix}1 & 2\\ 0 & 2 \end{bmatrix} \ r_2 \rightarrow \frac{1}{2} \times r_2 \\ \\ \begin{bmatrix}1 & 2\\ 0 & 1 \end{bmatrix}$$
Now I would have thought that this last matrix, A = $\begin{bmatrix} 1 & 2\\ 0 & 1 \end{bmatrix}$ would be the ref(A).
But my TI-89 gives ref(A) = $\begin{bmatrix} 1 & \frac{8}{3}\\ 0 & 1 \end{bmatrix}$ and this is not the rref(A), the rref(A) is just the 2x2 identity matrix.
#### Mark44
Mentor
This is actually a pretty simple thing, but the ref(A) that I compute on paper is different from the ref(A) that my TI-89 gives me.
Compute ref(A) where A = $\begin{bmatrix} 1 & 2\\ 3 & 8 \end{bmatrix}$
$$\\ \begin{bmatrix}1 & 2\\ 3 & 8\end{bmatrix} \ r_2 \rightarrow r_2 - 3 \times r_1 \\ \\ \begin{bmatrix}1 & 2\\ 0 & 2 \end{bmatrix} \ r_2 \rightarrow \frac{1}{2} \times r_2 \\ \\ \begin{bmatrix}1 & 2\\ 0 & 1 \end{bmatrix}$$
Now I would have thought that this last matrix, A = $\begin{bmatrix} 1 & 2\\ 0 & 1 \end{bmatrix}$ would be the ref(A).
Yes, I agree.
kostoglotov said:
But my TI-89 gives ref(A) = $\begin{bmatrix} 1 & \frac{8}{3}\\ 0 & 1 \end{bmatrix}$ and this is not the rref(A), the rref(A) is just the 2x2 identity matrix.
I'm guessing that your calculator switched the two rows, and then did row reduction. If you start with this matrix --
\begin{bmatrix}
3 & 8\\
1 & 2 \end{bmatrix}
-- row reduction gives you the matrix the calculator shows.
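Here is a small sketch (in Python, using exact fractions; it is only meant to illustrate the two orderings, not to reproduce the TI-89's internals) showing both reductions side by side:

```python
# Row-reduce a 2x2 matrix to row echelon form without any row swap.
from fractions import Fraction

def ref_2x2(m):
    # forward elimination + pivot normalization; assumes the (1,1) entry is nonzero
    a = [[Fraction(x) for x in row] for row in m]
    a[1] = [a[1][j] - (a[1][0] / a[0][0]) * a[0][j] for j in range(2)]
    a[0] = [x / a[0][0] for x in a[0]]
    if a[1][1] != 0:
        a[1] = [x / a[1][1] for x in a[1]]
    return a

print(ref_2x2([[1, 2], [3, 8]]))  # [[1, 2], [0, 1]]    -- the hand computation
print(ref_2x2([[3, 8], [1, 2]]))  # [[1, 8/3], [0, 1]]  -- rows swapped first, as the calculator seems to do
```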
| 2019-10-16T05:28:28 | {
https://mathematica.stackexchange.com/questions/104037/interpolation-function-way-off?noredirect=1 | # Interpolation function way off
I have a simple list of data points to which I would like to fit an interpolation function. However, the results given by Interpolation[] gives an oscillating function which is not warranted by the data. The data is as follows:
list = {{3.272492347489368*^-13,
3.393446032644599*^24}, {7.635815477475192*^-13,
1.1419553951933573*^25}, {1.1999138607461015*^-12,
6.25421894208354*^24}, {1.636246173744684*^-12,
6.099735099368696*^25}, {2.0725784867432662*^-12,
1.7770407891508593*^28}, {2.5089107997418487*^-12,
5.666293158766728*^30}, {2.945243112740431*^-12,
1.4673042757432113*^33}, {3.3815754257390136*^-12,
2.7847878750945815*^35}, {3.817907738737596*^-12,
3.70906787022051*^37}, {4.2542400517361785*^-12,
3.411500191373945*^39}, {4.690572364734761*^-12,
2.1558621505149318*^41}, {5.126904677733343*^-12,
9.344524755261095*^42}, {5.563236990731926*^-12,
2.771851542592801*^44}, {5.999569303730508*^-12,
5.5961200001183325*^45}, {6.435901616729091*^-12,
7.60451894430725*^46}, {6.872233929727673*^-12,
6.8161973406557206*^47}};
f = Interpolation[list];
Clearly the function should be positive and monotonic (except a small dip in the beginning) but when I try to interpolate it gives me a function that oscillates around zero.
I suspect it's because the numbers are very large and I am able to circumvent the issue by taking the log of the data and fitting to that instead so it's not a big deal. However, since Mathematica doesn't display any error messages how can I be sure that Intepolation[] gives reasonable results in general? What's going on here?
P.S. The figure is produced by (leaving out the legends)
style[scheme_, num_] :=
Table[Directive[Thick, ColorData[scheme][(i - 1)/(num - 1)]], {i, 1,
num}];
fs = Interpolation[list, Method -> "Spline"];
fh = Interpolation[list, Method -> "Hermite"];
fncplot =
LogPlot[{fs[x], -fs[x], fh[x], -fh[x]}, {x, First@First@list,
First@Last@list}, Frame -> True,
PlotStyle -> style["DarkRainbow", 4]];
dataplot =
ListLogPlot[list, Frame -> True,
PlotMarkers -> {Automatic, Small}];
Show[fncplot, dataplot]
• If you use InterpolationOrder -> 1 (instead of Spline or Hermite), then the plot looks fine. – bill s Jan 15 '16 at 12:19
This is a good example of why one should never blindly trust the numerical results of systems like Mathematica, without thinking about numerical methods that these systems use. Mathematica won't ever make numerical analysis courses obsolete.
Most interpolation methods use piecewise polynomials, and assume slowly varying smooth functions. Your data has extreme exponential variation. It won't be possible to reproduce this extreme variation with any accuracy by stitching together low order polynomials.
The correct approach, as you mention, is to interpolate the logarithm of the data (f = Interpolation[{#1, Log[#2]} & @@@ list]).
To see how things go wrong otherwise, let's look at a simple example. Let us try to approximate an exponential with a 2nd order polynomial.
a = 1;
pts = {{0, 1}, {1, Exp[-a 1]}, {2, Exp[-a 2]}};
f = Interpolation[pts, InterpolationOrder -> 2];
Plot[{Exp[-a x], f[x]}, {x, 0, 2},
Epilog -> {Red, PointSize[Large], Point[pts]}]
So far it looks good. Let us now force variations over orders of magnitude (just like your data) by increasing a to 5.
The polynomial "overshoots" and takes negative values between the 2nd and 3rd interpolation points. Actually when plotting things on a linear vertical scale, this seems to make sense. The 2nd and 3rd points are both effectively zero, at least if we care only about the differences in their values.
However, you are plotting your data on a log-scale (you care about the ratios of values, not differences). The interpolation function constructed on a linear scale is not at all "way off" if we look at it on a linear scale. It gives values very close to zero, which is fine. It looks "way off" only on a log-scale. The solution (as you mentioned) is to interpolate on a log-scale.
I don't think Mathematica should give warnings in these cases. It is after all just applying the interpolation method as specified. It is up to the user to think about whether this type of interpolation makes sense for the given scenario.
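The same effect is easy to reproduce outside Mathematica; here is a small NumPy sketch (my addition) that fits the quadratic once to the raw values and once to their logarithms:

```python
# Reproduce the overshoot: approximate exp(-5x) at x = 0, 1, 2 with a quadratic,
# once through the raw values and once through log(y).
import numpy as np

a = 5.0
x = np.array([0.0, 1.0, 2.0])
y = np.exp(-a * x)

p_lin = np.polyfit(x, y, 2)            # quadratic through the raw points
p_log = np.polyfit(x, np.log(y), 2)    # quadratic through log(y)

xs = np.linspace(0.0, 2.0, 9)
print(np.polyval(p_lin, xs))           # dips below zero between the 2nd and 3rd points
print(np.exp(np.polyval(p_log, xs)))   # strictly positive; for this data essentially exp(-5x)
```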
• +1 for the first and last paragraphs alone. (I could not give more than that, though). – Peltio Jan 14 '16 at 14:46
• Thank you for a swift and informative answer. – Philo Jan 14 '16 at 15:33
• Interpolation[pts, Method->FindFormula], on version 15.3.1, will probably work much better (for cases with a little more than three points...)! – P. Fonseca Jan 15 '16 at 12:47 | 2019-08-26T08:35:19 | {
"domain": "stackexchange.com",
"url": "https://mathematica.stackexchange.com/questions/104037/interpolation-function-way-off?noredirect=1",
"openwebmath_score": 0.25882473587989807,
"openwebmath_perplexity": 1341.2938392487301,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9362850075259039,
"lm_q2_score": 0.9019206831203063,
"lm_q1q2_score": 0.8444548135830644
} |
http://math.stackexchange.com/questions/145345/residue-at-z-infty/145726 | # Residue at $z=\infty$
I'm a bit confused at when to use the calculation of a residue at $z=\infty$ to calculate an integral of a function.
Here is the example my book uses: In the positively oriented circle $|z-2|=1$, the integral of $$\frac{5z-2}{z(z-1)}$$ yields two residues, which give a value of $10\pi i$ for the integral, using the Cauchy Residue Theorem. I've got that down.
The book later, calculates the residue at infinity, yielding the same answer... - one residue calculation!
I can't seem to find what I'm missing here... Why is it that when considering this one bound, $2 \lt |z-2|\lt\infty$, we can use the residue at infinity to find the value of the integral... yet at the same time calculate a residue in three different bounds using the Cauchy Integral Theorem and find the same integral value?
Thanks!
The integral can be computed by the sum of the residues outside of the curve; in this case, there is only the one at infinity. As you can see, the sum of all the residues is $0$ (for any function with finitely many singularities on the Riemann sphere). There are many curious identities you can derive in this way. – user8268 May 15 '12 at 6:03
Did you mean $|z-1|=2$? The circle you wrote down doesn't contain the pole at $z=0$ and goes through the pole at $z=1$. – joriki May 15 '12 at 7:10
to extend the excellent comment by @user8268: you can perform the variable substitution $u=z^{-1}$ which interchanges the outside and the inside of the contour. Then you can calculate the "usual" residue at the origin. – Fabian May 15 '12 at 11:29
Joriki, actually it's |z|=2! I grabbed the other circle from a nearby example... Thx! – Joshua May 15 '12 at 16:56
@user8268: That's interesting... is there a reason why, in a nutshell? – Joshua May 15 '12 at 16:58
When can you use the residue at $\infty$ to calculate the value of an integral $\int f$ ?
is basically given by the following result.
If $f$ is a holomorphic function in $\mathbb{C}$, except for isolated singularities at $a_1, a_2, \dots , a_n$, then $$\operatorname{Res}{(f; \infty)} = -\sum_{k = 1}^{n} \operatorname{Res}{(f; a_k)}$$
were the residue at infinity is defined as in the wikipedia article. Now, from Cauchy's residue theorem it follows that
$$\operatorname{Res}{(f; \infty)} = -\frac{1}{2 \pi i} \int_{\gamma} f(z) \, dz$$
where you can take $\gamma$ to be a circle $|z| = R$, where $R$ is large enough so that all the singularities $a_k$ are contained inside the circle. Of course you can then use the more sophisticated versions of Cauchy's integral theorem to change the curve $\gamma$, but this one suffices for simplicity.
Then the last formula gives you a way to calculate a complex integral just by calculating the residue at infinity of the function, instead of computing all the "finite" residues.
Now to answer your second question of why this is the case, maybe a sketch of a proof of this result will be enough.
So let $\displaystyle{F(z) := -z^{-2}f(z^{-1})}$, then since $f(z)$ is holomorphic for $|z| > R$ for some large enough $R$, we see that $F(z)$ is holomorphic for $|z^{-1}| > R$, or equivalently, for $0 < |z| < \frac{1}{R}$. Thus $F$ has an isolated singularity at the origin, and then by the definition of the residue at infinity we have
$$\operatorname{Res}{(f; \infty)} := \operatorname{Res}{(F; 0)} = \frac{1}{2 \pi i} \int_{|w| = \frac{1}{R}} F(w) \, dw = \frac{1}{2 \pi i} \int_{|w| = \frac{1}{R}} -\frac{f(w^{-1})}{w^2} \, dw$$
Then by making the substitution $z = \frac{1}{w}$ we get
$$\frac{1}{2 \pi i} \int_{|w| = \frac{1}{R}} -\frac{f(w^{-1})}{w^2} \, dw = \mathbf{\color{red}{-}} \frac{1}{2 \pi i} \int_{|z| = R} \, f(z) dz$$
where the last negative sign comes from the fact that the new circle $|z| = R$ you get after the substitution has its orientation reversed. And this last equality is precisely what we wanted to prove.
Note
This result is exercise 12 in section V.2 in Conway's book Functions of One Complex Variable I (page 122), or it also appears in exercise 6 in section 13.1 of Reinhold Remmert's book Theory of Complex Functions (page 387), in case you want some references.
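As a concrete check (a numerical sketch of mine, not from the book or the answer above), take the example $f(z)=\frac{5z-2}{z(z-1)}$ on the positively oriented circle $|z|=2$ mentioned in the comments, and compare the contour integral with both residue computations:

```python
# Integrate f(z) = (5z-2)/(z(z-1)) over |z| = 2, parametrized by z = 2 e^{it}.
import numpy as np

n = 200000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
z = 2.0 * np.exp(1j * t)
dz_dt = 2j * np.exp(1j * t)
f = (5.0 * z - 2.0) / (z * (z - 1.0))

integral = np.sum(f * dz_dt) * (2.0 * np.pi / n)   # trapezoid rule on a periodic integrand
print(integral)                  # ~ 31.4159...j = 10*pi*i
print(2j * np.pi * (2 + 3))      # 2*pi*i*(Res at 0 + Res at 1) = 10*pi*i
print(-2j * np.pi * (-5))        # -2*pi*i*Res(f; infinity)     = 10*pi*i
```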
• Thank you! I like that I can avoid using the Cauchy residue theorem. What if at least one singularity lies ON the given circle, can I still use this theorem? Yes, yes, I think I can. Right? So, for an integral of (z^5)/(1-(z^3)) it would be more prudent to find the value using the residue at z=0 in the integrand of (1/(z^2))f(1/z) - where C is the positively oriented circle |z|=2? – Joshua May 16 '12 at 4:15 | 2014-12-22T20:48:06 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/145345/residue-at-z-infty/145726",
"openwebmath_score": 0.9724462032318115,
"openwebmath_perplexity": 168.7277742483389,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978051741806865,
"lm_q2_score": 0.8633916152464016,
"lm_q1q2_score": 0.8444416731531857
} |
https://math.stackexchange.com/questions/144464/bolzano-weierstrass-theorem-in-a-finite-dimensional-normed-space/4416919 | # Bolzano Weierstrass theorem in a finite dimensional normed space
The problem may have a very simple answer, but it is confusing me a bit now.
Let $(\mathbf{V},\lVert\cdot\rVert)$ be a finite dimensional normed vector space. A subset $\mathbf{U}$ of $\mathbf{V}$ is said to be bounded, if there is a real $M$ such that for any member $u$ of $\mathbf{U}$, we have: $\lVert u\rVert\lt M$. . Also, convergence of a sequence in $\mathbf{V}$ is defined with respect to the metric $\lVert\cdot\rVert$. Is it true that every bounded sequence of vectors in $\mathbf{V}$ admits a convergent subsequence?
If not, please give a counterexample with $\mathbf{V}$ finite dimensional.
• Yes, because it holds for every $\mathbb{R}^n$ with any norm (because they are all equivalent). Your space is isomorphic (then homeomorphic) to some $\mathbb{R}^n$, hence the notions of convergence on your $V$ can be viewed as convergence in this $\mathbb{R}^n$ May 13, 2012 at 4:01
I will write a little bit more details about my comment.
Take a basis $\{v_1,\ldots,v_n\}$ for $V$ and consider the isomorphism $T:\mathbb{R}^n\rightarrow V$ such that $T(e_i)=v_i$. Define a new norm on $\mathbb{R}^n$ by
$$\|x\|_{\mathbb{R}^n}=\|T(x)\|_V$$
This implies $T$ is a homeomorphism, because it takes open balls onto open balls of the same radius, by the definition of the norm on $\mathbb{R}^n$. In particular, remember that a homeomorphism takes convergent sequences to convergent sequences.
Since $T$ is an isomorphism, this is in fact a norm and, in addition, we have
$$\|v\|_V=\|T^{-1}(v)\|_{\mathbb{R}^n}$$
Take a bounded sequence $\{v_k\}_{k=1}^\infty$ in $V$. Then $x_k=T^{-1}(v_k)$ is a bounded sequence in $(\mathbb{R}^n,\|\cdot\|_{\mathbb{R}^n})$. Since all norms on $\mathbb{R}^n$ are equivalent, this sequence is bounded with respect to the Euclidean norm. Hence it has a convergent subsequence, say $x_{k_j}$, and then $T(x_{k_j})$ is a convergent subsequence of the original sequence $\{v_k\}_{k=1}^\infty$.
• But why does this not work in an infinite-dimensional real vector space? I.e., which step of the proof fails in the infinite-dimensional case? I mean, we can still construct isomorphisms in such cases.
– Our
Nov 4, 2017 at 10:27
• @Our Norms on finite dimensional vector space are equivalent.
– Jiya
Jan 9 at 12:54
Here is a short answer but it depends on other results.
1. Every $$n$$-dimensional normed linear space $$V$$ is complete. It is in fact homeomorphic to $$R^n$$.
2. Recall that a set in $$R^n$$ is compact if and only if it is closed and bounded.
3. Following this, we can show that a set in an $$n$$-dim normed linear space is compact if and only if it is closed and bounded.
4. If a set $$U \subseteq V$$ is bounded. Then its closure is closed and bounded, hence compact.
5. Finally, we use the fact that every sequence of a compact set has a convergent subsequence that converges in the compact set itself.
6. Thus, if $$U$$ is bounded, then every sequence of $$U$$ has a convergent subsequence that converges in the closure of $$U$$.
7. If in addition, $$U$$ is closed, then the subsequence will converge in $$U$$ itself.
For completeness of $$n$$-dim normed linear spaces and compactness of their closed and bounded subsets, check the notes here. | 2022-06-27T05:29:18 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/144464/bolzano-weierstrass-theorem-in-a-finite-dimensional-normed-space/4416919",
"openwebmath_score": 0.9711917042732239,
"openwebmath_perplexity": 88.12091348165197,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517424466175,
"lm_q2_score": 0.8633916117313211,
"lm_q1q2_score": 0.8444416702676121
} |
https://math.stackexchange.com/questions/2377910/if-a-series-sum-lambda-n-of-positive-terms-is-convergent-does-the-sequence | # If a series $\sum\lambda_n$ of positive terms is convergent, does the sequence $n\lambda_n$ converge to $0$? [closed]
Let $$\lambda_n>0, n\in\mathbb{N}$$, with $$\sum_n \lambda_n<+\infty$$.
Can I conclude that $$n\lambda_n\to 0$$?
In this question and this question and their answers, it is shown that this is true if $$\lambda_n$$ are decreasing. What happens if $$\lambda_n$$ are not decreasing?
## closed as off-topic by YuiTo Cheng, Jendrik Stelzner, Paul Frost, José Carlos Santos, Xander HendersonMay 22 at 21:41
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question is missing context or other details: Please provide additional context, which ideally explains why the question is relevant to you and our community. Some forms of context include: background and motivation, relevant definitions, source, possible strategies, your current progress, why the question is interesting or important, etc." – YuiTo Cheng, Jendrik Stelzner, Paul Frost, José Carlos Santos, Xander Henderson
If this question can be reworded to fit the rules in the help center, please edit the question.
• If it didn't, then $\lambda_n$ would be on the order of $\frac{1}{n}$ and therefore the sum would diverge – Chickenmancer Jul 31 '17 at 16:42
• It is rather distressing to me how many smart people have gotten this innocuous problem completely wrong. – Steven Stadnicki Jul 31 '17 at 17:13
• @StevenStadnicki As a university student, I can say that I've seen very intelligent professors sometimes make "trivial" mistakes. We're all human, I guess. – MathematicsStudent1122 Jul 31 '17 at 20:27
No.
Define $\lambda_n$ by stating that $\lambda_{2^n}=2^{-n}$ and $\lambda_k=2^{-k}$ for other values of $k$.
Then $2^n\lambda_{2^n}=1$ so there is no convergence to $0$.
It is evident however that $\sum_n\lambda_n<\infty$.
Consider $$\lambda_n=\left\{\begin{array}{} \frac1n&\text{if n=k^2 for some k\in\mathbb{Z}}\\ \frac1{n^2}&\text{if n\ne k^2 for any k\in\mathbb{Z}}\\ \end{array}\right.$$ Then, when $n=k^2$, $$n\lambda_n=1$$ yet $$\sum_{n=1}^\infty\lambda_n=2\zeta(2)-\zeta(4)$$
However, if we have $\lambda_k\ge\lambda_{k+1}$, then $$\lim_{n\to\infty}n\lambda_n=0$$ Suppose not. Then there is an $\epsilon\gt0$ so that for any $n$, there is an $N\ge n$ so that $N\lambda_N\ge\epsilon$. Then, because of the monotonicity, we have \begin{align} \sum_{k=N/2}^{N}\lambda_k &\ge\sum_{k=N/2}^{N}\frac\epsilon{N}\\ &\ge\frac\epsilon2 \end{align} and since we can choose $n$ as large as we want, there is a limitless set of sequences of terms whose sum is at least $\frac\epsilon2$. That is, we can choose $n_{j+1}=2N_j+2$ so that $N_{j+1}/2\ge n_{j+1}/2\gt N_j$, so that the intervals $[N_j/2,N_j]$ are disjoint and $\sum\limits_{k=N_j/2}^{N_j}\lambda_k\ge\frac\epsilon2$. Therefore, $$\sum_{k=1}^\infty\lambda_k=\infty$$ Note: this latter argument is similar to this answer.
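A small numerical illustration of the first counterexample (my addition): the series converges while $n\lambda_n$ keeps returning to $1$ at the perfect squares:

```python
# lambda_n = 1/n when n is a perfect square, 1/n^2 otherwise.
import math

def lam(n):
    r = math.isqrt(n)
    return 1.0 / n if r * r == n else 1.0 / (n * n)

partial = sum(lam(n) for n in range(1, 2_000_001))
print(partial)                                  # ~2.207, approaching 2*zeta(2) - zeta(4)
print(2 * math.pi**2 / 6 - math.pi**4 / 90)     # 2.20754...
print([n * lam(n) for n in (999999, 1000000)])  # [~1e-6, 1.0]: n*lambda_n does not tend to 0
```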
• $\lambda_n > 0$? – Alex Ortiz Jul 31 '17 at 17:00
• @AOrtiz: okay, if you must quibble about that, I have changed the other terms to be bigger. It still converges, and the limit is not $0$. We need monotonicity, or something similar, to guarantee that $n\lambda_n$ converges to $0$. – robjohn Jul 31 '17 at 17:12
• Is the sequence $\lambda_n$ in the second part of your answer the same as the sequence in the first part? I am also confused by your claim. $\lambda_n\to 0$ because the series converges. You probably want to claim $\lim n\lambda_n = 0$? – Alex Ortiz Jul 31 '17 at 17:16
• @AOrtiz: no. In the first part, I have given a sequence where $\lim\limits_{n\to\infty}n\lambda_n\ne0$ yet the sum converges. In the latter part, I have shown that the conclusion is true if the sequence is monotonic. – robjohn Jul 31 '17 at 17:19
• I don't understand the downvote. Other than the fact that I missed that the terms must be $\gt0$ instead of $\ge0$, there was nothing wrong with my initial answer. I have even shown a positive result for monotonic sequences. – robjohn Jul 31 '17 at 17:44 | 2019-08-24T07:40:37 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2377910/if-a-series-sum-lambda-n-of-positive-terms-is-convergent-does-the-sequence",
"openwebmath_score": 0.8835241198539734,
"openwebmath_perplexity": 324.6107478540725,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517411671126,
"lm_q2_score": 0.8633916099737807,
"lm_q1q2_score": 0.8444416674439328
} |
http://mathhelpforum.com/calculus/124256-given-function-find-points-find-limits-etc.html | # Thread: Given a function, find points, find limits, etc.
1. ## Given a function, find points, find limits, etc.
Given the function $f(x) = x^2ln(x)$, x is contained in the set (0,1)
A) Find the coordinates of any points where the graph of f has a horizontal tangent line.
B) Find the coordinates of any points of inflection on graph f
C) Find lim as $x --> 0^+ f(x)$ AND lim as $x --> 0^+ f'(x)$
D) Sketch graph of f using info obtained in a,b,c, with coordinates, max, min, etc.
The first 2 parts are based on units I did a WHILE ago ...
Do I do this for A?
$dy/dx = x + 2xln(x)$
$0 = x + 2xln(x)$ (because 0 = horizontal line)
And solve for x?
2. Do I do this for A?
(because 0 = horizontal line)
And solve for x?
absolutely
3. Originally Posted by Lord Darkin
Given the function $f(x) = x^2ln(x)$, x is contained in the set (0,1)
A) Find the coordinates of any points where the graph of f has a horizontal tangent line.
B) Find the coordinates of any points of inflection on graph f
C) Find lim as $x --> 0^+ f(x)$ AND lim as $x --> 0^+ f'(x)$
D) Sketch graph of f using info obtained in a,b,c, with coordinates, max, min, etc.
The first 2 parts are based on units I did a WHILE ago ...
Do I do this for A?
$dy/dx = x + 2xln(x)$
$0 = x + 2xln(x)$ (because 0 = horizontal line)
And solve for x?
FoR C)
$\lim_{x\to0^+}x^2\ln{x}=\lim_{x\to0^+}\frac{\ln{x} }{x^{-2}}=\lim_{x\to0^+}\frac{x^{-1}}{-2x^{-3}}$ by L'Hopital's rule.
4. Part a, my math solver on the calculator says -1.7632 (I had to use the absolute value of x to put in for the natural log).
But I'm confused here since x is supposed to be contained in the set from 0 to 1.
VonNemo19, thanks for the help, I'll look into that in more depth once I understand part a and b more.
5. you have x + 2xln(x) = 0
x(1+2ln(x)) = 0
x = 0 or x = e^(-1/2) = .606
0 is not in (0,1) so e^(-1/2) is the only pt on the interval with a horizontal tangent
6. $y = x^2 \cdot \ln{x}$
$y' = x + 2x\ln{x} = x(1 + 2\ln{x}) = 0$
reject $x = 0$ as a solution since $x \in (0,1)$
$\ln{x} = -\frac{1}{2}$
$x = e^{-\frac{1}{2}} = \frac{1}{\sqrt{e}}$
7. Ahh, I get it.
So how does everything look?
Part A
x = 0.6065
y = -0.184 (Makes sense? - plugged it back in f(x))
Part B
$y'' = 3 + 2lnx$
$0 = 3 + 2lnx$
$x = e^{-\frac{3}{2}}$
$x = 0.2231$
$y = -0.446$ (Plugged back in f ' (x) )
Part C
$\lim_{x\to0^+}[x^2\ln{x}] = \lim_{x\to0^+}\frac{\ln{x}}{x^{-2}} = \lim_{x\to0^+}\frac{x^{-1}}{-2x^{-3}} = 0$
$\lim_{x\to0^+}[x + 2xlnx] = (\lim_{x\to0^+}[x] + \lim_{x\to0^+}[2xlnx]) = \lim_{x\to0^+} -2x = 0$
Part D is just graphing, I should be fine with that.
8. looks great.
however $\lim_{x\to0^+}2x\ln{x}$ yields an inderminate form. rewrite as a quotient and Apply L'Hopital's rule.
9. ^yes, I used L'Hopital's rule, but didn't show it there (takes a while for me to type the latex). It came out to be -2x.
The only weird thing is, when I graph this on my calc, the point of inflection for part b doesn't make sense since the y value is -0.446 in part b but the graph on my calc is more like -0.2.
10. Originally Posted by Lord Darkin
^yes, I used L'Hopital's rule, but didn't show it there (takes a while for me to type the latex). It came out to be -2x.
The only weird thing is, when I graph this on my calc, the point of inflection for part b doesn't make sense since the y value is -0.446 in part b but the graph on my calc is more like -0.2.
Hint:
put $x=\frac{1}{t}$.
$t\to \infty ,x\to 0$
now find,
$\lim_{t\to \infty }-2\left (\frac{\ln t}{t} \right )$
11. Originally Posted by Lord Darkin
^yes, I used L'Hopital's rule, but didn't show it there (takes a while for me to type the latex). It came out to be -2x.
The only weird thing is, when I graph this on my calc, the point of inflection for part b doesn't make sense since the y value is -0.446 in part b but the graph on my calc is more like -0.2.
$\frac{d^2y}{dx^2}=\frac{d}{dx}[x+2x\ln{x}]=1+\left(2x\cdot\frac{1}{x}+2\ln{x}\right)=3+2\ln{x}=0\Rightarrow \ln{x}=-\frac{3}{2}\Rightarrow x=e^{-3/2}$
12. Originally Posted by Raoh
Hint:
put $x=\frac{1}{t}$.
$t\to \infty ,x\to 0$
now find,
$\lim_{t\to \infty }-2\left (\frac{\ln t}{t} \right )$
What is the purpose of doing that? I'm just wondering since it seems like I still get lim f '(X) = 0.
Also, I understand that x = e^(-3/2), but I'm confused about the y value for the point of inflection. My calculator shows that the lowest point of the graph of f(x) is -0.183, so why does the point of inflection I have in part b have y=-0.446?
13. Originally Posted by Lord Darkin
What is the purpose of doing that? I'm just wondering since it seems like I still get lim f '(X) = 0.
Also, I understand that e=^(-3/2) but I'm confused about the y value for the point of inflection. My calculator shows that the lowest point of the graph of f(x) is -0.183 so why does the point of inflection I have in part b have y=-0.446?
$f(e^{(-3/2)})\approx{-.075}\neq-.446$
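For completeness, a symbolic cross-check of parts A, B and C (a sketch assuming SymPy), which confirms the corrected value of the inflection point:

```python
# Verify the critical point, inflection point and limits for f(x) = x^2*ln(x) on (0, 1).
import sympy as sp

x = sp.symbols('x', positive=True)
f = x**2 * sp.log(x)

crit = sp.solve(sp.diff(f, x), x)        # [exp(-1/2)]
infl = sp.solve(sp.diff(f, x, 2), x)     # [exp(-3/2)]
print(crit, f.subs(x, crit[0]))          # x = e^(-1/2) ~ 0.607, y = -1/(2e) ~ -0.184
print(infl, f.subs(x, infl[0]))          # x = e^(-3/2) ~ 0.223, y = -3/(2e^3) ~ -0.075, not -0.446
print(sp.limit(f, x, 0, '+'), sp.limit(sp.diff(f, x), x, 0, '+'))   # 0, 0
```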
14. Originally Posted by Lord Darkin
What is the purpose of doing that? I'm just wondering since it seems like I still get lim f '(X) = 0.
Also, I understand that e=^(-3/2) but I'm confused about the y value for the point of inflection. My calculator shows that the lowest point of the graph of f(x) is -0.183 so why does the point of inflection I have in part b have y=-0.446?
sorry about that i thought that would help (Post 8).
15. Oh!!! I substituted in the x value that I got in part b into the first derivative, not the original!!!
Need to remember that.
Thanks everyone!
Problem Solved! | 2017-11-20T23:15:38 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/calculus/124256-given-function-find-points-find-limits-etc.html",
"openwebmath_score": 0.8032256364822388,
"openwebmath_perplexity": 688.1948352711934,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517469248845,
"lm_q2_score": 0.8633916047011595,
"lm_q1q2_score": 0.8444416672582483
} |
https://stats.stackexchange.com/questions/265775/whats-wrong-with-probability-combinatorics-solution | # What's wrong with probability/combinatorics solution?
I was playing a game of cards with some friends and wondered :
What's the probability of drawing 4 cards from a normal 52 card deck with all different ranks?
I figured out 3 ways of achieving the answer, but only 2 of them are equivalent.
1. Treating 4 draws as independent events. We can multiply the probabilities that each card rank selected is one that hasn't been selected yet. Each probability is $\frac{\text{num cards of unselected ranks}}{\text{num cards to select from}}$. Each step we subtract 4 from the numerator and 1 from the denominator.
$$\frac{52}{52}*\frac{48}{51}*\frac{44}{50}*\frac{40}{49} = 0.676$$
2. I calculated all of the possible sets of 4 without a duplicate rank. First, select 4 of the 13 ranks: $13 \choose 4$. Then, for each of those 4 sets, choose one of the 4 suits: $4^4$. Divide the product by the total sets of 4 in a deck of 52: $52 \choose 4$ for the probability of selecting a set of 4 without a duplicate rank.
$$\frac{{13\choose4}*{4^4}}{52\choose4} = 0.676$$
3. I calculated all the possible sets of 4 with at least one duplicate rank. There are 13 unique ranks. For each rank there are ${4\choose2}=6$ possible unique pairs. So, this gives me $13*6=78$ possible pairs in a deck. For every pair of cards there are $50\choose2$ unique sets of 4. Divide the product by $52\choose4$ for the probability of selecting 4 cards containing at least 1 pair of duplicate ranks. Subtracting this from 1 should get me the probability of selecting 4 cards without a pair. But it doesn't equal the above 2 values. help!
$$1-\frac{78*{50\choose2}}{52\choose4}=0.647$$
• I like to use a computer simulation with pseudorandom numbers to double-check my math. You could use that as another, though approximate, approach. – EngrStudent Mar 6 '17 at 16:22
• Good point. I'm convinced that it would probably come out to roughly 0.676 (i.e. the answer that the first 2 solutions led me to). I'll certainly give that a whirl later today, but I'm the most interested in what's wrong with my math in solution 3. – colorlace Mar 6 '17 at 16:34
## 1 Answer
The "probability" that you are subtracting is wrong - you are not counting every outcome exactly once - for example 2 kings and 2 aces will be counted twice - once for the kings and once for the aces. On the other hand, you are missing some outcomes - for example 3 kings+1 ace, or 4 kings. The correct way to calculate it is to use the "inclusion and exclusion" formula - see https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle
• Oh good point, and I'll be sure to read through the "inclusion and exclusion" formula. You're right that I'm double counting something like Kheart, Kclub, Aspade, Aclub, BUT from what I can tell I'm not missing 3K+1A or 4K. In fact, by my count- I'm counting 4K 6 times: For all 78 unique pairs, multiply that by 50 choose 2. So, for all 6 unique King pairs, I am adding every possible pair it could be selected with INCLUDING the other 2 Kings. – colorlace Mar 6 '17 at 19:38 | 2020-03-28T15:37:07 | {
"domain": "stackexchange.com",
"url": "https://stats.stackexchange.com/questions/265775/whats-wrong-with-probability-combinatorics-solution",
"openwebmath_score": 0.6800485849380493,
"openwebmath_perplexity": 364.8215817856377,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517411671125,
"lm_q2_score": 0.8633916082162403,
"lm_q1q2_score": 0.8444416657249673
} |
http://math.stackexchange.com/questions/15371/using-the-second-principle-of-finite-induction-to-prove-an-1-a-1an-1 | # Using the second principle of finite induction to prove $a^n -1 = (a-1)(a^{n-1} + a^{n-2} + … + a + 1)$ for all $n \geq 1$
The hint for this problem is $a^{n+1} - 1 = (a + 1)(a^n - 1) - a(a^{n-1} - 1)$
I see that the problem is true because if you distribute the $a$ and the $-1$ the terms cancel out to equal the left side. However, since it is telling me to use strong induction I am guessing there is more I am supposed to be doing. On the hint I can see that it is a true statement, but I am not sure how to use that to prove the equation or how the right side of the hint relates to the right side of the problem. Also, I do realize that in the case of the hint $n = 1$ would be the special case.
-
Hmm, your correction came in 6 seconds after mine! – Bill Dubuque Dec 23 '10 at 20:45
@Bill Dubuque I was wondering what you meant by correcting it. I thought I had done that. And the book is "Elementary Number Theory" 4th edition by David Burton. Actually the strong induction part is not completely clear to me. The other day I asked a question on what strong induction (or second principle of finite induction as my book puts it) is. The answers were helpful and I think I grasp it a little, but this problem is different then the example problem and I am a little lost on how to proceed. Especially since it seems strong induction is not necessary. – qw3n Dec 23 '10 at 20:54
To prove for $n+1$, using the hint, you need to assume for $n-1$ and $n$, so it is using strong induction. – Aryabhata Dec 23 '10 at 20:56
@Moron why $n-1$? If $n \geq 1$ then $n-1$ would be $0$ which is less then one. Also, how do I know that the right side of the hint is proving anything about the right side of the problem? I realize that it does, but how do you prove that. – qw3n Dec 23 '10 at 21:04
For the base cases you need to consider $n=1$, $n=2$, and the induction step will assume $n > 2$. Usually when you use strong induction, the base case needs more than just $n=1$. Try applying the expansions for $a^n -1$ and $a^{n-1} - 1$ (which you assume is true as part of the strong induction hypothesis) on the right side of the hint and see what you get... – Aryabhata Dec 23 '10 at 21:05
HINT $\$ Put $\rm\ f(n) = a^n+a^{n-1}+\:\cdots\:+1\:.\ \$ Then
$\rm\ a^{n+1}-1\ = \ (a+1)\ (a^n-1) - a\ (a^{n-1}-1)$
$\rm\phantom{\ a^{n+1}-1\ } =\ (a+1)\ (a-1)\ f(n-1) - a\ (a-1)\ f(n-2)\quad$ by strong induction
$\rm\phantom{\ a^{n+1}-1\ } =\ (a-1)\ ((a+1)\ f(n-1)- a\ f(n-2))\quad$
$\rm\phantom{\ a^{n+1}-1\ } =\ (a-1)\ (\:f(n-1) + a\ (f(n-1)-f(n-2))\:)$
$\rm\phantom{\ a^{n+1}-1\ } =\ \ldots$
$\rm\phantom{\ a^{n+1}-1\ } =\ (a-1)\ f(n)$
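As a side check of the target identity $\rm\ a^n-1 = (a-1)\,f(n-1)\ $ for small $\rm n$, a minimal sympy sketch (mine, not part of the answer):

```python
import sympy as sp

a = sp.symbols('a')
for n in range(1, 8):
    rhs = (a - 1) * sum(a**k for k in range(n))   # (a-1)(a^(n-1)+...+a+1)
    assert sp.expand(rhs - (a**n - 1)) == 0
print("identity holds for n = 1..7")
```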
-
Sometimes the easiest way to figure out an induction argument like this is to prove a particular case. I'll take care of proving $n = 3$, assuming you've already proven $n = 1$ and $n = 2$.
By the hint, we have $$a^{3} - 1 = (a + 1)(a^2 - 1) - a(a - 1)$$
But the cases $n = 1$ and $n = 2$ hold, so we rewrite this as $$a^{3} - 1 = (a + 1)(a - 1)(a + 1) - a(a - 1)$$
Now by factoring $(a - 1)$ on the right hand side, we have $$a^3 - 1 = (a - 1)(a^2 + 2a + 1 - a) = (a - 1)(a^2 + a + 1)$$
This is less interesting than the case $n = 4$; try to work that out on your own! After that, you should be able to work out a general $n + 1$ case.
-
HINT $\ \$ Show that $\rm\ (a^{n+1}-1)/(a-1)\$ and $\rm\ a^n+a^{n-1}+\:\cdots\:+1\$ are both solutions of the recurrence $\rm\ f(n+2)\ = (a+1)\ f(n+1) - a\ f(n)\$ with identical initial conditions $\rm\ f(1),\ f(0)\:.\ \ \$
Now use strong induction to prove the uniqueness theorem for solutions of such difference equations (which is very easy). This is the essence of the calculation in my other answer.
As I've emphasized in many posts here uniqueness theorems provide powerful tools for proving equalities. Luckily, here, the uniqueness theorem for difference (vs. differential) equations has an absolutely trivial proof by way of (strong) induction.
Note that the corresponding first-order case is essentially telescopy, also known as the fundamental theorem of difference calculus.
- | 2015-05-23T00:34:58 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/15371/using-the-second-principle-of-finite-induction-to-prove-an-1-a-1an-1",
"openwebmath_score": 0.8895596861839294,
"openwebmath_perplexity": 103.08095211214079,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877054447312,
"lm_q2_score": 0.8596637559030338,
"lm_q1q2_score": 0.8444371382399906
} |
https://mathforums.com/threads/how-do-i-find-the-time-a-person-has-rested-when-it-makes-succesive-stops.348120/ | # How do I find the time a person has rested when it makes succesive stops?
#### Chemist116
The problem is as follows:
Betty goes out from her home for a stroll in the park. We know that she rests for $5$ minutes after each $85\,m$ she walks. If she walks with a constant speed of $15\frac{m}{min}$ and takes $98$ minutes in total to get back to her home, how long did she rest?
The alternatives in book are as follows:
$\begin{array}{ll} 1.&\textrm{43 min}\\ 2.&\textrm{45 min}\\ 3.&\textrm{35 min}\\ 4.&\textrm{40 min}\\ \end{array}$
How can I find the time she has rested in this given context?
What I attempted to do was to account for the total time by adding the time she rested and the time she spent walking.
Assuming that the length between her home and all the stroll she has made is $x$:
Then this would be:
$x\left(\frac{1\,min}{15\,m}\right)+x\left(\frac{5\,min}{85\,m}\right)=98$
But this didn't result in an answer near to any of the alternatives. What could be wrong? Can someone help me here?
The answer my book states is $45\,min$.
#### skeeter
Math Team
it takes Betty 17/3 min to walk 85 meters, then she rests 5 min.
one period of (walk+rest) totals 10 and 2/3 minutes.
at most, there are 9 such periods totaling 96 minutes ... which means she walks the last 2 minutes
so, for the 9 periods, she walks a total of 51 min, rests 45 min, then walks the last 2 min
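A small exact computation confirming the 45 minutes (a hedged sketch of my own; the variable names are not from the thread):

```python
from fractions import Fraction as F

walk_per_segment = F(85, 15)            # 17/3 min to walk 85 m at 15 m/min
period = walk_per_segment + 5           # one walk+rest cycle: 32/3 min

full_periods = F(98) // period          # 9 complete cycles
leftover = F(98) - full_periods * period
assert leftover <= walk_per_segment     # the last 2 minutes are walking only
print(full_periods, leftover, 5 * full_periods)   # 9  2  45
```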
#### Chemist116
it takes Betty 17/3 min to walk 85 meters, then she rests 5 min.
one period of (walk+rest) totals 10 and 2/3 minutes.
at most, there are 9 such periods totaling 96 minutes ... which means she walks the last 2 minutes
so, for the 9 periods, she walks a total of 51 min, rests 45 min, then walks the last 2 min
This approach seemed more logical than what I've attempted to do. I think the key was to divide the total time by the length of one walk-plus-rest period, so that the remainder of that division indicates how long she walked in the final stretch. | 2020-04-05T12:41:57 | {
"domain": "mathforums.com",
"url": "https://mathforums.com/threads/how-do-i-find-the-time-a-person-has-rested-when-it-makes-succesive-stops.348120/",
"openwebmath_score": 0.6692173480987549,
"openwebmath_perplexity": 1223.6138929203003,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877023336243,
"lm_q2_score": 0.8596637577007393,
"lm_q1q2_score": 0.8444371373313487
} |
https://dsp.stackexchange.com/questions/51545/adding-subtracting-sinusoids | I'm trying to calculate resultant function from adding two sinusoids:
$9\sin(\omega t + \tfrac{\pi}{3})$ and $-7\sin(\omega t - \tfrac{3\pi}{8})$
The correct answer is $14.38\sin(\omega t + 1.444)$, but I get $14.38\sin(\omega t + 2.745)$.
My calculations are (first using cosine rule to obtain resultant $v$ as): $\sqrt{9^2 + (-7)^2 - (2 \cdot 9 \cdot (-7) \cdot \cos(\pi - \tfrac{\pi}{3} + \tfrac{3\pi}{8}))} = 14.38$
And the angle (using the sine rule): $\pi - \arcsin(|-7| \sin(\pi - \tfrac{\pi}{3} + \tfrac{3\pi}{8}) / 14.38) = 157$ ° or $2.745$ radians.
• Your question will be much more readable if you format the math properly. I have edited a portion of the question to get you started, and you can find complete instructions here: dsp.stackexchange.com/editing-help#latex. I have also selected a better tag, since your question is unrelated to wavelets. – MBaz Aug 27 '18 at 18:13
• Great! Just be mindful of the "\" before "sin" and "cos". I fixed those for you. – MBaz Aug 27 '18 at 18:20
• and lose the asterisks unless you're discussing convolution. – robert bristow-johnson Aug 27 '18 at 18:58
This is a trigonometry question, but can also be solved using complex exponentials , which makes it a more DSP type.
We shall use the identitiy: $$\sin(\phi) = \frac{e^{j\phi} - e^{-j\phi} } {2j}$$
or the more general case: $$\sin(\omega t + \phi) = \frac{e^{j\omega t} e^{j\phi} - e^{-j\omega t} e^{-j\phi} } {2j}$$
and further more general case: \begin{align} |K| \sin(\omega t + \phi + \theta_k) &= |K|\frac{e^{j\omega t} e^{j\phi}e^{j\theta_k} - e^{-j\omega t} e^{-j\phi}e^{-j\theta_k} } {2j} \\ &= \frac{e^{j\omega t} e^{j\phi}K - e^{-j\omega t} e^{-j\phi}K^* } {2j} \tag{1}\\ \end{align}
where $K$ is a complex constant defined as $K = K_r + j K_i = |K| e^{j\theta_k}$ both in rectangular and polar forms.
Now proceed in decomposing the given signal into complex exponentials:
\begin{align} x(t) &= 9 \sin(\omega t + \pi/3) - 7 \sin(\omega t - 3\pi/8) \\ &= (9/{2j})\left( e^{j\omega t} e^{j\pi/3} - e^{-j\omega t} e^{-j\pi/3} \right) - (7/{2j})\left( e^{j\omega t} e^{-j3\pi/8} - e^{-j\omega t} e^{j3\pi/8} \right) \\ &= \frac{ e^{j\omega t}\left[9 e^{j\pi/3} - 7e^{-j3\pi/8} \right] - e^{-j\omega t}\left[ 9 e^{-j\pi/3} - 7e^{j3\pi/8} \right] }{2j} \tag{2}\\ &= \frac{ e^{j\omega t}K - e^{-j\omega t}K^* }{2j}\\ \end{align}
Now denoting $9 e^{j\pi/3} - 7e^{-j3\pi/8} = K$, the last line, Eq(2) becomes similar to Eq(1). Now all you need to do is find the magnitude and phase angle of the complex number $K$, which are :
$$K = 9 e^{j\pi/3} - 7e^{-j3\pi/8} = 1.8212 + 14.2614 j$$ $$|K| = 14.3772$$ $$\theta_k = 1.4438 ~~~\text{ radians }$$
Plugging these values gives you the final answer :
$$\boxed{x(t) = |K|\sin(\omega t + \theta_k) = 14.38 \sin(\omega t + 1.4438) }$$
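A quick numerical cross-check of $K$ and of the final identity (my own sketch using numpy; not part of the answer above):

```python
import numpy as np

K = 9 * np.exp(1j * np.pi / 3) - 7 * np.exp(-1j * 3 * np.pi / 8)
print(K)                 # approximately (1.8212 + 14.2614j)
print(abs(K))            # approximately 14.3772
print(np.angle(K))       # approximately 1.4438 rad

# Sanity check against the original sum of sinusoids at a few time points.
w, t = 2 * np.pi, np.linspace(0, 1, 5)
lhs = 9 * np.sin(w * t + np.pi / 3) - 7 * np.sin(w * t - 3 * np.pi / 8)
rhs = abs(K) * np.sin(w * t + np.angle(K))
print(np.allclose(lhs, rhs))   # True
```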
• yes, thank you for the great explanation! – Bord81 Aug 27 '18 at 19:59
• Still need some points, but will surely do!) – Bord81 Aug 27 '18 at 20:02
• @Fat32 There may be a typo: you give $|K|=14.3772$, but then use $14.438$ in the final equation? – MBaz Aug 27 '18 at 21:00
• @MBaz yes there is ! thanks, let me correct. – Fat32 Aug 27 '18 at 21:15
The easiest way (to my mind) to solve the problem is to
• Use the identity $\sin(A\pm B) = \sin A \cos B \pm \cos A \sin B$, substituting the known numerical values of $\cos B$ and $\sin B$,
• Gathering the results to express your sum of sinusoids in the form of $C \sin A + D \cos A$,
• Expressing the resulting function as $\sqrt{C^2+D^2} \sin\left(\omega t + \theta\right)$
• That's more or less the way I'm doing it, but the question is I can't figure out the θ. – Bord81 Aug 27 '18 at 18:38
• Hint: Set $\frac{C}{\sqrt{C^2+D^2}} = \cos(\theta), \frac{D}{\sqrt{C^2+D^2}} = \sin(\theta)$ and solve $\tan(\theta)=\frac DC$ for $\theta$. – Dilip Sarwate Aug 27 '18 at 18:45 | 2020-04-08T19:38:09 | {
"domain": "stackexchange.com",
"url": "https://dsp.stackexchange.com/questions/51545/adding-subtracting-sinusoids",
"openwebmath_score": 0.998898446559906,
"openwebmath_perplexity": 1857.306888712669,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877023336243,
"lm_q2_score": 0.8596637541053281,
"lm_q1q2_score": 0.8444371337996205
} |
https://math.stackexchange.com/questions/2575370/prove-that-if-fx-to-a-and-fx-to-b-as-x-to-infty-then-b-0/2575487 | # Prove that If $f(x)\to a$ and $f'(x) \to b$ as $x\to +\infty$, then $b = 0$
In the book of The elements of Real Analysis, by Bartle, at page 220, it is asked to show that
If $f(x)\to a$ and $f'(x) \to b$ as $x\to +\infty$, then $b = 0$
However, I'm having trouble showing the result.
By our assumption, for a given $\epsilon > 0$, $\exists M \in \mathbb{R}$ and $\delta >0$ such that $\forall x > M$, $$|f(x) - a| < \epsilon,$$ and $$|\frac{f(x+h) - f(x)}{h} - b| < \epsilon$$ for $0 < |h| < \delta.$
However, (by observing $x+h > x > M$), I get $$-2\epsilon / h - b<|f'(x) - b| < 2\epsilon / h - b,$$ which does not let me do anything because of that $h$ in the denominator of $\epsilon$. So my question is, how can we prove this result?
• If $b>0$ then $f$ eventually lies above a line of positive slope $b/2$. If $b<0$ then $f$ eventually lies below a line of negative slope $b/2$. In either case, $f$ tends to $\pm\infty$ as $x \to \infty$. Thus the only way $f$ can tend to $a$ is if $b=0$. – nullUser Dec 21 '17 at 6:51
• @nullUser Thanks for you answer. However, in somewhat similar whats, I can see the result intuitively, too, and what I can't see is that how to deal with those epsilons, so I would really appreciate you could post your answer/comment by supporting it with a rigorous mathematical proof. – onurcanbektas Dec 21 '17 at 6:54
• Mean value theorem is your friend here. Try it on the difference $f(x+1)-f(x)$. – Paramanand Singh Dec 21 '17 at 6:57
• To do that, I need to choose $h = 1$, but that would require $\delta > 1$, but for a given $\epsilon > 0$, I cannot guarantee that. – onurcanbektas Dec 21 '17 at 7:01
• Here's a little intuition to go with it. If $f$ tends to a certain fixed number, $a$, as $x$ goes to infinity, we can say that $f'$ goes to $0$ at infinity since if it didn't there would be no way that $f \to a$. – ThisIsNotAnId Dec 21 '17 at 7:02
Correct me if wrong :
1)$\lim_{x \rightarrow \infty} f(x) =a$:
Let $\epsilon$ be given .
There exists an $M$, real, such that for $x\ge M$
$|f(x)-a| \lt \epsilon.$
2) MVT:
Consider $h \gt 0$, $h$ fixed.
$hf'(t) = f(x+h) - f(x)$, with
$x \lt t \lt x+h$.
Let $x \gt M$.
$h|f'(t)| = |f(x+h) -f(x)| =$
$|(f(x+h) -a) -(f(x) -a)| \le$
$|f(x+h) -a| +|f(x)-a| \lt 2\epsilon.$
Recall : $x \lt t\lt x+h:$
$x \rightarrow \infty$ implies $t \rightarrow \infty$ :
$h \lim_{t \rightarrow \infty} |f'(t)|= h|b| \le 2\epsilon.$
This implies;
$\lim_{t \rightarrow \infty}|f'(t)|=|b|= 0.$
Note: If $|b| \not=0$ we get a contradiction by choosing $h$, an independent parameter, sufficiently large.
• İt needs to be $x \lt t \lt x+h$ – onurcanbektas Dec 21 '17 at 9:45
• Even though the rest of the proof is OK, and it is constructed in the way that I was looking, the organisation of the lines makes the answer hard to read. – onurcanbektas Dec 21 '17 at 9:50
• Note that, if you do not organise your answer, I cannot accept it. – onurcanbektas Dec 21 '17 at 10:24
• Thanks. The typo is fixed. – Peter Szilas Dec 21 '17 at 14:53
Just to expand the diversity of the answers, I'm going to post the proof suggested by @ParamanandSingh
Let for a fixed $x\in \mathbb{R}$, consider the interval $(x, x+1)$. By MVT, $\exists c \in (x, x+1)$ s.t $$\frac{f(x+1) - f(x)}{1} = f'(c).$$
Now if we let $x\to +\infty$, $c\to +\infty$, and the difference $$f(x+1) - f(x) \to 0,$$ hence $$f'(c) \to b\quad as \quad c\to +\infty$$ implies $$b = 0$$
QED.
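As a concrete numerical illustration of the argument (my own sketch, not from the thread), take $f(x)=2+1/x$, for which $f(x)\to 2$ and $f'(x)\to 0$; the difference $f(x+1)-f(x)$, which by the MVT equals $f'(c)$ for some $c\in(x,x+1)$, visibly goes to $0$ along with $f'$:

```python
f = lambda x: 2 + 1 / x
fp = lambda x: -1 / x**2

for x in (10, 100, 1000, 10000):
    # f(x+1) - f(x) equals f'(c) for some c in (x, x+1); both tend to 0.
    print(x, f(x + 1) - f(x), fp(x))
```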
• It's almost correct. Just write that the LHS $f(x+1)-f(x)$ tends to $a-a=0$ and RHS $f'(c)$ tends to $b$ (given in question). So $b=0$ and you are done. This proof is very standard and available in many good textbooks. – Paramanand Singh Dec 21 '17 at 10:00
• @ParamanandSingh Ok, thanks for pointing out. – onurcanbektas Dec 21 '17 at 10:07
• I think you should write $f'(c) \to b$ and hence $b=0$. Note that $f'(c)$ may never be equal to $b$ (a limit is not necessarily a value attained). So that equal sign might raise some doubts. +1 anyway. And if you haven't noticed so far your answer is the shortest and simplest. – Paramanand Singh Dec 21 '17 at 10:12
• @ParamanandSingh Yeah, I have noticed :) – onurcanbektas Dec 21 '17 at 10:14
If we assume
$f'(x) \to b \; \text{as} \; x \to \infty, \tag 1$
then we have, for any $\epsilon > 0$, a sufficiently large $M \in \Bbb R$ such that
$b - \epsilon < f'(x) < b + \epsilon \; \text{for} \; x \ge M; \tag 2$
suppose that $b > 0$; then we choose $\epsilon$ so small that
$b - \epsilon > 0; \tag 3$
then
$f(x) - f(M) = \displaystyle \int_M^x f'(s) \; ds \ge \int_M^x (b - \epsilon)\; ds = (b - \epsilon)(x - M), \tag 4$
whence
$f(x) \ge f(M) + (b - \epsilon)(x - M) \to \infty \; \text{as} \; x \to \infty; \tag 5$
likewise, if $b < 0$, we may choose $\epsilon$ such that
$b + \epsilon < 0; \tag 6$
then by an argument similar to the above we have
$f(x) \le f(M) + (b + \epsilon)(x - M) \to -\infty \; \text{as} \; x \to \infty; \tag 7$
in neither case $b > 0$, $b < 0$ does $f(x) \to a$ as $x \to \infty$; thus, we must have
$b = 0. \tag 8$
• Well, even though I know what is integral, and what does geometrically mean, in the book, we haven't covered that, so it would be really nice if your argument is only based on the algebraic manipulations. – onurcanbektas Dec 21 '17 at 8:01
• @onurcanbektas: well, I had no way of knowing what tools you had at your disposal until I read your comment. I'm not familiar with Bartle's book, an not in a place where I have access to a copy at the moment. – Robert Lewis Dec 21 '17 at 8:05
• Of course, I should have notes those kinds of thing in the question, it is my mistake. – onurcanbektas Dec 21 '17 at 8:06
• @onurcanbektas: The mean value theorem approach suggested by Paramanand Singh might hold some promise, but I'm to sleepy right now to put all the details together. – Robert Lewis Dec 21 '17 at 8:07
• @onurcanbektas: not too bad of a mistake! Anyway, I just went for the mathematical interest of the question since I don't know how your course is set up. Cheers! – Robert Lewis Dec 21 '17 at 8:09
Edit:
As it is pointed out, there is a mistake in this proof, but it can be solved by observing that the difference $f(x+1)- f(x)$ can be arbitrarily small for sufficient large values of $x$.
After getting some ideas, it is clear that easiest method is to use the method of contradiction, so I'm going to post my own proof in here.
WLOG, let $f'(x) > 0$ as $x\to +\infty$, then by the lemma $19.3.a$ in Bartle's book, saying that, $\exists h> 0$ s.t for all y satisfying $x< y < x+h$, we have $f(x)< f(y)$. Then let $f(y) - f(x) = \epsilon'$ and choose $2\epsilon < \epsilon'$ so that $\exists M$ and $\forall x > M$, we have $$|f(x) - a| < \epsilon.$$ Since $y > x > M,$ $|f(y)- a| < \epsilon$, hence $$0 < \epsilon' < |f(y)- f(x)| < 2\epsilon$$, which is a contradiction. QED
• This isn’t right - your $\epsilon’$ depends on $x,y$ but your $x,y$ depend on $\epsilon’.$ Maybe you can show there is a positive lower bound on $f(x+1)-f(x)$ for sufficiently large $x.$ – Dap Dec 21 '17 at 8:11
• @Dap You are right. – onurcanbektas Dec 21 '17 at 10:17 | 2019-05-23T11:50:51 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2575370/prove-that-if-fx-to-a-and-fx-to-b-as-x-to-infty-then-b-0/2575487",
"openwebmath_score": 0.9451146721839905,
"openwebmath_perplexity": 214.40213655080365,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822876976669629,
"lm_q2_score": 0.8596637577007393,
"lm_q1q2_score": 0.844437133319589
} |
https://math.stackexchange.com/questions/3386947/prove-this-formula-for-the-sin-left-fracx2n-right-x-in-0-frac-pi | # Prove this formula for the $\sin\left(\frac{x}{2^n}\right), x \in [0,\frac{\pi}{2}[, n \in \Bbb{N}$
## The formula in question:
$$\sin\left(\frac{x}{2^n}\right) = \sqrt{a_1-\sqrt{a_2+\sqrt{a_3+\sqrt{a_4+\dots+\sqrt{a_{n-1}+\sqrt{\frac{a_{n-1}}{2}\left(1-\sin^2(x)\right)}}}}}}$$ where $$a_k = \frac{1}{2^{2^k-1}} \quad \forall k \in \{1,2,\dots,n-1\}, \ n \in \Bbb{N}, \ x \in \left[0,\frac{\pi}{2}\right[$$ and only the first sign (after $$a_1$$) is $$-$$, the rest is $$+$$.
If this holds, this is a great way to calculate the $$\sin$$ of small angles, specifically ones that are a power of $$\frac{1}{2}$$ radians.
## My attempt:
I have derived at this by continously using the formula $$\sin\left(\frac{x}{2}\right) = \sqrt{\frac{1-\sqrt{1-\sin^2(x)}}{2}} = \sqrt{\frac{1}{2}-\sqrt{\frac{1}{4}-\frac{\sin^2(x)}{4}}}$$ Which holds true, since \begin{align} \sin(2x) &= 2\sin(x)\cos(x) \\ \sin(x) &= 2\sin\left(\frac{x}{2}\right)\cos\left(\frac{x}{2}\right) \\ \sin(x) &= 2\sin\left(\frac{x}{2}\right)\sqrt{1-\sin^2\left(\frac{x}{2}\right)} \\ \sin^2(x) &= 4\sin^2\left(\frac{x}{2}\right)\left(1-\sin^2\left(\frac{x}{2}\right)\right) \ \left(\text{if } x \in \left[0,\frac{\pi}{2}\right[\right) \\ \sin^2(x) &= 4\sin^2\left(\frac{x}{2}\right)-4\sin^4\left(\frac{x}{2}\right) \\ 0 &= 4\sin^4\left(\frac{x}{2}\right)-4\sin^2\left(\frac{x}{2}\right)+\sin^2(x) \\ \sin^2\left(\frac{x}{2}\right)_{1,2} &= \frac{4 \pm \sqrt{16-16\sin^2(x)}}{8} = \frac{1 \pm \sqrt{1-\sin^2(x)}}{2} \end{align}
And this holds true with $$-$$, since: \begin{align} \sin^2\left(\frac{x}{2}\right) &\stackrel{?}{=} \frac{1-\sqrt{1-\sin^2(x)}}{2} \\ 2\sin^2\left(\frac{x}{2}\right) &\stackrel{?}{=} 1-\cos(x) \\ 2\sin^2\left(\frac{x}{2}\right) &\stackrel{?}{=} 1-\cos^2\left(\frac{x}{2}\right)+\sin^2\left(\frac{x}{2}\right) \\ \sin^2\left(\frac{x}{2}\right) &\stackrel{?}{=} 1-\cos^2\left(\frac{x}{2}\right) \end{align}
Yes, since $$\sin^2\left(\frac{x}{2}\right) + \cos^2\left(\frac{x}{2}\right) = 1$$ So we found that if $$x \in \left[0,\frac{\pi}{2}\right[$$, then $$\sin\left(\frac{x}{2}\right) = \sqrt{\frac{1-\sqrt{1-\sin^2(x)}}{2}}$$ Now I plugged the formula into itself a couple times, and guessed what it would look like if I had plugged it in $$n$$ times. However, I have only assumed the above values of $$a_k$$ are true by looking at the results, so I'd like a rigorous proof of the formula.
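A quick numerical spot-check of this half-angle identity on $$[0,\pi/2)$$ (my own sketch, not part of the question):

```python
import math

for x in (0.0, 0.3, 0.7, 1.2, 1.5):
    lhs = math.sin(x / 2)
    rhs = math.sqrt((1 - math.sqrt(1 - math.sin(x) ** 2)) / 2)
    assert abs(lhs - rhs) < 1e-12, (x, lhs, rhs)
print("identity verified on sample points")
```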
I tried induction by $$n$$, but I couldn't figure out the $$n \rightarrow n+1$$ step.
## Question:
Provide a proof of the first formula or correct it if it's wrong.
• Use $\sin\left(\frac{x}{2}\right) = \sqrt{\frac{1-\sqrt{1-\sin^2(x)}}{2}}$. Perform $x \rightarrow x/2$ and use the above relation again. Performing these operations repeatedly. – user600016 Oct 9 '19 at 15:33
• I've done that, however the calculations really get messy, and I'm not certain about the result that I got. Especially the $a_k$ values. – Daniel P Oct 9 '19 at 15:35
• Are you looking for some analytical verification of the formula you proposed? If yes, you could run a simple code in some programming language. – user600016 Oct 9 '19 at 15:37
• If this holds, this is a great way to calculate the sin of small angles. For this, use Taylor series expansion instead. – Jean-Claude Arbaut Oct 9 '19 at 15:41
• I disagree. The Taylor expansion is an infinite sum, this is a finite method. And since we only have to plug the formula into itself $n$ times, this is a fast and computable method. – Daniel P Oct 9 '19 at 15:42
\begin{align}\sin\left(\frac{x}{{2^{n+1}}}\right)&= \sqrt{\frac{1}{2}-\sqrt{\frac{1}{4}-\frac{\sin^2\left(\frac{x}{{2^{n}}}\right)}{4}}} \end{align}
We can use the induction hypothesis to calculate \begin{align*}\frac{1}{4}\cdot\sin^2\left(\frac{x}{{2^{n}}}\right)&=\frac{1}{4}\cdot \left(a_1-\sqrt{a_2+\sqrt{a_3+\sqrt{a_4+\dots+\sqrt{a_{n-1}+\sqrt{\frac{a_{n-1}}{2}\left(1-\sin^2(x)\right)}}}}}\right)\\ &=\frac{a_1}{2^2}-\sqrt{\frac{a_2}{(2^{2})^2}+\sqrt{\frac{a_3}{((2^{2})^2)^2}+\sqrt{\frac{a_4}{2^{2^4}}+\dots+\sqrt{\frac{a_{n-1}}{2^{2^{n-1}}}+\sqrt{\frac{a_{n}}{2^{2^n}\cdot 2}\left(1-\sin^2(x)\right)}}}}}\\ &=a_2-\sqrt{a_3+\sqrt{a_4+\sqrt{a_5+\dots+\sqrt{a_{n}+\sqrt{\frac{a_{n+1}}{2}\left(1-\sin^2(x)\right)}}}}} \end{align*} where we used that $$\frac{a_n}{2^{2^n}}=\frac{1}{2^{2^n-1}\cdot 2^{2^n}}=\frac{1}{2^{2\cdot 2^{n}-1}}=\frac{1}{2^{2^{n+1}-1}}=a_{n+1}$$ And since $$\color{red}{a_1=\frac{1}{2^{2^1-1}}=\frac{1}{2}}$$ and $$a_2=\frac{1}{2^{2^2-1}}=\frac{1}{8}$$ so that $$\color{blue}{\frac{1}{4}-a_2=\frac{1}{4}-\frac{1}{8}=\frac{1}{8}=a_2}$$ you get
\begin{align}\sin\left(\frac{x}{{2^{n+1}}}\right)&= \sqrt{\frac{1}{2}-\sqrt{\frac{1}{4}-\frac{\sin^2\left(\frac{x}{{2^{n}}}\right)}{4}}}\\ &= \sqrt{\color{red}{\frac{1}{2}}-\sqrt{\color{blue}{\frac{1}{4}-a_2}+\sqrt{a_3+\sqrt{a_4+\sqrt{a_5+\dots+\sqrt{a_{n}+\sqrt{\frac{a_{n+1}}{2}\left(1-\sin^2(x)\right)}}}}}}}\\ &= \sqrt{\color{red}{a_1}-\sqrt{\color{blue}{a_2}+\sqrt{a_3+\sqrt{a_4+\sqrt{a_5+\dots+\sqrt{a_{n}+\sqrt{\frac{a_{n+1}}{2}\left(1-\sin^2(x)\right)}}}}}}} \end{align} | 2021-04-13T05:12:36 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3386947/prove-this-formula-for-the-sin-left-fracx2n-right-x-in-0-frac-pi",
"openwebmath_score": 1.0000089406967163,
"openwebmath_perplexity": 720.0479879692027,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877033706601,
"lm_q2_score": 0.8596637523076225,
"lm_q1q2_score": 0.8444371329252585
} |
https://math.stackexchange.com/questions/1376009/calculate-int1-2-0-int1-x-x-xy9x-y9-dy-dx/1376057 | # Calculate $\int^{1/2}_0\int^{1-x}_x (x+y)^9(x-y)^9 \, dy \, dx$
How can I find the following integral:
$$\int^{1/2}_0 \int^{1-x}_x (x+y)^9(x-y)^9 \, dy \, dx$$
My thoughts:
Can we possibly convert this to spherical or use change of variables?
according to the shape of the area of integration and the shape of the function that is under integral, the easiest answer is to define variables $u=x+y$ and $v=x-y$then we have: $$\frac{1}{J}=\frac{\partial(u,v)}{\partial(x,y)}=\begin{vmatrix}1&1\\1&-1\end{vmatrix}\Rightarrow |J|=\frac{1}{2}$$ and the borders of the area of integration are $u=-v,u=1,v=0$ just draw the shape of the problem to understand this part.then:
$$\int^{1/2}_0 \int^{1-x}_x (x+y)^9(x-y)^9 \, dy \, dx=\int^{0}_{-1}\int^1_{-v} \frac{1}{2}u^9v^9\,du\,dv=\frac{-1}{400}$$
why did I use $\frac{1}{|J|}$?
consider the analog situation for a 1-D integral where we have to use change of variables method:
$$\int\frac{x\,dx}{\sqrt{1-x^4}}$$
we have $u=x^2\Rightarrow du=2x\,dx$
meaning that we have $u$ as a function of $x$ so we can calculate $du=\frac{du}{dx}dx$ but in order to substitute $dx$ in the original integral we write $du=2x\,dx\Rightarrow dx=\frac{1}{2x}du$ which means that we need $dx=\frac{dx}{du}du$ to substitute in the original integral so
In a 1-D integral we have $u$ as a function of $x$ so we can calculate $\frac{du}{dx}$ but to substitute in the original integral we need $\frac{dx}{du}$
here is the same situation:
to substitute in the original integral, we need:
$$dx\,dy=\frac{\partial(x,y)}{\partial(u,v)}du\,dv=|J|du\,dv$$
But most of the time we have $u$ and $v$ as a function of $x$ and $y$:
$$u=f(x,y),v=g(x,y)$$
so we can compute $$\frac{1}{|J|}=\frac{\partial(u,v)}{\partial(x,y)}$$
• Does the definition of your new variables have something to do with an affine transformation? – Khallil Jul 27 '15 at 20:37
• @Khallil as you know we can use the method change of variables in 1-D integrals to calculate them easier just like that we can use change of variables in a 2-D or 3-D integral. and the jacobi above defines the relationship between old variables and new ones. change of variables is just mapping from a space to another space just like the affine transformation is. – Sepideh Abadpour Jul 27 '15 at 21:06
• May I ask why you chose to define your variables in such a way? – Khallil Jul 27 '15 at 21:14
• @Khallil because I want to find a way to calculate the above integral easier. if you draw the borders of the original integral in the question you will see that it's a triangle sides $x+y=1,x-y=0,x=0$ also you have the terms $x-y,x+y$ in the function so you will deduce that choosing the variables this way will make both the function and the borders easier. I should say that choosing the new variables is just a trick you will learn as much as you solve problem.practice makes perfect – Sepideh Abadpour Jul 27 '15 at 21:27
• Thanks for the help, @sepideh! – Khallil Jul 27 '15 at 21:28
Hint: $$(x+y)^9(x-y)^9=((x+y)(x-y))^9=(x^2-y^2)^9$$
• Okay, so we can introduce an 'r here by saying that $(-(x^2+y^2))^9=(-r^2)^9=-r^{18}$. Right? – Nadia Marson Jul 27 '15 at 19:53
• Okay I got that much. Now what do I do next? – Nadia Marson Jul 27 '15 at 20:03
• Actually, the expression is $(x^2-y^2)^9$. Now you can use binomial expansion of $(x^2-y^2)^9$ so that all the terms are separated in powers of $x$ & $y$ then integrating all, first w.r.t. $y$ then w.r.t. $x$ It can be done but the procedure/expansion will be lengthy. – Harish Chandra Rajpoot Jul 27 '15 at 20:17
• It doesn't seem efficient or intuitive. Just a hammering of algebra. – Khallil Jul 27 '15 at 20:39
How about a change of variable like $u=x+y, v=x-y$?
The Jacobian is $-\frac 12$, and the area of integration is the triangle bounded by the lines $x=y, x+y=1, x=0$
This translates as: $v$ varies from $v=0$ to $v=u$ for the inner integral, and $u=0$ to $u=1$ for the outer integral.
Therefore we evaluate $$\int_{u=0}^{u=1}\int_{v=0}^{v=u}u^9v^9(-\frac 12)dvdu$$
The final answer is $-\frac{1}{400}$
• Yes. Actually I was wanting something in this direction. Can you please show me this step-by-step if you don't mind? – Nadia Marson Jul 27 '15 at 19:56
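As a numerical cross-check of the value $-\frac{1}{400}=-0.0025$ obtained above (my own sketch, assuming scipy is available; not part of the thread):

```python
from scipy.integrate import dblquad

# dblquad integrates f(y, x), with the y-limits given as functions of x.
val, err = dblquad(lambda y, x: (x + y) ** 9 * (x - y) ** 9,
                   0, 0.5,           # x from 0 to 1/2
                   lambda x: x,      # y from x ...
                   lambda x: 1 - x)  # ... to 1 - x
print(val)   # approximately -0.0025
```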
\begin{align} u & = x+y \\ v & = x-y \end{align} $$du\,dv = \left|\frac{\partial (u,v)}{\partial(x,y)}\right|\,dx\,dy = 2\,dx\,dy, \qquad\text{so}\qquad dx\,dy = \tfrac12\,du\,dv$$
\begin{align} & \int^{1/2}_0 \int^{1-x}_x (x+y)^9(x-y)^9 \, dy \, dx \\[10pt] = {} & \int_0^1 \left( \int_{-u}^0 u^9 v^9 \,\tfrac12\,dv \right) \,du \\[10pt] = {} & \int_0^1 u^9 \left( -\frac{u^{10}}{20} \right) du \; = \; -\frac{1}{400} \end{align} | 2021-01-16T21:19:06 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1376009/calculate-int1-2-0-int1-x-x-xy9x-y9-dy-dx/1376057",
"openwebmath_score": 0.9699376225471497,
"openwebmath_perplexity": 226.5005875469893,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877049262134,
"lm_q2_score": 0.8596637505099167,
"lm_q1q2_score": 0.844437132496647
} |
https://www.freemathhelp.com/forum/threads/surprising-result.118460/ | # Surprising result?
#### apple2357
##### Full Member
Integrating this function for different values of k between 0 and pi/2 appears to give the same result. I can't see why this might be the case?
I put it on Geogebra to test out..
Etc...
#### Dr.Peterson
##### Elite Member
It looks to me like a matter of symmetry; the curve under which you are integrating is symmetrical about the point (pi/4, 1/2).
The area in each case is half of a rectangle with base pi/2 and height 1, because the curve bisects that rectangle. And that happens because f(pi/2 - x) = 1 - f(x).
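The function itself was posted as an image and is not reproduced here, but a classic family with exactly the symmetry described above is f(x) = sin^k(x) / (sin^k(x) + cos^k(x)). The sketch below (my own, with that assumed f) shows the integral coming out the same for several k:

```python
from scipy.integrate import quad
from math import sin, cos, pi

# Assumed family with the symmetry f(pi/2 - x) = 1 - f(x); the thread's
# actual f was an image attachment and is not reproduced here.
def f(x, k):
    return sin(x) ** k / (sin(x) ** k + cos(x) ** k)

for k in (1, 2, 3, 7.5):
    val, _ = quad(f, 0, pi / 2, args=(k,))
    print(k, val)        # always ~0.7853981... = pi/4, half the pi/2-by-1 rectangle
```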
#### apple2357
##### Full Member
Good spot! It does look like it is.
Just thinking about your transformations explanation... ( I imagine integrating these functions wouldnt be so easy!)
#### Dr.Peterson
##### Elite Member
Yes, certain definite integrals are much easier than their indefinite integrals ...
I saw the answer purely visually; the last graph was most obvious, then I saw the symmetry in the others, and then I realized how to see that in the function itself. It's all about cofunctions.
#### apple2357
##### Full Member
I can see your transformations explanation works, i can't quite see why f(pi/2-x) is the same as f(x) but reflected in the way it is ( as below)
If it was f(x-pi/2) i would understand that as a simple translation but is f(pi/2-x) is a combination of transformations? Is this a translation and a reflection?
How did you come up with f(pi/2-x) = 1-f(x) as the explanation for the graphical observation?
#### Dr.Peterson
##### Elite Member
I'd call it a double reflection, first around x=pi/4 and then around y=1/2. Equivalently, and the way I described it initially, it is actually a rotation around the point (pi/4, 1/2). In the same way, reflecting in both y=0 and x=0 amounts to rotating by 180 degrees about the origin.
Replacing x with pi/2 - x reflects in x=pi/4, and replacing y with 1-y (that is, subtracting the result from 1) reflects in y=1/2. These are worth pondering until you see why!
#### apple2357
##### Full Member
I'd call it a double reflection, first around x=pi/4 and then around y=1/2. Equivalently, and the way I described it initially, it is actually a rotation around the point (pi/4, 1/2). In the same way, reflecting in both y=0 and x=0 amounts to rotating by 180 degrees about the origin.
Replacing x with pi/2 - x reflects in x=pi/4, and replacing y with 1-y (that is, subtracting the result from 1) reflects in y=1/2. These are worth pondering until you see why!
So If we reflect y=f(x) in x=a, we get the curve y=f(−x+2a) ?
Is this because, if you want to move every point to the position the same distance on the other side of x=a, we end up at x+2(a-x), which gives us 2a-x?
Is that how you would explain it?
#### Dr.Peterson
##### Elite Member
Yes, that is one good way to explain this fact.
And, of course, the same fact is why the other reflection is 1 - y. | 2020-05-26T07:01:23 | {
"domain": "freemathhelp.com",
"url": "https://www.freemathhelp.com/forum/threads/surprising-result.118460/",
"openwebmath_score": 0.811082661151886,
"openwebmath_perplexity": 1350.0292479964555,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877049262134,
"lm_q2_score": 0.8596637451167997,
"lm_q1q2_score": 0.8444371271990545
} |
https://math.stackexchange.com/questions/3533288/is-it-possible-to-remove-one-or-more-vectors-from-s-to-create-a-basis-for-v | # Is it possible to remove one or more vectors from S to create a basis for V
If a set S of vectors spans a vector space V, then is it possible to remove one or more vectors from S to create a basis for V ?
I think that this is a tricky question as I am not sure whether S contains a dummy vector that is linearly dependent on the one(s) you left out. In which case it will possible to remove those vectors.
But If for example I have a set {x,y,z} spanning a vector space. Taking y away wont help my set span the same vector space. Right ?
• Assuming you are speaking about finite dimensional vector spaces, then you can work inductively. If there is a linear dependence between the vectors, you can use it to eliminate any one of the vectors that appears with a non-zero coefficient.
– lulu
Feb 3, 2020 at 21:18
• Should say: I don't understand your $\{x,y,z\}$ example. If those are linearly independent, then they already form a basis of the vector space they span. If they are linearly dependent, then you can remove at least one without changing the span.
– lulu
Feb 3, 2020 at 21:20
• I am a bit confused as I have a question asking me whether "If a set S of vectors spans a vector space V, then it is possible to remove one or more vectors from S to create a basis for V." is true or false. To which I replied true as spanning can be done with additional vectors that aren't really the basis of the vector space. Right ? Feb 3, 2020 at 21:24
• I agree with your conclusion (again, assuming, for simplicity, that we are speaking of finite dimensional vector spaces) but I do not understand your argument. In my first argument, I suggested that you prove it via induction.
– lulu
Feb 3, 2020 at 21:26
• That's not an argument. For the first part, you just give a single example that works out the way you want. And the second part has nothing to do with what you were asked to prove. Seriously, do it by induction.
– lulu
Feb 3, 2020 at 21:37
I'll assume that you are working with finite dimensional vector spaces. Let $$V$$ be the vector space in question and let $$d=\dim V$$.
The claim we want to prove: any finite collection of vectors in $$V$$ which spans $$V$$ contains a basis.
Proof by induction on the number of vectors in the collection.
Base case: if the number of vectors is $$d$$ then the collection is a basis already (by standard results).
Now suppose we have proven the result for all collections with $$n$$ vectors (for $$n≥d$$). We want to prove that the result also holds for collections with $$n+1$$ vectors, so take such a collection, $$S=\{v_1, \cdots, v_{n+1}\}$$. Since $$n+1>d$$ there must be a linear dependence between these vectors. Let's say we have $$\sum_{i=1}^{n+1} \lambda_iv_i=0$$ and that not all the $$\lambda_i$$ are $$0$$.
Let $$j$$ be the greatest index such that $$\lambda_j\neq 0$$. Then we can write $$v_j=\sum_{i\neq j}-\frac {\lambda_i}{\lambda_j}v_i$$
It follows that we can eliminate $$v_j$$ from the collection without changing the span. But the set $$S'=S-\{v_j\}$$ has only $$n$$ elements so the inductive hypothesis applies to it, and we conclude that $$S'$$ contains a basis for $$V$$. But since $$S'\subset S$$ that basis is also a subset of $$S$$, and we are done.
Note: if your spanning set $$S$$ is infinite then it also must contain a basis. To see that, take some basis $$\{e_1, \cdots, e_d\}$$ for $$V$$. Since $$S$$ spans we know we can write each $$e_i$$ as a $$\textit {finite}$$ linear combination of vectors in $$S$$. Choose one such expression for each $$e_i$$. Define $$S^*\subset S$$ to be the (finite) subset of $$S$$ consisting of all the vectors from $$S$$ that are used in those expressions and apply the above to $$S^*$$.
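In practice the inductive pruning above is exactly what row reduction does: the pivot columns of the matrix whose columns are the spanning vectors give an independent subset with the same span. A minimal sketch (mine, assuming sympy):

```python
import sympy as sp

# Columns of M span R^2 but are not independent (4 vectors in a 2-dim space).
M = sp.Matrix([[1, 2, 0, 1],
               [0, 0, 1, 1]])

_, pivots = M.rref()              # pivot columns index an independent subset
basis = [M.col(j) for j in pivots]
print(pivots)                     # (0, 2): the 1st and 3rd columns form a basis
```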
• Yes, it is assumed to be in a finite dimensional vector spaces as the questions does not mention anything about the infinite dimensional. "If a set S of vectors is linearly independent in a vector space V, then it is possible to add zero or more vectors to S to create a basis for V". I wrote that as a mean to validate my understanding of the concept of independence, basis and spans. As linearly independent vectors in a finite dimensional sets will span the whole vector space, adding another vector to the independent set will not help create a basis I assume. Feb 3, 2020 at 21:58
• I completely understood this answer. Thank you for your time ! Feb 3, 2020 at 21:58
• Glad to have helped. Note that your second question, about extending the independent collection to the basis, is different than the one I answered.
– lulu
Feb 3, 2020 at 22:01
• Yes, I am still wondering if that would work. I will continue to read about it. In the case it is not clear I will post a question Feb 3, 2020 at 22:26
• Try an inductive approach, as in my argument above. This time you have to apply induction to the "co-dimension". That is, if the dimension is $d$ we assume your collection has $d-i$ elements and apply induction to $i$.
– lulu
Feb 3, 2020 at 22:28 | 2022-08-13T00:01:59 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3533288/is-it-possible-to-remove-one-or-more-vectors-from-s-to-create-a-basis-for-v",
"openwebmath_score": 0.7764167785644531,
"openwebmath_perplexity": 98.13741194106704,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877038891779,
"lm_q2_score": 0.8596637433190939,
"lm_q1q2_score": 0.8444371245416883
} |
https://math.stackexchange.com/questions/3421042/solving-strong-mathematical-induction-sequence | # Solving Strong Mathematical Induction Sequence
I'm trying to work on the problem below, though I've hit a wall on how to proceed to prove the inductive step.
Suppose that $$c_0,c_1,c_2\ldots$$ is a sequence defined as follows: $$c_0=2,\, c_1=2,\, c_2=6,\, c_k=3c_{k-3}\,(k\geq3)$$
Prove that $$c_n$$ is even for each integer $$n\geq0$$.
Here is what I have so far:
1. Show that $$P(0)$$ and $$P(1)$$ are true.
• $$c_0=2$$ and $$2\geq0$$ and $$2\mid2$$, so this is even.
• $$c_1=2$$ and $$2\geq0$$ and $$2\mid2$$, so this is even.
2. Show that for every integer $$k\geq1$$, if $$P(i)$$ is true for each integer $$i$$ with $$0\leq i\leq k$$, then $$P(k+1)$$ is true.
• Let $$k$$ be any integer with $$k\geq1$$, and suppose $$c_i$$ is even for each integer $$i$$ with $$0\leq i\leq k$$ [inductive hypothesis].
• I must show that $$c_{k+1}$$ is even for each integer $$k\geq0$$.
• Now $$c_{k+1}=3c_{k-2}$$...
...and this is where I do not understand how to proceed. Any tips on how I can finish solving this problem are greatly appreciated.
• Use the underscore. \$C_{k-3}\$ will render as $C_{k-3}$ – fleablood Nov 4 '19 at 3:33
• Strong induction allows (if you need it) i) multiple base cases ii) an induction step that isn't an increase by one but an increase by any other useful jump. – fleablood Nov 4 '19 at 3:42
Let $$P(n)$$ be the statement “$$c_n$$, $$c_{n+1}$$, $$c_{n+2}$$ are even”. We have $$P(0)$$ by assumption. Furthermore, $$P(n)\Rightarrow P(n+1)$$, since if $$c_n$$, $$c_{n+1}$$, $$c_{n+2}$$ are even, $$c_{n+3}=3c_n$$ must be even also. By induction, $$P(n)$$ is true for all $$n$$, which implies immediately that $$c_n$$ must be even for all $$n$$.
A more immediate solution would be to use @fleablood’s approach, but this one enables you to do this problem by pure, vanilla induction.
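A quick computational sanity check of the first several terms (my own sketch, not part of the answer):

```python
c = [2, 2, 6]
for k in range(3, 30):
    c.append(3 * c[k - 3])
print(all(term % 2 == 0 for term in c))   # True
print(c[:9])                              # 2, 2, 6, 6, 6, 18, 18, 18, 54
```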
• Thanks a lot for both clarifying my question, and providing an answer. Embarrassingly I must say I am still trying to interpret these answers, which shows I need to do much more review over strong mathematical induction. If you don't mind answering this last question, how do I even interpret these "subset" numbers. Sorry if this makes no sense, but I have never really dealt with this "underscore" syntax, as evident by me not being able to describe it. I envy your understanding – skepticalforever Nov 4 '19 at 4:16
• We all started somewhere, don’t worry. As for the syntax, interpret it as just an indexed sequence of variables. $c_0$ is the zeroth variable, $c_1$ is the first, and so on. Other than the fact that you can reference a particular variable by a single whole number, you shouldn’t treat them any different than any other variable name. – ViHdzP Nov 4 '19 at 4:22
• Again, thanks a lot :) – skepticalforever Nov 4 '19 at 4:32
Don't use $$P(n)\implies P(n+1)$$ in your induction step. Do $$P(n)\implies P(n+3)$$ (and that is trivial)
This is acceptable.
As $$C_0$$ is even then by induction $$C_{3k}$$ is even for all $$k\in \mathbb N$$.
And as $$C_1$$ is even then by induction $$C_{1 + 3k}$$ is even for all $$k\in \mathbb N$$.
And as $$C_2$$ is even then by induction $$C_{2 + 3k}$$ is even for all $$k\in \mathbb N$$.
So as any $$n\in \mathbb N$$ is either equal to $$0,1,$$ or $$2$$ plus some multiple of $$3$$ we are done.
.....
Assuming we have proven that $$C_{n}$$ is even means $$C_{n+3}$$ is even. I leave that to you. (Again... it is trivial.)
==========
Alternative explanation.
"Strong" induction means in the induction step you don't just assume $$P(n)$$ is true for $$n=k$$ but $$P(n)$$ is true for all $$n \le k$$.
So in this induction step:
Assume $$C_n$$ is even for all $$n \le k$$. We must prove that $$C_{k+1}$$ is even.
And here it is:
$$C_{k+1} = 3\times C_{k-2}$$ and $$k-2 < k$$ so $$C_{k-2}$$ is even. So $$C_{k+1}$$ is a multiple of an even number and is even.
That's it. And that's fair.
In "weak" induction, we would have done something directly related to $$C_k$$. But in "strong" induction we don't have to use $$C_k$$, we can use $$C_{\text{anything less than or equal to }k}$$. In this case we use $$C_{k-2}$$.
• Very good explanations! – skepticalforever Nov 4 '19 at 5:22 | 2021-05-19T00:30:16 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3421042/solving-strong-mathematical-induction-sequence",
"openwebmath_score": 0.9079079031944275,
"openwebmath_perplexity": 226.58268940487628,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.982287697148445,
"lm_q2_score": 0.8596637451167997,
"lm_q1q2_score": 0.844437120512789
} |
http://appliedclassicalanalysis.net/tag/definite-integral/ | ## A Clever Integration Trick
This trick is from Integration for Engineers and Scientists by William Squire via The Handbook of Integration by Daniel Zwillinger.
Noting that
\frac{1}{x} = \int\limits_{0}^{\infty} \mathrm{e}^{-xt} \mathrm{d} t
we can replace $$\frac{1}{x}$$ in an integrand with its integral expression and reverse the order of integration to simplify evaluation of the integrals. Of course the integral must converge uniformly to allow us to reverse the order of integration and we must have $$x>0$$.
Let us use this trick to evaluate
\int\limits_{0}^{\infty} \frac{\mathrm{e}^{-ax}-\mathrm{e}^{-bx}}{x} \mathrm{d} x
for $$a,b > 0$$.
Using our trick yields
\begin{align}
\int\limits_{0}^{\infty} \frac{\mathrm{e}^{-ax}-\mathrm{e}^{-bx}}{x} \mathrm{d} x & = \int\limits_{0}^{\infty}\int\limits_{0}^{\infty} \mathrm{e}^{-xt} (\mathrm{e}^{-ax}-\mathrm{e}^{-bx}) \mathrm{d} t \mathrm{d} x \\
& = \int\limits_{0}^{\infty}\int\limits_{0}^{\infty} \mathrm{e}^{-(a+t)x}-\mathrm{e}^{-(b+t)x} \mathrm{d} x \mathrm{d} t \\
& = -\int\limits_{0}^{\infty} \frac{\mathrm{e}^{-(a+t)x}}{a+t} – \frac{\mathrm{e}^{-(b+t)x}}{b+t} |_{0}^{\infty} \mathrm{d} t \\
& = \int\limits_{0}^{\infty} \frac{1}{a+t} – \frac{1}{b+t} \mathrm{d} t \\
& = \lim_{R \to \infty} \mathrm{ln}\frac{a+t}{b+t} |_{0}^{R} \\
& = \mathrm{ln}\left(\frac{b}{a}\right)
\end{align}
The conventional way to handle this integral is to recognize that it is the Frullani integral
\int\limits_{0}^{\infty} \frac{f(ax) – f(bx)}{x} \mathrm{d} x = [f(\infty) – f(0)]\mathrm{ln}\left(\frac{a}{b}\right)
A simple substitution yields our result.
For the Frullani integral, we must have the existence of $$f(\infty)$$ and $$f(0)$$ using the appropriate limits. For other conditions on the integral as well as proofs, see
1. On Cauchy-Frullani Integrals by A. M. Ostrowski. Use DOI 10.1007/BF02568143 with Sci-Hub to access the paper.
2. On the Theorem of Frullani by Juan Arias-De-Reyna. Use DOI 10.2307/2048376 with Sci-Hub to access the paper.
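A quick numerical check of the evaluation above for a couple of parameter pairs (my own sketch, assuming scipy; not part of the original post):

```python
from math import exp, log
from scipy.integrate import quad

def integrand(x, a, b):
    # the singularity at 0 is removable; the limiting value there is b - a
    return (exp(-a * x) - exp(-b * x)) / x if x > 0 else b - a

for a, b in [(1.0, 2.0), (0.5, 3.0)]:
    val, _ = quad(integrand, 0, float('inf'), args=(a, b))
    print(val, log(b / a))   # the two columns agree
```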
## Integrate $$\int_{0}^{\infty} \frac{\mathrm{ln}(x^{2}+a^{2})}{x^{2}+b^{2}} \mathrm{d} x$$
For $$a,b > 0$$,
\int\limits_{0}^{\infty} \frac{\mathrm{ln}(x^{2}+a^{2})}{x^{2}+b^{2}} \mathrm{d} x = \frac{\pi}{b} \mathrm{ln}(a+b)
\label{eq:160806a1}
\tag{1}
appeared on page 52 of Rediscovery of Malmsten’s integrals, their evaluation by contour integration methods and some related results by Iaroslav V. Blagouchine. This is a fascinating paper with many interesting results. In future blog posts, I will present some of Blagouchine’s results and solve some of the exercise problems that he proposed. For now, I will do this integral mainly to highlight a common trick used to evaluate contour integrals with logarithms of binomials.
The trick is to begin with a different integrand
f(z) = \frac{\mathrm{ln}(z+ia)}{z^{2}+b^{2}} = \frac{\mathrm{ln}(z+ia)}{(z-ib)(z+ib)}
\label{eq:160806a2}
\tag{2}
Using a semicircular contour $$C$$ in the upper half plane, consisting of the segment $$[-R,R]$$ on the real axis closed by the large arc $$C_{1}$$,
we note that a first order pole at $$z=ib$$ is inside of the contour so we have
Res_{z=ib}[f(z)] = \frac{\mathrm{ln}(ib+ia)}{i2b} = \frac{\mathrm{ln}(i)+\mathrm{ln}(a+b)}{i2b} = \frac{i\frac{\pi}{2}+\mathrm{ln}(a+b)}{i2b}
\label{eq:160806a3}
\tag{3}
\begin{align}
\oint\limits_{C} f(z) \mathrm{d} z & = i2\pi Res_{z=ib}[f(z)] = \frac{i\pi^{2}}{2b} + \frac{\pi}{b}\mathrm{ln}(a+b) \\
& = \lim_{R \to \infty} \int\limits_{-R}^{R} f(x) \mathrm{d} x + \int\limits_{C_{1}} f(z) \mathrm{d} z
\label{eq:160806a4}
\tag{4}
\end{align}
The second integral goes to 0 via the ML estimate. The first integral will be broken in half and we use the substitution $$y=-x$$ to obtain
\int\limits_{-\infty}^{0} \frac{\mathrm{ln}(x+ia)}{x^{2}+b^{2}} \mathrm{d} x = \int\limits_{0}^{\infty} \frac{\mathrm{ln}(-y+ia)}{y^{2}+b^{2}} \mathrm{d} y
\label{eq:160806a5}
\tag{5}
Adding the two halves of the integral together, we have the following in the numerator
\mathrm{ln}(-x+ia) + \mathrm{ln}(x+ia) = i\pi + \mathrm{ln}(x-ia) + \mathrm{ln}(x+ia) = \mathrm{ln}(x^{2}+a^{2}) + i\pi
Now we have
\oint\limits_{C} f(z) \mathrm{d} z = \int\limits_{0}^{\infty} \frac{\mathrm{ln}(x^{2}+a^{2})}{x^{2}+b^{2}} \mathrm{d} x +
i\pi \int\limits_{0}^{\infty} \frac{1}{x^{2}+b^{2}} \mathrm{d} x
\label{eq:160806a6}
\tag{6}
Equating real and imaginary parts of equations \eqref{eq:160806a6} and \eqref{eq:160806a4} yields our original result plus a bonus integral
\int\limits_{0}^{\infty} \frac{1}{x^{2}+b^{2}} \mathrm{d} x = \frac{\pi}{2b}
which we could have obtained via the inverse tangent function.
Note that the trick allowed the limits of the integral to work out with the semi circular contour and we recovered the original integrand. This is a standard trick but surprisingly I have read some complex analysis texts that do not cover it.
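A numerical check of the result $$\frac{\pi}{b} \mathrm{ln}(a+b)$$ for two parameter choices (my own sketch, assuming scipy):

```python
from math import log, pi
from scipy.integrate import quad

for a, b in [(1.0, 2.0), (3.0, 0.5)]:
    val, _ = quad(lambda x: log(x * x + a * a) / (x * x + b * b), 0, float('inf'))
    print(val, pi / b * log(a + b))   # the two columns agree
```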
## Integrate $$\int_{-1}^{1}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x$$
This integral appeared in Inside Interesting Integrals by Paul Nahin in the problem set of chapter 3. Using Wolfram Alpha, we get
\int\limits_{-1}^{1}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x = \pi
\label{eq:1}
\tag{1}
Nahin suggests the following trig substitution, $$x = \cos(2y)$$.
While the form of the integrand certainly does suggest that some type of trig substitution will work, let us do it with another method. If we write the integral as
\int\limits_{-1}^{1} (1+x)^{\frac{1}{2}}(1-x)^{-\frac{1}{2}} \mathrm{d} x
this looks like a beta function. From Higher Transcendental Functions (Bateman Manuscript), Volume 1, Section 1.5.1, equation 10, we see
\mathrm{B}(x,y) = 2^{1-x-y} \int\limits_{0}^{1} (1+t)^{x-1}(1-t)^{y-1} + (1+t)^{y-1}(1-t)^{x-1} \mathrm{d} t
\label{eq:2}
\tag{2}
Let us begin with the original integral and the right half of the interval of integration
\int\limits_{0}^{1} (1+x)^{\frac{1}{2}}(1-x)^{-\frac{1}{2}} \mathrm{d} x = \int\limits_{0}^{1}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x
\label{eq:3}
\tag{3}
Now, let us consider
\int\limits_{0}^{1} (1+x)^{-\frac{1}{2}}(1-x)^{\frac{1}{2}} \mathrm{d} x = \int\limits_{0}^{1}\sqrt{\frac{1-x}{1+x}} \mathrm{d} x
\label{eq:4}
\tag{4}
We let $$x=-y$$ to obtain
-\int\limits_{0}^{-1} \sqrt{\frac{1+y}{1-y}} \mathrm{d} y,
\label{eq:5}
\tag{5}
which we can rewrite as
\int\limits_{-1}^{0}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x
\label{eq:6}
\tag{6}
Adding the right hand side of equation \eqref{eq:3} and equation \eqref{eq:6} yields our original integral
\int\limits_{-1}^{0}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x + \int\limits_{0}^{1}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x = \int\limits_{-1}^{1}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x
\label{eq:7}
\tag{7}
Likewise, adding the left hand sides of equations \eqref{eq:4} and \eqref{eq:3} yields
\int\limits_{-1}^{0}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x + \int\limits_{0}^{1}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x =
\int\limits_{0}^{1} (1+x)^{-\frac{1}{2}}(1-x)^{\frac{1}{2}} \mathrm{d} x + \int\limits_{0}^{1} (1+x)^{\frac{1}{2}}(1-x)^{-\frac{1}{2}} \mathrm{d} x
If we combine this result into one integral and rearrange the integrand, we see that it is the same as the integral in \eqref{eq:2} with
x=\frac{3}{2} \,\, \mathrm{and} \,\, y=\frac{1}{2}
Putting it all together, we have
\int\limits_{-1}^{1}\sqrt{\frac{1+x}{1-x}} \mathrm{d} x = 2\mathrm{B}\left(\frac{3}{2},\frac{1}{2}\right) = \pi
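A quick numerical check (my own sketch, assuming scipy) that the integral indeed equals $$\pi$$:

```python
from math import pi, sqrt
from scipy.integrate import quad

# The endpoint singularity at x = 1 is of type 1/sqrt(1-x) and is integrable.
val, _ = quad(lambda x: sqrt((1 + x) / (1 - x)), -1, 1)
print(val, pi)   # approximately 3.14159... for both
```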
## Integrate $$\int^{\infty}_{0}\frac{e^{-px^{2}} – e^{-qx^{2}}}{x^{2}} \mathrm{d}x$$
This integral appeared in Paul Nahin’s very interesting book Inside Interesting Integrals. Nahin begins with a completely different integral and derives this one. Let us evaluate the integral directly and then redo it with Nahin’s method.
We begin by breaking up the integral and looking at each piece. So we have
\mathrm I = \int\limits^{\infty}_{0} x^{-2}\mathrm{e}^{-px^{2}} \mathrm{d}x.
This looks very similar to a definition of the gamma function:
\Gamma(z) = \int\limits^{\infty}_{0} x^{z-1}\mathrm{e}^{-x} \mathrm{d}x.
We make the substitution $$y = px^{2}$$
\mathrm I = \frac{\sqrt{p}}{2} \int\limits^{\infty}_{0} \mathrm{e}^{-y} y^{-\frac{3}{2}} \mathrm{d}y.
Invoking the gamma function yields
\mathrm I = \frac{\sqrt{p}}{2} \Gamma\Big(-\frac{1}{2}\Big) = -\sqrt{p}\sqrt{\pi}.
Treating the other part of the original integral involving $$q$$ yields our final result
\int\limits^{\infty}_{0}\frac{\mathrm{e}^{-px^{2}} – \mathrm{e}^{-qx^{2}}}{x^{2}} \mathrm{d}x = \sqrt{\pi}(\sqrt{q}-\sqrt{p}).
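A numerical check of this result for sample values of $$p$$ and $$q$$ (my own sketch, assuming scipy):

```python
from math import exp, sqrt, pi
from scipy.integrate import quad

p, q = 2.0, 5.0

def integrand(x):
    # the singularity at 0 is removable; the limiting value there is q - p
    return (exp(-p * x * x) - exp(-q * x * x)) / (x * x) if x > 0 else q - p

val, _ = quad(integrand, 0, float('inf'))
print(val, sqrt(pi) * (sqrt(q) - sqrt(p)))   # the two numbers agree
```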
As I mentioned earlier, Nahin derived this result beginning with an entirely different integral. A casual glance at the original integral should make us suspect that this is the case as it is clear that both parts of the integrand are identical. In other words, why solve the original integral as opposed to the integral that I used at the beginning of the analysis. Such is the case with many of the results in Inside Interesting Integrals. This is the result of working backward, yielding an evaluated integral via some methods as opposed to starting from an integral that one wants to evaluate. I am not criticizing this approach, as it has resulted in an enormous number of useful integral evaluations. Indeed, it can create an unlimited number of evaluated integrals. Also, such “accidental” integrals can result from contour integration even when directly attacking a given integral. Consider that it often happens that upon the last step in evaluating an integral via contour integration, one equates real and imaginary parts in which one is the solution to the original integral while the other is a bonus.
Let us now see how Nahin achieved his result. He begins with
\int\limits_{0}^{\infty} \mathrm{e}^{-x^{2}} \mathrm{d}x
for which Nahin derived the answer of $$\frac{1}{2} \sqrt{\pi}$$ earlier in the book. What is interesting here is that this integral can be done easily with the gamma function by letting $$x^{2} = y$$. This quickly results in
\int\limits_{0}^{\infty} \mathrm{e}^{-x^{2}} \mathrm{d}x = \frac{1}{2} \int\limits_{0}^{\infty} \mathrm{e}^{-y} y^{-1/2} \mathrm{d} y =
\frac{1}{2} \Gamma\Big(\frac{1}{2}\Big) = \frac{1}{2} \sqrt{\pi}.
Anyone who has seen this would immediately recognize that the integral we are after can be evaluated via the gamma function, as I did above. Nevertheless, let us continue with Nahin’s analysis.
Nahin makes a change of variable, $$x = t\sqrt{a}$$ to introduce the parameter $$a$$, and thus obtains
\int\limits_{0}^{\infty} \mathrm{e}^{-at^{2}} \mathrm{d}t = \frac{1}{2}\frac{\sqrt{\pi}}{\sqrt{a}}
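Both the $$a = 1$$ case and the parametrised form can be confirmed symbolically; a small sketch, assuming SymPy is available:

```python
import sympy as sp

a, t = sp.symbols('a t', positive=True)

# The a = 1 Gaussian, and the same value via Gamma(1/2).
print(sp.integrate(sp.exp(-t**2), (t, 0, sp.oo)))        # sqrt(pi)/2
print(sp.Rational(1, 2) * sp.gamma(sp.Rational(1, 2)))   # sqrt(pi)/2

# The parametrised integral obtained from the substitution x = t*sqrt(a).
print(sp.integrate(sp.exp(-a * t**2), (t, 0, sp.oo)))    # sqrt(pi)/(2*sqrt(a))
```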
Then he invokes a useful and interesting trick. He integrates the equation with respect to $$a$$, between two arbitrary (positive) end points $$p$$ and $$q$$, and changes the order of integration. Changing the order of integration requires some care, as it has to be justified. Here, for $$a$$ in the interval of integration the integrand is dominated by a fixed integrable Gaussian, so the inner integral converges uniformly in $$a$$; moreover the integrand is positive, so Tonelli's theorem gives the interchange directly. This is usually the case for “well behaved”, “non-crazy” integrals. So, Nahin has for the left hand side
\int\limits_{p}^{q}\left\{\int\limits_{0}^{\infty} \mathrm{e}^{-at^{2}} \mathrm{d}t\right\}\mathrm{d}a = \int\limits_{0}^{\infty}\left\{\int\limits_{p}^{q}\mathrm{e}^{-at^{2}} \mathrm{d}a\right\} \mathrm{d}t = \int\limits_{0}^{\infty}\frac{\mathrm{e}^{-pt^{2}} - \mathrm{e}^{-qt^{2}}}{t^{2}} \mathrm{d}t.
The right hand side yields
\int\limits_{p}^{q}\frac{1}{2}\frac{\sqrt{\pi}}{\sqrt{a}} \mathrm{d}a = \sqrt{\pi}(\sqrt{q}-\sqrt{p}).
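The interchange itself can be spot-checked numerically by evaluating the double integral in both orders; a sketch with SciPy, again using sample values $$p = 2$$ and $$q = 5$$:

```python
import numpy as np
from scipy.integrate import quad

p, q = 2.0, 5.0  # arbitrary positive sample values

# Inner integral over t first, then over a (the left hand side before the swap).
t_then_a, _ = quad(lambda a: quad(lambda t: np.exp(-a * t**2), 0, np.inf)[0], p, q)

# Inner integral over a first; it collapses to (e^{-pt^2} - e^{-qt^2}) / t^2.
def collapsed(t):
    return q - p if t == 0.0 else (np.exp(-p * t**2) - np.exp(-q * t**2)) / t**2

a_then_t, _ = quad(collapsed, 0, np.inf)

print(t_then_a, a_then_t)                           # both ~1.4567
print(np.sqrt(np.pi) * (np.sqrt(q) - np.sqrt(p)))   # ~1.4567
```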
And thus we have our result. | 2019-03-21T07:36:27 | {
"domain": "appliedclassicalanalysis.net",
"url": "http://appliedclassicalanalysis.net/tag/definite-integral/",
"openwebmath_score": 0.9597986936569214,
"openwebmath_perplexity": 1142.9998088100442,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9911526438717139,
"lm_q2_score": 0.8519528019683106,
"lm_q1q2_score": 0.8444152721248057
} |
https://math.stackexchange.com/questions/4274025/how-many-ways-can-we-draw-three-balls-such-that-at-least-two-are-red | # How many ways can we draw three balls such that at least two are red?
We have $$7$$ balls, three red, two white and two blue. How many ways can we select three of them such that at least two are red? So, my answer was: If the balls are identical, then there are $$3$$ ways: $$RRR$$, $$RRB$$ and $$RRW$$. If the balls are distinguishable, then there are $${7}\choose{3}$$=$$35$$ total possibilities, but there are $${4}\choose{3}$$=$$4$$ ways that have no red balls, and $$2 \cdot{4}\choose{2}$$=$$12$$ with only one red ball, so there are 19 with 2 red balls or more. But the answer given is 13. Could anyone explain this to me?
• There are $3\cdot\binom{4}{2} = 18$, not $12$, ways with exactly one red ball. Oct 11, 2021 at 21:02
You are correct that there are $$\binom{7}{3}$$ ways to select three of the seven balls and that there are $$\binom{4}{3}$$ ways to select none of the red balls. However, there are $$\binom{3}{1}\binom{4}{2}$$ ways to select exactly one of the three red balls and two of the remaining four balls, which yields $$\binom{7}{3} - \binom{4}{3} - \binom{3}{1}\binom{4}{2} = 13$$ ways to select at least two red balls.
Alternatively, we can select exactly two red balls in $$\binom{3}{2}\binom{4}{1}$$ ways since we must select two of the three red balls and one of the other four balls, and we can select all three red balls in $$\binom{3}{3}$$ ways. Since these cases are mutually exclusive, we obtain $$\binom{3}{2}\binom{4}{1} + \binom{3}{3} = 13$$ selections with at least two red balls.
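A brute-force enumeration over labelled balls confirms each of the counts used above; a small sketch:

```python
from itertools import combinations

balls = ['r1', 'r2', 'r3', 'w1', 'w2', 'b1', 'b2']   # 3 red, 2 white, 2 blue
draws = list(combinations(balls, 3))

def reds(draw):
    return sum(b.startswith('r') for b in draw)

print(len(draws))                         # 35 = C(7,3)
print(sum(reds(d) == 0 for d in draws))   # 4  = C(4,3), no red balls
print(sum(reds(d) == 1 for d in draws))   # 18 = C(3,1)*C(4,2), exactly one red
print(sum(reds(d) >= 2 for d in draws))   # 13, at least two red
```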
• Why is this reasoning wrong? There are $\binom{3}{2}$ ways to choose the two red balls. Then there are $\binom{5}{1}$ ways to choose 1 of the remaining balls. Together they make $\binom{3}{2} \times \binom{5}{1} =15$. Mar 25 at 14:01
• @user2277550 Suppose we number the three red balls $r_1, r_2, r_3$. Your method counts the case in which all three red balls are selected three times, once for each of the three ways you could designate one of the three red balls as the remaining ball: $(\{r_1, r_2\}, \{r_3\})$, $(\{r_1, r_3\}, \{r_2\})$, $(\{r_2, r_3\}, \{r_1\})$. That accounts for the two extra selections in your answer. Mar 25 at 16:09 | 2022-06-26T19:39:22 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/4274025/how-many-ways-can-we-draw-three-balls-such-that-at-least-two-are-red",
"openwebmath_score": 0.8159582018852234,
"openwebmath_perplexity": 78.66320374986364,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9911526459624254,
"lm_q2_score": 0.8519527963298947,
"lm_q1q2_score": 0.8444152683174624
} |
http://mymathforum.com/calculus/345620-convex-print.html | My Math Forum - Calculus - Is this convex?
ProofOfALifetime January 13th, 2019 09:52 AM
Is this convex?
I'm trying to use Jensen's to prove an inequality, but my solution depends on $$\frac{1}{x} \ln(1+x)$$ being convex when $x>0$. I'm not completely sure if this is true. The second derivative is inconclusive (at least it seems like that).
romsek January 13th, 2019 10:44 AM
$\lim \limits_{x\to 0} \dfrac{d^2}{dx^2}\left(\dfrac 1 x \ln(1+x)\right) = \dfrac 2 3 > 0$
It's convex at 0.
ProofOfALifetime January 13th, 2019 11:17 AM
Quote:
Originally Posted by romsek (Post 604413) $\lim \limits_{x\to 0} \dfrac{d^2}{dx^2}\left(\dfrac 1 x \ln(1+x)\right) = \dfrac 2 3 > 0$ It's convex at 0.
That doesn’t make sense to me. Convex at 0? What I mean is, isn't it supposed to be convex on an interval?
Sorry, but I was hoping that it would be convex when $x>0$, so on $(0,\infty)$. By the way, this is my first proof using Jensen's so I'm still learning.
romsek January 13th, 2019 11:49 AM
Quote:
Originally Posted by ProofOfALifetime (Post 604415) That doesn’t make sense to me. Convex at 0? What I mean is isn't it supposed to be convex on an interval? Sorry, but I was hoping that it would be convex when $x>0$, so on $(0,\infty)$. By the way, this is my first proof using Jensen's so I'm still learning.
Ok, my bad. This refers to the function being convex in an infinitesimal interval $(0,\delta)$, but at any rate the 2nd derivative of your function is positive on the interval $(0,\infty)$, so your function is convex for all positive reals.
ProofOfALifetime January 13th, 2019 12:09 PM
Quote:
Originally Posted by romsek (Post 604416) Ok, my bad, This refers to the function being convex in an infinitesimal interval $(0,\delta)$, but at any rate the 2nd derivative of your function is positive in the interval $(0,\infty)$ so your function is convex for all non-negative reals.
Thank you thank you thank you! This is all I needed to complete the proof I was doing! I appreciate it! :)
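For anyone who wants to see the second derivative explicitly, here is a small sketch with SymPy and NumPy; the sample grid deliberately stays away from $x$ extremely close to $0$, where naive floating-point evaluation of the expression suffers heavy cancellation:

```python
import sympy as sp
import numpy as np

x = sp.symbols('x', positive=True)
f = sp.log(1 + x) / x

f2 = sp.simplify(sp.diff(f, x, 2))      # closed form of the second derivative
print(f2)
print(sp.limit(f2, x, 0))               # 2/3, the limit quoted above

f2_num = sp.lambdify(x, f2, 'numpy')
xs = np.logspace(-3, 6, 2000)           # sample grid inside (0, infinity)
print(bool(np.all(f2_num(xs) > 0)))     # True on this grid, consistent with convexity
```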
| 2019-04-23T04:16:25 | {
"domain": "mymathforum.com",
"url": "http://mymathforum.com/calculus/345620-convex-print.html",
"openwebmath_score": 0.8980239033699036,
"openwebmath_perplexity": 525.1011320601158,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9759464520028357,
"lm_q2_score": 0.865224091265267,
"lm_q1q2_score": 0.8444123820577151
} |
https://math.stackexchange.com/questions/3188609/is-every-element-in-a-power-set-a-sub-set | # Is every element in a power set a sub set?
I have understood so far that an element of a set cannot also be a subset of that set. For example, if A = {1,2,{3}}, then {3} is not a subset of A. But in my textbook it says that every element of a power set is a subset. So isn't there a contradiction?
• Every element of the power-set of a set $A$ is a subset of the set $A$. This is so simply because the power set is defined as the set of all subsets. – Mauro ALLEGRANZA Apr 15 at 12:10
• Okay that makes sense – user662650 Apr 15 at 12:10
• And YES : $\{ 3 \}$ is not a subset of $A$. It is an element of $A$. – Mauro ALLEGRANZA Apr 15 at 12:11
• It is worth pointing out that although $\{3\}$ is not a subset of $A$, $\{\{3\}\}$ is. Note the subtle difference between the two. $\{3\}$ is a set with one element, that element being the number $3$. On the other hand $\{\{3\}\}$ is also a set with one element, but in this case that element is the set which contains $3$. – JMoravitz Apr 15 at 12:17
• It's also just not true, either, that an element $x\in A$ can't also be a subset of $A$. Consider the usual implementation of the natural number 2, $\{\varnothing,\{\varnothing\}\}$; both members of this set are also subsets of this set. – Malice Vidrine Apr 16 at 12:22
Every element of a power set is a subset of the set you formed the power set for, not of the power set itself. $$pow(A) := \{X: X \subseteq A\}$$, so by definition, every element of $$pow(A)$$ is a subset of $$A$$; it is not thereby a subset of $$pow(A)$$, and likewise an element of $$A$$ is not automatically a subset of $$A$$ (for particular sets this can happen, as one of the comments points out, but nothing in the definitions forces it).
As you correctly noted, $$\{3\}$$ - a set with one element, namely the element $$3$$ - is not a subset, but an element, of $$A$$. But $$\{\{3\}\}$$ - a set with one element, namely the element $$\{3\}$$ - is a subset of $$A$$ and therefore an element of the power set of $$A$$.
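This is easy to play with in Python if we encode the inner set $$\{3\}$$ as a frozenset so it can be a member of another set. A small sketch; the power_set helper is just an illustrative name, not a standard library function:

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, each returned as a frozenset."""
    items = list(s)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))}

A = {1, 2, frozenset({3})}                    # A = {1, 2, {3}}
three = frozenset({3})                        # the set {3}

print(three in A)                             # True:  {3} is an element of A
print(three <= A)                             # False: {3} is not a subset of A (3 is not in A)
print(frozenset({three}) <= A)                # True:  {{3}} is a subset of A ...
print(frozenset({three}) in power_set(A))     # True:  ... and hence an element of pow(A)
print(len(power_set(A)))                      # 8 = 2**3 subsets
```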
Maybe you should avoid speaking of "an element" or of "a subset" in an absolute sense, as if there were categories of things, some being once and for all and by nature elements, and others once and for all and by nature subsets.
In fact the terms "element" and "subset" are relational: element OF a given set, subset OF a given set. One cannot be simply "a brother", one has to be a brother OF someone... same thing for sets.
For example, take the object {4}. This object is a set. Relative to the set
{{4}, {5,6}}
our object {4} is an element. So we can say that:
the set {4} belongs to the set {{4}, {5,6}}. Notice that the set {{4}, {5,6}} has 2 elements, and each of these two elements is a set.
Now, relative to the set {4,5,6} (which has 3 elements), our object {4} is not an element. There is absolutely no contradiction here, if you keep in mind that "element" is a relative term: "being an element OF a given set" does not contradict "not being an element of another set" (in the same manner as "Peter is a brother of John" does not contradict "Peter is not a brother of Alice").
So what is the object {4} relative to the set {4,5,6}? The proper relation here is the inclusion relation: {4} is included in the set {4,5,6}; it is a subset of {4,5,6} (some also say "a part of" {4,5,6}).
How to explain that {4} is included in the set {4,5,6}? Remember that "a set X is included in a set Y just in case all the elements of X are also elements of Y". Now, can you see an element of the one-element set {4} that is not also an element of the three-element set {4,5,6}? If you answer "no", you know that the set {4} "passes the test" of inclusion, and is therefore included in {4,5,6}.
Since the set {4} is a subset of {4,5,6} (is included in that set), the set {4} is an element of the power set of the set {4,5,6}. That comes from the definition of a power set. By definition, the power set of {4,5,6} is the collection of the subsets of the set {4,5,6}.
Given that {4} is a subset of {4,5,6}, it follows that the set {4} is an element of the power set of {4,5,6}. And that is logical: every subset of a set S is an element of the collection of the subsets of S, and the power set of S is just this collection.
Remark. DEFINITION: the power set of a set S is the set that has as elements all the subsets of S.
Now let us consider some general "laws" regarding these relations (element of, subset of).
(1) No set is an element of itself. (But nothing prevents a set from being an element of another set.)
(2) Every set is a subset of itself; every set is included in itself. (Take any set, say {a, b, c}. Can you see an element of the set {a, b, c} that is not an element of the set {a, b, c}? Of course not! So the set {a, b, c} passes the test of inclusion, relative to itself: the set {a, b, c} is a subset of the set {a, b, c}, that is to say, of itself. I said subset, not member!)
(3) Every set is an element of its power set .
Explanation :
(a) If a set X is a subset of a set Y, then X is an element of the power set of Y.
(b) But the set Y is itself a subset of Y (itself), because of law (2) above.
(c) Therefore, the set Y is an element of its own power set.
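The same game can be played in Python with the {4} example and with law (3); again only a sketch, with frozensets standing in for sets-as-elements and an illustrative power_set helper:

```python
from itertools import chain, combinations

def power_set(s):
    items = list(s)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))}

four = frozenset({4})                                # the object {4}

print(four in {frozenset({4}), frozenset({5, 6})})   # True:  {4} is an element of {{4}, {5,6}}
print(four <= {4, 5, 6})                             # True:  {4} is a subset of {4,5,6}
print(four in {4, 5, 6})                             # False: but {4} is not an element of {4,5,6}

S = frozenset({4, 5, 6})
print(S <= S)                                        # True: law (2), S is a subset of itself
print(S in power_set(S))                             # True: law (3), S is an element of pow(S)
```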
A set $$B$$ is a subset of $$A$$ if and only if every element of $$B$$ is an element of $$A$$, i.e. $$x \in B \Rightarrow x \in A$$ for all $$x$$. For your example take $$B = \{ 3\}$$; then $$B$$ is a subset of $$A$$ if and only if $$3 \in A$$, but this is not the case, since $$A$$ only contains $$1,2,\{ 3\}$$, none of which is $$3$$. Therefore $$B$$ is not a subset of $$A$$.
The power set of a set $$A$$ is by definition the collection of all subsets of $$A$$, so any subset of $$A$$ is by definition in $$A$$'s power set.
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3188609/is-every-element-in-a-power-set-a-sub-set",
"openwebmath_score": 0.7427389025688171,
"openwebmath_perplexity": 142.27349727628976,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9759464499040094,
"lm_q2_score": 0.8652240877899775,
"lm_q1q2_score": 0.8444123768500634
} |