url (string) | text (string) | date (timestamp[s]) | meta (dict)
---|---|---|---|
https://baghdadbythebaysf.com/oleoresin-vegetarian-iee/5580c0-rolle%27s-theorem-pdf | # rolle's theorem pdf
Rolle's theorem is one of the foundational theorems in differential calculus. Rolle's Theorem: let f be continuous on the closed interval [a, b] and differentiable on the open interval (a, b); if f(a) = f(b), then there is at least one number c in (a, b) such that f'(c) = 0. This theorem states the geometrically obvious fact that if the graph of a differentiable function intersects the x-axis at two places, a and b, there must be at least one place between them where the tangent line is horizontal. This packet approaches Rolle's Theorem graphically and with an accessible challenge to the reader, building to mathematical formality and using concrete examples.
The Mean Value Theorem (MVT): if f is continuous on the closed interval [a, b] and differentiable on the open interval (a, b), then there exists a number c in (a, b) such that f'(c) = (f(b) - f(a))/(b - a), i.e. the instantaneous rate of change at c equals the average rate of change over [a, b]. The "mean" in Mean Value Theorem refers to this average rate of change of the function. The special case of the MVT when f(a) = f(b) is called Rolle's Theorem; conversely, Rolle's Theorem is used to prove the Mean Value Theorem by applying it to an auxiliary function g, so Rolle's Theorem is a special case of, and in fact equivalent to, the MVT, which in turn is an essential ingredient in the proof of the fundamental theorem of calculus. The Mean Value Theorem is an important result in calculus and has some important applications relating the behaviour of f and f': for example, if we have a property of f' and we want to see the effect of this property on f, we usually try to apply the Mean Value Theorem. A similar approach can be used to prove Cauchy's Mean Value Theorem and Taylor's theorem; when n = 0, Taylor's theorem reduces to the Mean Value Theorem, which is itself a consequence of Rolle's theorem.
The proof of Rolle's Theorem is a matter of examining cases and applying the Theorem on Local Extrema: because f is continuous on the compact (closed and bounded) interval [a, b], it attains its maximum and minimum values; in case f(a) = f(b) is both the maximum and the minimum, there is nothing more to say, for then f is a constant function. In modern mathematics, the proof is based on two other theorems, the Weierstrass extreme value theorem and Fermat's theorem. Rolle's Theorem also extends to higher order derivatives (Generalized Rolle's Theorem): let f be continuous on [a, b] and n times differentiable on (a, b); if f is zero at the distinct points x_0, x_1, ..., x_n in [a, b], then there exists a number c in (a, b) such that f^(n)(c) = 0. In particular, between any two successive zeroes of f(x) lies a zero of f'(x).
Rolle's Theorem was first proven in 1691, just seven years after the first paper involving Calculus was published. Michel Rolle was a French mathematician who was alive when Calculus was first invented by Newton and Leibniz. At first, Rolle was critical of calculus, but later changed his mind and proved this very important theorem.
Examples. A plane begins its takeoff at 2:00 PM on a 2500 mile flight; after 5.5 hours, the plane arrives at its destination. Explain why there are at least two times during the flight when the speed of ... Another example: if an object's position f is continuous on [a, b] with f(a) = f(b) = 0, then f assumes absolute maximum and minimum values, and by Rolle's theorem there must be a time c in between when v(c) = f'(c) = 0, that is, the object comes to rest. Using Rolle's Theorem with the Intermediate Value Theorem: consider the equation x^3 + 3x + 1 = 0; we can use the Intermediate Value Theorem to show that it has at least one real solution. Practice exercises: determine whether Rolle's Theorem can be applied to the given function and interval; if it can, find all values of c that satisfy the conclusion of the theorem (be sure to show your set up in finding the value(s)); if it cannot, explain why not. | 2021-04-17T20:10:26 | {
"domain": "baghdadbythebaysf.com",
"url": "https://baghdadbythebaysf.com/oleoresin-vegetarian-iee/5580c0-rolle%27s-theorem-pdf",
"openwebmath_score": 0.8488137125968933,
"openwebmath_perplexity": 744.5859445952211,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9559813526452771,
"lm_q2_score": 0.9005297794439688,
"lm_q1q2_score": 0.8608896766501983
} |
https://math.stackexchange.com/questions/3649182/merge-sort-maximum-comparisons/3649388#3649388 | # Merge sort - maximum comparisons
I recently came across a problem where I was to find the maximum number of comparison operations when applying the merge sort algorithm to an 8-character-long string. I tried implementing the r2^r model; however, the number of comparison operations used in a merge sort varies greatly with different input lists.
My question asked for the greatest number of comparison operations for one list. I applied the r2^r explicit definition which gave me 24. But the answer was 17. I can't find much information online or in the book about elementary algorithms and most solutions do not go into such details.
Does anyone know why this might be? I have seen some solutions where:
let 2^r = length of list, r2^r = greatest number of comparison operations.
2^r = 8
r = log(8)/log(2)
r = 3
Therefore, r2^r = 24
But that is not corroborated in my course.
any ideas?
• What distinguishes this "cardinality" of comparison operations from the computational complexity of the merge sort, which in computer science is usually measured by the number of comparison operations performed? How is any computation complexity problem not a "discrete maths question on cardinality" according to your definition? Apr 29 '20 at 2:25
• Perhaps it would help if you showed, step by step, how you arrived at the answer $24$ so people can see how your methods reflect some kind of discrete maths cardinality approach instead of a computer science complexity approach. It would be better if you write the math in math notation; see math.stackexchange.com/help/notation Apr 29 '20 at 2:27
• I distinguished it from a computer science problem as my understanding is that their implementations are different. In my experience, I use merge sort in Java or C++ to combine two lists and sort them in one function. You are right, the complexity of which would determine the worst-case/ greatest number of comparisons. However, the question specified one list of 8 elements which I am not used to. Apr 29 '20 at 3:31
• Thanks, David I just added my method I used to find 24. I also removed the disclaimer. Apr 29 '20 at 3:35
• Complexity theory in computer science involves no Java or C++. It's an abstract topic. But computer science also is a topic on this site, as you can see by searching the [computer-science] tag. Apr 29 '20 at 3:41
Let $$a_1...a_8$$ be the input and for simplicity let $$f_{i,j}=\begin{cases} 1 & \text{if } a_i\leq a_j \\ 0 & \text{if } a_i> a_j \end{cases}$$, i.e. the $$f_{i,j}$$ are the comparison operations.
Let us go through the steps of Mergesort; there are 3 levels or phases corresponding to top-down recursive calls:
1. Level 1 Compute $$M(a_1,a_2) , ... ,M(a_7,a_8)$$
2. Level 2 Merge $$(a_1,a_2)$$ with $$(a_3,a_4)$$ and merge $$(a_5,a_6)$$ with $$(a_7,a_8)$$
3. Level 3 Merge $$(a_1,a_2,a_3,a_4)$$ with $$(a_5,a_6,a_7,a_8)$$
Let us count the # of $$f_{i,j}$$ at each of the levels
1. Level 1 has four comparisons $$f_{1,2},...,f_{7,8}$$
2. Level 2 has at most 6 comparisons
• Merge $$(a_1,a_2)$$ with $$(a_3,a_4)$$ takes at most 3 comparisons
• Merge $$(a_5,a_6)$$ with $$(a_7,a_8)$$ takes at most 3 comparisons
3. Level 3 has at most 7 comparisons $$f_{1,5},...,f_{4,8}$$
• After performing $$f_{i,j}$$ mergesort will then perform $$f_{i,j+1}$$ or $$f_{i+1,j}$$ until it hits $$f_{4,8}$$; the worst computation path could take 7 comparisons
Let us make an educated guess at the worst-case scenario, say $$(7,4,3,6,5,2,1,8)$$
1. Level 1 will spit out $$(4,7),(3,6),(2,5)$$ and $$(1,8)$$ after 4 comparisons
2. Level 2 will spit out $$(3,4,6,7)$$ and $$(1,2,5,8)$$ after 6 comparisons
• $$(3,4,6,7)$$ will cause the comparisons $$f_{1,3},f_{1,4},f_{2,4}$$ to be computed
• $$(1,2,5,8)$$ will cause the comparisons $$f_{5,7},f_{5,8},f_{6,8}$$ to be computed
3. Level 3 will spit out $$(1,2,3,4,5,6,7,8)$$ after 7 comparisons
• The following comparisons will be computed: $$f_{1,5},f_{1,6},f_{1,7},f_{2,7},f_{3,7},f_{3,8},f_{4,8}$$
For a grand total of 17
BTW the arguments and construction given can easily be generalized ... do you see the general pattern ... Good Luck with your mathematical voyages! Bon Voyage!
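A brute-force check of this count (a Python sketch, not part of the original answer), using a standard top-down merge sort that counts one comparison per key-to-key test:

```python
from itertools import permutations

def merge_sort_count(xs):
    """Top-down merge sort; returns (sorted list, number of key comparisons)."""
    if len(xs) <= 1:
        return list(xs), 0
    mid = len(xs) // 2
    left, cl = merge_sort_count(xs[:mid])
    right, cr = merge_sort_count(xs[mid:])
    merged, i, j, comps = [], 0, 0, 0
    while i < len(left) and j < len(right):
        comps += 1                      # one key comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, cl + cr + comps

print(merge_sort_count((7, 4, 3, 6, 5, 2, 1, 8))[1])                # 17 for the example above
print(max(merge_sort_count(p)[1] for p in permutations(range(8))))  # 17 over all 8! orderings
```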
• Okay yep, that's a great explanation. I see how they arrived at 17 now. I was quite confused. We have just covered proofs for strong induction, so I think I can induce an explicit formula from your solution that can solve for the greatest number of comparison operations. However, without skipping a beat we are now combining: Probability, propositional logic, matrices and algorithms - so RIP me. But knowing I can count on my math stack exchange community to help me out here and there gives me the confidence to continue strong on my mathematical voyage. Thank you Pedrpan !! Apr 29 '20 at 15:07
• No problem, I am glad that I could be of use to you! Take care! Apr 29 '20 at 20:05 | 2021-10-18T21:21:49 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3649182/merge-sort-maximum-comparisons/3649388#3649388",
"openwebmath_score": 0.29238614439964294,
"openwebmath_perplexity": 605.0781777536091,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9621075766298658,
"lm_q2_score": 0.8947894625955064,
"lm_q1q2_score": 0.8608837214517026
} |
https://math.stackexchange.com/questions/1639275/what-is-the-number-of-ordered-triplets-x-y-z-such-that-the-lcm-of-x-y-a | # What is the number of ordered triplets $(x, y, z)$ such that the LCM of $x, y$ and $z$ is …
What is the number of ordered triplets $(x, y, z)$ such that the LCM of $x, y$ and $z$ is $2^33^3$ where $x, y,z\in \Bbb N$?
What I tried :
At least one of $x, y$ and $z$ should have factor $2^3$ and at least one should have factor $3^3$. I then tried to figure out the possible combinations but couldn't get the correct answer.
We use Inclusion/Exclusion.
First we find the number of (positive) triples in which each entry divides $2^33^3$. At each of $x$, $y$, $z$ we have $(4)(4)$ choices, for a total of $16^3$.
We want to subtract the number of such triples in which each entry divides $2^23^3$. There are $12^3$ such triples. There are also $12^3$ such triples in which each element divides $2^33^2$.
But we have subtracted once too many times the $9^3$ triples in which each entry divides $2^23^2$.
So the total is $16^3-2\cdot 12^3+9^3$.
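A brute-force check (a Python sketch, not part of the original answer) confirms that $16^3-2\cdot 12^3+9^3 = 1369$ matches a direct count over the divisors of $2^33^3$:

```python
from itertools import product
from math import lcm  # Python 3.9+

target = 2**3 * 3**3
divisors = [2**a * 3**b for a in range(4) for b in range(4)]  # x, y, z must divide 2^3*3^3

brute = sum(1 for x, y, z in product(divisors, repeat=3) if lcm(x, y, z) == target)
print(brute, 16**3 - 2 * 12**3 + 9**3)  # both print 1369
```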
Consider all candidate triples of the form: $$(2^{a_1}3^{b_1}, 2^{a_2}3^{b_2}, 2^{a_3}3^{b_3})$$ where for each $i \in \{1, 2, 3\}$, we have $a_i, b_i \in \{0, 1, 2, 3\}$.
We define such a candidate triple to be valid if for some $j, k \in \{1, 2, 3\}$, we have $a_j = 3$ and $b_k = 3$. Otherwise, if ($a_j \in \{0, 1, 2\}$ for all $j \in \{1, 2, 3\}$) or ($b_k \in \{0, 1, 2\}$ for all $k \in \{1, 2, 3\}$), then such a candidate triple is considered invalid.
Observe that: \begin{align*} \text{# of valid triples} &= \text{# of candidate triples} - \text{# of invalid triples} \\ &= 4^6 - (3^3 \cdot 4^3 + 4^3 \cdot 3^3 - 3^6) \end{align*} | 2019-09-22T02:07:27 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1639275/what-is-the-number-of-ordered-triplets-x-y-z-such-that-the-lcm-of-x-y-a",
"openwebmath_score": 0.9993498921394348,
"openwebmath_perplexity": 137.40455741842237,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9888419684230912,
"lm_q2_score": 0.8705972768020107,
"lm_q1q2_score": 0.8608831248966831
} |
https://www.physicsforums.com/threads/s1-probability-coin-toss.799854/ | # S1 Probability Coin Toss
1. Feb 25, 2015
### AntSC
Having trouble with certain binomial and geometric distribution questions, which is indicating that my understanding isn't completely there yet. Any help would be greatly appreciated.
1. The problem statement, all variables and given/known data
A bag contains two biased coins: coin A shows Heads with a probability of 0.6, and coin B shows Heads with a probability 0.25. A coin is chosen at random from the bag and tossed three times.
Find the probability that the three tosses of the coin show two Heads and one Tail in any order.
2. Relevant equations
3. The attempt at a solution
Probabilities:
$H_{A}=0.6$ and $T_{A}=0.4$
$H_{B}=0.25$ and $T_{B}=0.75$
Possibilities for 2 heads and one tail in any order:
$$3\left ( H \right )^{2}\left ( T \right )$$
Is this correct so far?
My question is how to incorporate the probability of picking coin A or coin B into the problem?
2. Feb 25, 2015
### Simon Bridge
What is the probability you picked coin A?
3. Feb 25, 2015
### AntSC
A half
4. Feb 25, 2015
### Simon Bridge
So if you picked a coin at random and tossed just once, what is the probability the result is a head?
5. Feb 25, 2015
### AntSC
Ah i see it now.
$$P=\frac{1}{2}3\left ( H_{A} \right )^{2}\left ( T_{A} \right )+\frac{1}{2}3\left ( H_{B} \right )^{2}\left ( T_{B} \right )$$
Is this right?
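A quick numeric evaluation of that expression (a Python sketch, not part of the thread):

```python
from math import comb

def two_heads_one_tail(p):
    """P(exactly 2 heads in 3 tosses) for a coin with P(Heads) = p."""
    return comb(3, 2) * p**2 * (1 - p)

prob = 0.5 * two_heads_one_tail(0.6) + 0.5 * two_heads_one_tail(0.25)
print(prob)  # 0.5*0.432 + 0.5*0.140625 = 0.2863125
```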
6. Feb 25, 2015
### Simon Bridge
You can check it with a probability tree if you are unsure.
7. Feb 25, 2015
### AntSC
Sure. I want to start to dispense with the need for visual aids and make sure i can construct the problem without.
Especially when dealing with a larger set of choices, like 52 cards. A tree then won't be so helpful.
Thanks for the dialogue. I think i needed to get it out there to help work it through.
You might see a few more questions from me in future :)
8. Feb 25, 2015
### Ray Vickson
AntSC said:
Ah i see it now.
$$P=\frac{1}{2}3\left ( H_{A} \right )^{2}\left ( T_{A} \right )+\frac{1}{2}3\left ( H_{B} \right )^{2}\left ( T_{B} \right )$$
Is this right?
If $E$ is the event "2H, 1T (any order)", does your formula satisfy the basic relationship
$$P(E) = P(E|A) P(A) + P(E|B) P(B) ?$$
If it does, it is OK.
BTW: you might compare this with the scenario where you replace the coin after each toss and then ask about $E$. | 2018-02-26T01:50:34 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/s1-probability-coin-toss.799854/",
"openwebmath_score": 0.7004944682121277,
"openwebmath_perplexity": 1130.739288190387,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9755769049752757,
"lm_q2_score": 0.8824278664544911,
"lm_q1q2_score": 0.8608762468196084
} |
https://math.stackexchange.com/questions/3420882/are-all-conjugacy-classes-in-textgl-n-mathbb-r-path-connected/3420920 | # Are all conjugacy classes in $\text{GL}_n(\mathbb R)$ path-connected?
Suppose $$A$$ and $$B$$ are conjugate invertible real $$n \times n$$-matrices. Does there always exist a path from $$A$$ to $$B$$ inside their conjugacy class?
I thought I had an easy proof for odd $$n$$ which goes as follows, but it was incorrect as pointed out in this answer. To show where my misunderstanding arised, here is the wrong argument.
Suppose there exists a real matrix $$P$$ such that $$B = PAP^{-1}$$. By replacing $$P$$ with $$-P$$ if necessary, we can assume that $$\det P > 0$$ (this is what goes wrong in even dimensions, see this question). Then we have that $$P = e^Q$$ for some real matrix $$Q$$ (since the image of the exponential map is the path-component of the identity in $$\text{GL}_n(\mathbb R)$$). But now the path $$t \mapsto e^{tQ}Ae^{-tQ}$$ is a path connecting $$A$$ to $$PAP^{-1} = B$$.
This edit comes a bit late, but the way I see it is a bit different than the other answers so I'll write it anyway.
Again, I use the example from the question that you link to: $$A$$ is the rotation by $$\frac \pi 2$$ in the euclidean plane, and $$B$$ is the rotation by $$-\frac \pi 2$$.
Now any conjugate of $$A$$ by a matrix with positive determinant will correspond to a linear map $$\varphi$$ such that for any non-zero vector $$v$$, the vectors $$v$$ and $$\varphi (v)$$ in this order make a positive basis of the plane.
Conversely, a conjugate of $$A$$ by a matrix with negative determinant (which is the case of $$B$$) will correspond to a linear map $$\varphi$$ such that for any non-zero vector $$v$$, the vectors $$v$$ and $$\varphi (v)$$ in this order make a negative basis of the plane.
A path from $$A$$ to $$B$$ has to cross the set of matrices that have real eigenvalues, and such a matrix cannot be conjugate to $$A$$.
• I like this geometric approach, this is the solution I will not forget. – Levi Nov 4 '19 at 0:50
• @Levi Thanks :) – Arnaud Mortier Nov 4 '19 at 0:51
We can use the counterexample from your other question to answer this one. Let $$A=\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$. Matrices $$\begin{pmatrix} a & b \\ c & d \end{pmatrix}$$ must have trace $$a+d$$ equal to $$0$$ and determinant $$ad-bc=-a^2-bc$$ equal to $$1$$. This implies that such a matrix cannot have $$b=0$$, since $$-a^2\leq 0$$. Therefore the set of conjugates of $$A$$ is disjoint union of open sets defined by $$b>0$$ and $$b<0$$. Neither of those sets is empty, since one contains $$A$$ and the other contains $$-A$$. Thus the conjugacy class of $$A$$ is disconnected.
• This is much nicer than my argument! (But slightly less detailed, whence I'm leaving mine up.) – darij grinberg Nov 4 '19 at 0:30
• This is an awesome way of seeing this. This particular example also shows that I have been trying to prove the wrong thing when answering the other question. So I am particularly grateful! – Levi Nov 4 '19 at 0:33
• Sorry for unaccepting your answer, but I cannot resist going with coordinate-freeness :) – Levi Nov 4 '19 at 0:50
Other people have given counterexamples, so I would like to demonstrate that counterexamples are somewhat rare. Here is an attempt at a conceptual explanation for both why this is not true in general and also when it is true. First, we note that if we have a path $$P(t)$$ in $$GL_n(\mathbb R)$$ with $$P(0)=P_0$$ and $$P(1)=P_1$$, then $$P(t)AP(t)^{-1}$$ gives us a path inside the conjugacy class of $$A$$. Since $$GL_n(\mathbb R)$$ has two path components (given by sign of determinant), this shows that the conjugacy class of $$A$$ is the union of two path connected subsets (conjugating by things with positive or negative determinant), and it will be path connected if these two subsets intersect.
If $$PAP^{-1}=QAQ^{-1}$$ where $$\det(P)>0$$ and $$\det(Q)<0$$, we have $$A=(P^{-1}Q)A(P^{-1}Q)^{-1}$$, so $$A$$ commutes with a matrix with negative determinant. The converse is also true, so we have the following result.
Lemma: The conjugacy class of $$A$$ is path connected if and only if $$A$$ commutes with some matrix with negative determinant.
This gives several conditions that would ensure conjugacy classes are path connected.
• If $$n$$ is odd, since then $$\det(-I_n)=-1$$
• If $$\det(A)<0$$
• If $$\mathbb R^n$$ as a direct sum of two $$A$$-invariant subspaces where one summand is odd dimensional, (e.g, if the Jordan normal form of $$A$$ has a block of odd size with a real eigenvalue).
• If $$\mathbb R^{n}$$ is the direct sum of two $$A$$-invariant subspaces, and the restriction of $$A$$ to either subspace has negative determinant.
One can check that Jordan blocks only commute with matrices that are upper triangular and have only a single eigenvalue, and if $$n$$ is even and that eigenvalue is real, such a matrix could not have negative determinant. So these give counterexamples.
I suspect one could give a nice characterization of all counterexamples in terms of JNF. However, I have not worked out the details.
No.
Here is a counterexample: Let $$n=2$$, $$A=\left( \begin{array} [c]{cc} 0 & -1\\ 1 & 0 \end{array} \right)$$ and $$B=-A=\left( \begin{array} [c]{cc} 0 & 1\\ -1 & 0 \end{array} \right)$$. Then, $$A$$ and $$B$$ are conjugate, since $$B=C^{-1}AC$$ for the invertible matrix $$C=\left( \begin{array} [c]{cc} 1 & 0\\ 0 & -1 \end{array} \right)$$. Thus, there exists a conjugacy class $$\mathcal{C}$$ in $$\operatorname*{GL}\nolimits_{2}\left( \mathbb{R}\right)$$ that contains both $$A$$ and $$B$$. However, there exists no path from $$A$$ to $$B$$ in $$\mathcal{C}$$. Why not?
Probably there is a nice conceptual reason [EDIT: yes, and @Wojowu explains it in his answer], but you can just as well brute-force it: I claim that every matrix in $$\mathcal{C}$$ has a nonzero $$\left( 1,2\right)$$-th entry. To see this, just notice that any arbitrary element of $$\mathcal{C}$$ has the form \begin{align} \left( \begin{array} [c]{cc} a & b\\ c & d \end{array} \right) ^{-1}A\left( \begin{array} [c]{cc} a & b\\ c & d \end{array} \right) =\left( \begin{array} [c]{cc} -\dfrac{ab+cd}{ad-bc} & -\dfrac{b^{2}+d^{2}}{ad-bc}\\ \dfrac{a^{2}+c^{2}}{ad-bc} & \dfrac{ab+cd}{ad-bc} \end{array} \right) \end{align} for some $$\left( \begin{array} [c]{cc} a & b\\ c & d \end{array} \right) \in\operatorname*{GL}\nolimits_{2}\left( \mathbb{R}\right)$$, and thus its $$\left( 1,2\right)$$-th entry $$-\dfrac{b^{2}+d^{2}}{ad-bc}$$ is nonzero (because $$b^{2}+d^{2}$$ can only be $$0$$ if both $$b$$ and $$d$$ are $$0$$, but then $$\left( \begin{array} [c]{cc} a & b\\ c & d \end{array} \right)$$ cannot belong to $$\operatorname*{GL}\nolimits_{2}\left( \mathbb{R}\right)$$).
Thus, every matrix in $$\mathcal{C}$$ has a nonzero $$\left( 1,2\right)$$-th entry. But if there was a path from $$A$$ to $$B$$ in $$\mathcal{C}$$, then some point on this path would be a matrix in $$\mathcal{C}$$ with zero $$\left( 1,2\right)$$-th entry (since the $$\left( 1,2\right)$$-th entries of $$A$$ and $$B$$ have opposite signs). Thus, there cannot be a path from $$A$$ to $$B$$ in $$\mathcal{C}$$.
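A small numerical sketch (illustrative only, not part of the original answer) of the same fact, using NumPy: every conjugate of $$A$$ has a nonzero $$\left(1,2\right)$$-th entry, whose sign is opposite to the sign of the determinant of the conjugating matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.0, -1.0], [1.0, 0.0]])

bad = 0
for _ in range(100_000):
    C = rng.standard_normal((2, 2))
    det = np.linalg.det(C)
    if abs(det) < 1e-8:            # skip numerically singular C
        continue
    M = np.linalg.inv(C) @ A @ C
    # the (1,2) entry is -(b^2 + d^2)/(ad - bc): never zero, sign opposite to sign(det(C))
    if M[0, 1] == 0 or np.sign(M[0, 1]) == np.sign(det):
        bad += 1
print("violations:", bad)  # expected: 0
```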
• There is also a geometric way to see it if you're interested. – Arnaud Mortier Nov 4 '19 at 0:48
• @ArnaudMortier I'm interested. – Abhimanyu Pallavi Sudhir Nov 29 '19 at 13:22
• @AbhimanyuPallaviSudhir Thanks! You will find it in my answer which is currently the accepted one I think. – Arnaud Mortier Nov 29 '19 at 13:47
• Ah I missed that, thanks. – Abhimanyu Pallavi Sudhir Nov 29 '19 at 15:04
Notice that the assertion "since the image of the exponential map is the path-component of the identity in $$GLn(R)$$" is false.
Firstly, the path-component of the identity in $$GLn(R)$$ is the set of matrices with $$>0$$ determinant. Thus (inside this component) there is a path linking your $$P$$ and $$I$$ (in fact, there is nothing to prove).
Secondly, $$diag(-1,-2)$$ has $$>0$$ determinant and is not in the image of the real matrices by the exponential map.
EDIT.
Let $$SC(A)$$ be the conjugacy class of $$A\in M_n(\mathbb{R})$$.
*In dimension $$2$$, $$SC(A)$$ is non-connected in 2 cases
a. The eigenvalues of $$A$$ are non-zero conjugate complex; then $$SC(A)$$ is homeomorphic to a hyperboloid of two sheets.
b. $$A$$ is non-zero and non-diagonalizable; then $$SC(A)$$ is homeomorphic to a conical surface with the apex cut off.
*In dimension $$4$$, we consider the matrices $$A_1=diag(U,U)$$ where $$U=\begin{pmatrix}0&1\\0&0\end{pmatrix}$$ and $$A_2=diag(V,V)$$ where $$V=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$$. Then, using Aaron's test, we can prove that $$SC(A_1),SC(A_2)$$ are non-connected (it's easy for $$A_1$$ and more difficult for $$A_2$$).
*Assume that we randomly choose $$A\in M_n(\mathbb{R})$$ (the $$(a_{i,j})$$ follow iid normal laws). We deduce the following.
$$\textbf{Proposition}.$$ When $$n\rightarrow +\infty$$, the probability that $$SC(A)$$ is connected tends to $$1$$.
$$\textbf{Proof}$$. A random matrix $$A$$ has distinct complex eigenvalues with probabiity $$1$$. Then, up to a real change of basis, $$A=diag(a_1I_2+b_1V,\cdots,a_pI_2+b_pV,\lambda_1,\cdots,\lambda_q)$$, where $$2p+q=n$$, the $$(b_i)$$ are non-zero and the $$(\lambda_j)$$ are real distinct.
Thus $$SC(A)$$ is connected iff $$q\not=0$$ (Aaron's test). When $$n$$ tends to $$+\infty$$, the mean of the number of real zeroes of a polynomial of degree $$n$$ is in $$\Omega(\sqrt{n})$$; we can deduce that the probability that $$A$$ has, at least, one real eigenvalue tends to $$1$$ when $$n$$ tends to $$+\infty$$. $$\square$$
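A Monte Carlo sketch (illustrative NumPy code, not part of the original answer) of the fraction of random Gaussian matrices with at least one real eigenvalue, which, for matrices with distinct eigenvalues, is the condition used above for $$SC(A)$$ to be connected:

```python
import numpy as np

rng = np.random.default_rng(1)

def frac_real_eigenvalue(n, trials=1000, tol=1e-8):
    """Fraction of n x n standard Gaussian matrices with at least one real eigenvalue."""
    hits = 0
    for _ in range(trials):
        eig = np.linalg.eigvals(rng.standard_normal((n, n)))
        if np.any(np.abs(eig.imag) <= tol):
            hits += 1
    return hits / trials

for n in (2, 4, 8, 16, 32):
    print(n, frac_real_eigenvalue(n))
# the fraction grows towards 1 with n, consistent with the proposition above
```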
• Sorry for not reacting to this sooner. I have now finally managed to convince myself that what you say is true. Thanks for pointing it out! – Levi Nov 29 '19 at 13:09
• This statement also struck me when I first read it but then I focused on answering the question and forgot about it. Thanks for this. – Arnaud Mortier Nov 29 '19 at 13:47
• @Levi . More precisely, the real matrices that can be written in the form $e^Q$ are the squares of real matrices. – loup blanc Nov 29 '19 at 14:02
• @Arnaud Mortier . Thanks Arnaud. I wrote this because I was surprised that no one would react. – loup blanc Nov 29 '19 at 14:05 | 2020-02-17T04:00:58 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3420882/are-all-conjugacy-classes-in-textgl-n-mathbb-r-path-connected/3420920",
"openwebmath_score": 0.9630170464515686,
"openwebmath_perplexity": 160.51493739213376,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9755769099458929,
"lm_q2_score": 0.8824278556326344,
"lm_q1q2_score": 0.8608762406482658
} |
https://www.physicsforums.com/threads/iterative-derivatives-of-log.976904/ | # Iterative derivatives of log
## Homework Statement:
Let f(x) = ln x. Use induction to justify why the ninth derivative of f at x = 1 is f^(9)(1) = 40 320 and the fifteenth derivative is f^(15)(1) = 87 178 291 200.
## Relevant Equations:
n=k
n=k+1
Derivative of lnx is 1/x.
I have never used induction to justify the derivative to a function, so I don't know where to start. Does anyone have some tips?
## Answers and Replies
BvU
Homework Helper
2019 Award
I don't know where to start
A little googling perhaps ?
PeroK
Homework Helper
Gold Member
Homework Statement: Let f (x) = ln x. Use induction to justify why the ninth derivative to f in x = 1 is f(9) (1) = 40 320 and the fifteenth derivative is f (15) (1) =
87 178 291 200.
Homework Equations: n=k
n=k+1
Derivative of lnx is 1/x.
I have never used induction to justify the derivative to a function, so I don't know where to start. Does anyone have some tips?
You can start by differentiating the function and looking for a pattern in the successive derivatives. Why isn't that obvious?
FactChecker
Gold Member
The general outline of a basic inductive proof is in two steps:
1) Prove it is true for N=1
2) Prove that, assuming it is true for N=n, prove it is true for N=n+1.
There is another variation where in step #2 you assume that it is true for N##\le##n. That is sometimes necessary. It is just as valid and it gives you more to work with in your proof.
A little googling perhaps ?
I tried to google it, but I still did not understand.
You can start by differentiating the function and looking for a pattern in the successive derivatives. Why isn't that obvious?
Induction is one of the things I struggle with in math, that's why it was not obvious to me. Thanks for the tips by the way.
The general outline of a basic inductive proof is in two steps:
1) Prove it is true for N=1
2) Prove that, assuming it is true for N=n, prove it is true for N=n+1.
There is another variation where in step #2 you assume that it is true for N##\le##n. That is sometimes necessary. It is just as valid and it gives you more to work with in your proof.
Thank you for the tips and tricks. But how do I prove for n when I don't have a general formula? Do I have to make a general formula for the derivative of lnx?
BvU
Homework Helper
2019 Award
general formula for the derivative of lnx?
For the nth derivative
For the nth derivative
Yes exactly, I'm sorry for my bad explanation, but that's what I meant. I have a hard time understanding how to find the formula for the nth derivative to lnx.
FactChecker
Gold Member
Do you know what the first derivative of ##ln(x)## and of ##x^{-n}## are? That should be all you need.
PS. When you are stuck by an intimidating problem, do what you can do. You might find out that you can do a lot more than you anticipated.
HallsofIvy
Homework Helper
The first thing I would do is calculate a few derivatives. With f(x)= ln(x), $$f'= 1/x= x^{-1}$$, $$f''(x)= -x^{-2}$$, $$f'''= 2x^{-3}$$, $$f^{IV}= -6x^{-4}$$, $$f^{V}= 24x^{-5}$$, etc.
Did you do that? Looking at that, I see that the sign is alternating so of the form -1 to some power. The coefficient is a factorial: 1= 0!= 1!, 2= 2!, 6= 3!, 24= 4!, etc. And, finally, the power of x is the negative of the order of the derivative. That is $$f^{(n)}(x)= (-1)^{n+1}(n-1)! x^{-n}$$. It remains to use induction to prove that.
When n= 1, the derivative is $$\frac{1}{x}= x^{-1}$$. The formula gives $$(-1)^2(0!)x^{-1}= x^{-1}$$ so it is true for n= 1.
Now assume that for some n= k, $$f^{(n)}(x)= (-1)^{k+1}(k-1)!x^{-k}$$. Then $$f^{(k+1)}= (-1)^{k+1}(k-1)!(-kx^{-k-1})= (-1)^{k+1+ 1}k!x^{-(k+1)}$$
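A quick symbolic check of the two requested values (a SymPy sketch, not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.log(x)

print(sp.diff(f, x, 9).subs(x, 1))    # 40320 = 8!
print(sp.diff(f, x, 15).subs(x, 1))   # 87178291200 = 14!
# consistent with the closed form (-1)**(n+1) * factorial(n-1) / x**n evaluated at x = 1
```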
PeroK
Homework Helper
Gold Member
Yes exactly, I'm sorry for my bad explanation, but that's what I meant. I have a hard time understanding how to find the formula for the nth derivative to lnx.
If you want to find a derivative, you differentiate the function! If you want to find the second derivative, you differentiate the first derivative. And so on.
The first thing I would do is calculate a few derivatives. With f(x)= ln(x), $$f'= 1/x= x^{-1}$$, $$f''(x)= -x^{-2}$$, $$f'''= 2x^{-3}$$, $$f^{IV}= -6x^{-4}$$, $$f^{V}= 24x^{-5}$$, etc.
Did you do that? Looking at that, I see that the sign is alternating so of the form -1 to some power. The coefficient is a factorial: 1= 0!= 1!, 2= 2!, 6= 3!, 24= 4!, etc. And, finally, the power of x is the negative of the order of the derivative. That is $$f^{(n)}(x)= (-1)^{n+1}(n-1)! x^{-n}$$. It remains to use induction to prove that.
When n= 1, the derivative is $$\frac{1}{x}= x^{-1}$$. The formula gives $$(-1)^2(0!)x^{-1}= x^{-1}$$ so it is true for n= 1.
Now assume that for some n= k, $$f^{(n)}(x)= (-1)^{k+1}(k-1)!x^{-k}$$. Then $$f^{(k+1)}= (-1)^{k+1}(k-1)!(-kx^{-k-1})= (-1)^{k+1+ 1}k!x^{-(k+1)}$$
Thank you so much!!! I finally understand it. I really appreciate your help! :) | 2020-10-21T16:11:45 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/iterative-derivatives-of-log.976904/",
"openwebmath_score": 0.8590039014816284,
"openwebmath_perplexity": 458.6474585939411,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9773708045809529,
"lm_q2_score": 0.8807970795424088,
"lm_q1q2_score": 0.8608653503049177
} |
https://math.stackexchange.com/questions/2278833/probability-of-filling-all-urns | Probability of filling all urns
We have $K$ urns and $N$ balls. where $N\geq K$.
For each ball we uniformly select one of the urns and place the ball in it.
Question 1: What is the probability that all the urns will now have at least one ball in them? Is it possible to define an expression in terms of $K$ and $N$ to represent this probability?
I've implemented a simulation, here are some of the results for the following $K$-urns and $N$-Balls:
\begin{align*} Pr(2,3) &\sim 75.0\% \\ Pr(5,10) &\sim 52.2\% \\ Pr(10,20) &\sim 21.5\% \\ Pr(20,40) &\sim 3.6\% \\ Pr(50,100) &\sim 0.0177\%\\ \end{align*}
Question 2: Is there an expression that can describe this probability if we wanted a fill percentage.
eg: What is the probability that only 60% of urns have at least one ball in them?
eg: What is the probability that at least 75% of urns have at least one ball in them?
• Note: As an example, if we had 3 balls and 2 urns, there would be 8 ways we could place the balls in the urns; however, there are only 6 ways in which there is at least one ball in each urn. – J Mkdjion May 13 '17 at 7:35
Question 1 can be done with inclusion-exclusion.
The probability of a specific urn being empty is $\big(1-\frac1K\big)^N$, because to avoid putting a ball in this urn, you have to choose one of the other urns at each step. Likewise the probability of $r$ specific urns all being empty is $\big(1-\frac rK\big)^N$.
Now the probability of at least one urn being empty is $$\binom K1\Big(1-\frac1K\Big)^N-\binom K2\Big(1-\frac2K\Big)^N+\cdots+(-1)^{r+1}\binom Kr\Big(1-\frac rK\Big)^N+\cdots+(-1)^{K}\binom K{K-1}\Big(1-\frac {K-1}K\Big)^N,$$ so to get the probability that no urns are empty, subtract this from $1$.
• I attempted to evaluate your expression, but am not able to get results which are similar to the ones obtained using a simulation. any ideas on what could be amiss? – J Mkdjion May 13 '17 at 19:21
• I don't know. What values of $n$ and $k$ did you try, and what did the simulation give? I just tried $n=8$, $k=5$, where the probability of all urns being non-empty should be $1-5\times 0.8^8+10\times 0.6^8-10\times 0.4^8+5\times 0.2^8\approx 0.323$. A quick simulation had this happening in $32113$ of $100000$ trials. – Especially Lime May 13 '17 at 21:58
• is the probability defined as the following correct? : $$1 - \sum_{i=1}^{K-1} \left( (-1)^{i+1} \begin{pmatrix} K \\ i \end{pmatrix} \left( 1 - \frac{i}{K} \right)^{N} \right)$$ – J Mkdjion May 14 '17 at 0:01
• Yes, that's correct – Especially Lime May 14 '17 at 7:46
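A small check of this inclusion-exclusion formula against the simulated values in the question (a Python sketch, not part of the original answer):

```python
from math import comb

def p_all_urns_nonempty(K, N):
    """Inclusion-exclusion probability that none of the K urns is left empty by N balls."""
    return sum((-1)**r * comb(K, r) * ((K - r) / K)**N for r in range(K + 1))

for K, N in [(2, 3), (5, 10), (10, 20), (20, 40), (50, 100)]:
    print(K, N, p_all_urns_nonempty(K, N))
# e.g. p_all_urns_nonempty(5, 10) ≈ 0.5225, in line with the ~52.2% simulation in the question
```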
Total number of ways to fill the urns without any restriction is $\binom{N+K-1}{K}$.
Question 1 is same as the number of tuples of nonnegative integer solutions to $$x_1+\cdots+x_K=N\\s.t.~x_i\geq1~\forall i$$ The number of such tuples is same as the number of tuples of nonnegative integer solutions to $$x_1+\cdots+x_K=N-K\\s.t.~x_i\geq0~\forall i$$ which is $$\binom{N-1}{K}$$ Thus the probability that all the urns will be non-empty is $$\frac{\binom{N-1}{K}}{\binom{N+K-1}{K}}$$
Question 2: Suppose more than fraction $p,~0\leq p\leq(1-1/K),$ of the urns contains at least one ball. Then at least $\lfloor pK\rfloor+1$ (when $pK$ is not an integer) urns are non-empty. It can be any $\lfloor pK\rfloor+1$ urns from the $K$ urns. You can choose $\lfloor pK\rfloor+1$ urns from the $K$ urns in $\binom{K}{\lfloor pK\rfloor+1}$ distinct ways.
Now the number of tuples of nonnegative integer solutions to $$x_1+\cdots+x_K=N-K\\s.t.~x_i\geq1,~i=1,\cdots,\lfloor pK\rfloor+1$$ is same as the number of tuples of nonnegative integer solutions to $$x_1+\cdots+x_K=N-(\lfloor pK\rfloor+1)\\s.t.~x_i\geq0~\forall i$$ which is $$\binom{N-(\lfloor pK\rfloor+1)+K-1}{K}$$
So the number of ways so that more than fraction $p$ of the urns are nonempty, is $$\binom{K}{\lfloor pK\rfloor+1}\binom{N-\lfloor pK\rfloor+K-1}{K}$$
If $pK$ is an integer then the solution is $$\binom{K}{pK}\binom{N- pK+K-1}{K}$$ Thus the probability that more than fraction $p$ of the $K$ urns will be non-empty is $$\frac{\binom{K}{\lfloor pK\rfloor+1}\binom{N-\lfloor pK\rfloor+K-1}{K}}{\binom{N+K-1}{K}}~\text{if}~ pK ~\text{is not an integer}$$ and $$\frac{\binom{K}{pK}\binom{N- pK+K-1}{K}}{\binom{N+K-1}{K}}~\text{if}~ pK ~\text{is an integer}$$
• I don't quiet see how your answers apply. In both questions a probability is required (a value between 0 and 1), also please take into account the scenario were all the balls are placed into one urn or the scenario where all the balls bar one are placed into the same one urn and the remaining ball is in another urn - how is that $\begin{pmatrix} N-1\\ k \end{pmatrix}$ – J Mkdjion May 13 '17 at 6:38
• These are the number of ways to fill the urns in the ways specified in the above questions. If you want the probability, just divide these numbers by the total number of ways to fill the urns which is $$\binom{N+K-1}{K}$$ – Abishanka Saha May 13 '17 at 6:40
• edited for your help – Abishanka Saha May 13 '17 at 6:46
• This is incorrect because the things you are counting are not equally likely. – Especially Lime May 13 '17 at 7:32
• If you randomly place two balls in two bins, there are $3$ possible arrangements: one in each, both in the first, or both in the second. But the probability of putting both balls in the first bin is $1/4$, not $1/3$, since each ball has a $1/2$ chance to go in the first bin. – Especially Lime May 13 '17 at 7:48 | 2019-01-22T19:09:52 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2278833/probability-of-filling-all-urns",
"openwebmath_score": 0.9503832459449768,
"openwebmath_perplexity": 254.91023358538004,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9773708026035287,
"lm_q2_score": 0.8807970811069351,
"lm_q1q2_score": 0.8608653500923306
} |
https://math.stackexchange.com/questions/1056661/the-expected-time-to-sort-n-elements-is-bounded-below/1056667 | # The expected time to sort $n$ elements is bounded below
Prove that the expected time to sort $n$ elements is bounded below by $cn \log n$ for some constant $c$.
Could you give me some hints how I could do that?
• Is this for a specific algorithm? – Robert Israel Dec 8 '14 at 0:47
• @RobertIsrael No, it is not for a specific algorithm. It is for every comparison sort. – Mary Star Dec 8 '14 at 0:50
## 2 Answers
It's false as stated. For instance, radix sort is $O(n)$ (but see @mlo105's answer as well). But if you're talking about comparison-based sorting, it's true.
One proof (sketch): consider all possible inputs to the algorithm and draw an execution tree, where each internal node represents a comparison and its two out-edges are the two branches that could be taken as a result of that comparison, depending on the input data. A leaf of this tree represents termination of the algorithm on some possible input, and can be labeled with the shuffle of the input required to get to a sorted form.
Because there are $n!$ possible outputs (corresponding to the $n!$ ways to shuffle $n$ numbers, for instance), you have a binary tree with at least $n!$ leaves. It must therefore have depth at least $\log_2 (n!)$, which is $\Theta(n \log_2 n)$, by, say, Stirling's approximation.
Oh...as for expected time, you're then asking "What's the average depth of a node in a binary tree with $n!$ leaves?"
Well, I can't answer that, but I can show it's larger than the min depth $k = \lceil \log_2 (n!) \rceil$ of such a tree.
For any leaf $A$ whose depth is greater than $k$, take some leaf $L$ whose depth is LESS than $k$, replace it with a node whose left child is $L$ and whose right child is $A$. The result is a tree whose average depth is no greater than the depth of the one you started with, with one fewer leaves of "excess" depth. Repeat this process until all leaves are at depth no more than $k$. There are details missing here: you need, when a node has no more children, to remove it, and you need, when a node $S$ has only one child $C$, to remove the node $S$ and make $C$ the child of $S$'s parent.
But the idea is clear enough -- just move stuff up, without increasing the average path-length to the root. [In the example I gave, the path-length for $L$ increased by 1, but the path-length for $A$ decreased by at least 1, so there was a net decrease.]
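As a numerical complement to the counting argument, one can tabulate $\log_2(n!)$ against $n\log_2 n$ and against the cruder bound $\frac n2\log_2\frac n2$ mentioned in the comments below; this is just an illustrative sketch of my own, not part of the answer:

import math

for n in (8, 16, 64, 256, 1024):
    log2_fact = math.lgamma(n + 1) / math.log(2)   # log2(n!) via the log-gamma function
    print(n, round(log2_fact, 1), round(n * math.log2(n), 1), round((n / 2) * math.log2(n / 2), 1))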
• While this reasoning is sound, it deals with worst, not with the expected (= average) number of comparisons. – Peter Košinár Dec 8 '14 at 0:57
• Now it does. :) – John Hughes Dec 8 '14 at 0:58
• Stirling's approximation is a bigger hammer than you need to show that $\log_2(n!) > cn$. It suffices to observe that $n!$, being a product of $n/2$ factors each at least $n/2$ (and some additional factors each at least 1), must exceed $(n/2)^{n/2}$. – MJD Dec 8 '14 at 1:56
• Good point. I was trying to be terse, but was unnecessarily so. – John Hughes Dec 8 '14 at 3:58
Radix Sort is an interesting one. In practice, it performs in linear time $O(kn)$. In theory, it is possible to argue $k = \log(n)$, where $n$ is the number of elements. Suppose we are considering an array of $n$ consecutive elements. Then we will need to make $\log(n) + 1$ passes to ensure the elements have been properly sorted. If we consider $1, ..., 100$, we need $\log_{10}(100)$ passes. So $k$ is not an arbitrary constant, but bounded above by $\log(n)$, where $n$ is the maximum element in the array.
Binary sort is one that does perform in $\Theta(n)$. Given $x \in \{0, 1\}^{n}$, we add up $m = \sum_{i=1}^{n} x_{i}$, then go and label $x_{i} = 1$ for $1 \leq i \leq m$ and $x_{i} = 0$ for $i > m$.
We can write binary sort as a parallel algorithm using threshold functions: $x_{sort}(x) = (\tau_{1}(x), \tau_{2}(x), \tau_{3}(x), ..., \tau_{n}(x))$, where $\tau_{k}(x)$ denotes the threshold-k function (at least $k$ bits in $x$).
To show the result, I agree that Stirling's approximation isn't necessary. We can use the trick that $\log(n!) \leq \log(n^{n})$ for $n \in \mathbb{N}$. Then by the rule of logs, $\log(n^{n}) = n \log(n)$.
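The 0/1 "binary sort" described above amounts to a single counting pass; a minimal sketch (the function name is my own) looks like this:

def binary_sort(bits):
    m = sum(bits)                            # number of ones, found in one linear pass
    return [1] * m + [0] * (len(bits) - m)   # ones first, zeros after, as described above

print(binary_sort([0, 1, 1, 0, 1, 0, 0]))    # [1, 1, 1, 0, 0, 0, 0]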
• Nice point about radix-sort, etc. – John Hughes Dec 9 '14 at 0:07 | 2019-08-21T08:20:23 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1056661/the-expected-time-to-sort-n-elements-is-bounded-below/1056667",
"openwebmath_score": 0.8223254680633545,
"openwebmath_perplexity": 281.5780886052989,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9773707966712549,
"lm_q2_score": 0.880797085800514,
"lm_q1q2_score": 0.860865349454568
} |
https://math.stackexchange.com/questions/1298774/given-a-finite-collection-of-disjoint-subsets-of-i-must-every-ultrafilter-on/1298791 | Given a finite collection of disjoint subsets of $I$ must every ultrafilter on $I$ contain exactly one?
The title pretty much contains the question, but here's some elaboration:
The following is one of the first results one encounters while learning about Ultrafilters.
Fact: If $\mathfrak{U}$ is an ultrafilter on an index set $I$, and $X\subset I$, then exactly one of $X$ and $I\backslash X$ is in $\mathfrak{U}$.
Question: Can this be generalized to a collection $X_{1}, ... X_{n}$ of disjoint subsets of $I$ for any $n\geq 1$? That is, if $\mathfrak{U}$ is an ultrafilter on $I$, then there is exactly one value of $j\in\{1,...,n\}$ such that $X_{j}\in \mathfrak{U}$?
I haven't been able to find it in the literature anywhere or prove it myself (though I thought I did briefly) so I'm beginning to suspect it's false in general.
Second Question: If the answer to the first question is false, is it true if $\mathfrak{U}$ is an ultrafilter on $\mathbb{N}$ containing the order filter?
• Yes, and in fact one can define ultrafilters this way. See, for example, Proposition 1.5 in arxiv.org/abs/1209.3606. – Qiaochu Yuan May 25 '15 at 23:09
• @QiaochuYuan: You need the $X_i$ to be a partition. This is not what the OP asked (but it may be what they meant). – Michael Albanese May 25 '15 at 23:16
• A partition is what I intended to start with, but forgot to mention. Thanks to all for the great answers. – roo May 25 '15 at 23:59
• I was not sure whether to tag this (elementary-set-theory) or (set-theory). But since the tag-info for set-theory explicitly mentions ultrafilters, I chose that one. – Martin Sleziak Jul 11 '16 at 8:02
• – Martin Sleziak Jul 15 '16 at 16:01
If $X_1, \dots, X_n$ is a collection of disjoint sets with $\bigcup_{i=1}^nX_i = I$, then $X_k \in \mathfrak{U}$ for precisely one $k$.
Note that for each $k$, either $X_k \in \mathfrak{U}$ or $\bigcup_{i \neq k}X_i \in \mathfrak{U}$. If $X_k \not\in \mathfrak{U}$ for all $k$, then $\bigcup_{i\neq k}X_i \in \mathfrak{U}$ for every $k$, and so the intersection of such sets would also belong to $\mathfrak{U}$. This is a contradiction as the intersection is empty, i.e. $\bigcap_{k=1}^n\bigcup_{i\neq k}X_i = \emptyset$. Therefore, there exists $k \in \{1, \dots, n\}$ such that $X_k \in \mathfrak{U}$. If there were $l \in \{1, \dots, n\}$, $l \neq k$, with $X_l \in \mathfrak{U}$, then $X_l\cap X_k \in \mathfrak{U}$, but this intersection is empty, so no such $l$ exists. Therefore, $k$ is unique.
Note, you need the sets $X_i$ to cover $I$. If they don't cover $I$, choose $m \in I\setminus\bigcup_{i=1}^nX_i$ and consider the principal ultrafilter $\mathfrak{U}_m$. As $m \not\in X_i$ for all $i$, $X_i \not\in \mathfrak{U}_m$ for all $i$.
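For a finite index set every ultrafilter is principal, so the "exactly one block" statement can be verified by brute force; the following is a small sketch of my own (the names and the particular partition are arbitrary), not part of the answer:

from itertools import chain, combinations

def principal_ultrafilter(I, m):
    subsets = chain.from_iterable(combinations(I, r) for r in range(len(I) + 1))
    return {frozenset(s) for s in subsets if m in s}   # all subsets of I containing m

I = range(6)
partition = [frozenset({0, 1}), frozenset({2, 3, 4}), frozenset({5})]

for m in I:
    U = principal_ultrafilter(I, m)
    blocks = [B for B in partition if B in U]
    assert len(blocks) == 1          # exactly one block of the partition lies in the ultrafilter
    print(m, blocks)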
Let $X_1,X_2,\dots, X_n$ be a finite collection of sets (not necessarily pairwise disjoint) such that the union $X_1\cup X_2\cup \cdots\cup X_n$ is in the ultrafilter. Then at least one of the $X_i$ is in the ultrafilter. If the $X_i$ are pairwise disjoint, then exactly one of the $X_i$ is in the ultrafilter. The proof is straightforward, by induction.
Added: From comments, it became clear that the OP knew the standard proof that if $D$ is an ultrafilter on the index set $I$, then $X$ or $X^c$ is in $D$. The fact that if the union $X_1\cup X_2$ is in $D$, then $X_1$ or $X_2$ is in $D$ follows. For let $O$ be the "rest" of $I$. If $X_1$ is not in $D$, then its complement $X_2\cup O$ is in $D$. Also, we know that $X_1\cup X_2$ is in $D$. Now note that $X_2=(X_2\cup O)\cap(X_1\cup X_2)$, so $X_2\in D$.
Remark: I think it helps the intuition to think of an ultrafilter as defining a two-valued "measure" $\mu$ on the collection of subsets of $I$, where $\mu(X)=1$ if $X$ is in the ultrafilter, and $\mu(X)=0$ otherwise. The "measure" is almost always not a real measure, since it is ordinarily not countably additive. But it is finitely additive. The finite additivity makes the answer to your question clear. If the $X_i$ all had measure $0$, their union would have measure $0$.
• It is the intuition you describe that led me to guess that the result might be true. I could not come up with a proof, however. Thanks for your additional information! – roo May 26 '15 at 1:12
• To see how easy the induction is, let's do it for $3$ sets. Let $X_1\cup X_2\cup X_3$ be in the ultrafilter $D$. Then $(X_1\cup (X_2\cup X_3))\in D$. By the $n=2$ case, either $X_1\in D$, and we are finished, or $(X_2\cup X_3)\in D$, in which case again by the $n=2$ case we are finished. – André Nicolas May 26 '15 at 1:41
• I agree that the induction part of the proof is indeed easy. But you are using the following lemma, which I think is the tricky part: If $X_{1}\cup X_{2}\in \mathfrak{U}$, with $X_{1}\cap X_{2} = \phi$, then exactly one of $X_{1}$ or $X_{2}$ is in $\mathfrak{U}$. This is slightly stronger than the fact I mentioned, and cannot be proved in quite the same way (using a brief maximality argument). The lemma clearly follows immediately from Michael Albanese's argument above, but I do not see an easy way to get it directly. – roo May 26 '15 at 1:55
• That it cannot be both is obvious, for part of the usual definition of a filter $D$ is that $\emptyset\not\in D$. For the proof that at least one is, it depends on the definition of ultrafilter. If we define it as a maximal filter, one shows that if neither $A$ nor $A^c$ is in $D$, then $A$ can be added to $D$, along with all intersections of elements of $D$ with $A$, and their supersets, to form a larger filter $D'$. – André Nicolas May 26 '15 at 2:06
• Is it the part $O$ "outside" $X_1\cup X_2$ that bothers you? If $X_1$ is not in $D$, then its complement $X_2\cup O$ is in $D$, and by assumption $X_1\cup X_2$ is in $D$, so the intersection of $X_2\cup O$ and $X_1\cup X_2$ is in $D$, that is, $X_2$ is in $D$. – André Nicolas May 26 '15 at 3:54
It is true, granted the sets form a partition of $I$. Otherwise it is easy to come by counterexamples; simply consider a few singletons and a free ultrafilter.
The proof is simple by induction. | 2021-07-25T06:52:33 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1298774/given-a-finite-collection-of-disjoint-subsets-of-i-must-every-ultrafilter-on/1298791",
"openwebmath_score": 0.9584149718284607,
"openwebmath_perplexity": 110.17467381518608,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9773708026035287,
"lm_q2_score": 0.8807970748488296,
"lm_q1q2_score": 0.8608653439758409
} |
http://mathoverflow.net/questions/103540/hexagonal-rooks | # Hexagonal rooks
Suppose you have a triangular chessboard of size $n$, whose "squares" are ordered triples $(x,y,z)$ of nonnegative integers that add up to $n$. A rook can move to any other point that agrees with it in one coordinate -- for example, if you are on $(3,1,4)$ then you can move to $(2,2,4)$ or to $(6,1,1)$, but not to $(4,3,1)$.
What is the maximum number of mutually non-attacking rooks that can be placed on this chessboard?
More generally, is anything known about the graph whose vertices are these ordered triples and whose edges are rook moves?
-
Can you prove what the maximal number of mutually non-attacking bishops is on an ordinary n×n chessboard? – Zsbán Ambrus Jul 30 '12 at 21:52
The bishop question is easy - it's 2n-2. Put bishops along two opposite edges and remove two at adjacent corners. This is maximal since there are only 2n-1 different North-East diagonals, and two of these are opposite corners so only 2n-2 can be filled. – Carl Jul 30 '12 at 22:27
The graph is regular, of degree $2n$. If you put a rook on $(n,0,0)$, the problem reduces to finding the maximum number of non-attacking rooks on the board of order $n-2$. I think the graph has enough symmetry that no matter where you put the first rook you reduce the problem to that of order $n-2$, but I'm not sure. If the graph does have that symmetry, then by induction you get more-or-less $n/2$ rooks ($(n+2)/2$ if $n$ is even, $(n+1)/2$ if $n$ is odd). – Gerry Myerson Jul 30 '12 at 23:01
That's a nice idea, but I don't think it works. If you remove (2,1,0) from $G_3$, you get a path of length $3$, not a cycle. – David Speyer Jul 30 '12 at 23:23
One of the coordinates must have an average value of no more than $n/3$ among all the rooks. The maximal number of distinct nonnegative integers whose average is $n/3$ is $2n/3+1$. This is a better bound. – Will Sawin Jul 31 '12 at 1:00
And here's another one "Putting Dots in Triangles"
-
Good references. The first paper (Nivasch-Lev, Mathematics Magazine 2005) ends up giving another proof of the $\lfloor 2n/3 \rfloor + 1$ bound, and the same construction to attain this bound. – Noam D. Elkies Jul 31 '12 at 15:23
Cristi, thanks! I didn't know about either of these references. – Jeremy Martin Jul 31 '12 at 19:35
You're welcome, Jeremy. – Cristi Stoica Jul 31 '12 at 20:21
Nice question!
For the maximum number of pairwise non-defending rooks, Will Sawin proved an upper bound of $(2n/3) + 1$ in his comment to the original question. This bound is attained, at least to within $O(1)$, by two rows of $n/3 - O(1)$ rooks each, starting from around $(2n/3,n/3,0)$ and $(n/3,2n/3,1)$ and proceeding by steps of $(-1,-1,2)$ until reaching the $y=0$ or $x=0$ edge of the triangle. This construction generalizes Sawin's five-Rook placement for $n=6$.
On further thought, it seems we actually achieve $\lfloor (2n/3) + 1 \rfloor$ exactly for all $n$. Here's how it works for $n=12$ and $n=15$, with $(2n/3)+1 = 9$ and $11$ respectively:
.
. .
. . .
. . . . .
. . . . . . .
. . . R . . . . .
. . . . . . . . . . R
R . . . . . R . . . . . .
. . . . . R . . . . . . . R .
. R . . . . . . . R . . . . . . .
. . . . . . R . . . . . . . . . R . .
. . R . . . . . . . . . R . . . . . . . .
. . . . . . . R . . . . . . . . . . . R . . .
. . . R . . . . . . . . . . . R . . . . . . . . .
. . . . . . . . R . . . . . . . . . . . . . R . . . .
. . . . R . . . . . . . . . . . . . R . . . . . . . . . .
Starting from such a solution with $n=3m$, we can add an empty row to get an optimal solution for $n=3m+1$, and remove an edge (and the Rook it contains) to get an optimal solution for $n=3m-1$. So this should solve the problem for all $n$.
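For small boards the optimum can be confirmed by exhaustive search over non-attacking placements; here is a rough sketch of my own (slow, but fine for $n \le 6$), which reproduces the values $\lfloor 2n/3\rfloor + 1$ quoted above:

def board(n):
    return [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]

def attacks(p, q):
    return any(x == y for x, y in zip(p, q))    # cells on a common rook line share a coordinate

def max_non_attacking(n):
    cells = board(n)
    best = 0
    def extend(placed, start):
        nonlocal best
        best = max(best, len(placed))
        for i in range(start, len(cells)):
            if all(not attacks(cells[i], p) for p in placed):
                extend(placed + [cells[i]], i + 1)
    extend([], 0)
    return best

for n in range(2, 7):
    print(n, max_non_attacking(n), 2 * n // 3 + 1)   # brute force vs. the claimed formula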
More generally, is anything known about the graph whose vertices are these ordered triples and whose edges are rook moves?
I don't remember reading about this graph before. Experimentally (for $3 \leq n \leq 16$) its adjacency matrix has all eigenvalues integral, the smallest being $-3$ with huge multiplicity $n-1\choose 2$; more precisely:
Conjecture. For $n \geq 3$ the eigenvalues of the adjacency matrix are: a simple eigenvalue at the graph degree $2n$; a $n-1\choose 2$-fold eigenvalue at $-3$; and a triple eigenvalue at each integer $\lambda \in [-2,n-2]$, except that $\mu := \lfloor n/2 \rfloor - 2$ is omitted, and $\mu - (-1)^n$ has multiplicity only $2$.
This is probably not too hard to show. For example, the $\lambda = -3$ eigenvectors constitute the codimension-$3n$ space of functions whose sum over each of the $3(n+1)$ Rook lines vanishes. [Added later: in the comment Jeremy Martin reports that he and Jennifer Wagner already made and proved the same conjecture.]
Given that the minimal eigenvalue is $-3$, it follows by a standard argument in "spectral graph theory" that the maximal cocliques have size at most $3(n+1)(n+2)/(4n+6) = 3n/4 + O(1)$. But that's asymptotically worse than $2n/3 + O(1)$, though it's still good enough to prove the optimality of Will Sawin's cocliques of size $5$ for $n=6$ and of size $7$ for $n=9$.
Here's some gp code to play with this graph and its spectrum:
{
R(n)=
\\ list the board cells (a, b, n-a-b) with a+b <= n
l = [];
for(a=0,n,for(b=0,n-a,l=concat(l,[[a,b,n-a-b]])));
\\ two cells are adjacent iff they agree in some coordinate (vecmin of the
\\ absolute difference is 0); subtracting 1, i.e. the identity matrix,
\\ clears the diagonal self-loops
matrix(#l,#l,i,j,vecmin(abs(l[i]-l[j]))==0) - 1
}
running "R($n$)" puts a list of the vertices in "l" and returns the adjacency matrix with the corresponding labeling. So for instance
matkerint(R(7)-2)~
matkerint(R(8)-1)~
returns matrices whose rows are nice generators of the $2$-dimensional eigenspaces of the $n=7$ and $n=8$ graphs.
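For readers without gp at hand, essentially the same spectral experiment can be run with numpy; this is my own translation, not part of the answer:

import numpy as np
from collections import Counter

def rook_adjacency(n):
    cells = [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]
    m = len(cells)
    A = np.zeros((m, m), dtype=float)
    for i in range(m):
        for j in range(i + 1, m):
            if any(x == y for x, y in zip(cells[i], cells[j])):   # rook move = shared coordinate
                A[i, j] = A[j, i] = 1.0
    return A

eigs = np.linalg.eigvalsh(rook_adjacency(8))
print(Counter(int(round(x)) for x in eigs))   # rounded eigenvalues with multiplicities, to compare with the conjecture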
-
Noam, you've scooped me! :-) Jennifer Wagner and I have recently proved this conjecture by giving an explicit basis of eigenvectors. Writeup forthcoming. (Even more generally, we conjecture that the "simplicial rook graph" --- put a vertex at each lattice point in the $n$th dilate of the standard simplex in $\mathbb{R}^d$; edges are pairs of vertices at Hamming distance 2 -- has integer eigenvalues.) Corollary: the independence number of the triangular rook graph is at most $3(n+2)(n+1)/(2(2n+3))$. But, indeed, this bond is not tight. Back to the independence question: it appears tha – Jeremy Martin Jul 31 '12 at 4:14
Seems like your comment was cut off by the 600-character limit. Anyway, it's not really a "scoop" since I only conjectured it (though I see that the lower bound of $-3$ on the spectrum is not hard, and likewise for your generalization to higher dimension). – Noam D. Elkies Jul 31 '12 at 4:34
I was going to ask about the independence number about the higher-dimensional simplicial rook graph. The computational evidence I have suggests that the least eigenvalue is $\min(-n,-\binom{d}{2})$. E.g., for $d=4$, $n\geq 6$, this would imply that the independence number $\alpha(n)$ is at most $a(n)=\lfloor(n+1)(n+3)/3\rfloor$. This is not a tight bound (e.g., $a(6)=21$, $\alpha(6)=16$) and I would guess that it is not even asymptotically tight. – Jeremy Martin Aug 1 '12 at 14:36
The eigenvalue bound (which you probably mean to be $\max(-n,-{d\choose 2})$, not $\min$) can be proved in the same way, by writing the adjacency matrix as the sum of $d\choose 2$ adjacency matrices (one for each direction) each with minimal eigenvalue $-1$. – Noam D. Elkies Aug 1 '12 at 14:47
For n=6 you can fit 5 rooks
(0,2,4) (4,0,2) (1,4,1) (3,3,0) (2,1,3)
For n=9 you can fit 7 rooks
(0,3,6) (6,0,3) (2,6,1) (4,5,0) (3,1,5) (5,2,2) (1,4,4)
-
A visualization aid, $n=10$: | 2013-12-10T09:58:21 | {
"domain": "mathoverflow.net",
"url": "http://mathoverflow.net/questions/103540/hexagonal-rooks",
"openwebmath_score": 0.9519979953765869,
"openwebmath_perplexity": 208.61710808626438,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9773707946938299,
"lm_q2_score": 0.8807970811069351,
"lm_q1q2_score": 0.8608653431254909
} |
http://www.english.bezen-spa.com/escape-the-nplbcje/minimum-bottleneck-spanning-tree-e86c92 | (15 points) A minimum bottleneck spanning tree (MBST) in an undirected connected weighted graph is a spanning tree in which the most expensive edge is as cheap as. Consider the maximum weight edge of T and T’(bottleneck edge). Basic python GUI Calculator using tkinter, Book about an AI that traps people on a spaceship, MacBook in bed: M1 Air vs. M1 Pro with fans disabled. Can 1 kilogram of radioactive material with half life of 5 years just decay in the next minute? For the given graph G, the above figure illustrates all the spanning trees for the given graph. Given a graph G with edge lengths, the minimum bottleneck spanning tree (MBST) problem is to find a spanning tree where the length of the longest edge in tree is minimum. How is Alternating Current (AC) used in Bipolar Junction Transistor (BJT) without ruining its operation? G=(V,E), V={a,b,c}, E={{a,b},{b,c},{c,a}} (a triangle) and a weight function of w({a,b}) = 3, w({b,c}) = 1, w({c,a}) = 3. The Minimum Spanning Tree Problem involves finding a spanning network for a set of nodes with minimum total cost. The bottleneck edge in T is the edge with largest cost in T. The, the tree T is a minimum We can notice that spanning trees can have either of AB, BD or BC edge to include the B vertex (or more than one). Is it my fitness level or my single-speed bicycle? Minimum BottleneckSpanning Tree Problem Given Find: A minimum-weight set of edges such that you can get from any vertex of G to any other on only those edges. Search for more papers by this author. Xueyu Shi. Get hold of all the important DSA concepts with the DSA Self Paced Course at a student-friendly price and become industry ready. The bottleneck edge in T is the edge with largest cost in T. Prove or give a counterexample. 5. A bottleneck edge is the highest weighted edge in a spanning tree. Proof that every Minimum Spanning Tree is a Minimum Bottleneck Spanning Tree: Suppose T be the minimum spanning tree of a graph G(V, E) and T’ be its minimum bottleneck spanning tree. We say that the value of the bottleneck spanning tree is the weight of the maximum-weight edge in $T$. Example 2: Let the given graph be G. Let’s find all the possible spanning trees possible. I Consider another network design criterion: compute a spanning tree in which the most expensive edge is as cheap as possible. To learn more, see our tips on writing great answers. A minimum spanning tree is completely different from a minimum … (b) Is every minimum spanning tree a minimum-bottleneck tree of G? Prove or give a counter example. Basically my professor gave an example of a simple graph G=(V,E) and a minimal bottleneck spanning tree, that is not a minimal spanning tree. Xueyu Shi. site design / logo © 2021 Stack Exchange Inc; user contributions licensed under cc by-sa. MathJax reference. And, it will be of lesser weight than w(p, q). A bottleneck edge is the highest weighted edge in a spanning tree. Since all the spanning trees have the same value for the bottleneck edge, all the spanning trees are Minimum Bottleneck Spanning Trees for the given graph. 
acknowledge that you have read and understood our, GATE CS Original Papers and Official Keys, ISRO CS Original Papers and Official Keys, ISRO CS Syllabus for Scientist/Engineer Exam, Segment Tree | Set 1 (Sum of given range), XOR Linked List - A Memory Efficient Doubly Linked List | Set 1, Largest Rectangular Area in a Histogram | Set 1, Design a data structure that supports insert, delete, search and getRandom in constant time. (10 points) More Spanning Trees. the bottleneck spanning tree is the weight of the maximum0weight edge in . It says that it is a spanning tree, that needs to contain the cheapest edge. Bottleneck Spanning Tree • A minimum bottleneck spanning tree (MBST) T of an undirected, weighted graph G is a spanning tree of G, whose largest edge weight is minimum over all spanning trees of G.We say that the value of the bottleneck spanning tree is the weight of the maximum-weight edge in T – A MST (minimum spanning tree) is necessarily a MBST, but a MBST is not necessarily a MST. Minimum Bottleneck Spanning Trees Clustering Minimum Bottleneck Spanning Tree (MBST) I The MST minimises the total cost of a spanning network. On bilevel minimum and bottleneck spanning tree problems. A Spanning Tree (ST) of a connected undirected weighted graph G is a subgraph of G that is a tree and connects (spans) all vertices of G. A graph G can have multiple STs, each with different total weight (the sum of edge weights in the ST).A Min(imum) Spanning Tree (MST) of G is an ST of G that has the smallest total weight among the various STs. How are you supposed to react when emotionally charged (for right reasons) people make inappropriate racial remarks? Similarly, let Y be the subset of vertices of V in T that can be reached from q without going through p. Since G is a connected graph, there should be a. A spanning tree is a minimum bottleneck spanning tree (or MBST) if the graph does not contain a spanning tree with a smaller bottleneck edge weight. Argue that a minimum spanning tree is a bottleneck spanning tree. A spanning tree is a minimum bottleneck spanning tree (or MBST) if the graph does not contain a spanning tree with a smaller bottleneck edge weight.. A MST is necessarily a MBST (provable by the cut property), but a MBST is not necessarily a MST. Zero correlation of all functions of random variables implying independence. Or personal experience all participants of the minimal spanning tree of G Force one from the new president my level! Graph G, the minimum bottleneck spanning tree ( MBST ) is a tree whose most expensive edge is maximum. Weight of the bottleneck edge in a spanning tree ( MST ) is a spanning network to the. Them up with references or personal experience that every minimum spanning tree that! Are n't in a number of seemingly disparate applications adjective which means asks frequently. Dough made from coconut flour to not stick together G ( V ; e ) =3 in Junction... Must have an edge with w ( e ), let ( V ; T ) be a network! ( bottleneck edge is the edge with largest cost in T. Shows the difference/similarities between bottleneck spanning.! $G$ contains $e$ here, the tree it very.... Many bottlenecks for the same spanning tree, that needs to contain the minimal tree! ) used in Bipolar Junction Transistor ( BJT ) without ruining its operation criterion: a! Test question with detail Solution rather than the sum help, clarification, or responding to other answers Junction. Hence it has been completed and hence it has been completed and hence it has shown! 
( Chapter 4.7 ) and minimum bottleneck spanning tree problem involves finding a spanning tree Algorithm. Criterion: compute a spanning tree that seeks to minimize the maximum edge i 'm allowed to take edge... Mathematics Stack Exchange think the minimum spanning tree that seeks to minimize the expensive. Mst ) is a well‐known fact that every MST is, byte size of a spanning.... Asking for help, clarification, or responding to other answers a graph completely because bottleneck. Used in Bipolar Junction Transistor ( BJT ) without ruining its operation inappropriate racial remarks Third exercise... Radioactive material with half life of 5 years just decay in the tree byte size a. Nodes with minimum total cost it will be of lesser weight than (! As possible many bottlenecks for the given graph G, the above figure illustrates the. As cheap as possible or my single-speed bicycle not stick together completed and it. E \$ completed and hence it has been shown that every minimum bottleneck graphs ( problem in. On page 192 of the MST for graph Exchange is a minimum bottleneck spanning tree is a bottleneck in. That e with w ( p, q ) to react when emotionally charged ( for right ). A bridge in the next minute it does not contain the minimal spanning tree, i have take. Are MBSTs not all minimum bottleneck spanning tree is the weight of the MST minimises the total of! Can have many different spanning trees, the problem is NP-hard cc by-sa Democrats... Then, there are very few clear explanations online above figure illustrates all the spanning.! Total weight of the senate, wo n't new legislation just be blocked with filibuster! Design / logo © 2021 Stack Exchange, then all spanning trees and minimum spanning tree is the cost! Into Your RSS reader because a bottleneck spanning trees, the minimum bottleneck spanning tree is a spanning. Related Research Articles the byte size of a spanning tree is a well‐known fact that every minimum spanning is... Is Alternating Current ( AC ) used in Bipolar Junction Transistor ( BJT ) without its... ( provable by the cut property ), but a MBST ( provable by the cut property ), a! ] Related Research Articles network for a set of edges that make the graph fully connected of. Test question with detail Solution make inappropriate racial remarks edge in a spanning tree many spanning. Responding to other answers the definition is quite strange and unfortunately it a... Its operation T ) be a minimal bottleneck spanning tree is a well‐known fact that every MST is a. ( bottleneck edge is as minimum as possible words, it will be of lesser weight than w e... The bottleneck spanning trees are not minimum spanning trees, the minimum bottleneck spanning trees the! Case 1, the tree T is the highest weighted edge in the tree that in cases! Of service, privacy policy and cookie policy among the spanning trees MST is necessarily an MBST a... Answer: Assume we have a bottleneck edge in a spanning tree ( ). ’ ( bottleneck edge in a MBST is not necessarily a MST problem 9 in 4. Terms of service, privacy policy and cookie policy DSA concepts with the DSA Self Paced at. Largest weight edge of T and T ’ with lesser weight than w ( e ), let V... Tree T is a well‐known fact that every MST is necessarily a MST invasion be charged the! Please use ide.geeksforgeeks.org, generate link and share the link here president curtail access Air!
| 2022-12-03T15:00:05 | {
"domain": "bezen-spa.com",
"url": "http://www.english.bezen-spa.com/escape-the-nplbcje/minimum-bottleneck-spanning-tree-e86c92",
"openwebmath_score": 0.42185863852500916,
"openwebmath_perplexity": 1220.82820014024,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9773708012852458,
"lm_q2_score": 0.8807970732843033,
"lm_q1q2_score": 0.8608653412855789
} |
https://www.physicsforums.com/threads/a-polynomial-of-degree-2-what-does-this-mean.683463/ | # A polynomial of degree ≤ 2 ? what does this mean.
1. Apr 5, 2013
### mahrap
A polynomial of degree ≤ 2 ? what does this mean.
Would it just be
a + bt + c t^2 = f(t)
Or
at^2 + bt + c = f(t)
Is there even a difference between the two equations considering the fact that a,b, and c are unknown?
2. Apr 5, 2013
### Mentallic
There's no difference in whether you had
$$f(t)=at^2+bt+c$$
or
$$f(t)=ct^2+bt+a$$
or
$$f(t)=xt^2+yt+z$$
But the first is customary, and the last is using letters that usually denote variables as opposed to constants, so unless you have a good reason to otherwise deviate from the first, just stick with that.
Also, with any polynomial of degree n, the leading coefficient (the coefficient of t^n) must be non-zero, else the polynomial will no longer be degree n. Since you have a polynomial of degree $\leq$ 2, that means the leading coefficient of t^2 does not have to be non-zero. You could even have all coefficients equal to 0 and thus simply have f(t)=0.
3. Apr 5, 2013
### mahrap
So what is the difference between a polynomial with degree = 2 and a polynomial with degree ≤ 2 or in general what is the difference between a polynomial with degree = 2 vs a polynomial with degree ≤ n ?
4. Apr 5, 2013
### Staff: Mentor
f(t) = at^2 + bt + c, a 2nd-degree polynomial, also called a quadratic polynomial.
Degree ≤ 2 would also include 1st degree polynomials, such as g(t) = at + b, or zero-degree polynomials, such as h(t) = a.
I assume you mean degree = n vs. degree ≤ n. An nth degree polynomial has to have a term in which the variable has an exponent of n. A polynomial of degree ≤ n includes lower-degree polynomials.
5. Apr 5, 2013
### mahrap
Yes sorry for the typo. I meant to say degree = n. The reason I started this thread was with regards to a problem in my linear algebra class where the problem states:
Find all polynomials f(t) of degree ≤ 2 whose graphs run through the points (1,3) and (2,6), such that f'(1) = 1.
When I started to solve the problem I used the form f(t) = a + bt + ct^2 for my polynomial, and after solving the matrices I got c = 2. However, when I checked the solutions in the back of the book they had a = 2, which makes sense because they used f(t) = at^2 + bt + c. So what confused me was: how is one supposed to know which form to use to get the right a or c, even though both essentially yield the same coefficients, considering 2 is the value of the coefficient in front of the t^2 term? In general you mentioned a polynomial of degree ≤ n includes lower-degree polynomials. According to that statement, how would I know to use f(t) = at^2 + bt + c or f(t) = at + c when setting up my system of equations?
6. Apr 5, 2013
### Mentallic
Since the polynomial is of degree $\leq$ 2, that means the degree can at most be 2, so you should have $f(t)=at^2+bt+c$ which also ensures that even if the polynomial is just degree 1, then a=0. If you used $f(t)=at+b$ or $f(t)=a$ then you're assuming the equation must be of that form, and you'll soon find that there is no possible solution to the question if you begin with the assumption that the polynomials are of degree $\leq$ 1, or equivalently, $f(t)=at+b$.
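To make the computation concrete with the customary form $f(t)=at^2+bt+c$: the conditions $f(1)=3$, $f(2)=6$ and $f'(1)=1$ give a 3×3 linear system, and solving it reproduces the book's $a=2$. A small sketch of my own using numpy (not from the thread):

import numpy as np

A = np.array([[1.0, 1.0, 1.0],    # f(1)  = a + b + c  = 3
              [4.0, 2.0, 1.0],    # f(2)  = 4a + 2b + c = 6
              [2.0, 1.0, 0.0]])   # f'(1) = 2a + b      = 1
rhs = np.array([3.0, 6.0, 1.0])

a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)    # 2.0 -3.0 4.0, i.e. f(t) = 2t^2 - 3t + 4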
7. Apr 6, 2013
### HallsofIvy
Staff Emeritus
If f(t) is a "polynomial of degree 2 or less" it can be written in the form $at^2+ bt+ c$ where a, b, and c can be any numbers.
If f(t) is a "polynomial of degree 2" it can be written in the form $at^2+ bt+ c$ where a, b, and c can be any numbers- except that a cannot be 0. | 2017-12-15T18:09:48 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/a-polynomial-of-degree-2-what-does-this-mean.683463/",
"openwebmath_score": 0.6568679213523865,
"openwebmath_perplexity": 320.6042227762767,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9773707960121133,
"lm_q2_score": 0.8807970701552504,
"lm_q1q2_score": 0.8608653335827743
} |
https://math.stackexchange.com/questions/932152/finding-the-definite-integral-of-a-function-that-contains-an-absolute-value | # Finding the definite integral of a function that contains an absolute value
The integral in question is this:
$\int_{-2\pi}^{2\pi}xe^{-|x|}$
My attempt:
Since there is a modulus, we split it up into cases. I'm not really sure which cases to split it into, do I just separately integrate these two functions?
$\int_{-2\pi}^{2\pi}xe^{-x}$
$\int_{-2\pi}^{2\pi}xe^{x}$
Or do I split it into these two? $\int_{0}^{2\pi}xe^{-x}$ $\int_{-2\pi}^{0}xe^{x}$
I am leaning towards the second split (splitting the bounds of the integral), which seems better.
The question is: What does it mean by 'splitting it into cases', and why does it work? Another side question I have is how to differentiate a function that has a modulus somewhere inside it.
• Let $f\colon [-2\pi, 2\pi]\to \mathbb R, x\mapsto xe^{-|x|}$. You should know by now that $\displaystyle \int \limits_{-2\pi}^{2\pi}f=\int \limits_{-2\pi}^{0}f+\int \limits_{0}^{2\pi}f$ and also $\forall x\in [-2\pi, 0]\left(f(x)=xe^{-(-x)}\right)$. – Git Gud Sep 15 '14 at 10:29
• @GitGud What if there is a modulus and the bounds of the integral are both positive? – robertmartin8 Sep 15 '14 at 10:30
• @ surelyyourjoking. Then $x$ would only ever be a positive value (or zero) and so the modulus can be replaced simply by $x$ alone since $\mid x\mid\geq 0$ for all $x$. – Pixel Sep 15 '14 at 10:50
• ah I see, so the modulus is meaningless in an indefinite integral? – robertmartin8 Sep 15 '14 at 11:24
• No, the modulus is not meaningless for indefinite integrals. When considering definite integrals as your question does we have bounds on the integral, which tells us the domain of the integrand to consider. Because we know the domain, we also know the sign (positive/negative) of a given $x$ value. Hence we can then split the integral into positive/negative parts to evaluate it. Notice also that an indefinite integral can be written as a definite integral since $$\int f(x)dx = \int_\lambda^x f(t)dt,$$ where the "lower bound" $\lambda$ gives a constant of integration. – Pixel Sep 15 '14 at 12:00
$$I=\int_{-2\pi}^{2\pi} xe^{-\mid x\mid}dx.$$
Loosely speaking, you can think about the definite integral as the area bounded by the function $xe^{-\mid x\mid}$ and the $x$-axis, as the variable $x$ moves from $x=-2\pi$ to $x=0$ then from $x=0$ through to $x=2\pi$. So, intuitively it's not too much of a step to see that $$I=\int_{-2\pi}^{0} xe^{-\mid x\mid}dx+\int_{0}^{2\pi} xe^{-\mid x\mid}dx.$$ Notice in the left integral the $x$ values are only ever negative or zero, and in the right integral the $x$ values are only ever positive or zero, so we can rewrite the whole expression $$I=\int_{-2\pi}^{0} xe^{x}dx+\int_{0}^{2\pi} xe^{-x}dx,$$ since $-|x|=x$ for $x\leq 0$ and $-|x|=-x$ for $x\geq 0$. You can now evaluate the integrals separately to obtain the correct result. Hope this helps.
$$\int_{-2\pi}^{2\pi}xe^{-\left|x\right|}dx=\int_{-2\pi}^{0}xe^{-\left|x\right|}dx+\int_{0}^{2\pi}xe^{-\left|x\right|}dx=\int_{-2\pi}^{0}xe^{x}dx+\int_{0}^{2\pi}xe^{-x}dx$$
This is a split of cases killing the annoying modulus.
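A quick numerical cross-check of this split (my own sketch using scipy, not part of the answer): integrate the two pieces separately and compare with a direct numerical integration of the original integrand.

import numpy as np
from scipy import integrate

whole, _ = integrate.quad(lambda x: x * np.exp(-abs(x)), -2 * np.pi, 2 * np.pi)
left,  _ = integrate.quad(lambda x: x * np.exp(x),  -2 * np.pi, 0)
right, _ = integrate.quad(lambda x: x * np.exp(-x), 0, 2 * np.pi)

print(whole, left + right)   # the two agree (both are 0 here, since the integrand is odd)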
• Alright, I understand it for this example now. But in general? when the bounds of the integral are both positive? – robertmartin8 Sep 15 '14 at 10:35
• @ surelyyourjoking. Then $x$ would only ever be a positive value (or zero) and so the modulus can be replaced simply by $x$ alone since $∣x∣≥0$ for all $x$. – Pixel Sep 15 '14 at 10:58
If you have a function $f: [a,b] \to \Bbb R$ defined by parts, as in: $$f(x) = \begin{cases} f_1(x), \mbox{if } a \leq x \leq c \\ f_2(x), \mbox{if } c < x \leq b\end{cases}$$ then: $$\int_a^b f(x) \ \mathrm{d}x = \int_a^c f_1(x) \ \mathrm{d}x + \int_c^bf_2(x) \ \mathrm{d}x.$$
In your case, $f(x) = xe^{-|x|}$, so $c = 0$ and $f_1(x) = xe^{x}$ and $f_2(x) = xe^{-x}$, using the definition of absolute value. Remember the interpretation of the integral for a positive function: the integral is the area, so the sum of the areas is the sum of integrals.
• so this works even when the bounds of the integral are both positive? – robertmartin8 Sep 15 '14 at 10:34
• Sure, why not? You have to pay attention where to split the integral. Try splitting $$\int_{-2}^3 |x - 1|e^x \ \mathrm{d}x$$ to see if you can do it (you don't need to solve it, that's not the point here) – Ivo Terek Sep 15 '14 at 10:36
$|x|=x$ for $x>0$, $|x|=-x$ for $x<0$, and $|x|=0$ for $x=0$.
Since $x$ takes both positive and negative values between the limits $-2\pi$ and $2\pi$, you have to split the integral into two parts and then evaluate it.
So in this function you are splitting into cases for |x| where x>0 and x<0.
This is done because $|x|$ is defined that way on its domain. Since you get two different functions on different intervals of the domain, you have to consider two different limits. Imagine a function
$f(x)=0$ for $x<0$
and $f(x)=1$ for $x>0$,
and suppose we are evaluating the integral $\int{xf(x)}dx$ with limits $-1$ to $1$.
Because of the definition of $f(x)$ you have to split the integral into two.
Here it is just two integrals; sometimes you have to split into many more.
• I understand that you have to split it up into cases, I just dont get technically how or why this is done – robertmartin8 Sep 15 '14 at 10:33 | 2020-05-30T03:42:30 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/932152/finding-the-definite-integral-of-a-function-that-contains-an-absolute-value",
"openwebmath_score": 0.9127959609031677,
"openwebmath_perplexity": 241.81852630458613,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.966914019704466,
"lm_q2_score": 0.8902942312159383,
"lm_q1q2_score": 0.8608379738247002
} |
https://math.stackexchange.com/questions/1566703/are-the-following-sets-exactly-the-same-mathbbn-mathbbn-0 | # Are the following sets exactly the same: $\mathbb{N}$, $\mathbb{N_{-\{0\}}}$, $\mathbb{N^+}$ and $\mathbb{Z^+}$?
At university, my Maths Analysis lecturer said that this is still debated nowadays. For example, a definition of a rational number is $\cfrac{p}{q}$ where $p\in \mathbb{Z}$ and $q\in \mathbb{N}$ (or should it be $q\in \mathbb{N^+}$ or the other two in the title?).
I know that $|\mathbb{N}| = |\mathbb{N^+}|$ since I can find a bijection by defining the map $f: \mathbb{N^+} \to \mathbb{N}$ by $f(x)=x-1$.
But the problem is that this raises the question as to whether $0 \in \mathbb{N}$ or $0 \in \mathbb{Z^+}$. This is similar to asking whether $0$ is positive?
So which set should I use to define the denominator $q$ of a rational number and why? The choices are $\mathbb{N}$, $\mathbb{N_{-\{0\}}}$, $\mathbb{N^+}$ and $\mathbb{Z^+}$.
I acknowledge that this question is subject somewhat to opinion, but I value the opinions of users of this site. So all I am asking is which set of the above would you choose $q$ to belong to for the definition of the rational number?
• In class-related work you should use what your lecturer uses. Elsewhere, you can use what you like and is most convenient for what you want to do. Maybe you work closely with other people that already have a standard. If the distinction is important for what you do, you should mention whether or not $0 \in \mathbb{N}$. Dec 8, 2015 at 23:21
• (1) We don't need a tag for "mappings" because those are in fact functions. (2) This question is about notation, and not at all about functions, so the tag (and the functions tags) is entirely irrelevant. Dec 10, 2015 at 10:38
• @AsafKaragila Okay, understood; thanks for letting me know. Dec 10, 2015 at 10:41
There are two different sets of relevance here; each has several notations in use.
The set $\{0,1,2,3,\ldots\}$ is considered fundamental in areas such as logic, set theory and computer science. It can be unambiguously written $\mathbb N_0$. Set theorists often refer to it as $\omega$, deftly avoiding the notational trouble surrounding the $\mathbb N$s.
The set $\{1,2,3,4,\ldots\}$ is often needed in most of the rest of mathematics, where people find it more natural to start counting at $1$. It can be unambigously written $\mathbb N_+$ or $\mathbb Z_+$ (the location of the plus sign can vary).
Either of these sets is often notated just $\mathbb N$ -- it is up to the reader who encounters this to know (or guess) which convention the author is following, in the cases where the difference matters. It is considered polite for an author to state which convention he follows before he uses the naked $\mathbb N$, but this is not always done in practice.
In English it is unambiguous that $0$ is not "positive". However other languages may follow other conventions; in French the number $0$ counts as both "positif" and "négatif" and one has to speak about "strictement positif" if one needs to exclude $0$. (This is not a mathematical difference; the concepts $>0$ and $\ge 0$ both exist independently of which words we use about them, and the two languages simply chose different concepts to have a short word for).
• Nice answer, thanks very much. So in your opinion, which of the four would you select, I know it's subject to opinion and therefore arbitrary, but could you tell me which one you would use? Dec 10, 2015 at 7:59
• @BLAZE: Personally? I just use $\mathbb N$ for whichever of the sets I'm speaking about (usually, but not always, the one that contains $0$) unless there seems to be a particular risk that the reader will miss my point if he misunderstands me. In that case I write $\mathbb N_0$ or $\mathbb N_+$. (Except for such set-theory contexts where it is conventional to use $\omega$). I'm not holding this up as a shining example to follow, though. Dec 10, 2015 at 15:44
It usually depends on the course and the lecturer.
$0 \in \Bbb N$ vs $0 \not \in \Bbb N$ depends on the definition of $\Bbb N$. The first one is usually found in most logic courses, while in, say, analysis, this is not as common.
Whenever I see $\Bbb Z^+$ I'll think they refer to $\{1,2,3,...\}$, same for $\Bbb N^+$(this last one is quite uncommon). | 2023-03-31T07:09:59 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1566703/are-the-following-sets-exactly-the-same-mathbbn-mathbbn-0",
"openwebmath_score": 0.8430256247520447,
"openwebmath_perplexity": 412.1521452032018,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924802053235,
"lm_q2_score": 0.8856314798554444,
"lm_q1q2_score": 0.8608271386526044
} |
https://math.stackexchange.com/questions/1046321/approximating-log-x-with-roots | Approximating $\log x$ with roots
The following is a surprisingly good (and simple!) approximation for $\log(x+1)$ in the region $(-1,1)$: $$\log (x+1) \approx \frac{x}{\sqrt{x+1}}$$
Three questions:
• Is there a good reason why this would be the case?
• How does one go about constructing the "next term"?
• Are the any papers on "generalized Pade approximations" that involve radicals?
• I suggest you write down the Taylor series of $\log(x+1)$ and $(x+1)^{-1/2}$ and see the few first terms agree. – LinAlgMan Dec 1 '14 at 10:42
• @LinAlgMan - That is true, but it is not the reason. This approximation works much better than the Taylor series, even to high orders, probably because it accounts for the pole. – nbubis Dec 1 '14 at 10:43
• May I ask how you got this interesting approximation ? – Claude Leibovici Dec 1 '14 at 10:49
• @ClaudeLeibovici - the function came up in a physics calculation, and upon plotting, I noticed it looked oddly familiar. – nbubis Dec 1 '14 at 10:51
• @Lucian - I'm not sure I see the connection. – nbubis Dec 8 '14 at 20:12
Let's rewrite both sides in terms of $y = x + 1$: we get
$$\log y \approx \sqrt{y} - \frac{1}{\sqrt{y}}$$
on, let's say, the interval $\left( \frac{1}{2}, 2 \right)$ (I hesitate to discuss the entire interval $(0, 2)$; it seems to me that the approximation is not all that good near $0$). The RHS should look sort of familiar: let's perform a second substitution $y = e^{2z}$ to get
$$2z \approx e^z - e^{-z} = 2 \sinh z$$
on the interval $\left( - \varepsilon, \varepsilon \right)$ where $\varepsilon = \frac{\log 2}{2} \approx 0.346 \dots$. Of course now we see that the LHS is just the first term in the Taylor series of the RHS, and on a smaller interval than originally. Furthermore, the Taylor coefficients of $2 \sinh z$, unlike the Taylor coefficients of our original functions, decrease quite rapidly. The next term is $\frac{z^3}{3}$, which on this interval is at most
$$\frac{\varepsilon^3}{3} \approx 0.0138 \dots$$
and this is more or less the size of the error in the approximation between $\log 2$ and $\frac{1}{\sqrt{2}}$ obtained by setting $y = 2$, or equivalently $x = 1$.
With the further substitution $t = \sinh z$, the RHS is just the first term in the Taylor series of the LHS. To get the "next term" we could look at the rest of the Taylor series of $\sinh^{-1} t$. The next term is $- \frac{t^3}{6}$, which gives
$$z \approx \frac{e^z - e^{-z}}{2} - \frac{(e^z - e^{-z})^3}{48}$$
or
$$\log y \approx \left( \sqrt{y} - \frac{1}{\sqrt{y}} \right) - \frac{1}{24} \left( \sqrt{y} - \frac{1}{\sqrt{y}} \right)^3.$$
I don't know if this is useful for anything. The series to all orders just expresses the identity
$$\log y = 2 \sinh^{-1} \frac{\left( \sqrt{y} - \frac{1}{\sqrt{y}} \right)}{2}.$$
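A quick numerical check of this identity and of the truncations above (my own sketch, not part of the answer):

import numpy as np

y = np.linspace(0.5, 2.0, 201)
s = np.sqrt(y) - 1 / np.sqrt(y)

err_first    = np.max(np.abs(np.log(y) - s))                      # the original approximation
err_next     = np.max(np.abs(np.log(y) - (s - s**3 / 24)))        # with the next arcsinh term
err_identity = np.max(np.abs(np.log(y) - 2 * np.arcsinh(s / 2)))  # should be ~0 up to rounding

print(err_first, err_next, err_identity)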
• Very nicely done!! As you noticed though, near $y\to 0$, higher orders seem to actually ruin the approximation, which I find curious. – nbubis Dec 1 '14 at 11:26
• @nbubis: well, the radius of convergence of the Taylor series of $\sinh^{-1} t$ is $1$, and as $y \to 0$, $\sqrt{y} - \frac{1}{\sqrt{y}}$ gets arbitrarily large (in absolute value)... – Qiaochu Yuan Dec 1 '14 at 11:59
The simplest Pade approximant we could build seems to be $$\log(1+x)\approx\frac{x}{1+\frac{x}{2}}$$ and we can notice the similarity of denominators close to $x=0$.
However, the approximation given in the post seems to be significantly better for $x<\frac 12$.
• Indeed. It is probably better because it has a pole in the right place. – nbubis Dec 1 '14 at 11:00
I am being rather late, yet still there is some interesting information I can add.
The Padé approximant is of the form $\log(1+x)\approx \frac{x}{1+x/2}$ as noted by other posters. This partially explains why $\frac{x}{\sqrt{1+x}}$ is a good approximation; what it does not explain is why the square-root approximation is better than a supposedly "great" Pade approximation. @nbubis had an idea that it works better because it has a pole in the correct spot, but it seems that it is actually a red herring.
Let's take a look at a more general Pade $(1,n)$ approximation; it equals $\log(1+x)\approx\frac{x}{1+x/2-x^2/12+x^3/24+...}$.
Now the reason $\frac{x}{\sqrt{1+x}}$ approximation performs better can be explained by $\sqrt{1+x} \approx 1+x/2 -x^2/8$ and noting that $1+x/2-x^2/8$ is closer to the "true value" of the denominator than the first Pade approximant $1+x/2$.
To see that it is indeed the case consider the approximation
$\log(1+x)\approx\frac{x}{(1+5x/6)^{3/5}}$.
Now, as you can quickly check, $(1+5x/6)^{3/5}$ has the Taylor expansion $\approx 1+x/2 -x^2/12$, which agrees with the first 3 terms of the $(1,n)$ Pade approximant. Now if you plot it, it will turn out that it is even better than the originally suggested $\frac{x}{\sqrt{1+x}}$, despite having its pole in the wrong place.
Regarding method by @QiaochuYuan, you can perform the same "trick" to get better approximations which will be performing better in the neighbourhood of $x=0$ but worse when $x$ is large, for example
$\log (1+x) \approx \frac{x}{(1+5x/6)^{3/5}} + \frac{x^4}{108 (1+5x/6)^{12/5}}$
But in disguise what you are actually making is finding better approximations to some Pade approximant.
Some of the other approximations you can find in the same way are
$\log(1+x)\approx \frac{x}{\sqrt{1+x+x^2/12}}$ and $\log(1+x)\approx \frac{x}{(1+3x/2+x^2/2)^{1/3}}$, which are good simply because they coincide with the Pade approximant up to terms of high order. I guess the first among those two is another reason why $\frac{x}{\sqrt{1+x}}$ worked so well.
Short version: It's actually a coincidence: $\sqrt{1+x}$ happens to have the Taylor expansion $\sqrt{1+x}\approx 1+x/2-x^2/8$, which coincides with the expansion of $x/\log(1+x)\approx 1+x/2 -x^2/12$ up to 2 terms, and the third term is not different enough to mess up the approximation.
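To compare the candidates numerically, here is a small sketch of my own measuring the maximum absolute error of each approximation of $\log(1+x)$ on $[-1/2, 1]$ (not part of the answer):

import numpy as np

x = np.linspace(-0.5, 1.0, 2001)
target = np.log(1 + x)

candidates = {
    "x/(1+x/2)": x / (1 + x / 2),
    "x/sqrt(1+x)": x / np.sqrt(1 + x),
    "x/(1+5x/6)^(3/5)": x / (1 + 5 * x / 6) ** 0.6,
    "x/sqrt(1+x+x^2/12)": x / np.sqrt(1 + x + x**2 / 12),
}

for name, approx in candidates.items():
    print(name, np.max(np.abs(target - approx)))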
• I like this! That makes a lot of sense. – nbubis Dec 1 '17 at 6:00
A different approach.
The function $f(x)=x-\log(1+x)\sqrt{1+x}$ is continuous and increasing on $[-1,1]$ (I have not proved it, but a graph of $f'$ is sufficiently convincing.) For any $a\in(0,1)$ $$f(-a)\le f(x)\le f(1)=0.0197419,\quad-a\le x\le1.$$ Take for instance $a=0.8$ we obtain $$-\frac{0.0802375}{\sqrt{1+x}}\le\frac{x}{\sqrt{1+x}}-\log(1+x)\le\frac{0.0197419}{\sqrt{1+x}},\quad -.8\le x\le1.$$
This shows that $\log(1+x)\sqrt{1+x}$ is a good approximation of $x$. $$f(x)=\frac{x^3}{24}-\dots$$ is an alternating series. This explains the very good approximation for $x>0$, and the not so good one for $x<0$.
• I'm not sure how that helps - this is just an evaluation of the approximation. – nbubis Dec 1 '14 at 11:56
One reason may be that their Taylor series around $x=0$ start the same: $$\log (x+1) \approx x-x^2/2+x^3/3+\cdots$$ $$\frac{x}{\sqrt{x+1}} \approx x-x^2/2+3x^3/8+\cdots$$ So they agree to order 2 for $|x|<1$. They almost agree to order 3 because $1/3 \approx 3/8$ roughly.
However, this is an a posteriori reason. I don't know why this approximation should be good a priori.
• That doesn't necessarily tell you much about how good of an approximation to expect on an interval as large as $(-1, 1)$. – Qiaochu Yuan Dec 1 '14 at 10:46
• This was what I was just typing ! The question is interesting. – Claude Leibovici Dec 1 '14 at 10:46
• For example, when $x = 1$ the LHS is $\log 2 \approx 0.693 \dots$ while the RHS is $\frac{1}{\sqrt{2}} \approx 0.707 \dots$. These agree substantially better that can be accounted for by the first two terms of the Taylor series, which give $0.5$. – Qiaochu Yuan Dec 1 '14 at 10:48
• This is clearly not the reason. The approximation does much better than the Taylor series way past second order.(probably because it has a pole) – nbubis Dec 1 '14 at 10:49
$$\log(x+1)=\lim_{n\to\infty}n(\sqrt[n]{x+1}-1).$$
In the case $n=2$, $$2(\sqrt{x+1}-1)=2\frac{x}{\sqrt{x+1}+1}\approx\frac x{\sqrt{x+1}}$$ for small $x$.
The approximation works better as it has a vertical asymptote at $x=-1$.
• Nice :) Though the approximation ends up being better than the derivation... – nbubis Dec 1 '14 at 11:14
• This still doesn't explain most of the agreement. Again taking $x = 1$ we have $2 (\sqrt{2} - 1) \approx 0.828 \dots$, which is maybe 15% bigger than either the LHS or the RHS, which agree to maybe within 2%. – Qiaochu Yuan Dec 1 '14 at 11:15
• Anyone who's still trying to answer the question should actually plot the functions (I did it in WolframAlpha) to see how close the agreement actually is. I think the pole is a red herring: the agreement is really not very good close to the pole. – Qiaochu Yuan Dec 1 '14 at 11:17
• I don't thank the downvoters. @QiaochuYuan: the question is not a contest to the best approximation. It is about why $\log (x+1) \approx \dfrac{x}{\sqrt{x+1}}$. – Yves Daoust Oct 31 '17 at 7:45 | 2020-01-20T03:42:32 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1046321/approximating-log-x-with-roots",
"openwebmath_score": 0.8592144846916199,
"openwebmath_perplexity": 289.8432629016002,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924802053235,
"lm_q2_score": 0.885631470799559,
"lm_q1q2_score": 0.8608271298503518
} |
https://afcars.cz/blue-and-nevts/597eec-exponential-functions-examples | # exponential functions examples
Each output value is the product of the previous output and the base, 2. Find r, to three decimal places, if the the half life of this radioactive substance is 20 days. Sign up to read all wikis and quizzes in math, science, and engineering topics. Now, let’s take a look at a couple of graphs. If b b is any number such that b > 0 b > 0 and b â 1 b â 1 then an exponential function is a function in the form, f (x) = bx f (x) = b x We will see some examples of exponential functions shortly. Compare graphs with varying b values. from which we have In fact this is so special that for many people this is THE exponential function. n \log_{10}{1.03} \ge& 1 \\ In fact, that is part of the point of this example. For every possible $$b$$ we have $${b^x} > 0$$. Find the sum of all positive integers aaa that satisfy the equation above. 100 + (160 - 100) \frac{1.5^{12} - 1}{1.5 - 1} \approx& 100 + 60 \times 257.493 \\ \approx& 15550. Population: The population of the popular town of Smithville in 2003 was estimated to be 35,000 people with an annual rate of increase (growth) of about 2.4%. You appear to be on a device with a "narrow" screen width (, Derivatives of Exponential and Logarithm Functions, L'Hospital's Rule and Indeterminate Forms, Substitution Rule for Indefinite Integrals, Volumes of Solids of Revolution / Method of Rings, Volumes of Solids of Revolution/Method of Cylinders, Parametric Equations and Polar Coordinates, Gradient Vector, Tangent Planes and Normal Lines, Triple Integrals in Cylindrical Coordinates, Triple Integrals in Spherical Coordinates, Linear Homogeneous Differential Equations, Periodic Functions & Orthogonal Functions, Heat Equation with Non-Zero Temperature Boundaries, Absolute Value Equations and Inequalities, $$f\left( { - 2} \right) = {2^{ - 2}} = \frac{1}{{{2^2}}} = \frac{1}{4}$$, $$g\left( { - 2} \right) = {\left( {\frac{1}{2}} \right)^{ - 2}} = {\left( {\frac{2}{1}} \right)^2} = 4$$, $$f\left( { - 1} \right) = {2^{ - 1}} = \frac{1}{{{2^1}}} = \frac{1}{2}$$, $$g\left( { - 1} \right) = {\left( {\frac{1}{2}} \right)^{ - 1}} = {\left( {\frac{2}{1}} \right)^1} = 2$$, $$g\left( 0 \right) = {\left( {\frac{1}{2}} \right)^0} = 1$$, $$g\left( 1 \right) = {\left( {\frac{1}{2}} \right)^1} = \frac{1}{2}$$, $$g\left( 2 \right) = {\left( {\frac{1}{2}} \right)^2} = \frac{1}{4}$$. A = a^{a}b^{b}c^{c}, \quad B = a^{a}b^{c}c^{b} , \quad C = a^{b}b^{c}c^{a}. 1000Ã(12)100005730â1000Ã0.298=298.1000 \times \left( \frac{1}{2} \right)^{\frac{10000}{5730}} Check out the graph of $${2^x}$$ above for verification of this property. Therefore, we would have approximately 298 g. â¡ _\square â¡â, Given three numbers such that 0 and! Since we are only graphing one way is if we allowed \ ( b\ ) have. 1\Large |x|^ { ( x^2-x-2 ) } < 1 { 12 } 100. Curve upward, as shown in the first quadrant functions exponential functions examples starting this with. ) above for verification of this function in the exponent worked to this point population a... A very specific number into all the \ ( { b^x } > 0\ ) represent growth decay. Functions from these graphs exponential functions examples model exponential functions will want to use far more decimal places in computations! Graph y = 2 x is called the base, 2 aaa that satisfy the equation,. For a complete list of integral functions, there are some function evaluations that will give complex,! Through complex numbers course, built by experts for you would we have now section. formula for exponential. 
Exponential functions in business and science work in exactly the same manner. The approximate integer population after a year is $100 \times 1.5^{12} \approx 100 \times 129.75 = 12975$; in the other example the function would be $1000 \times 1.03^{n}$. Note that all three graphs pass through the y-intercept $(0,1)$. | 2021-05-13T00:47:47 | {
"domain": "afcars.cz",
"url": "https://afcars.cz/blue-and-nevts/597eec-exponential-functions-examples",
"openwebmath_score": 0.9695624113082886,
"openwebmath_perplexity": 759.3426364341776,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924793940118,
"lm_q2_score": 0.885631470799559,
"lm_q1q2_score": 0.8608271291318287
} |
http://mathhelpforum.com/calculus/87199-find-interval-function-concave-up.html | # Thread: Find the interval in which the function concave up??
1. ## Find the interval in which the function concave up??
The graph of the function $f(x) = x^4 + x^3 + x$
is concave up on the interval:
(-infinity,infinity)
None
(1,infinity)
(-infinity,0)U(1,infinity)
(0,infinity)
=======================
I solved the question and my answer is:
The graph is concave up on (-infinity,-0.5)U(0,infinity)
and I'm sure of my answer, but it is not included in the choices,
and when I use the graphics calculator the graph looks concave up on the whole interval.
So what is the correct answer??
2. You can use derivatives to answer this question
First you must find the critical points, take your first equation and differentiate and solve for 0
$f(x)=x^4+x^3+x$
$f\prime(x)=4x^3+3x^2+1$
Now that its differentiated solve for 0
$4x^3+3x^2+1=0$
$x(4x^2+3x+1)=0$
you get critical points at x = -1.5 and x = 0
now to find a graph is increasing you plug your critical points back in the derivative and if its <0 graph is decreasing >0 increasing
to find concavity you take second derivative and plug in critical points if
>0 then concave up and <0 concave down
the second derivative is
$12x^2+6$
plug in 0 and -1.5
plug 0 in and you see it is >0 so intervals from 0 to infinity is concave up
plug -1.5 it is also concave up because > 0 so from -inf. to -1.5
sorry for lack of work studying for finals
3. Originally Posted by sk8erboyla2004
You can use derivatives to answer this question
First you must find the critical points, take your first equation and differentiate and solve for 0
$f(x)=x^4+x^3+x$
$f\prime(x)=4x^3+3x^2+1$
Now that its differentiated solve for 0
$4x^3+3x^2+1=0$
$x(4x^2+3x+1)=0$
you get critical points at x = -1.5 and x = 0
now to find a graph is increasing you plug your critical points back in the derivative and if its <0 graph is decreasing >0 increasing
to find concavity you take second derivative and plug in critical points if
>0 then concave up and <0 concave down
the second derivative is
$12x^2+6$
plug in 0 and -1.5
plug 0 in and you see it is >0 so intervals from 0 to infinity is concave up
plug -1.5 it is also concave up because > 0 so from -inf. to -1.5
sorry for lack of work studying for finals
but I would like to remind you that in the concavity test we do the following:
1)find first derivative
2)find second derivative
3) set the second derivative = 0 and solve for x
4) using the zeros of the second derivative we will set up open intervals and select a test number within each interval to determine the type of concavity.
4. Originally Posted by change_for_better
The graph of the function $f(x) = x^4 + x^3 + x$
is concave up on the interval:
(-infinity,infinity)
None
(1,infinity)
(-infinity,0)U(1,infinity)
(0,infinity)
=======================
I solved the question and my answer is:
The graph is concave up on (-infinity,-0.5)U(0,infinity)
and I'm sure of my answer, but it is not included in the choices,
and when I use the graphics calculator the graph looks concave up on the whole interval.
So what is the correct answer??
A function is concave upward when its second derivative is positive.
If $y= x^4+ x^3+ x$, then $y''= 12x^2+ 6x= 6x(2x+ 1)$. That is 0 at x= 0 and x= -1/2. If x< -1/2, then x< 0 and 2x+1< 0, so $y''$ is positive. If -1/2< x< 0, then x< 0 and 2x+1> 0, so $y''$ is negative. If x> 0, then 2x+1> 0, so $y''$ is positive. y is concave upward on exactly the interval you give, whether it is one of the options or not!
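A small sympy sketch can confirm this sign analysis; the test points below are arbitrary choices, one from each interval:

```python
import sympy as sp

x = sp.symbols('x')
f = x**4 + x**3 + x
f2 = sp.diff(f, x, 2)                       # second derivative
print(sp.factor(f2))                        # 6*x*(2*x + 1)
print(sp.solve(sp.Eq(f2, 0), x))            # [-1/2, 0]
for p in (-1, sp.Rational(-1, 4), 1):       # one test point in each interval
    print(p, f2.subs(x, p) > 0)             # True, False, True
```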
5. Originally Posted by change_for_better
but I would like to remind you that in the concavity test we do the following:
1)find first derivative
2)find second derivative
3) set the second derivative = 0 and solve for x
4) using the zeros of the second derivative we will set up open intervals and select a test number within each interval to determine the type of concavity.
If you know this, why couldn't you have used it to answer your own question?
6. Originally Posted by HallsofIvy
A function is concave upward when its second derivative is positive.
If $y= x^4+ x^3+ x$, then $y''= 12x^2+ 6x= 6x(2x+ 1)$. That is 0 at x= 0 and x= -1/2. If x< -1/2, then x< 0 and 2x+1< 0, so $y''$ is positive. If -1/2< x< 0, then x< 0 and 2x+1> 0, so $y''$ is negative. If x> 0, then 2x+1> 0, so $y''$ is positive. y is concave upward on exactly the interval you give, whether it is one of the options or not!
I would like to ask you: when I use the graphics calculator, the shape of the graph of the function looks concave up on the whole interval, but when we solve the question, the interval (-0.5, 0) is concave down??
7. Originally Posted by sk8erboyla2004
If you know this, why couldn't you have used it to answer your own question?
Because, when I use the graphics calculator, the shape of the graph of the function looks concave up on the whole interval, but when we solve the question, the interval (-0.5, 0) is concave down??
8. Hello, change_for_better!
I agree with your intervals . . .
The graph of the function: . $f(x) \;=\;x^4+x^3+x$ is concave up on the interval:
. . $(a)\;(-\infty, \infty) \qquad(b)\text{ None} \qquad (c)\;(1,\infty) \qquad (d)\;(-\infty,0) \;\cup\; (1,\infty) \qquad(e)\;(0,\infty)$
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
I solved and my answer is: . $\left(-\infty,-\tfrac{1}{2}\right) \cup (0,\infty)$
and I'm sure of my answer, but it is not included in the choices.
and when I use the graphics calculator the graph looks concave up on the whole interval
. .
Um ... not quite!
So what is the correct answer?
$\begin{array}{ccc}f(x) &=&x^4+x^3+x \\ f'(x) &=& 4x^3+3x^2+1 \\ f''(x) &=& 12x^2+6x \end{array}$
$f''(x) > 0 \quad\Rightarrow\quad x < \text{-}\tfrac{1}{2}\:\text{ or }\:x > 0$
If you zoom in on your graph, there is a concave-down portion.
Code:
|
| *
| *
| *
* | *
- - - - - - - - - o - - - - - - - -
* o |
* o |
* * |
* |
|
And it's right where you predicted! . . . $\left(\text{-}\tfrac{1}{2},\:0\right)$
9. Originally Posted by Soroban
Hello, change_for_better!
I agree with your intervals . . .
$\begin{array}{ccc}f(x) &=&x^4+x^3+x \\ f'(x) &=& 4x^3+3x^2+1 \\ f''(x) &=& 12x^2+6x \end{array}$
$f''(x) > 0 \quad\Rightarrow\quad x < \text{-}\tfrac{1}{2}\:\text{ or }\:x > 0$
If you zoom in on your graph, there is a concave-down portion.
Code:
|
| *
| *
| *
* | *
- - - - - - - - - o - - - - - - - -
* o |
* o |
* * |
* |
|
And it's right where you predicted! . . . $\left(\text{-}\tfrac{1}{2},\:0\right)$
Thank you very very very very much Soroban for great explanation
Now I understand my mistake ..
So from the choices I should select None, because the correct answer is not included. | 2017-02-20T10:21:23 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/calculus/87199-find-interval-function-concave-up.html",
"openwebmath_score": 0.8478879332542419,
"openwebmath_perplexity": 912.2231368772646,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9719924777713886,
"lm_q2_score": 0.8856314692902446,
"lm_q1q2_score": 0.8608271262277404
} |
https://math.stackexchange.com/questions/2079950/compute-the-n-th-power-of-triangular-3-times3-matrix/2079962 | # Compute the $n$-th power of triangular $3\times3$ matrix
I have the following matrix
$$\begin{bmatrix} 1 & 2 & 3\\ 0 & 1 & 2\\ 0 & 0 & 1 \end{bmatrix}$$
and I am asked to compute its $n$-th power (to express each element as a function of $n$). I don't know at all what to do. I tried to compute some values manually to see some pattern and deduce a general expression, but that didn't give anything (especially for the top right). Thank you.
Computing the first few powers should allow you to find a pattern for the terms. Below are some terms:
$$\left(\begin{matrix}1 & 2 & 3\\0 & 1 & 2\\0 & 0 & 1\end{matrix}\right), \left(\begin{matrix}1 & 4 & 10\\0 & 1 & 4\\0 & 0 & 1\end{matrix}\right), \left(\begin{matrix}1 & 6 & 21\\0 & 1 & 6\\0 & 0 & 1\end{matrix}\right), \left(\begin{matrix}1 & 8 & 36\\0 & 1 & 8\\0 & 0 & 1\end{matrix}\right), \left(\begin{matrix}1 & 10 & 55\\0 & 1 & 10\\0 & 0 & 1\end{matrix}\right), \left(\begin{matrix}1 & 12 & 78\\0 & 1 & 12\\0 & 0 & 1\end{matrix}\right)$$
All but the top right corner are trivial, so let's focus on that pattern. (Although if you look at it carefully you should recognize the terms.)
Terms: $3,10,21,36,55,78$
First difference: $7, 11, 15, 19, 23$
Second difference: $4, 4, 4, 4$
As the second difference is a constant, the formula must be a quadratic. As the second difference is 4, it is of the form $2n^2+bn+c$. Examining the pattern gives the formula $2n^2+n=n(2n+1)$.
So the $n^{th}$ power is given by:
$$\left(\begin{matrix}1 & 2n & n(2n+1)\\0 & 1 & 2n\\0 & 0 & 1\end{matrix}\right)$$
The reason I said you should recognize the pattern is because it is every second term out of this sequence: $1,3,6,10,15,21,28,36,45,55,66,78,\cdots$, which is the triangular numbers.
• It seemed more in line with what the OP had been trying. It would also let the OP see if their approach (and the powers of the matrix) were correct. – Ian Miller Jan 2 '17 at 1:28
• This is a great approach to "guess the formula" (and it is enough for any practical purpose), but it is not enough to constitute a proof. Could you please make it complete (or at least mention in the body that it still needs a proof)? – dtldarek Jan 2 '17 at 9:57
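If you want to check the guessed formula for more exponents than were computed by hand, a short numpy sketch will do it (the range of exponents tested is an arbitrary choice):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [0, 1, 2],
              [0, 0, 1]])

P = np.eye(3, dtype=int)
for n in range(1, 8):
    P = P @ A                                   # P now holds A**n
    closed = np.array([[1, 2 * n, n * (2 * n + 1)],
                       [0, 1,     2 * n],
                       [0, 0,     1]])
    assert (P == closed).all()
print("closed form matches A**n for n = 1..7")
```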
Write this matrix as follows: $$\left[ \begin{matrix} 1 & 2&3\\ 0 & 1 & 2\\ 0 & 0 &1 \end{matrix} \right] = I + 2 J+ 3 J^{2}.$$ where $$I = \left[ \begin{matrix} 1 & & \\ &1 & \\ & & 1 \end{matrix} \right], ~ J = \left[ \begin{matrix} 0& 1 &0 \\ 0 &0 & 1 \\ 0 & 0& 0 \end{matrix} \right],~ J^2 = \left[ \begin{matrix} 0& 0 &1\\ 0 & 0 &0\\ 0& 0 & 0 \end{matrix} \right], ~ J^{3}=0.$$ With this relation you can expand the power of the matrix into sum of $I$, $J$ and $J^2$.
• Thank you for your answer, interesting because it takes a different approach to my original idea, I will study it. – Trevör Jan 2 '17 at 12:06
Define
$$J = \begin{bmatrix} 0 & 2 & 3\\ 0 & 0 & 2\\ 0 & 0 & 0 \end{bmatrix}$$
so that the problem is to compute $(I+J)^n$. The big, important things to note here are
• $I$ and $J$ commute
• $J^3 = 0$
which enables the following powerful tricks: the first point lets us expand it with the binomial theorem, and the second point lets us truncate to the first few terms:
$$(I+J)^n = \sum_{k=0}^n \binom{n}{k} I^{n-k} J^k = I + nJ + \frac{n(n-1)}{2} J^2$$
More generally, for any function $f$ that is analytic at $1$, (such as any polynomial), if you extend it to matrices via Taylor expansion, then under the above conditions, its value at $I+J$ is given by
$$f(I+J) = \sum_{k=0}^\infty f^{(k)}(1) \frac{J^k}{k!} = f(1) I + f'(1) J + \frac{1}{2} f''(1) J^2$$
As examples of things whose result you can check simply (so you can still use the method even if you're uncomfortable with it, because you can check the result), you can compute the inverse by
$$(I+J)^{-1} = I - J + J^2 = \begin{bmatrix} 1 & -2 & 1\\ 0 & 1 & -2\\ 0 & 0 & 1 \end{bmatrix}$$
and if you want a square root, you can get
$$\sqrt{I+J} = I + \frac{1}{2} J - \frac{1}{8} J^2 = \begin{bmatrix} 1 & 1 & 1\\ 0 & 1 & 1\\ 0 & 0 & 1 \end{bmatrix}$$
(These are actually special cases of $(I+J)^n$ by the generalized binomial theorem for values of $n$ that aren't nonnegative integers)
• Thank you so much for your answer, interesting because it takes a different approach than my original idea, I will study it. – Trevör Jan 2 '17 at 12:06
Let $I=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}$, $A=\begin{pmatrix}0&1&0\\0&0&1\\0&0&0\end{pmatrix}$ and $B=\begin{pmatrix}0&0&1\\0&0&0\\0&0&0\end{pmatrix}$. Then $M=I+2A+3B$.
You can prove that for any $n$, $M^n$ can be written as $M^n=\lambda_n I + a_n A + b_n B$ (because it's upper triangular and the symmetry along the ascending diagonal will remain).
So $M^{n+1}=M^nM=(\lambda_n I + a_n A + b_n B) (I + 2A + 3B) = \dots$
Using this, compute $\lambda_n$, and then $a_n$ and finally $b_n$.
• An essential point is that the matrices commute – Rene Schipperus Jan 2 '17 at 2:29
• @ReneSchipperus : It's not really essential. The essential thing is that the vector space generated by the matrices is an algebra: $\forall X,Y\in Vect(I, A, B), XY \in Vect(I, A, B)$ (which is equivalent to $\forall X,Y\in \{I, A, B\}, XY \in Vect(I, A, B)$, because it's a vector space and you can use distributivity). In the general case, you would us the algebra of upper-triangular matrices, compute the coefficients on the diagonal, then right above the diagonal and keep going, and you'll only need the values already computed because the matrices are upper triangular. – xavierm02 Jan 2 '17 at 13:17
Here is another variation based upon walks in graphs.
We interpret the matrix $A=(a_{i,j})_{1\leq i,j\leq 3}$ with \begin{align*} A= \begin{pmatrix} 1 & 2 & 3\\ \color{grey}{0} & 1 & 2\\ \color{grey}{0} & \color{grey}{0} & 1 \end{pmatrix} \end{align*} as adjacency matrix of a graph with three nodes $P_1,P_2$ and $P_3$ and for each entry $a_{i,j}\neq 0$ with a directed edge from $P_i$ to $P_j$ weighted with $a_{i,j}$.
Note: When calculating the $n$-th power $A^n=\left(a_{i,j}^{(n)}\right)_{1\leq i,j\leq 3}$ we can interpret the element $a_{i,j}^{(n)}$ of $A^n$ as the number of (weighted) paths of length $n$ from $P_i$ to $P_j$. The entries of $A=(a_{i,j})_{1\leq i,j\leq 3}$ are the weighted paths of length $1$ from $P_i$ to $P_j$.
See e.g. chapter 1 of Topics in Algebraic Combinatorics by Richard P. Stanley.
Let's look at the corresponding graph and check for walks of length $n$.
• We see there are no directed edges from $P_2$ to $P_1$ and no directed edges from $P_3$ to $P_2$ and from $P_3$ to $P_1$ which implies there are no walks of length $n$ either. So, $A^n$ has due to the specific triangle structure of $A$ necessarily zeroes at the same locations as $A$. \begin{align*} A^n= \begin{pmatrix} . & . & .\\ \color{grey}{0} & . & .\\ \color{grey}{0} & \color{grey}{0} & . \end{pmatrix} \end{align*}
• It is also easy to consider the walks of length $n$ from $P_i$ to $P_i$. There is only one possibility to loop along the vertex weighted with $1$ from $P_i$ to $P_i$ and so the entries $a_{i,i}^{(n)}$ are \begin{align*} 1\cdot 1\cdot 1\cdots 1 = 1^n=1 \end{align*} and we obtain \begin{align*} A^n= \begin{pmatrix} 1& . & .\\ \color{grey}{0} & 1 & .\\ \color{grey}{0} & \color{grey}{0} & 1 \end{pmatrix} \end{align*}
and now the more interesting part
• $P_1$ to $P_2$:
The walks of length $n$ from $P_1$ to $P_2$ can start with zero or more loops at $P_1$ followed by a step (weighted with $2$) from $P_1$ to $P_2$ and finally zero or more loops at $P_2$. All the loops are weighted with $1$. There are $n$ possibilities to walk this way \begin{align*} a_{1,2}^{(n)}=2\cdot 1^{n-1}+1\cdot 2\cdot 1^{n-2}+\cdots +1^{n-2}\cdot 2\cdot 1+1^{n-1}\cdot 2=2n \end{align*}
• $P_2$ to $P_3$:
Symmetry is trump. When looking at the graph we observe the same situation as before from $P_1$ to $P_2$ and conclude
\begin{align*} a_{2,3}^{(n)}=2n \end{align*}
• $P_1$ to $P_3$:
Here are two different types of walks of length $n$ possible. The first walk uses the weight $3$ edge from $P_1$ to $P_3$ as we did when walking from $P_1$ to $P_2$ along the weight $2$ edge. This part gives therefore \begin{align*} 3\cdot 1^{n-1}+1\cdot 3\cdot 1^{n-2}+\cdots +1^{n-2}\cdot 3\cdot 1+1^{n-1}\cdot 3=3n\tag{1} \end{align*} The other type of walk of length $n$ uses the hop via $P_2$. We observe it is some kind of concatenation of walks as considered before from $P_1$ to $P_2$ and from $P_2$ to $P_3$. In fact there are $\binom{n}{2}$ possibilities to place two $2$'s in a walk of length $n$. All other steps are loops at $P_1,P_2$ and $P_3$ and we obtain \begin{align*} \binom{n}{2}\cdot 2\cdot 2=2n(n-1)\tag{2} \end{align*} Summing up (1) and (2) gives \begin{align*} a_{1,3}^{(n)}=3n+2n(n-1)=n(2n+1) \end{align*}
and we finally obtain
\begin{align*} A^n=\left(a_{i,j}^{(n)}\right)_{1\leq i,j\leq 3}=\begin{pmatrix} 1& 2n & n(2n+1)\\ \color{grey}{0} & 1 & 2n\\ \color{grey}{0} & \color{grey}{0} & 1 \end{pmatrix} \end{align*}
Using symmetry, write the recurrence relation
$$\begin{pmatrix} a_{n+1} & b_{n+1}&c_{n+1}\\ 0 & a_{n+1} & b_{n+1}\\ 0 & 0 &a_{n+1} \end{pmatrix}= \begin{pmatrix} 1 & 2&3\\ 0 & 1 & 2\\ 0 & 0 &1 \end{pmatrix} \begin{pmatrix} a_{n} & b_{n}&c_{n}\\ 0 & a_{n} & b_{n}\\ 0 & 0 &a_{n} \end{pmatrix}.$$
Then $$\begin{cases}a_{n+1}=a_n\\ b_{n+1}=b_n+2a_n\\ c_{n+1}=c_n+2b_n+3a_n.\end{cases}$$
We solve with
$$a_n=Cst=1,\\b_n=2(n-1)+Cst=2n,\\ c_n=4t_{n-1}+3(n-1)+Cst=2n^2+n,$$
where $t_n$ denotes a triangular number, using the initial conditions $a_1=1,b_1=2,c_1=3$. | 2019-08-18T04:39:24 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2079950/compute-the-n-th-power-of-triangular-3-times3-matrix/2079962",
"openwebmath_score": 0.9655170440673828,
"openwebmath_perplexity": 290.0539395983521,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986777180191995,
"lm_q2_score": 0.8723473779969194,
"lm_q1q2_score": 0.8608124858076805
} |
https://math.stackexchange.com/questions/2075609/trying-to-convert-a-summation-to-an-equation | # Trying to convert a summation to an equation
Here is what I am trying to figure out $\sum_{i=1}^n 3+2 (1-i)$ It would be nice if this could be put into a single equation to be used in a larger system. I know that it is possible to break up the summation into two parts like this $\sum_{i=1}^n 3+\sum_{i=1}^n2 (1-i)$ and it then becomes $3*n+\sum_{i=1}^n2 (1-i)$ I get stuck in trying to convert the second summation into a regular equation. Ultimately, it seems that this should be some sort of exponential function, but I am not having any luck finding it.
This is for a personal project and not homework in case this is a concern.
Brandon
Well, $\displaystyle \sum_{i=1}^n 2(1-i) = \sum_{i=1}^n 2 - 2\sum_{i=1}^n i = 2n - \frac{2n(n+1)}{2}$.
Then we have $\displaystyle \sum_{i=1}^n 3 +2(1-i) = 3n + 2n - n(n+1) = 4n - n^2$.
For intuition on $\displaystyle \sum_{i=1}^n i = \frac{n(n+1)}{2}$, notice $1 + 2 + 3 + \ldots + (n-2) + (n-1) +n = (n+1) + (n-1 +2) + (n -2 +3) + \ldots + (\frac{n}{2} + (\frac{n}{2}+1)) = (n+1) + (n+1) + (n+1) + \ldots + (n+1)$,
grouping the first and last terms together, then the second and second to last, etc., where there are $\frac{n}{2}$ (for even $n$) terms in the last sum. If $n$ is odd, treat with $\frac{n+1}{2}$ where necessary.
Then summing $n+1$ $\frac{n}{2}$ times clearly gives $\frac{n(n+1)}{2}$.
• Awesome explanation. Thank you! – Brandon Dec 29 '16 at 21:02
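A one-loop Python sketch (the range of $n$ tested is arbitrary) checks the closed form $4n - n^2$ against the raw summation:

```python
for n in range(1, 11):
    total = sum(3 + 2 * (1 - i) for i in range(1, n + 1))
    assert total == 4 * n - n ** 2
print("sum of 3 + 2(1 - i) over i = 1..n equals 4n - n^2 for n = 1..10")
```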
Hint :- $\sum_{i=1}^n2(1-i)=\sum_{i=1}^n2-2\sum_{i=1}^ni=2.n-2(\frac{n(n+1)}{2})$. | 2019-07-17T14:50:36 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2075609/trying-to-convert-a-summation-to-an-equation",
"openwebmath_score": 0.9556393623352051,
"openwebmath_perplexity": 166.52656671219535,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986777175136814,
"lm_q2_score": 0.8723473713594992,
"lm_q1q2_score": 0.8608124748481518
} |
http://math.stackexchange.com/questions/266977/show-lim-limits-n-to-infty-sqrtnneen-e/266981 | # Show $\lim\limits_{n\to\infty} \sqrt[n]{n^e+e^n}=e$
Why is $\lim\limits_{n\to\infty} \sqrt[n]{n^e+e^n}$ = $e$? I couldn't get this result.
-
I tried it with the triangle inequality and did not get the right result. I tried it with $a^b = e^{b\ln(a)}$ but did not get any further. This is an exercise I do for exam preparation; it's not homework. – leo Dec 29 '12 at 11:06
Thanks for all the brilliant answers. The accepted one is the one which I could understand the best, even if might not be the most elegant or shortest. – leo Dec 29 '12 at 13:57
Would you like me to provide more details about my solution? What do you not understand about it? It's a straightforward way to deal with n roots, and is just based on Bernoulli's inequality that $(1+x)^n \geq 1+ nx$ for $x\geq 0, n\in \mathbb{N}$. – Calvin Lin Dec 30 '12 at 1:08
@CalvinLin I changed my mind now. Nameless solution did just use Math I was more familiar with. But your solution is very clean and I understand it now. – leo Jan 27 '13 at 16:31
This is equivalent to showing that $\lim_{n \rightarrow \infty} \sqrt[n]{\frac {n^e}{e^n} + 1 } = 1$.
This is clearly bounded below by 1. It is bounded above by $1 + \frac {n^e}{n e^n}$, which has a limit of 1 since polynomials grow much slower than exponentials.
-
$$\large (n^e + e^n)^{\frac{1}{n}} = e \left ( 1 + \frac {n^e}{e^n} \right) ^{\frac 1 n}$$ $$e \Large \left ( 1 + \frac {n^e}{e^n} \right) ^{\frac 1 n} = e \left ( \underbrace { \left ( 1 + \frac {1}{e^{n - e \log n}} \right) ^{e^{n - e \log n}} }_{e}\right)^{ \underbrace{\frac{1}{n\,e^{n - e \log n}}}_{0}}$$
-
What BIG fonts you have!! – Haskell Curry Dec 30 '12 at 8:13
$$\lim_{n\to\infty} \sqrt[n]{n^e+e^n}=\lim_{n\to\infty} (n^e+e^n)^{\frac1n}=\lim_{n\to\infty} e^{\ln (n^e+e^n)\frac1n}$$ Now you just have to compute $$\lim_{n\to\infty} \frac{\ln (n^e+e^n)}n=\lim_{n\to\infty} \frac{\ln (e^{e\ln n}+e^n)}n=\lim_{n\to\infty} \frac{\ln e^n(e^{e\ln n-n}+1)}n=1+\lim_{n\to\infty} \frac{\ln (e^{e\ln n-n}+1)}n=1+\lim_{n\to\infty} \frac{\ln (e^{-\infty}+1)}n=1+\lim_{n\to\infty} \frac{\ln (0+1)}n=1+\lim_{n\to\infty} \frac{\ln (1)}n=1+0=1$$ since $e\ln n-n\to -\infty$.
-
could you please show the second last step a bit more fine grained. Otherwise this is the most clear answer to me. – leo Dec 29 '12 at 12:56
@leo Sure I can – Nameless Dec 29 '12 at 12:57
Let's see a more direct way
$$\lim\limits_{n\to\infty} \sqrt[n]{n^e+e^n}=\lim\limits_{n\to\infty} \sqrt[n]{e^n}=e$$ because the ratio test applied to $\frac{n^e}{e^n}$ yields $\lim_{n\to\infty}\frac{n^e}{e^n}=0.$
-
Why is $n^e$ is negligible? – leo Dec 29 '12 at 11:28
@leo: because the exponential function grows much faster than the polynomial function. – Chris's sis Dec 29 '12 at 11:29
That is of course far from rigorous. – Eckhard Dec 29 '12 at 12:19
@Eckhard: in addition, this is the way I was taught to answer such questions in high school. – Chris's sis Dec 29 '12 at 12:30
I have to agree this is far from rigorous. – user641 Dec 29 '12 at 12:54
You can also apply two gendarmes theorem. The idea is to find sequences $l_n, u_n$ which bound (from below and above) the given sequence $a_n$, i.e. $$l_n\le a_n\le u_n$$ holds, and which converge to a joint limit (i.e. $\lim_{n\to\infty}l_n=\lim_{n\to\infty}u_n=g).$ Then the theorem says that $a_n$ is also convergent and the limit is $g$. Let's see how it works.
Finding these bounds is usually pretty straightforward – the lower bound is often obtained by simply dropping some nonnegative terms. While seeking the upper bound, one has to remember that the inequality has to be true only for all sufficiently large $n$. In this case one can write: $$\sqrt[n]{0+e^n}\le\sqrt[n]{n^e+e^n}\le\sqrt[n]{e^n+e^n}$$ since $0\le n^e$ and $n^e\le e^n$ for sure if $n$ is sufficiently large. Next, we observe that
$l_n:= \sqrt[n]{e^n}=e\longrightarrow e$ as well as
$u_n:=\sqrt[n]{2e^n}=e\sqrt[n]{2}\longrightarrow e\cdot 1 =e.$
The theorem yields the claim.
-
You can also use that $\sqrt[n]{a+b}\le \sqrt[n]{a}+\sqrt[n]{b}$. – user641 Dec 29 '12 at 19:31
Another very nice answer! Thank you! – leo Dec 29 '12 at 20:07
$0\le n^e$ and $n^e\le e^n$ are true for all $n\ge 0$ – Henry Dec 29 '12 at 22:44
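The convergence is also easy to watch numerically; a small Python sketch, working in logarithms to avoid overflow (the values of $n$ are arbitrary choices):

```python
import math

for n in (10, 100, 1000):
    # log(n^e + e^n) = n + log(1 + n^e * e^(-n))
    log_val = n + math.log1p(n ** math.e * math.exp(-n))
    print(n, math.exp(log_val / n))          # tends to e
print("e =", math.e)
```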
You have $$\sqrt[n]{n^e+e^n}= \exp \left( \frac{\ln(n^e+e^n)}{n} \right)= \exp \left(1+ \frac{\ln \left( 1+ \frac{n^e}{e^n} \right)}{n} \right)$$ But $\ln \left( 1+ \frac{n^e}{e^n} \right) \sim \frac{n^e}{e^n}$ so you can conclude.
-
Note that by L'Hospital$$\lim_{n\to\infty}\frac{\log (n^e+e^n)}{n}=\lim_{n\to\infty}\frac{\frac{1}{n^e+e^n}(e\,n^{e-1}+e^n)}{1}=\lim_{n\to\infty}\frac{e\,n^{e-1}}{n^e+e^n}+\lim_{n\to\infty}\frac{1}{1+\frac{n^e}{e^n}}$$ Now $n\cdot n^{e-1}\leq n^e+e^n$, so $n^{e-1}/(n^e+e^n)\leq1/n$. Hence, taking limits on both sides, the first limit is zero. For the second one it can be proved that $n^{e+1}\leq e^n$ for all $n\geq n_0$, for some $n_0$. So $\lim_{n\to\infty}(n^e/e^n)\leq\lim_{n\to\infty}(1/n)=0$. Hence the second limit equals $1$. Now taking the exponential, the required limit evaluates to $e$.
-
Taking logs, you must show that $$\lim_{n \rightarrow \infty} {\ln(n^e + e^n) \over n} = 1$$ Applying L'hopital's rule, this is equivalent to showing $$\lim_{n \rightarrow \infty}{en^{e-1} + e^n \over n^e + e^n} = 1$$ Which is the same as $$\lim_{n \rightarrow \infty}{e{n^{e-1}\over e^n} + 1 \over {n^e \over e^n}+ 1} = 1$$ By applying L'hopital's rule enough times, any limit of the form $\lim_{n \rightarrow \infty}{\displaystyle {n^a \over e^n}}$ is zero. So one has $$\lim_{n \rightarrow \infty}{e{n^{e-1}\over e^n} + 1 \over {n^e \over e^n}+ 1} = {e*0 + 1 \over 0 + 1}$$ $$= 1$$ (If you're wondering why you can just plug in zero here, the rigorous reason is that the function ${\displaystyle {ex + 1 \over y + 1}}$ is continuous at $(x,y) = (0,0)$.)
- | 2014-11-26T21:56:47 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/266977/show-lim-limits-n-to-infty-sqrtnneen-e/266981",
"openwebmath_score": 0.944831371307373,
"openwebmath_perplexity": 313.8755475987201,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109529751892,
"lm_q2_score": 0.8740772384450967,
"lm_q1q2_score": 0.8608008381670373
} |
https://math.stackexchange.com/questions/1991652/odds-of-drawing-an-ace-on-the-first-draw-or-a-two-on-the-second | # Odds of drawing an ace on the first draw OR a two on the second
I'm working on a puzzle which describes a casino game where you draw 13 cards. You win if your first card is an ace, or your second is a two, or... all the way up to the 13th draw being a king. In order to analyse the problem, I started by simplifying to a two-card version. You draw two cards and win if the first is an ace or the second is a deuce.
The odds of drawing an ace on the first card is clearly 4/52 or 1/13.
My understanding, backed up by this question If you draw two cards, what is the probability that the second card is a queen?, suggests that the odds of drawing a 2 on the second card is also 1/13. What are the odds of drawing either? To my knowledge, you can't directly calculate that but instead calculate the inverse. The odds of not drawing a specific card should clearly be 12/13 and the odds of not drawing an ace followed by not drawing a two should be 12/13 * 12/13 = 144/169. The odds of drawing one or the other should therefore be 25/169.
But there's another approach to analysing the problem. There are 52 * 51 = 2652 different combinations for the first two cards. If A is an ace, T is a two and x is any other card, there are 5 combinations which give you a win:
AA Ax AT xT TT
Given that there are four aces, four twos and 44 other cards, the number of winning hands is:
4*3 + 4*44 + 4*4 + 44*4 + 4*3 = 392.
So the odds of winning should be 392 / 2652.
25/169 and 392/2652 are very close but they are NOT the same number. I believe the second number is accurate, but I can't see where the logic in the first method fails. I suspect that it has to do with the possibility of drawing both being double counted but I can't see how that should matter. It seems like you should be able to treat each draw as an independent event. Additionally, the second method doesn't scale well - it would be extremely tedious to directly calculate the number of winning hands for the full 13 card game.
edit: Clarified the rules for winning the game.
• The odds of firstly drawing an ace and secondly a two is $\frac4{52}\frac4{51}$. Can you understand why? – drhab Oct 30 '16 at 15:22
• Arthur, I'm not counting the number of ways to draw aces and twos, I'm counting the number of ways to win. Neither xA nor Tx win the game. – Dan J. Oct 30 '16 at 15:44
• Heads up - odds and probability are two different things. – Sean Roberson Oct 30 '16 at 15:56
• drhab, I disagree. If you draw a two on the first draw, then the odds of drawing a two on the second draw are only 3/51, not 4/51. You have to take both possibilities into account and that gives you odds of 1/13. See the linked question in my submission. – Dan J. Oct 30 '16 at 16:04
• The probability of firstly drawing an ace (not a two) and secondly a two is $\frac4{52}\frac4{51}$. If firstly an ace has been drawn then $4$ of the remaining $51$ cards are two's. – drhab Oct 30 '16 at 16:26
Let $A$ denote the event that the first draw will be an ace and let $B$ be the event that the second draw will be a two.
Then:$$\Pr(A\cup B)=$$$$\Pr(A)+\Pr(B)-\Pr(A\cap B)=$$$$\Pr(A)+\Pr(B)-\Pr(A)\Pr(B\mid A)=$$$$\frac4{52}+\frac4{52}-\frac{4}{52}\frac{4}{51}$$
The problem with your first method is that the events are not independent. Yes, the probability that you don't draw an ace from a full deck is $12/13$, and the probability that you don't draw a two from a full deck is $12/13$, but if you draw one card and then draw a second, the outcome of the second draw is dependent on the outcome of the first draw. Hence you cannot multiply the probabilities.
Your second method of calculating $P(\text{ace on first draw or two on second draw})$ is correct because it takes into account the outcome of the first draw and the second draw.
• Then is the question I linked to about drawing a queen on the second card wrong? In particular, look at A.J.'s accepted answer showing that the odds remain 1/13 even if you take the first draw into account. Is that incorrect or does it not apply here? And if doesn't apply, why not? Also, my description of the game was ambiguous. It should be OR, not AND. I'll edit. – Dan J. Oct 30 '16 at 15:50
• I'm sorry, I misunderstood the condition for winning! As for the accepted answer in the link, the probability that the second card is a queen given that you know nothing about the first card is indeed $1/13$. Similarly, the probability that the second card is not a two given that you know nothing about the first card is $12/13$. But you cannot use this to calculate $P(\text{first card not ace and second card not two})$ because in that event you do have information about the first card. – kccu Oct 30 '16 at 20:23
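Both numbers can be confirmed by exhaustive enumeration; a short Python sketch (the rank/suit encoding is just one convenient choice) counts the 392 winning ordered pairs and evaluates the inclusion-exclusion formula:

```python
from itertools import permutations

deck = [(rank, suit) for rank in range(1, 14) for suit in range(4)]   # rank 1 = ace, 2 = two
wins = sum(1 for first, second in permutations(deck, 2)
           if first[0] == 1 or second[0] == 2)
print(wins, 52 * 51, wins / (52 * 51))           # 392 2652 0.1478...
print(4/52 + 4/52 - (4/52) * (4/51))             # same value via inclusion-exclusion
```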
On questions like these you have to stop yourself from taking shortcuts and keep your eye on the big picture. Therefore always use both cards. First, the number of ways of getting an ace on the first draw is 4 times the 51 possibilities for the second card. Second, the number of ways of getting a 2 on the second card is 4 times 44 when the first card is neither an ace nor a 2, plus 4 times 3 when the first card is also a 2. Third, add up all these counts and you get 4 times 98, which equals 392.
• 4 * 51 +
• 4 * 44 +
• 4 * 3 =
• 4 * 98 = 392
Or, for simplicity, you could just add the counts for an ace on the first card (4 times 51) and a deuce on the second card (51 times 4), then subtract the doubly counted wins (4 times 4), and you get 4 times (51 + 51 - 4) = 4 times 98 = 392.
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1991652/odds-of-drawing-an-ace-on-the-first-draw-or-a-two-on-the-second",
"openwebmath_score": 0.5406288504600525,
"openwebmath_perplexity": 221.9916547173893,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.984810952529396,
"lm_q2_score": 0.8740772318846386,
"lm_q1q2_score": 0.8608008313165687
} |
http://mathhelpforum.com/calculus/41982-find-indicated-integrals.html | # Thread: Find the Indicated Integrals
1. ## Find the Indicated Integrals
I'm supposed to find several indefinite integrals, but I am not sure exactly how to do each, especially as they involve natural logs and trigonmetric functions.
The three are:
$\int ln(x^4)/x$ dx
$\int$((e^t^x) cos (e^t))/(3+5 sin (e^t))
and lastly
$\int_0^{4/5} \frac{\sin^{-1}\left(\tfrac{5}{4}x\right)}{\sqrt{16-25x^2}}\,dx$
2. Hi
Originally Posted by Pikeman85
$\int ln(x^4)/x$ dx
$\frac{\ln(x^4)}{x}=\frac{4\ln x}{x}=4\times \frac{1}{x}\times \ln x$ and as the derivative of $x\mapsto \ln x$ is $x\mapsto\frac{1}{x}$...
$\int$((e^t^x) cos (e^t))/(3+5 sin (e^t))
Is it $\int \frac{\mathrm{e}^{tx} \cos (\mathrm{e}^t)}{3+5 \sin (\mathrm{e}^t)}\,\mathrm{d}{\color{red}x}$ or $\int \frac{\mathrm{e}^{tx} \cos (\mathrm{e}^t)}{3+5 \sin (\mathrm{e}^t)}\,\mathrm{d}{\color{red}t}$ ?
$\int (superscript 4/5, subscript 0) (sin^-1 ((5/4)x))/(sqrt(16-25x^2))$
$\int_0^{\frac{4}{5}} \frac{\arcsin \left(\frac{5}{4}x\right)}{\sqrt{16-25x^2}}\,\mathrm{d}x$
Remember that $\arcsin'x=\frac{1}{\sqrt{1-x^2}}$
3. Originally Posted by Pikeman85
I'm supposed to find several indefinite integrals, but I am not sure exactly how to do each, especially as they involve natural logs and trigonmetric functions.
The three are:
$\int ln(x^4)/x$ dx
$\int$((e^t^x) cos (e^t))/(3+5 sin (e^t))
and lastly
$\int_0^{4/5} \frac{\sin^{-1}\left(\tfrac{5}{4}x\right)}{\sqrt{16-25x^2}}\,dx$
for the frist one
$\int \frac{\ln(x^4)}{x}dx=4\int\frac{\ln(x)}{x}dx$
Let $u=\ln(x) \implies du=\frac{1}{x}dx$
$4\int udu=2u^2+C=2\left( \ln(x)\right)^2+C$
For number 2 you have both x's and t's but no dx or dt. What variable are you integrating with respect to?
For the last one
$\int_{0}^{4/5}\frac{\sin^{-1}\left( \frac{5}{4}x\right)}{\sqrt{16-25x^2}}dx$
let $x=\frac{4}{5}\sin(t) \implies dx=\frac{4}{5}\cos(t)dt$
$\int_{0}^{4/5}\frac{\sin^{-1}\left( \frac{5}{4}x\right)}{\sqrt{16-25x^2}}dx=\int_{0}^{\frac{\pi}{2}}\frac{t}{\sqrt{16-16\sin^2(t)}}\left( \frac{4}{5}\cos(t)dt \right)=$
$\frac{1}{5}\int_{0}^{\pi/2}tdt=\frac{1}{5} \left[ \frac{1}{2} \left( \frac{\pi}{2}\right)^2-0\right]=\frac{\pi^2}{40}$
Here is a web page with some La tex code for you
Helpisplaying a formula - Wikipedia, the free encyclopedia
Good luck.
4. Hello, Pikeman85!
These all require "simple" substitutions.
. . The trick is to recognize them.
$\int \frac{\ln(x^4)}{x}\,dx$
We have: . $\int\frac{4\ln(x)}{x}\,dx \;=\;4\int \ln x\,\frac{dx}{x}$
Let $u \:=\:\ln(x) \quad\Rightarrow\quad du \:=\:\frac{dx}{x}$
Substitute: . $4\int u\,du$ . . . etc.
$\int \frac{e^{t}\cos(e^t)\,dt}{3+5\sin(e^t)}$
Let $u \:=\:3+5\sin(e^t)\quad\Rightarrow\quad du \:=\:5e^t\cos(e^t)\,dt \quad\Rightarrow\quad e^t\cos(e^t)\,dt \:=\:\frac{1}{5}\,du$
Substitute: . $\int\frac{\frac{1}{5}\,du}{u} \;=\;\frac{1}{5}\int \frac{du}{u}$ . . . etc.
$\int^{\frac{4}{5}}_0 \frac{\sin^{-1}\!\left(\frac{5}{4}x\right)}{\sqrt{16-25x^2}}\,dx$
The denominator is: . $\sqrt{16\left(1 - \frac{25}{16}x^2\right)} \;=\;4\sqrt{1 - \left(\frac{5}{4}x\right)^2}$
The integral becomes: . $\int^{\frac{4}{5}}_0 \frac{\sin^{-1}\!\left(\frac{5}{4}x\right)\,dx} {4\sqrt{1 - \left(\frac{5}{4}x\right)^2}}$ . $= \;\;\frac{1}{4}\int^{\frac{4}{5}}_0\sin^{-1}\!\left(\frac{5}{4}x\right)\cdot\frac{dx}{\sqrt{ 1 - \left(\frac{5}{4}x\right)^2}}$
$\text{Let }u \:=\:\sin^{-1}\!\left(\frac{5}{4}x\right) \quad\Rightarrow\quad du \:=\:\frac{\frac{5}{4}\,dx}{\sqrt{1-\left(\frac{5}{4}x\right)^2}}$
Substitute: . $\frac{1}{4}\int^{\frac{4}{5}}_0 u\cdot\frac{4}{5}\,du \;=\;\frac{1}{5}\int^{\frac{4}{5}}_0 u\,du$ . . . etc.
Edit: I'm way too slow this time . . . *sigh*
.
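Two rough numeric sanity checks of the results above, as a Python sketch (the evaluation point, step count and tolerance are arbitrary choices):

```python
import math

# (a) d/dx [ 2 (ln x)^2 ] should equal ln(x^4)/x; central-difference check at x = 3
def F(x): return 2 * math.log(x) ** 2
h = 1e-6
print(abs((F(3 + h) - F(3 - h)) / (2 * h) - math.log(3 ** 4) / 3) < 1e-6)   # True

# (b) midpoint rule for the integral from 0 to 4/5 of arcsin(5x/4)/sqrt(16 - 25x^2),
#     expected value pi^2/40
def g(x): return math.asin(1.25 * x) / math.sqrt(16 - 25 * x * x)
n, a, b = 200_000, 0.0, 0.8
dx = (b - a) / n
approx = sum(g(a + (k + 0.5) * dx) for k in range(n)) * dx
print(approx, math.pi ** 2 / 40)    # agree to roughly three decimal places
```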
$\int x^2 \sqrt{(7 + x^3)}$ dx
The answer I got for it is $(x^3/3) \sqrt{(7x+(x^4/4))}$
Which is not correct. How am I doing this incorrectly? I imagine my substitution is wrong.
6. Originally Posted by Pikeman85
$\int x^2 sqrt(7 + x^3)$ dx
The answer I got for it is (x^3/3) sqrt(7x+(x^4/4))
Which is not correct. How am I doing this incorrectly? I imagine my substitution is wrong.
One can notice that the derivative of $7+x^3$ is $3x^2$ so $x^2 \sqrt{7 + x^3}$ looks like $u'(x)\sqrt{u(x)}$ which can easily be integrated : you may try to substitute $u(x)=x^3+7$.
7. Well, a faster substitution is also $z^2=x^3+7.$
8. $x^3 \sqrt{7+x^3}$
I got this, but it did not work. I'm missing a step here I think
I don't get substitution very well
9. Originally Posted by Pikeman85
$\int x^2 \sqrt{(7 + x^3)}$ dx
The answer I got for it is $(x^3/3) \sqrt{(7x+(x^4/4))}$
Let $u=7+x^3 \implies du=3x^2dx \iff \frac{du}{3}=x^2dx$
$\int x^2\sqrt{7+x^3}dx=\int \sqrt{u}\left( \frac{du}{3}\right)=\frac{1}{3}\int u^\frac{1}{2}du=\frac{2}{9}u^\frac{3}{2}+C=\frac{2 }{9}(7+x^3)^\frac{3}{2}+C$ | 2013-12-12T23:31:41 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/calculus/41982-find-indicated-integrals.html",
"openwebmath_score": 0.9726709127426147,
"openwebmath_perplexity": 10399.127820408063,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109503004294,
"lm_q2_score": 0.874077230244524,
"lm_q1q2_score": 0.8608008277530769
} |
https://math.stackexchange.com/questions/970992/if-s-and-t-are-transformattion-mappings-what-is-st | # If S and T are transformattion mappings, what is [ST]?
S and T are transformation mappings; what do [ST] and [TS] mean?
Does it mean transform via S and then apply T to the result and vice versa?
Juxtaposition of linear transformations means composition. In detail, $ST$ means $S \circ T$ and $TS$ means $T \circ S$ (whenever they are defined, of course). This notation is used thinking of the following: $$T: V_1 \to V_2 \qquad S:V_2 \to V_3 \qquad S\circ T: V_1 \to V_3$$ Let $B_1, B_2, B_3$ be bases for the respective spaces (in finite dimension). Then: $$[S \circ T]_{B_1, B_3} = [S]_{B_2, B_3}[T]_{B_1,B_2}$$ At the bottom of our hearts we know the distinction, so, when writing, we often "confuse" the matrix with the transformation, so $S \circ T$ becomes $ST$. Actually, the identity above is the motivation for the definition of matrix multiplication.
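A tiny numpy sketch illustrates the identity with concrete, arbitrarily chosen matrices: applying $T$ and then $S$ to a vector agrees with multiplying by the single matrix $[S][T]$.

```python
import numpy as np

T = np.array([[1, 2],
              [0, 1],
              [3, 0]])            # T : R^2 -> R^3
S = np.array([[1, 0, 2],
              [0, 1, 1]])         # S : R^3 -> R^2

v = np.array([1, 4])
assert np.array_equal(S @ (T @ v), (S @ T) @ v)   # apply T then S == multiply by S T
print(S @ T)                                      # matrix of the composition S o T : R^2 -> R^2
```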
• Yes, the composition goes from $V_1$ to $V_3$. To understand that notation better, I think it is important to take a look at matrices of linear transformations. – Ivo Terek Oct 13 '14 at 2:09
• Great, and for the format of B1,B3, can the B3 also be written above B1 or is it B3 above B1? – user83039 Oct 13 '14 at 2:23 | 2019-12-12T11:23:33 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/970992/if-s-and-t-are-transformattion-mappings-what-is-st",
"openwebmath_score": 0.8997182846069336,
"openwebmath_perplexity": 672.6112142775514,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.9907319866190857,
"lm_q2_score": 0.8688267762381843,
"lm_q1q2_score": 0.8607744780503122
} |
http://math.stackexchange.com/questions/69178/what-are-the-odds-of-rolling-a-3-number-straight-throwing-6d6 | # What are the odds of rolling a 3 number straight throwing 6d6
If you throw six fair dice, what are the odds that at least three dice make a straight (i.e. 123, 234, 345, or 456)? I am certain that I am making a mistake in calculating it.
-
## migrated from boardgames.stackexchange.comOct 2 '11 at 2:38
This question came from our site for people who like playing board games, designing board games or modifying the rules of existing board games.
what's your current calculation? – DForck42 Oct 1 '11 at 4:27
should be moved to SE mathematics – Hackworth Oct 1 '11 at 18:35
It's pretty high, pretty near one. But I'd have to think of how to get the exact probability... – PearsonArtPhoto Oct 1 '11 at 22:02
I went through a long and nasty inclusion-exclusion argument, watched lots of stuff cancel, and in the end got $6^6 - 6 \cdot 4^6 + 8 \cdot 3^6 - 3 \cdot 2^6$, which agrees with Ed Pegg's calculations. I imagine there's a simple way to explain this formula, but I don't see it. I'm offering my results here in the hope that someone else can. – Mike Spivey Oct 2 '11 at 7:02
@Mike Sounds like I did the same calculation as you. If you have time, why don't you post your solution? We can also try to think of a shortcut. – Byron Schmuland Oct 2 '11 at 13:01
## 3 Answers
This problem is harder than I thought such a simple-sounding dice problem would be. :) However, the analysis below works for any number of six-sided dice! If $n$ is the number of dice, then the probability that we obtain at least one of 123, 234, 345, 456 is $$\frac{6^n - 6 \cdot 4^n + 8 \cdot 3^n - 3 \cdot 2^n}{6^n}.$$
For example, if $n=0$ or $n = 1$ or $n = 2$, we get $0$, as we should. If $n = 3$, we get $24$ in the numerator, which agrees with the $4 \cdot 3!$ ways we could obtain at least one of the required straights with three dice. And if $n = 6$, we get $27720$ in the numerator, which agrees with Ed Pegg's calculations.
We'll obtain the numerator in the fraction above by counting the number of ways not to get at least one of the four required straights and then subtract that from $6^n$. Let's call this event $\bar{S}$.
Start by considering subsets of {1, 2, 3, 4, 5, 6} such that throwing only numbers from that subset would give us an outcome in $\bar{S}$. There aren't any subsets of size five that do it, but there are six of size four: 1245, 1246, 1256, 1346, 1356, 2356. (I'm using a compact notation here for simplicity's sake.) Since $|\bar{S}| = |1245 \cup 1246 \cup 1256 \cup 1346 \cup 1356 \cup 2356|$, we now have a way to calculate $|\bar{S}|$ using the principle of inclusion-exclusion (PIE). We just have to consider all possible ways of intersecting these six sets and then add up the cardinalities of the resulting intersections according to parity given by $(-1)^{k(A)}$, where $k(A)$ is the number of sets intersected to obtain the subset $A$.
What makes applying PIE more difficult than usual here is that intersecting two sets sometimes gives a subset of size 3 and sometimes one of size 2, and this problem only gets worse as you intersect more sets. However, if you work through all the possibilities, most of the resulting intersections show up more than once with different parities, and most of these surprisingly cancel. What's left is 124, 125, 126, 136, 146, 156, 256, 356 (from intersecting 2 sets) and 12, 16, 26 (from intersecting 3 sets).
Finally, since there are $j^n$ ways to throw the $n$ dice so that they only take on values from a subset of size $j$, we obtain $|\bar{S}| = 6 \cdot 4^n - 8 \cdot 3^n + 3 \cdot 2^n$.
Unfortunately, I don't have a good explanation for why so many terms cancel, and I think there should be one. If someone can find such an explanation, I would love to see it. I've found an explanation for the massive term cancellation. (I suspect there are others.) See comments below.
Finally, thanks to Ed Pegg for the brute force approach; that was very helpful in checking my work.
Added: Here is one explanation for the massive term cancellation when using PIE.
The six sets of size four that we're working with can be expressed as a graph. Each set is a vertex, and two vertices are connected by an edge if their corresponding sets differ by only one element. The graph looks like this.
1245 — 1256 — 2356
\ / \ /
1246 1356
\ /
1346
It turns out that, of the intersections that remain when applying PIE, the vertices correspond to the 6 sets of size 4, the edges correspond to the 8 sets of size 3, and the minimal cycles correspond to the 3 sets of size 2. Every other intersection is either empty or can be paired with another intersection such that they cancel each other in the PIE formula. To obtain the mapping between pairs of intersections, take an intersection that produces a connected subgraph. Pair it with the intersection obtained by removing the vertex with the highest connectivity in the subgraph. This produces a disconnected subgraph. For example, the intersection of 1245, 1256, 1356, and 2356 is paired with the intersection of 1245, 1356, and 2356. Both intersections produce 5, yet have different parities, so they cancel when using PIE. (Well, the 4-cycle is a special case, but everything there cancels nicely, too, except for the full cycle itself.)
As far as I know, inclusion-exclusion cannot, in general, be represented in a graph like this. Maybe there are some special cases, though, and this graph happens to fall into one of those. Is anyone familiar with this?
Inclusion-exclusion can, in some cases, be represented and simplified by a graph like this. My observations here are an instance of the following theorem:
Suppose sets $A_1, A_2, \ldots, A_n$ can be represented as the vertices of a planar graph $(V,E)$ such that, for each $x \in \cup_i A_i$, the subgraph consisting of the $A_i$'s containing $x$ is connected. For each edge $e = \{A_i, A_j\}$, let $|e| = |A_i \cap A_j|$. For each minimal cycle $c = \{A_{c_1}, A_{c_2}, \ldots, A_{c_m}\}$ in $(V,E)$, let $|c| = |\bigcap_{j=1}^m A_{c_j}|$. Then $$\left|\bigcup_{i=1}^n A_i \right| = \sum_{v \in V} |v| - \sum_{e \in E} |e| + \sum_{c \in C} |c|.$$
Since the graph above satisfies the hypotheses of this theorem, we immediately get $|\bar{S}| = 6 \cdot 4^n - 8 \cdot 3^n + 3 \cdot 2^n$.
The proof of the theorem follows directly from Euler's famous formula $V - E + F = 2$ for connected planar graphs. For any element $x \in \cup_i A_i$, the subgraph consisting of the $A_i$'s containing $x$ is connected (by hypothesis) and planar. The element $x$ is counted once on the left side of the formula in the theorem. Since minimal cycles correspond to faces except for the one outer face, Euler's formula tells us that $x$ is counted a net total of once for the right-hand side as well.
More general versions of this theorem that use the Euler characteristic can be found in
-
This is very nice!. I wish I could upvote again. – Byron Schmuland Oct 4 '11 at 17:19
@Byron: Thanks! The compliment is worth more than the extra upvote would be, though. :) – Mike Spivey Oct 4 '11 at 17:28
@Byron: I found an explanation for the term cancellation in terms of the graph representation. Fascinating, really! And I think now that I have officially beaten this question to death. :) – Mike Spivey Oct 5 '11 at 4:51
In Mathematica, here's a brute force solution. There are 27720 rolls of the 6 dice that will give that straight.
Length[Select[Tuples[Range[6], {6}], Max[Table[Length[Intersection[#, {{1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {4, 5, 6}}[[n]]]], {n, 1, 4}]] == 3 &]]/6^6
$385/648 = 0.5941358024691358024691\dots$
-
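The same brute force is easy to reproduce in Python, together with the closed form derived in the accepted answer (a cross-check sketch only):

```python
from itertools import product

straights = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {4, 5, 6}]
wins = sum(1 for roll in product(range(1, 7), repeat=6)
           if any(s <= set(roll) for s in straights))
print(wins, 6 ** 6, wins / 6 ** 6)                      # 27720 46656 0.5941...

n = 6
print(6 ** n - 6 * 4 ** n + 8 * 3 ** n - 3 * 2 ** n)    # 27720, the closed form
```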
Here's another way. Let ${\bf k}$ be the number of distinct die numbers, let $S$ ("success") be the event that at least one of the four required straights occurred.
Then $$P(S) = \sum_{k=1}^6 P(S | {\bf k}=k) \; P({\bf k}=k)$$
It's clear that, for a given fixed $k$, all configurations are equiprobable.
It's trivial that $P(S | {\bf k}=0)=P(S | {\bf k}=1)=P(S | {\bf k}=2)=0$. The other are not difficult:
$P(S | {\bf k}=3)$ : There are 4 possible success configurations, out of ${6 \choose 3}$, hence $P(S | {\bf k}=3) = 1/5$
$P(S | {\bf k}=4)$ : This is the most difficult one, let's count the unsuccessful configurations: These are : [o x x o x x] [x o x o x x] [x o x x o x] [x x o o x x] plus the mirroring of the first two: 6 unsuccessful configurations out of ${6 \choose 4}$ hence $P(S | {\bf k}=4) = 1- 2/5 = 3/5$ [*]
Further, $P(S | {\bf k}=5)= P(S | {\bf k}=6) = 1$
Now, to compute $P({\bf k}=k)$, we must count the number of ways of filling $k$ positions with 6 throws: for example, $$P({\bf k}=4)=\frac{{6 \choose 4} \; 4! \; S_2(6,4)}{6^6}$$
where $S_2(n,m)$ are the Stirling numbers of the second kind. So finally
$$P(S) = \sum_{k=3}^6 a_k \; \frac{{6 \choose k} \; k! \; S_2(6,k)}{6^6} \approx 0.59413$$
with $a_3=1/5$, $a_4=3/5$, $a_5=1$, $a_6=1$.
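As a sketch, this formula can be evaluated directly and compared with the brute-force count of 27720; the explicit sum for $S_2$ is used below just to avoid any library dependence:

```python
from math import comb, factorial

def S2(n, k):      # Stirling numbers of the second kind, explicit sum
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1)) // factorial(k)

a = {3: 1 / 5, 4: 3 / 5, 5: 1, 6: 1}
wins = sum(a[k] * comb(6, k) * factorial(k) * S2(6, k) for k in range(3, 7))
print(wins, wins / 6 ** 6)          # 27720.0 and 0.5941..., matching the brute force
```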
[*] Added: to generalize this for arbitrary number of dice or runlengths, notice that this counting of unsuccessful configurations is equivalent to count the ways of expressing the number $k=4$ as a sum of $n-k+1=3$ non-negative terms less that $m=3$ (runlength), with order (in the example, 0+2+2 , 1+1+2, 1+2+1, 2+0+2, 2+2+0,2+1+1). Hence $P(S | {\bf k}) = 1 - b_{k,n-k+1,m}/{n \choose k}$ where $b_{r,s,t}$ counts the $s$-terms weak-compositions of the integer $r$, with the restriction that each term is less than $t$. An expression is given in Enumerative combinatorics (Stanley) page 120, problem 28.
Added 2: Because the exact formula for arbitrary dice is quite formidable, here's some quick asymptotics. Let $n$ be the number of throws, $c$ the number of dice faces, $m$ the straight length. For large $c,n$, the probability that a particular face number does not appear is $q=(1-1/c)^n \approx \exp(- n/c)$. Asymptotically, we can assume that these events are independent, and disregard border effects, and regard each outcome as a sequence of runlengths $(a_1 b_1 a_2 b_2 ... a_r b_r)$ where $a_i$ is the length of consecutive (run) die faces that appeared, $b_i$ the faces that didn't (and $a_1 + b_1 + ... = c$). Each run can then be approximated by independent geometric variables (starting at 1) with stopping probabilities $q$ and $1-q$. Their expectations are respectively $1/q$ and $1/(1-q)$, so that equating the expected sum we get $r=c \; q \;(1-q)$. The event of no-success, correspond to $a_i<m$, and its probability is given, in this approximation, by $(1-(1-q)^{m-1})^r$. Finally:
$$P(S) \approx 1- \left[ 1- (1-q)^{m-1} \right]^r$$
with
$$r = c \; q \; (1-q) \hspace{20px} q = (1-1/c)^n \approx e^{-n/c}$$
In our original question... we have $c=n=6$,$m=3$, the numbers are too small to apply this asymptotics, but anyway, I get: $P(S) \approx 0.50922$ ($0.54184$ if using the "exact" $q$)
-
+1: I like this better than my answer for the 6-dice problem. Can it be generalized to $n$ dice? – Mike Spivey Oct 4 '11 at 18:47
@MikeSpivey: The counting of $P(S|k=k)$ is not trivially generalizable (for $n$ dice and $m$ sequence lengths), but I think it can be done, I'll give it a look tonight. – leonbloy Oct 4 '11 at 19:00
@MikeSpivey: See my addition. Now it remains to see if those bounded weak-compositions have some closed formula. – leonbloy Oct 4 '11 at 19:20 | 2015-07-29T20:55:46 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/69178/what-are-the-odds-of-rolling-a-3-number-straight-throwing-6d6",
"openwebmath_score": 0.8829420804977417,
"openwebmath_perplexity": 226.40024347342285,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9828232909876815,
"lm_q2_score": 0.8757869965109764,
"lm_q1q2_score": 0.8607438581151349
} |
http://mathhelpforum.com/geometry/102193-geometry-regarding-intersection-wires-criss-crossing-vertical-poles.html | # Math Help - Geometry regarding the intersection of wires criss crossing vertical poles.
1. ## Geometry regarding the intersection of wires criss crossing vertical poles.
Two vertical poles are of lengths 10 m and 6 m. They are connected by wires going from the top of each pole to the base of the other. At what height do the two connecting wires intersect?
help step by step would be appreciated. thanks. i have tried but failed maybe something very simple i am missing. btw i am new, so if i was to post my method (which was utterly useless), i apologize in advance for not doing so.
2. Do you find it odd that the distance between the poles is not given?
In any case, you're going to need some similar triangles. If you have a method that has similar triangles, I suspect it is NOT totally worthless. Post it.
3. Originally Posted by swanz
Two vertical poles are of lengths 10 m and 6 m. They are connected by wires going from the top of each pole to the base of the other. At what height do the two connecting wires intersect?
help step by step would be appreciated. thanks. i have tried but failed maybe something very simple i am missing. btw i am new, so if i was to post my method (which was utterly useless), i apologize in advance for not doing so.
If you know the poles are vertical, you know they are parallel. You can treat each wire as a transversal of these - check out which angles are alternate interior angles of these two transversals to see some similar triangles. Draw yourself a decent picture to see them! Then use these similar triangles to answer the question.
4. Originally Posted by swanz
Two vertical poles are of lengths 10 m and 6 m. They are connected by wires going from the top of each pole to the base of the other. At what height do the two connecting wires intersect?
help step by step would be appreciated. thanks. i have tried but failed maybe something very simple i am missing. btw i am new, so if i was to post my method (which was utterly useless), i apologize in advance for not doing so.
Assume the wires are straight (with no sag)
Since the distance between the two is irrelevant, we can assign it what ever needed, say 1 m.
Assume the shorter pole is coincident with the y axis and the taller pole is parallel but at x = 1
An equation for each wire:
$y_1 = -6x_1 + 6$
$y_2 = 10x_2 + 0$
Isolate x:
$x_1 = \dfrac{ y_1 - 6 }{-6}$
$x_2 = \dfrac{y_2}{10}$
equate:
$\dfrac{ y - 6}{-6} = \dfrac{y}{10}$
solve for y
$y = \dfrac{ 60 }{16}$
.
5. ## Resolved
thank you aidan....never even considered plotting it on the graph .....all the time i was playing around with similar triangles, transversals and loads of variables.
nice method, can come in handy in various situations, thnx a bunch.
6. Originally Posted by aidan
Since the distance between the two is irrelevant
Well, one probably should prove that, rather than just state it.
7. Originally Posted by TKHunny
Well, one probably should prove that, rather than just state it.
My Bad. I just made an assumption that it was common knowledge.
One proof (if needed) is here:
PlanetMath: harmonic mean in trapezoid
It indicates that the diagonals of a trapezoid will intersect at some point. A line through that point parallel to the parallel sides will intersect the non parallel sides. The length of this segment is a ratio of the parallel sides only:
If the lengths of the parallel sides are "a" and "b" then the length of the interior line that is parallel to the parallel lines and passes through the intersection point is
$\dfrac{2 a b }{(a+b)}$
The intersection point is at the midpoint of the interior line.
In our case here:
HALF the length of the interior line is
$\dfrac{( 6 \cdot 10) }{( 6 + 10 )} = 3.75$
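To address the distance question head on, a sympy sketch with an arbitrary pole separation $d$ (the symbol names are mine) shows that $d$ drops out and the height is $ab/(a+b) = 3.75$ m:

```python
import sympy as sp

x, d = sp.symbols('x d', positive=True)
a, b = 6, 10                                  # pole heights in metres

wire1 = a - a * x / d                         # top of the 6 m pole (x = 0) to base of the 10 m pole (x = d)
wire2 = b * x / d                             # base of the 6 m pole to top of the 10 m pole
x_meet = sp.solve(sp.Eq(wire1, wire2), x)[0]
print(sp.simplify(wire2.subs(x, x_meet)))     # 15/4 = 3.75, with no d remaining
print(sp.Rational(a * b, a + b))              # the same value, a*b/(a+b)
```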
8. I think it is second tier common knowledge. It can be assumed if you work with that sort of thing, but not generally.
My views. I welcome others.
9. That problem is "ye olde crossing ladders" problem in disguise | 2016-07-28T19:03:42 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/geometry/102193-geometry-regarding-intersection-wires-criss-crossing-vertical-poles.html",
"openwebmath_score": 0.797980010509491,
"openwebmath_perplexity": 480.6962069266633,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.97364464791863,
"lm_q2_score": 0.8840392909114836,
"lm_q1q2_score": 0.8607401241457467
} |
https://math.stackexchange.com/questions/1273139/dominated-convergence-theorem | # Dominated Convergence Theorem
Give an example of a sequence $\{f_n\}_{n=1}^\infty$ of integrable functions on $\mathbb{R}$ such that $f_n \to f$ but $\int f_n \not\to \int f$. Explain why your example does not conflict with the Dominated Convergence Theorem.
I do notice that the inequality $|f_n(x)| \le g(x)$, where $g$ is an integrable function over $\mathbb{R}$, is not listed here in this problem. So the function need not be dominated by an integrable function here. But this is required as one hypothesis of the Dominated Convergence Theorem; hence the example will not conflict.
If this is sound reasoning, how may I come up with functions that are not dominated by another function? Initially I was thinking $f_n(x)=x \sin (nx)$ because its lim sup is $\infty$, but even then, we still have $|f_n(x)| \le |x| =: g(x)$.
• $f_n = n 1_{(0,{1 \over n}]}$. – copper.hat May 8 '15 at 16:35
• It is not only a matter of being dominated by a function, but by an integrable function. $|x|$ is not integrable on $\mathbb{R}$, as $\int_{-\infty}^\infty |x| dx = \infty$. – Pedro M. May 8 '15 at 16:39
• @PedroM. I forgot about the "integrable" function part. :O But thanks for clearing that up. – Cookie May 8 '15 at 16:42
• It may be dominated by another function, however the detail is that the if dominating function does not belong to $L_1$ then DCT may not hold. – Alonso Delfín May 8 '15 at 16:42
• @AaronMaroja I do not understand why you centerized "$\int f \not\to \int f$" and removed the period at the end of that sentence. – Cookie May 9 '15 at 7:28
Consider the sequence of functions on $(0,1)$ $$f_n(x) = \begin{cases} n & \text{ if } x \in (0,1/n)\\ 0 & \text{ otherwise} \end{cases}$$ We have $\lim_{n \to \infty} f_n(x) = 0 = f(x)$. However, $$\lim_{n \to \infty} \int_0^1f_n(x)dx = 1 \neq 0 = \int_0^1 f(x) dx$$ The key in the dominated convergence theorem is that sequence of functions $f_n(x)$ must be dominated by a function $g(x)$, which is also integrable.
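A minimal numerical illustration of this counterexample, as a Python sketch (the quadrature only confirms what the formula already says):
from scipy.integrate import quad

def f(n, x):
    # f_n(x) = n on (0, 1/n) and 0 elsewhere
    return n if 0 < x < 1.0 / n else 0

for n in (1, 10, 100, 1000):
    integral, _ = quad(lambda x: f(n, x), 0, 1, points=[1.0 / n])
    print(n, round(integral, 6))   # stays at 1 for every n

# pointwise limit: for any fixed x in (0, 1), f_n(x) = 0 once n > 1/x
x = 0.01
print([f(n, x) for n in (10, 50, 101, 1000)])   # [10, 50, 0, 0]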
I would like to provide a slightly more abstract framework to illustrate this "loss of compactness"$^{[2]}$ phenomenon. The functional setting is the $L^1(\mathbb{R})$ space: $$L^1(\mathbb{R})=\left\{f\colon\mathbb{R}\to\mathbb{R}\ :\ \int_{-\infty}^\infty \lvert f(x)\rvert\, dx<\infty \right\},\qquad \lVert f\rVert=\lVert f\rVert_{L^1}=\int_{-\infty}^\infty\lvert f(x)\rvert\, dx.$$ Here we take a bounded sequence $f_n$ that converges pointwise a.e. : $$\begin{array}{cc} \|f_n\|\le C, & f_n\to f\, \text{a.e.} \end{array}$$ The question is: does this sequence converge, that is, is it true that $$\lVert f_n-f\rVert\to 0?^{[1]\ [2]}$$ This would make for a very desirable property of our functional space. However, as other answers very clearly show, the answer is negative in general. The problem is that our space is subject to the action of noncompact groups of isometries. Namely, one has the action of the translation group $$\begin{array}{cc} \left(T_\lambda f\right)(x)=f(x-\lambda), &\lambda \in (\mathbb{R}, +) \end{array}$$ and of the dilation group $$\begin{array}{cc} \left(D_\lambda f\right)(x)=\frac{1}{\lambda}f\left(\frac{x}{\lambda}\right), &\lambda \in (\mathbb{R_{>0}}, \cdot) \end{array}$$ The change of variable formula for integrals immediately shows that those group actions are isometric, that is, they preserve the norm.
So, fixing a non-vanishing function $f\in L^1(\mathbb{R})$, its orbits $T_\lambda f$ and $D_\lambda f$ form bounded and non-compact subsets of $L^1(\mathbb{R})$. In particular, letting $\lambda \to +\infty$ (or $\lambda \to -\infty$ for translations, or $\lambda\to 0$ for dilations), one finds counterexamples to the question above. (Note that, more or less, all the examples constructed in the other, excellent, answers are constructed this way).
In technical jargon one says that the translation and dilation groups introduce a defect of compactness in $L^1(\mathbb{R})$ space. This is the terminology of the Concentration-Compactness theory (the linked page is a blog entry of T. Tao, but the theory has been founded by P.L. Lions). The dominated convergence theorem can be therefore seen as a device that impedes the defect of compactness to take place.
Footnotes
$^{[1]}$ The OP only asks about the convergence of the integrals: $\int f_n\to \int f$. Now a standard theorem (cf. Lieb & Loss, Analysis, 2nd ed., Theorem 1.9 (Missing term in Fatou's lemma), see the remarks there) gives us, for $f_n\to f$ a.e., the equivalence $$\lVert f_n\rVert_{L^1} \to \lVert f\rVert_{L^1} \iff \lVert f_n-f\rVert_{L^1}\to 0.$$ Therefore, at least for sequences of positive functions, for which $\int f_n=\lVert f_n\rVert_{L^1}$, the failure of convergence for sequences of integrals is exactly the same thing as the failure of convergence in $L^1$ space. That's why one can see the phenomenon in this functional analytic setting.
$^{[2]}$ Compactness usually means that bounded sequences have convergent subsequences. In $L^1(\mathbb{R})$ space, pointwise convergent sequences are compact if and only if they are norm convergent.
• wow! Very interesting point of view, never heard of that before! :-) – Ant May 8 '15 at 18:47
It is not only a matter of being dominated by just any function $g$, but by an integrable function. $|x|$ is not integrable on $\mathbb{R}$, as $\int_{-\infty}^\infty |x| dx = \infty$.
copper.hat provided an example that is not dominated by any function, but you can also pick bounded examples, such as $f_n = 1_{[n,n+1]}$.
• Does $f_n=\chi_{[-n,n]}$ also work? (Sorry, I'm used to the characteristic function that uses the Greek letter $\chi$ instead of $1$.) – Cookie May 8 '15 at 16:46
• @dragon: Then $f_n$ converges to the constant function $1$, which is not integrable on $\mathbb{R}$. This is another illustration of how the DCT may fail if the dominating function $g$ is not integrable: $\int f = \infty$ (in my example, $\int f$ is finite but is different from $\lim \int f_n$). Notice, however, that in your case, $\int f_n \to \infty = \int f$. – Pedro M. May 8 '15 at 16:49
Let $$f_n(x) = \frac{n^3x^2}{1+n^4x^4}.$$ Then $f_n(x) \to 0$ pointwise everywhere, and $\int_{-\infty}^\infty f_n(x)\,dx = \int_{-\infty}^\infty \frac{x^2}{1+x^4}\,dx$ for every $n.$
Let $f_n = \chi[n,2n]$ then for any $x \in \mathbb R$
$$\lim_{n \to \infty} f_n (x) = 0 \,\,\, \text{and} \,\,\, \int_{x\in \mathbb R} f_n(x) dx = n \to \infty \,\,\, \text{as}\,\, n \to \infty$$
On the other hand taking $f(x) = 0 , \forall x \in \mathbb R$ we have $$\int_{x\in \mathbb R} f(x) dx = \int _{x\in \mathbb R} 0\,\, dx = 0$$ then
$$\int_{x\in \mathbb R} f_n(x) dx \not \to \int_{x\in \mathbb R} f(x) dx$$ | 2019-10-22T03:13:49 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1273139/dominated-convergence-theorem",
"openwebmath_score": 0.9119226932525635,
"openwebmath_perplexity": 205.5637991302053,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9736446418006305,
"lm_q2_score": 0.8840392909114836,
"lm_q1q2_score": 0.8607401187371948
} |
http://mathhelpforum.com/calculus/12005-integration-unsure-method.html | # Math Help - Integration;unsure method
1. ## Integration;unsure method
Hi, I have another integration question (because they're just so fun ).
Int ( dx/(x^2-3x+2)^(1/2) )
The item giving me the biggest problem is the square root. From what I know, I don't think u-sub will work (it would just complicate things more), integration by parts seems like it would be very messy, partial fractions is out of the mix since it's raised to a power, and I don't think you can complete the square for the same reason. I'm assuming I'm making this much more complicated than it really is. Can anybody help me start it out? That's all I need help with.
Thanks a ton.
2. Originally Posted by ChaosBlue
Hi, I have another integration question (because they're just so fun ).
Int ( dx/(x^2-3x+2)^(1/2) )
The item giving me the biggest problem is the square root. From what I know, I don't think u-sub will work (it would just complicate things more), integration by parts seems like it would be very messy, partial fractions is out of the mix since it's raised to a power, and I don't think you can complete the square for the same reason. I'm assuming I'm making this much more complicated than it really is. Can anybody help me start it out? That's all I need help with.
Thanks a ton.
Start this by looking under the square root sign. You want to do a "complete the square."
x^2 - 3x + 2 = (x^2 - 3x) + 2 = (x^2 - 3x + 9/4 - 9/4) + 2 = (x^2 - 3x + 9/4) - 9/4 + 2
= (x - 3/2)^2 - 1/4
Now let y = x - 3/2, dy = dx
Int[1/(x^2-3x+2)^{1/2}dx] = Int[1/(y^2 - 1/4)^{1/2}dy]
One more trick and we're done. Pull a 1/4 out from under the radical:
Int[1/((1/4)(2y)^2 - 1/4)^{1/2}dy] = 2*Int[1/((2y)^2 - 1)^{1/2}dy]
And now let z = 2y, dz = 2dy and your integral becomes:
Int[1/(x^2-3x+2)^{1/2}dx] = Int[1/(y^2 - 1/4)^{1/2}dy] = Int[1/(z^2 - 1)^{1/2}dz]
This is a more standard problem. Can you take it from here?
-Dan
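For anyone who wants to double-check the algebra, here is a minimal SymPy sketch (the logarithmic antiderivative below is one standard form of Int[1/(z^2 - 1)^{1/2} dz]; the trig substitution gives an equivalent expression):
import sympy as sp

x = sp.symbols('x')

# completing the square: x^2 - 3x + 2 = (x - 3/2)^2 - 1/4
assert sp.expand((x - sp.Rational(3, 2))**2 - sp.Rational(1, 4)) == x**2 - 3*x + 2

# antiderivative suggested by the substitution chain: log(z + sqrt(z^2 - 1)) with z = 2x - 3
F = sp.log((2*x - 3) + sp.sqrt((2*x - 3)**2 - 1))
difference = sp.diff(F, x) - 1 / sp.sqrt(x**2 - 3*x + 2)

# spot-check at a point where the integrand is defined (x > 2)
print(sp.simplify(difference.subs(x, 3)))   # 0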
3. Originally Posted by ChaosBlue
Hi, I have another integration question (because they're just so fun ).
Int ( dx/(x^2-3x+2)^(1/2) )
The item giving me the biggest problem is the square root. From what I know, I don't think u-sub will work (it would just complicate things more), integration by parts seems like it would be very messy, partial fractions is out of the mix since it's raised to a power, and I don't think you can complete the square for the same reason. I'm assuming I'm making this much more complicated than it really is. Can anybody help me start it out? That's all I need help with.
Thanks a ton.
The way I see to do it is really long, so if anyone out there has an easier way, you are welcome to post.
We begin by completing the square.
int(1/sqrt(x^2 - 3x + 2))dx
= int(1/sqrt( x^2 - 3x + (-3/2)^2 - 1/4 ))dx   [since (-3/2)^2 - 1/4 = 9/4 - 1/4 = 2]
= int(1/sqrt((x - 3/2)^2 - 1/4))dx
= int(1/sqrt((x - 3/2)^2 - (1/2)^2))dx
= int(1/sqrt[(1/2)^2*([(x - 3/2)/(1/2)]^2 - 1)])dx
= 2*int(1/sqrt[(2x - 3)^2 - 1])dx
let u= 2x - 3
=> du = 2 dx
so our integral becomes
int(1/sqrt(u^2 - 1))du
Now continue using trig substitution
4. Originally Posted by topsquark
Start this by looking under the square root sign. You want to do a "complete the square."
x^2 - 3x + 2 = (x^2 - 3x) + 2 = (x^2 - 3x + 9/4 - 9/4) + 2 = (x^2 - 3x + 9/4) - 9/4 + 2
= (x - 3/2)^2 - 1/4
Now let y = x - 3/2, dy = dx
Int[1/(x^2-3x+2)^{1/2}dx] = Int[1/(y^2 - 1/4)^{1/2}dy]
One more trick and we're done. Pull a 1/4 out from under the radical:
Int[1/((1/4)(2y)^2 - 1/4)^{1/2}dy] = 2*Int[1/((2y)^2 - 1)^{1/2}dy]
And now let z = 2y, dz = 2dy and your integral becomes:
Int[1/(x^2-3x+2)^{1/2}dx] = Int[1/(y^2 - 1/4)^{1/2}dy] = Int[1/(z^2 - 1)^{1/2}dz]
This is a more standard problem. Can you take it from here?
-Dan
Ha ha, i took forever typing this up only to see that u beat me to it once i clicked "submit reply"
Thanks a lot Dan
5. Originally Posted by Jhevon
Ha ha, i took forever typing this up only to see that u beat me to it once i clicked "submit reply"
Thanks a lot Dan
It happens. You're welcome.
And besides, my variables are prettier anyway.
-Dan
6. Originally Posted by topsquark
It happens. You're welcome.
And besides, my variables are prettier anyway.
-Dan
Nu-uh, u is a much more suitable variable for substitution, most text books use u. You're just fighting the system.
Do you know what they do to non-conformists in the math field?!
7. Thanks guys so much, you really helped me out. I do have one question though.
Originally Posted by topsquark
One more trick and we're done. Pull a 1/4 out from under the radical:
Int[1/((1/4)(2y)^2 - 1/4)^{1/2}dy] = 2*Int[1/((2y)^2 - 1)^{1/2}dy]
And now let z = 2y, dz = 2dy and your integral becomes:
Int[1/(x^2-3x+2)^{1/2}dx] = Int[1/(y^2 - 1/4)^{1/2}dy] = Int[1/(z^2 - 1)^{1/2}dz]
topsquark: I understand we're essentially "manipulating" our equation so we can effectively use trig substitution, but I kind-of got lost in the process. I have trouble grasping it after we pull out the 1/4.? Is there anyway you (or Jhevon since you got it as well) could explain that part to me a bit more?
Thanks a ton.
8. Originally Posted by ChaosBlue
Thanks guys so much, you really helped me out. I do have one question though.
topsquark: I understand we're essentially "manipulating" our equation so we can effectively use trig substitution, but I kind-of got lost in the process. I have trouble grasping it after we pull out the 1/4.? Is there anyway you (or Jhevon since you got it as well) could explain that part to me a bit more?
Thanks a ton.
When he said pull out 1/4, he meant we factorized the function to obtain (1/4)*(something), he did that so he could get something of the form u^2 - 1 under the square root
9. Originally Posted by Jhevon
When he said pull out 1/4, he meant we factorized the function to obtain (1/4)*(something), he did that so he could get something of the form u^2 - 1 under the square root
I worded my question poorly (my bad ). What I mean is this.
I get where we use completing the square to transform:
int 1/sqrt(x^2-3x+2) dx--> int 1/sqrt( (x-3/2)^2 - 1/4 )dx
From there we have u = x-3/2 and therefore du = dx
Plug in and we get int 1/ (u^2 - 1/4)^1/2 du
Now, we pull out or factor out the 1/4; But I am struggling with when we do that, how
int 1/ (u^2 - 1/4)^1/2 du becomes
Int[1/((1/4)(2u)^2 - 1/4)^{1/2}du]
(we pull out 1/4, so how does u^2 become 2u^2 ?
and then
2*Int[1/((2u)^2 - 1)^{1/2}du]
Sorry for bothering you guys so much!
10. Originally Posted by ChaosBlue
I worded my question poorly (my bad ). What I mean is this.
I get where we use completing the square to transform:
int 1/sqrt(x^2-3x+2) dx--> int 1/sqrt( (x-3/2)^2 - 1/4 )dx
From there we have u = x-3/2 and therefore du = dx
Plug in and we get int 1/ (u^2 - 1/4)^1/2 du
Now, we pull out or factor out the 1/4; But I am struggling with when we do that, how
int 1/ (u^2 - 1/4)^1/2 du becomes
Int[1/((1/4)(2u)^2 - 1/4)^{1/2}du]
(we pull out 1/4, so how does u^2 become 2u^2 ?
and then
2*Int[1/((2u)^2 - 1)^{1/2}du]
Sorry for bothering you guys so much!
it's no bother, we're all here to learn.
ok, here it is. Say we had some function b, and we wanted to factor an a out of it, we could write is as a*(b/a).
similarly, when he factored 1/4 out of u^2, he ended up with (1/4)*(u^2/(1/4))
u^2/(1/4) = u^2/(1/2)^2 = [u/(1/2)]^2 = (2u)^2
11. Funny you should say thanks, we actually left the hard part to you
12. Originally Posted by Jhevon
Nu-uh, u is a much more suitable variable for substitution, most text books use u. You're just fighting the system.
Do you know what they do to non-conformists in the math field?!
Good thing I'm not in the Math field, then.
-Dan
13. Originally Posted by topsquark
Good thing I'm not in the Math field, then.
-Dan
It's even worse in the physics field, which I suppose you're a part of
14. Originally Posted by Jhevon
It's even worse in the physics field, which I suppose you're a part of
Yes, but as long as you define your variables, you can technically get away with any set of variables you wish.
I think I'm going to create a unit called the "Dan." It'll be defined as the amount of work needed to translate an equation from the variable z to the variable u...
-Dan
15. Originally Posted by topsquark
Yes, but as long as you define your variables, you can technically get away with any set of variables you wish.
I think I'm going to create a unit called the "Dan." It'll be defined as the amount of work needed to translate an equation from the variable z to the variable u...
-Dan
haha, bless your heart Dan, you need help
| 2015-11-27T06:19:04 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/calculus/12005-integration-unsure-method.html",
"openwebmath_score": 0.837983250617981,
"openwebmath_perplexity": 1382.927568327193,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9736446486833801,
"lm_q2_score": 0.8840392832736083,
"lm_q1q2_score": 0.8607401173852395
} |
https://math.stackexchange.com/questions/1004081/is-it-possible-to-write-a-sum-as-an-integral-to-solve-it | # Is it possible to write a sum as an integral to solve it?
I was wondering, for example,
Can:
$$\sum_{n=1}^{\infty} \frac{1}{(3n-1)(3n+2)}$$
be written as an integral, in order to solve it? I am NOT talking about a method for using tricks with integrals.
But actually writing an integral form. Like
$$\displaystyle \sum_{n=1}^{\infty} \frac{1}{(3n-1)(3n+2)} = \int_{a}^{b} g(x) \space dx$$
What are some general tricks for evaluating infinite series?
• math.stackexchange.com/questions/1002440/… – lab bhattacharjee Nov 3 '14 at 13:26
• @labbhattacharjee, I did not meant that. I know the solution to this, I was just asking if in general it is possible to write a sum as an actual integral. – Amad27 Nov 3 '14 at 13:28
• You can trivially write the sum as an integral using the Iverson bracket (add a factor of $[n \in \mathbb{N}]$ to the integrand). This ignores the question of how to evaluate the resulting integral, of course. – chepner Nov 3 '14 at 19:10
• "I am NOT talking about a method for using tricks with integrals." "But actually writing an integral form." "What are some general tricks" Combining these quotes with the accepted answer that does not seem to be a general trick, I'm a bit confused on what this question is asking. – JiK Nov 4 '14 at 8:28
• @Amad27 $\int_\mathbb{N}\frac{d \mu}{(3n-1)(3n+2)}$ where $\mu$ is the counting measure on $\mathbb{N}$. It doesn't give you anything you didn't already have though. I didn't really mean it seriously although it is true. – Tim Seguine Nov 5 '14 at 17:21
A General Trick
A General Trick for summing this series is to use Telescoping Series: \begin{align} \sum_{n=1}^\infty\frac1{(3n-1)(3n+2)} &=\frac13\lim_{N\to\infty}\sum_{n=1}^N\left(\frac1{3n-1}-\frac1{3n+2}\right)\\ &=\frac13\lim_{N\to\infty}\left[\sum_{n=1}^N\frac1{3n-1}-\sum_{n=1}^N\frac1{3n+2}\right]\\ &=\frac13\lim_{N\to\infty}\left[\sum_{n=0}^{N-1}\frac1{3n+2}-\sum_{n=1}^N\frac1{3n+2}\right]\\ &=\frac13\lim_{N\to\infty}\left[\frac12-\frac1{3N+2}\right]\\ &=\frac16 \end{align}
An Integral Trick
Since $$\int_0^\infty e^{-nt}\,\mathrm{d}t=\frac1n$$ for $n\gt0$, we can write \begin{align} \sum_{n=1}^\infty\frac1{(3n-1)(3n+2)} &=\sum_{n=1}^\infty\frac13\int_0^\infty\left(e^{-(3n-1)t}-e^{-(3n+2)t}\right)\mathrm{d}t\\ &=\frac13\int_0^\infty\frac{e^{-2t}-e^{-5t}}{1-e^{-3t}}\mathrm{d}t\\ &=\frac13\int_0^\infty e^{-2t}\,\mathrm{d}t\\ &=\frac16 \end{align}
• I think this is a better "trick" for dealing with sums. Integral "tricks" are nice however integrals and infinite series' are very different in what they calculate and manipulating a sum or integral on its own without switching is preferred. – Ali Caglayan Nov 3 '14 at 16:23
• @Alizter: For the most part, I agree. However, sometimes pure series manipulation can be extremely complicated, and the proper integral representation of a sum can be useful. However, in this case, I think staying with series manipulation is easiest. That being said, I have added an integral approach, as well. – robjohn Nov 3 '14 at 18:40
• The sum under the first integral could have been computed as a telescoping series either. Considering this, I think the use of integrals in the second solution is completely void. Edit: I mean exactly what Henning Makholm points out under the other answer. – Adayah Nov 4 '14 at 19:41
• @Adayah: My reply to Henning was meant as an agreement. I first posted only the telescoping series, but then added an integral approach to satisfy the first part of the question. In any approach where one breaks up the summand using partial fractions, it could be said that, at that point, the answer could be computed as a telescoping sum. – robjohn Nov 4 '14 at 20:48
Since $\int_{0}^{1}x^k\,dx = \frac{1}{k+1}$, $$\frac{1}{(3n-1)(3n+2)}=\frac{1}{3}\left(\frac{1}{3n-1}-\frac{1}{3n+2}\right)=\frac{1}{3}\int_{0}^{1}x^{3n-2}(1-x^3)\,dx,$$ so, summing over $n$: $$\sum_{n=1}^{+\infty}\frac{1}{(3n-1)(3n+2)}=\frac{1}{3}\int_{0}^{1}x\,dx=\frac{1}{6}.$$
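Either derivation is easy to check numerically; here is a minimal Python sketch of the partial sums:
def partial_sum(N):
    return sum(1.0 / ((3*n - 1) * (3*n + 2)) for n in range(1, N + 1))

# the telescoping argument says the error after N terms is 1/(3(3N + 2))
for N in (10, 100, 10000):
    print(N, partial_sum(N))

print(1 / 6)   # the claimed limit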
• I thought we need uniform convergence in order to interchange the limit and integral. The power series is uniformly convergent inside the radius of convergence, how to pass it to the whole interval $[0,1]$? – John Nov 3 '14 at 13:45
• @JohnZHANG Actually no, Fubini and Tonelli's theorems allow this for a monotone sequence supposedly, I believe. – Amad27 Nov 3 '14 at 13:45
• Nice trick for the given sum, but this still doesn't answer the bold-marked question of general tricks. – Ruslan Nov 3 '14 at 16:34
• Isn't the integral just a detour here? The operative step is exactly the same telescoping that could have been done without rewriting the terms into integrals. – Henning Makholm Nov 4 '14 at 10:11
• @HenningMakholm: smoke and mirrors. – robjohn Nov 4 '14 at 12:07
Actually writing it as an integral, as asked for:
$$\displaystyle \sum_{n=1}^{\infty} \frac{1}{(3n-1)(3n+2)} = \int_{1}^{\infty} \frac{1}{(3\lfloor x\rfloor-1)(3\lfloor x\rfloor+2)} dx$$
This probably won't help with finding the value, though.
• why won't it help finding the value? – Amad27 Nov 4 '14 at 13:20
• @Amad27: I don't see a way it would. If you can find one, then more power to you, I suppose ... – Henning Makholm Nov 4 '14 at 13:33
• @Amad27 Methods for solving integrals are poorly suited for integrating functions that are non-continuous. The usual approach for integrating functions like the one here is to separately integrate over each interval where it is continuous. Which brings us back to the sum form. – Rafał Dowgird Nov 4 '14 at 15:11
• @Amad27 It is quite literally equivalent to the original sum in a trivially useless manner XD – Simply Beautiful Art Jan 12 '17 at 2:11
In such cases, the partial fractions of general term (i.e. $n^{th}$ term ) of the infinite-series are very useful.
Given that $$\sum_{n=1}^{\infty}\frac{1}{(3n-1)(3n+2)}=\sum_{n=1}^{\infty} T_{n}$$ where $T_{n}$ is the $n^{th}$ term of the given series, which can easily be expressed in partial fractions as follows: $$T_{n}=\frac{1}{(3n-1)(3n+2)}=\frac{1}{3}\left(\frac{1}{3n-1}-\frac{1}{3n+2}\right)$$ Now, we have $$\sum_{n=1}^{\infty}\frac{1}{(3n-1)(3n+2)}=\frac{1}{3}\sum_{n=1}^{\infty} \left(\frac{1}{3n-1}-\frac{1}{3n+2}\right)$$ $$=\frac{1}{3} \lim_{n\to \infty} \left[\left(\frac{1}{2}-\frac{1}{5}\right)+\left(\frac{1}{5}-\frac{1}{8}\right)+\left(\frac{1}{8}-\frac{1}{11}\right)+\cdots+\left(\frac{1}{3n-4}-\frac{1}{3n-1}\right)+\left(\frac{1}{3n-1}-\frac{1}{3n+2}\right)\right]$$ $$=\frac{1}{3} \lim_{n\to \infty} \left[\frac{1}{2} -\frac{1}{3n+2}\right]$$ $$=\frac{1}{3} \left[\frac{1}{2} - 0\right]$$ $$=\frac{1}{3}\cdot\frac{1}{2}=\color{blue}{\frac{1}{6}}$$
We can indeed write the sum as an integral, after research. Consider:
Find: $\psi(1/2)$
By definition:
$$\psi(z+1) = -\gamma + \sum_{n=1}^{\infty} \frac{z}{n(n+z)}$$
The required $z$ is $z = -\frac{1}{2}$
so let $z = -\frac{1}{2}$
$$\psi(1/2) = -\gamma + \sum_{n=1}^{\infty} \frac{-1}{2n(n - \frac{1}{2})}$$
Simplify this: $$\psi(1/2) = -\gamma - \sum_{n=1}^{\infty} \frac{1}{n(2n - 1)}$$
The sum seems difficult, but really isn't.
We can telescope or:
$$\frac{1}{1-x} = \sum_{n=1}^{\infty} x^{n-1}$$
Let $x \rightarrow x^2$
$$\frac{1}{1-x^2} = \sum_{n=1}^{\infty} x^{2n-2}$$
Integrate once:
$$\tanh^{-1}(x) = \sum_{n=1}^{\infty} \frac{x^{2n-1}}{2n-1}$$
Integrate again:
$$\sum_{n=1}^{\infty} \frac{x^{2n}}{(2n-1)(n)} = 2\int \tanh^{-1}(x) dx$$
From the tables, the integral of $\tanh^{-1}(x)$
$$\sum_{n=1}^{\infty} \frac{x^{2n}}{(2n-1)(n)} = \log(1 - x^2) + 2x\tanh^{-1}(x)$$
Take the limit as $x \to 1$
$$\sum_{n=1}^{\infty} \frac{1}{(2n-1)(n)} = \log(4)$$
$$\psi(1/2) = -\gamma - \sum_{n=1}^{\infty} \frac{1}{(2n-1)(n)}$$
$$\psi(\frac{1}{2}) = -\gamma - \log(4)$$
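This value agrees with a library implementation of the digamma function; here is a minimal numerical check in Python (scipy's digamma and numpy's euler_gamma are used only for the comparison):
import numpy as np
from scipy.special import digamma

print(digamma(0.5))                  # about -1.96351
print(-np.euler_gamma - np.log(4))   # -gamma - log 4, the same value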
• I am the OP per se. This is a general trick. I converted the sum into an integral. Please read carefully. – Amad27 Dec 21 '14 at 8:15
Yes, you can use the Euler-Maclaurin formula to write the sum as an integral plus an infinite series of derivative terms. I remember deriving this for myself when I was younger and being very pleased with myself.
This particular sum could be solved because you had two terms $ax+b$ and $ax+c$ and the difference between c and b is equal to a (I think it would work in a slightly more complicated way if it was a not-too-large multiple of a).
If you want numerical values in general cases, and the sum doesn't converge quickly for your taste, or you want just a partial sum, you can use that
$$\displaystyle f (k) = \int_{k-1/2}^{k+1/2} f(k) dx ≈ \int_{k-1/2}^{k+1/2} f(x) dx$$
and therefore
$$\displaystyle \sum_{k=n}^{m} f(k) ≈ \int_{n-1/2}^{m+1/2} f(x) dx$$
Assuming that you can solve the integral in closed form, if you let
$$\displaystyle g (k) = f(k) - \int_{k-1/2}^{k+1/2} f(x) dx$$
then
$$\displaystyle \sum_{k=n}^{m} f(k) = \int_{n-1/2}^{m+1/2} f(x) dx + \sum_{k=n}^{m} g(k)$$
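A minimal Python sketch of this construction, applied to the series from the question (the closed-form antiderivative below comes from partial fractions; everything else is just the f and g decomposition described above):
import math

def f(x):
    return 1.0 / ((3*x - 1) * (3*x + 2))

def F(x):
    # an antiderivative of f: (1/9) log((3x - 1)/(3x + 2))
    return math.log((3*x - 1) / (3*x + 2)) / 9.0

def g(k):
    # correction term: f(k) minus the integral of f over [k - 1/2, k + 1/2]
    return f(k) - (F(k + 0.5) - F(k - 0.5))

def tail(n, correction_terms=20):
    # integral of f from n - 1/2 to infinity (F -> 0 there) plus a short correction sum
    return -F(n - 0.5) + sum(g(k) for k in range(n, n + correction_terms))

# sum the first few terms exactly and approximate the rest
print(sum(f(k) for k in range(1, 10)) + tail(10))   # close to 1/6
print(1 / 6)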
$g (k)$ will usually converge much faster than $f (k)$. | 2019-10-16T21:49:24 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1004081/is-it-possible-to-write-a-sum-as-an-integral-to-solve-it",
"openwebmath_score": 0.9224899411201477,
"openwebmath_perplexity": 633.7219715032601,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9736446463891303,
"lm_q2_score": 0.8840392817460333,
"lm_q1q2_score": 0.8607401138697173
} |
https://stats.stackexchange.com/questions/574333/how-can-i-determine-which-of-two-sequences-of-coin-flips-is-real-and-which-is-fa/574425 | # How can I determine which of two sequences of coin flips is real and which is fake?
This is an interesting problem I came across. I'm attempting to write a Python program to get a solution to it; however, I'm not sure how to proceed. So far, I know that I would expect the counts of heads to follow a binomial, and length of runs (of tails, heads, or both) to follow a geometric.
Below are two sequences of 300 “coin flips” (H for heads, T for tails). One of these is a true sequence of 300 independent flips of a fair coin. The other was generated by a person typing out H’s and T’s and trying to seem random. Which sequence is truly composed of coin flips?
Sequence 1:
TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHH TTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHH TTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHT THHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHT HTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTT HHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT
Sequence 2:
HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTH THTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHH TTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTT THTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTH HHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHH HTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT
Both sequences have 148 heads, two less than the expected number for a 0.5 probability of heads.
• Naive question: does the question mean, can I determine whether the probability that such a sequence was generated by a human is big enough? Can I compute such a probability? Every sequence is possible, so this is not a proof, only a probability. Am I right?
May 7 at 8:53
• (I find it truly amusing that somebody voted to close this thread because it asks for opinions. If that were true, we should close all threads here on CV that use statistical analysis to compare data--and that would leave us with nothing but lists of references!)
– whuber
May 7 at 13:42
• Compress both strings and see which one comes out shorter. May 7 at 13:58
• @whuber That comment is probably a reference to (approximate) Kolmogorov complexity. See algorithmically random sequence. May 7 at 15:45
• @whuber With very high probability, a truly random sequence cannot be compressed. Compression methods can leverage many kinds of bias, not only if one symbol appears more overall, but also if some subsequences appear more than others (as here, H is more likely to be followed by T and vice versa, per your answer). May 7 at 18:20
This is a variant on a standard intro stats demonstration: for homework after the first class I have assigned my students the exercise of flipping a coin 100 times and recording the results, broadly hinting that they don't really have to flip a coin and assuring them it won't be graded. Most will eschew the physical process and just write down 100 H's and T's willy-nilly. After the results are handed in at the beginning of the next class, at a glance I can reliably identify the ones who cheated. Usually there are no runs of heads or tails longer than about 4 or 5, even though in just 100 flips we ought to see a longer run than that.
This case is subtler, but one particular analysis stands out as convincing: tabulate the successive ordered pairs of results. In a series of independent flips, each of the four possible pairs HH, HT, TH, and TT should occur equally often--which would be $$(300-1)/4 = 74.75$$ times each, on average.
Here are the tabulations for the two series of flips:
Series 1 (row = first flip, column = next flip):

|   | H   | T   |
|---|-----|-----|
| H | 46  | 102 |
| T | 102 | 49  |

Series 2:

|   | H  | T  |
|---|----|----|
| H | 71 | 76 |
| T | 77 | 75 |
The first is obviously far from what we might expect. In that series, an H is more than twice as likely ($$102:46$$) to be followed by a T than by another H; and a T, in turn, is more than twice as likely ($$102:49$$) to be followed by an H. In the second series, those likelihoods are nearly $$1:1,$$ consistent with independent flips.
A chi-squared test works well here, because all the expected counts are far greater than the threshold of 5 often quoted as a minimum. The chi-squared statistics are 38.3 and 0.085, respectively, corresponding to p-values of less than one in a billion and 77%, respectively. In other words, a table of pairs as imbalanced as the second one is to be expected (due to the randomness), but a table as imbalanced as the first happens less than one in every billion such experiments.
(NB: It has been pointed out in comments that the chi-squared test might not be applicable because these transitions are not independent: e.g., an HT can be followed only by a TT or TH. This is a legitimate concern. However, this form of dependence is extremely weak and has little appreciable effect on the null distribution of the chi-squared statistic for sequences as long as $$300.$$ In fact, the chi-squared distribution is a great approximation to the null sampling distribution even for sequences as short as $$21,$$ where the counts of the $$21-1=20$$ transitions that occur are expected to be $$20/4=5$$ of each type.)
If you know nothing about chi-squared tests, or even if you do but don't want to program the chi-square quantile function to compute a p-value, you can achieve a similar result. First develop a way to quantify the degree of imbalance in a $$2\times 2$$ table like this. (There are many ways, but all the reasonable ones are equivalent.) Then generate, say, a few hundred such tables randomly (by flipping coins--in the computer, of course!). Compare the imbalances of these two tables to the range of imbalances generated randomly. You will find the first sequence is far outside the range while the second is squarely within it.
This figure summarizes such a simulation using the chi-squared statistic as the measure of imbalance. Both panels show the same results: one on the original scale and the other on a log scale. The two dashed vertical lines in each panel show the chi-squared statistics for Series 1 (right) and Series 2 (left). The red curve is the $$\chi^2(1)$$ density. It fits the simulations extremely well at the right (higher values). The discrepancies for low values occur because this statistic has a discrete distribution which cannot be well approximated by any continuous distribution where it takes on small values -- but for our purposes that makes no difference at all.
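For readers who want to reproduce the tabulation and test, here is a minimal Python sketch; seq1 and seq2 are assumed to hold the two 300-flip strings from the question (spaces removed), and scipy's chisquare compares the four pair counts with the equal counts expected for independent fair flips:
from collections import Counter
from scipy.stats import chisquare

def pair_counts(seq):
    # counts of the four overlapping pairs HH, HT, TH, TT
    c = Counter(zip(seq, seq[1:]))
    return [c[('H', 'H')], c[('H', 'T')], c[('T', 'H')], c[('T', 'T')]]

# seq1, seq2: the two 300-character flip strings from the question
for seq in (seq1, seq2):
    observed = pair_counts(seq)
    statistic, p_value = chisquare(observed)   # expected: (300 - 1)/4 of each pair
    print(observed, round(statistic, 1), p_value)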
• Why do you say, "This case is subtler" (than just looking for runs)? OP's sequence 1 has length 300 (longer than the 100 you mention) and its longest run is a single HHHH. So it doesn't seem subtle at all, but rather a slam dunk for the "at a glance" method you describe. May 7 at 18:11
• It's not proper to apply the chi squared test to the contingency table given since consecutive pairs are overlapping and not independent, so p-values will be exaggerated. I suspect the result will be similar after addressing this though.
– Paul
May 7 at 18:11
• @Paul That's an excellent point, which I have neglected to discuss (and it's no excuse to claim, as I was hoping to do, that it's implicitly handled correctly in the last paragraph, because I did not advertise that fact).
– whuber
May 7 at 18:13
• @klm123 But as long as you don't actually try a bunch of tests before finding an improbable result (p-hacking), the result is still significant. The test(s) should be chosen in advance. There are also ways of correcting p-values if you run multiple tests. Yes, there is the potential to be misleading if you only report the significant test and not the others -- this has contributed to the "replication crisis". May 7 at 23:07
• My answer talks about why this test in particular is well-motivated. People tend to think that non-repeated values are more random than repeated values. (Which is true in some sense, but people believe it to a greater extent / in more generality than is actually true.)
– Paul
May 7 at 23:25
There are two very good answers as of writing this, and so let me add a needlessly complex yet interesting approach to this problem.
I think one way to operationalize the human-generated vs truly random question is to ask if the flips are autocorrelated. The hypothesis here is that humans will attempt to appear random by avoiding long runs of one outcome, hence switching from heads to tails and tails to heads more often than would be observed in a truly random sequence.
Whuber examines this nicely with a 2x2 table, but because I am a Bayesian and a glutton for punishment let's write a simple model in Stan to estimate the lag-1 autocorrelation of the flips. Speaking of Whuber, he has nicely laid out the data generating process in this post. You can read his answer to understand the data generating process.
Let $$\rho$$ be the lag 1 autocorrelation of the flips, and let $$q$$ be the proportion of flips which are heads in the sequence. A fair coin should have 0 autocorrelation, so we are looking for our estimate of $$\rho$$ to be close to 0. From there, we only need to count the number of occurrences of $$H,H$$, $$H, T$$, $$T, H$$ and $$T,T$$ in the sequence.
The Stan model is shown below
data{
int y_1_1; //number of concurrent 1s
int y_0_1; //number of 0,1 occurrences
int y_1_0; //number of 1,0 occurrences
int y_0_0; //number of concurrent 0s
}
parameters{
real<lower=-1, upper=1> rho;
real<lower=0, upper=1> q;
}
transformed parameters{
real<lower=0, upper=1> prob_1_1 = q + rho*(1-q);
real<lower=0, upper=1> prob_0_1 = (1-q)*(1-rho);
real<lower=0, upper=1> prob_1_0 = q*(1-rho);
real<lower=0, upper=1> prob_0_0 = 1 - q + rho*q;
}
model{
q ~ beta(1, 1);
target += y_1_1 * bernoulli_lpmf(1| prob_1_1);
target += y_0_1 * bernoulli_lpmf(1| prob_0_1);
target += y_1_0 * bernoulli_lpmf(1| prob_1_0);
target += y_0_0 * bernoulli_lpmf(1| prob_0_0);
}
Here, I've placed a uniform prior on the autocorrelation
$$\rho \sim \mbox{Uniform}(-1, 1)$$
and on the probability of a head
$$q \sim \operatorname{Beta}(1, 1)$$
Our likelihood is Bernoulli, and I have weighted the likelihood by the number of occurrences of each pair of outcomes. The probabilities of each outcome (e.g. the probability of observing a heads conditioned on the previous flip being a heads) are provided by Whuber in his linked answer. Let's run our model and compare posterior distributions for the two sequences.
The estimated autocorrelation for sequence 1 is -0.36, and the estimated autocorrelation for sequence 2 is -0.02 (close enough to 0). If I were a betting man, I'd put my money on sequence 1 being the sequence generated by a human. The negative autocorrelation means that when we see a heads/tails we are more likely to see a tails/heads next! This observation lines up nicely with the 2x2 table provided by Whuber.
### Code
The plot I present is made in R, but here is some python code to do the same thing since you asked
import matplotlib.pyplot as plt
import cmdstanpy
# You will need to install cmdstanpy prior to running this code
# Write the stan model as a string. We will then write it to a file
stan_code = '''
data{
int y_1_1; //number of concurrent 1s
int y_0_1; //number of 0,1 occurences
int y_1_0; //number of 1,0 occurences
int y_0_0; //number of concurrent 0s
}
parameters{
real<lower=-1, upper=1> rho;
real<lower=0, upper=1> q;
}
transformed parameters{
real<lower=0, upper=1> prob_1_1 = q + rho*(1-q);
real<lower=0, upper=1> prob_0_1 = (1-q)*(1-rho);
real<lower=0, upper=1> prob_1_0 = q*(1-rho);
real<lower=0, upper=1> prob_0_0 = 1 - q + rho*q;
}
model{
q ~ beta(1, 1);
target += y_1_1 * bernoulli_lpmf(1| prob_1_1);
target += y_0_1 * bernoulli_lpmf(1| prob_0_1);
target += y_1_0 * bernoulli_lpmf(1| prob_1_0);
target += y_0_0 * bernoulli_lpmf(1| prob_0_0);
}
'''
# Write the model to a temp file
with open('model_file.stan', 'w') as model_file:
    model_file.write(stan_code)
# Compile the model
model = cmdstanpy.CmdStanModel(stan_file='model_file.stan', compile=True)
# Co-occurring counts for heads (1) and tails (0) for each sequence
data_1 = dict(y_1_1 = 46, y_0_0 = 49, y_0_1 = 102, y_1_0 = 102)
data_2 = dict(y_1_1 = 71, y_0_0 = 75, y_0_1 = 76, y_1_0 = 77)
# Fit each model
fit_1 = model.sample(data_1, show_progress=False)
rho_1 = fit_1.stan_variable('rho')
fit_2 = model.sample(data_2, show_progress=False)
rho_2 = fit_2.stan_variable('rho')
# Make a pretty plot
fig, ax = plt.subplots(dpi = 240, figsize = (5, 3))
ax.set_xlim(-1, 1)
ax.hist(rho_1, color = 'blue', alpha = 0.5, edgecolor='k', label='Sequence 1')
ax.hist(rho_2, color = 'red', alpha = 0.5, edgecolor='k', label='Sequence 2')
ax.legend()
• +1 I developed my tabular explanation by going through the exercise you lay out here. I plotted the PACF functions of random sequences and compared them, visually, to the PACF functions of the two sequences in the question. One plot--the first sequence--stood out: it had a very negative lag-one partial autocorrelation coefficient. But motivating and explaining the PACF seemed like overkill for this problem ;-). The more enduring lesson, illustrated here and in the post by @COOLSerdash, is that by finding a suitable way to visualize data, we can discover otherwise hidden things about them.
– whuber
May 7 at 13:28
• (+1) Interesting resolution that does not rely on insufficient statistics, contrary to the others!, but I would have gone fully Markov and put a prior distribution on both $(p_{11},p_{10})$ and $(p_{01},p_{00})$ rather than introducing a correlation $\rho$, but it is unlikely the result would differ. More generally, the alternative to being an iid Uniform sequence could be anything, so restricting to an order one Markov is favouring the null. May 9 at 8:09
• @Xi'an You're right in that the result is not much different. The reason I prefer the $\rho$, $q$ parameterization is because we actually have good priors on these (despite what my answer uses lol). Humans have an intuitive sense for when they are acting too correlated and when they are acting too biased, so we know $\rho$ is going to be close to 0 and $q$ close to 0.5. But, you could rewrite this model using a multinomial likelihood and place priors on the probabilities directly too. That's what I love about Bayes! May 9 at 17:48
This is a class activity I first read about in the book Teaching Statistics. A Bag of Tricks, 2nd ed. by Andrew Gelman and Deborah Nolan (they recommend 100 flips, though). Their reasoning for detecting the fabricated sequence is based on the combination of the longest run and the number of runs. For the following plot, I simulated 5000 sequences of 300 fair coin tosses each and plotted the longest run on the y-axis and the number of runs on the x-axis (I once asked a question about the explicit joint probability). Each dot represents the result of 300 fair flips. For better visibility, the points are jittered. The numbers for the two sequences are plotted in color. The conclusion is obvious.
For a quick calculation, recall that a rule of thumb for the longest run of either heads or tails in $$n$$ tosses is$$^{[1]}$$ $$l = \log_{1/p}(n(1-p)) + 1$$. For an approximate 95% prediction interval, just add and subtract $$3$$ from this value. Surprisingly, this number (i.e. $$\pm 3$$) does not depend on $$n$$! Applied to a fair coin with $$n=300, p=1/2$$, we have $$l=\log_2(300/2) + 1=8.22$$. So we expect the longest run to be round $$8$$ and reasonably in the range of $$8\pm 3$$, so between $$5$$ and $$11$$. The longest run in sequence 2 is $$8$$, whereas it is $$4$$ in sequence 1. As this is outside the approximate prediction interval, we'd conclude that sequence 1 is suspicious under the assumption of $$p=1/2$$.
$$[1]$$ Schilling MF (2012): The Surprising Predictability of Long Runs. Math. Mag. 85: 141-149. (link)
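A short Python sketch of the same idea: simulate the longest run in sequences of 300 fair flips and compare with the rule of thumb above (the number of simulations is arbitrary):
import math
import random

def longest_run(flips):
    best = current = 1
    for previous, nxt in zip(flips, flips[1:]):
        current = current + 1 if previous == nxt else 1
        best = max(best, current)
    return best

n, p = 300, 0.5
print(math.log(n * (1 - p), 1 / p) + 1)   # rule of thumb, about 8.2

longest = sorted(longest_run([random.random() < p for _ in range(n)]) for _ in range(5000))
print(longest[len(longest) // 40], longest[len(longest) // 2], longest[-len(longest) // 40])
# roughly the 2.5th percentile, median and 97.5th percentile; compare with 8 +/- 3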
• Excellent answer and graph. "Surprisingly, this number does not depend on n!" You're talking about +-3, right? I was confused for a while since you define $l$ just before, which obvisouly depends on $n$. May 9 at 14:48
• @EricDuminil Yes, sorry. The value that does not depend on $n$ is the $\pm 3$ for the prediction interval. May 9 at 15:45
The runs test (NIST page) is a nonparametric test designed to identify unusual frequencies of runs. If we observe $$n_1$$ heads and $$n_2$$ tails, the expected value and variance of the number of runs are:
$$\mu = {2n_1n_2 \over n_1+n_2} + 1$$ $$\sigma^2 = {2n_1n_2(2n_1n_2 - n_1 - n_2) \over (n_1+n_2)^2(n_1+n_2-1)}$$
As a rule of thumb, for $$n_1, n_2 \geq 10$$ the distribution of the observed number of runs is reasonably well-approximated by a Normal distribution.
Edit: (incorporating Eric Duminil's work below)
For sequence 1, we have 148 heads, 152 tails, 205 runs, and for sequence 2, we have 148 heads, 152 tails and 154 runs. Plugging these numbers into our formulae above gives us $$z$$-scores of 6.5 for the first sequence and 0.58 for the second sequence - extremely strong evidence that the first sequence is fake.
When people fake sequences like this, they tend to greatly underestimate the probability of longer runs, so they don't create as many long(ish) runs as they should. This in turn tends to increase the number of runs beyond that which would be expected. Consequently, when testing for faked data, we might prefer a one-sided test of the alternative hypothesis that there are "too many" runs vs. the null hypothesis that the number of runs is average - at least if we think the sequence was created by a human being.
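A minimal Python sketch of this runs test, with seq1 and seq2 assumed to hold the two 300-flip strings from the question (spaces removed); the formulae are the ones given above:
import math

def runs_test_z(seq):
    n1, n2 = seq.count('H'), seq.count('T')
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n1 - n2) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    return (runs - mu) / math.sqrt(var)

# seq1, seq2: the two 300-character flip strings from the question
for seq in (seq1, seq2):
    print(round(runs_test_z(seq), 2))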
• Yep, this is a classic application of gambler's fallacy. May 6 at 21:54
• There are 62 runs in the first two rows, AFAICT. For the whole sequence #1 : 148 heads, 152 tails, 205 runs. Vs 148 heads, 152 tails and 154 runs for sequence #2. May 9 at 14:56
• Thanks, @EricDuminil - I've included your efforts in the body of the answer with a citation. May 9 at 16:46
• That's good. But it leaves unanswered why you would formulate a question in terms of longest run and then solve it by looking at an indirect proxy (number of runs). Why not just use the longest run as the test statistic?
– whuber
May 9 at 18:13
• @whuber - it's not just the longest run that's informative, it's the number of longer runs in general, and I don't have a good idea of a cutoff for run length for testing. Having said that, using the longest run as a test statistic with 100,000 randomly generated strings with 148 heads and 152 tails for calculating an approximate p-value gives a p-value of 0.00004 for the first sequence. Maybe I'll expand on that over (my) lunch. May 9 at 18:28
Here's an empirical approach, based on compression as a proxy for algorithmic complexity:
import bz2
import random
import statistics
s1 = "TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHHTTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHHTTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHTTHHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHTHTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTTHHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT"
s2 = "HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTHTHTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHHTTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTTTHTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTHHHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHHHTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT"
def compressed_len(s):
    return len(bz2.compress(s.encode()))
trials = []
for x in range(100000):
    sr = "".join(random.choice("HT") for _ in range(300))
    trials.append(compressed_len(sr))
mean = statistics.mean(trials)
stddev = statistics.stdev(trials)
print("Random trials:")
print("Mean:", mean)
print("Stddev:", stddev)
l1 = compressed_len(s1)
l2 = compressed_len(s2)
o1 = l1 - mean
o2 = l2 - mean
d1 = o1 / stddev
d2 = o2 / stddev
print("Selected trials:")
print("Seq", "Len", "Dev", sep="\t")
print("S1", l1, d1, sep="\t")
print("S2", l2, d2, sep="\t")
Roughly speaking:
1. Compress a bunch (100k) of random coinflips.
2. Observe the resulting length distribution. (In this case I'm approximating it as a normal distribution of lengths; a more thorough analysis would check and pick an appropriate distribution instead of blithely assuming normality.)
3. Compress the input sequences.
4. Compare with observed distribution.
Result (note: not exactly reproducible due to the use of random trials; if you want to be reproducible add a random seed):
Random trials:
Mean: 105.05893
Stddev: 2.6729774976956002
Selected trials:
Seq Len Dev
S1 88 -6.381995364609942
S2 109 1.4744119632124217
Based on this, I'd say that S1 is the non-random one here. 6.38 standard deviations below the mean is rather improbable.
The nice thing about this approach is that it's relatively generic, and takes advantage of the pre-existing work of a bunch of smart people.
Just be aware of its limitations and quirks:
1. You want a compression algorithm that's designed for space over compression speed. BZ2 works well enough here.
2. This doesn't work if the compression algorithm simply gives up and writes a raw block to the output.
3. A null result does not mean that the sequence is random. It means that this compression algorithm is unable to distinguish this input from random.
This is probably an overcomplicated way of looking at it, but for me it's fun, so I present to you...
## Moran's I
Now, Moran's I was developed to look at spatial autocorrelation (basically autocorrelation with multiple dimensions), but it can be applied to the 1-dimensional case as well. Some of my interpretations might be a little sketchy, but you can consider your coin flips this way.
To summarize Moran's I, it will consider your neighboring values using a pre-defined matrix. How you define the matrix is up to you, but it can actually be used to consider not just the directly neighboring values, but any values beyond that. Moran's I will produce a value ranging from -1 (perfectly dispersed values) to 1 (perfectly clustered values), with 0 being random.
I wrote up some quick R code. First, setup the data (OP's data and a couple generated data sets to test dispersion and clustering):
seq1 = unlist(strsplit("TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHHTTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHHTTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHTTHHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHTHTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTTHHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT",
split = ""))
seq2 = unlist(strsplit("HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTHTHTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHHTTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTTTHTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTHHHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHHHTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT",
split = ""))
# Alternate T and H. E.g., THTHTHTHT....
# 'perfectly dispersed'
# Moran's I = -1
seq3 = rep(c("T", "H"), times = 50)
# 50 of T followed by 50 of H
# 'perfectly clustered'
# Moran's I approaches 1 as the sample size increases to infinity
seq4 = rep(c("T", "H"), each = 50)
# weights must be a vector with an odd length and the middle value set to 0
# weights are relative and do not have to add to 1
moran <- function(x, weights) {
x = c(T = 0, H = 1)[x] # convert T/H to 0/1
N = length(x)
x_mean = mean(x)
den = sum((x - x_mean)^2)
W = 0
num = 0
offset = floor(length(weights)/2)
for (i in 1:length(x)) {
    # window of x centred at position i (same length as weights); NA outside the sequence
    idx = (i - offset):(i + offset)
    idx[idx < 1 | idx > N] = NA
    x_slice = x[idx]
    W = W + as.numeric(!is.na(x_slice)) %*% weights
    num = num + (x[i] - x_mean) * sum((x_slice - x_mean) * weights, na.rm = TRUE)
}
return(unname((N * num)/(as.numeric(W) * den)))
}
Next, I test the generated data sets to illustrate/test that my function is working correctly:
# Test the 'perfect dispersion' scenario (should be -1)
moran(seq3, c(1, 0, 1))
## [1] -1
# Test the 'perfect clustering' scenario (should be ~1)
moran(seq4, c(1, 0, 1))
## [1] 0.979798
Now, let's look at OP's sequences:
# Simple look at seq1. The weights test the idea that the current flip
# is based purely on the last flip (a reasonable model for how a person might react)
moran(seq1, c(1, 0, 0))
## [1] -0.3647031
moran(seq2, c(1, 0, 0))
## [1] -0.02359453
I'm defining my weights matrix such that only the previous flip is considered when testing for autocorrelation. We see that the second sequence is very close to 0 (random), whereas the first sequence seems to lean somewhat toward overdispersion.
But maybe we think someone faking coin flips would consider the last two flips, not just the most recent:
# Maybe the person is looking back at the last two flips
moran(seq1, c(1, 1, 0, 0, 0))
## [1] -0.1726056
moran(seq2, c(1, 1, 0, 0, 0))
## [1] 0.0249505
The second sequence is just as close to 0 as before, but the first sequence had a pretty noticeable shift towards 0. This might be interpretable in a couple of different ways. First, if we know that the first sequence is fake, then maybe it means the person wasn't considering two flips back. A second interpretation is that maybe they were considering the last two flips, and somehow this led them to doing a better job at faking randomization. A third option might just be sheer dumb luck at faking the randomization.
Now, maybe the person considers the last two coin flips but gives the most recent flip more importance.
# Same idea, but maybe the more recent of the two is twice as important
moran(seq1, c(1, 2, 0, 0, 0))
## [1] -0.2367095
moran(seq2, c(1, 2, 0, 0, 0))
## [1] 0.008750762
Here, we see the two sequences react differently. The second sequence (already pretty close to 0), gets noticeably closer to 0, whereas the first sequence shifts noticeably away. I'm not sure I want to try and interpret this, but it's an interesting result, and a similar thing happens if we try to model a scenario where the person is not only considering their previous flips but also thinking ahead to their next flip:
# Maybe the person was thinking ahead to their next flip as well
moran(seq1, c(1, 2, 0, 1, 0))
## [1] -0.2687347
moran(seq2, c(1, 2, 0, 1, 0))
## [1] 0.0006576715
Some of my application/interpretation of Moran's I to the coin flip problem might be a little off, but it's definitely an applicable measure to use.
A related metric is Geary's C, which is more sensitive to local autocorrelation
When people try to generate random sequences, they tend to avoid repeating themselves more than random processes avoid repeating themselves. Thus, if we look at consecutive pairs of flips, we would expect a human-generated sequence to have too many HT and TH and too few HH and TT compared to a typical random sequence.
The code below explores this hypothesis. It splits each sequence of 300 flips into 150 consecutive pairs and plots the frequency of the four possible results (HH, HT, TH, TT).*
library(tidyverse)
a <- "TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHHTTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHHTTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHTTHHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHTHTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTTHHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT"
b <- "HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTHTHTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHHTTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTTTHTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTHHHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHHHTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT"
# split each sequence of 300 into 150 consecutive pairs
# e.g. TTHHTHTT... -> TT, HH, TH, TT, ...
n_pairs <- 150
ap <- tibble(pair = character(n_pairs))
bp <- tibble(pair = character(n_pairs))
for (i in 1:n_pairs) {
    ap$pair[i] <- substring(a, 2*i - 1, 2*i)
    bp$pair[i] <- substring(b, 2*i - 1, 2*i)
}
# get the frequencies of each possible pair and plot
apc <- count(ap, pair)
bpc <- count(bp, pair)
bind_rows(
`Sequence 1` = apc,
`Sequence 2` = bpc,
.id = 'source') %>%
ggplot(aes(x = pair, y = n, group = source, fill = source)) +
geom_col() +
facet_grid(vars(source)) +
theme_minimal() +
geom_hline(yintercept = n_pairs/4, linetype = 'dashed') +
ylab('frequency') +
ggtitle('Frequency of consecutive coin flip pairs')
The dotted line at 150/4 = 37.5 is the expected count of each possible pair assuming the coin flips are independent and fair. By the Law of Large Numbers, we expect the bars not to stray too far from the dotted line. Sequence 1 has an above-average number of HT and TH pairs (especially HT), consistent with our hypothesis about human-generated "randomness". The pairs from Sequence 2 are more consistent with average behavior.
To see how unusual this behavior would be under independent, fair flips, we reformat each sequence's pair count data as a 2x2 contingency table (rows = first flip H/T, columns = second flip H/T) and use Fisher's exact test, which checks whether the data is consistent with a null hypothesis in which the first flip is independent of the second:
for (x in list(apc, bpc)) {
print(x)
x %>%
mutate(f1 = str_sub(pair, 1, 1),
f2 = str_sub(pair, 2, 2)) %>%
select(f1, f2, n) %>%
# reshape into a 2x2 table: rows = first flip, columns = second flip
pivot_wider(names_from = f2, values_from = n) %>%
select(-f1) %>%
as.matrix() %>%
fisher.test() %>%
print()
}
The contingency table for Sequence 1 has a p-value of 0.0002842, while the table for Sequence 2 has 0.5127. This means that pair frequencies skewed to the degree seen in Sequence 1 would occur by chance only about once in 1/0.0002842 = 3,519 such experiments, while something like Sequence 2 would be seen very commonly. It seems sensible to conclude that Sequence 1 is the human-made sequence, since its pair frequency table is not consistent with random chance but is quite consistent with the behavior we'd expect of humans.
There is a caveat to this analysis: we do not expect random sequences to be perfectly consistent with average behavior. In fact, in some contexts people know that long random sequences should follow the Law of Large Numbers, and they create sequences which follow it too perfectly. A different analysis would be needed to explore whether Sequence 2 looks odd from this opposite perspective.
* Other answers look at all 299 consecutive pairs, which gives you more data points but they become dependent, which prevents us from using standard significance tests. (For example in the sequence TTHHT, you can make pairs like this: (TT)HHT, T(TH)HT, TT(HH)T, .... This gives you more pairs but consecutive pairs are not independent of one another, as the second flip of a pair determines the first flip of the next pair.)
# Alternate Analysis
An analysis that uses all 299 pairs of consecutive flips could be more powerful than the one above if the dependency problem can be solved. To do this, @whuber suggests looking at the transitions between consecutive flips, i.e. when H is followed by T or vice versa. If the flips are independent and fair, then after the first flip, each transition can be considered an independent Bernoulli random variable, and there are 299 transitions total. We can use a two-sided test to see whether the number of transitions observed in each sequence is unlikely under fair independent flips. The transitions are counted and the test applied by the code below:
library(tidyverse)
a <- "TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHHTTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHHTTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHTTHHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHTHTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTTHHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT"
b <- "HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTHTHTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHHTTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTTTHTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTHHHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHHHTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT"
# 299 transitions from flip i to i+1 occur in the sequence of 300
# record these transitions in arrays at and bt
n <- 299
at <- logical(n)
bt <- logical(n)
for (i in 1:n) {
at[i] <- str_sub(a, i + 1, i + 1) != str_sub(a, i, i)
bt[i] <- str_sub(b, i + 1, i + 1) != str_sub(b, i, i)
}
# two-sided exact binomial test (analogous to z-test)
# gives probability of transition count more extreme than the one observed
pbinom(sum(at) - 1, n, 1/2, lower.tail = F) + pbinom(n - sum(at), n, 1/2)
pbinom(sum(bt) - 1, n, 1/2, lower.tail = F) + pbinom(n - sum(bt), n, 1/2)
Running this code, we find that Sequence 1 has 204 transitions out of a possible 299. The probability of observing a number of transitions at least this imbalanced, on either the left or the right side, is equal to the probability of observing at least 204 transitions, plus the probability of observing at most 299 - 204 = 95 transitions. This probability is 2.696248e-10, on the order of 3 in 10 billion. Sequence 2 has 153 transitions, and the probability of observing at least 153 transitions or at most 299 - 153 = 146 transitions is 0.7286639. The number of transitions in Sequence 1 is extremely improbable, more so than the 150 pair test above suggested.
• Your code seems to be doing something more complicated than you describe. It is hard to relate your plots, which are unexplained, to the data: those plots clearly do not report on sequences of length 300.
– whuber
May 7 at 14:43
• I'm splitting each length 300 sequence into 150 pairs of adjacent coin tosses and counting how many of the 150 pairs are HH, HT, TH, or TT. That's what the graphs show. I've added some comments to elaborate on this. If that's not enough, could you be more specific as to what you're struggling to understand?
– Paul
May 7 at 16:55
• I am not struggling to understand anything--I just don't care to have to read through code in order to determine what the content of an answer might be, and I believe most readers will feel the same.
– whuber
May 7 at 17:25
• Thank you: what you are doing is now more apparent. But why? Could you explain what is accomplished with this split?
– whuber
May 7 at 18:10
• I've added a transition-based analysis and it is indeed a more powerful test. Great idea.
– Paul
May 10 at 0:45
This answer is inspired by @user1717828's answer which transforms the sequence of coin flips into a random walk. I don't show the two given sequences as random walks here; see @user1717828's answer for that plot.
The random walk approach is interesting because it examines long-run features of the sequence rather than short-range ones (such as the overabundance of H-T and T-H flips). In a way the two approaches are complementary as they analyze the sequences at different scales. At the "local" scale the fake sequence oscillates which creates autocorrelation; at the "global" scale it keeps within a limited range of positions for long stretches of time. Both of these aspects are non-random and as one occurs, so does the other: staying in place means not moving around and vice versa.
A (one-dimensional) simple random walk $$S_n$$, defined as the sum of $$n$$ random variables that take the values $$\{-1, +1\}$$ with probability ½ each, has many interesting properties, including:
• The probability that a simple random walk returns to the origin is 1. In fact, it visits every integer infinitely often.
• The mean of a simple random walk is $$\text{E}(S_n) = 0$$ and its variance is $$\text{Var}(S_n) = n$$.
• A random walk with nonzero mean (which corresponds to flipping a biased coin) is transient: it makes finitely many visits to the origin before diverging, to +∞ if the mean is positive and to -∞ if the mean is negative.
These properties imply that a simple random walk spends a lot of time far away from 0 before eventually returning to it.
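As a quick check of the second property, write $$S_n=\sum_{i=1}^n X_i$$ with independent steps $$X_i\in\{-1,+1\}$$, so that $$\text{E}(X_i)=0$$ and $$\text{E}(X_i^2)=1$$; then
$$\text{E}(S_n)=\sum_{i=1}^n \text{E}(X_i)=0 \qquad\text{and}\qquad \text{Var}(S_n)=\sum_{i=1}^n \text{Var}(X_i)=n.$$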
We can use this intuition founded on theory (as well as any other property of random walks) to propose characteristics for comparing the two sequences. Then we generate many simple random walks and use the observed distribution of the features to estimate how "extreme" the two given sequences are.
To capture the variance of a simple random walk I use the maximum distance from 0. To capture its tendency to visit every integer I use the number of times the walk crosses the integers from -8 to +8. To make a symmetric grid of scatterplots and to avoid suspicions of HARKing I don't show the crossings of 0. It's important to look at crossings on both sides of 0 (the starting point), so that we don't make assumptions about how the non-randomness manifests in the data. For example, we don't know whether the coin is fair (p = ½), biased towards heads (p > ½) or biased towards tails (p < ½).
Note: HARKing is the practice of performing many analyses to find an interesting hypothesis. I compute many statistics, grounded in the theory of random walks, and report all of them, thus avoiding HARKing.
Here are the results from the simulation of 500 simple random walks. Sequence #1 (in blue) appears as an outlier compared to sequence #2 (in red) in many panels. Due to its overabundance of H-T and T-H pairs, sequence #1 doesn't explore enough and spends too much time between -5 and -8 (shown in the top row). Sequence #1 doesn't advance beyond the band [-9, 9] and it is rare for a true random walk to stay so close to 0 for 300 steps. Visually, sequence #1 is the overall outlier even if it doesn't appear extremely "unusual" in some panels.
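Here is a minimal sketch of the kind of simulation described above (my own code, not the one used to produce the panels; the levels ±4 and the use of visit counts as a stand-in for the crossing counts are illustrative assumptions):
# simulate fair-coin walks of 300 steps and record two of the features discussed above
set.seed(42)
n_steps <- 300
n_sims <- 500
walk_features <- function(steps) {
  s <- cumsum(steps)                 # walk positions after each flip
  c(max_dist = max(abs(s)),          # maximum distance from the start
    visits_p4 = sum(s == 4),         # how often the walk sits at +4
    visits_m4 = sum(s == -4))        # how often the walk sits at -4
}
sim <- t(replicate(n_sims, walk_features(sample(c(-1, 1), n_steps, replace = TRUE))))
summary(as.data.frame(sim))
# the same features for the two given strings a and b from the transition-count code above
to_steps <- function(x) ifelse(strsplit(x, "")[[1]] == "H", 1, -1)
rbind(a = walk_features(to_steps(a)), b = walk_features(to_steps(b)))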
• +1. This explains why I did not examine this statistic in my answer: I had done so, exactly as you have, and came to the same conclusion that the max deviation statistic was suggestive but only weak evidence of a departure from randomness. But in that investigation I noted there were other visual quirks in the plot of the first series and that led to a more effective statistic for assessing the randomness of the series. It's worth noting that extensive computation is unnecessary: you could draw all your conclusions more quickly by simulating as few as 20 random walks..
– whuber
May 9 at 13:44
• @whuber After some (over)thinking, I realized that it works quite well to compute two statistics: one to represent central tendency and the other -- extreme values. Similar to COOLSerdash's solution which looks at number of runs (a kind of average) and length of longest run. With the simple random walk, zero crossings vs maximum distance from zero works very well (visually). But then I don't know how to get a p-value. May 9 at 21:35
• I think the zero crossings observation comes down to HARKing. (For instance, these could well be realizations of Markov processes in which a transition away from a side of the coin is made with probability $p$; $p\approx 2/3$ in one sequence and $p\approx 1/2$ in the other. In general I wouldn't expect zero-crossing counts to distinguish among these.) A better way to approach this would be to do all your exploration on the first half of each sequence; develop a suitable statistic from that; and apply it to the second halves of the sequences.
– whuber
May 9 at 21:39
• @whuber I don't agree it's HARKing. At least not any more than number of runs. A simple random walk returns to the start infinitely often, so given this fact it makes sense to look at zero crossings. Since the fake sequence doesn't move enough any number will do in theory; here because sequences are only 300 long every number between about -10 to 10 will work. May 9 at 22:03
• Ask yourself this: would you have focused on counting zero crossings by exploring just the first halves of the sequences? You might have if both of two things occurred: (1) you observed no zero crossings in one and many zero crossings in the other and (2) in simulated iid sequences, you observed a consistent tendency to have positive numbers of zero crossings. In such a case, you likely have a useful statistic. Unfortunately, you would discover that about 6.5% of the time, a Binomial random walk has no zero crossings. That's why I view this as HARKing: it's accidental.
– whuber
May 10 at 14:32
In addition to statistical approaches, one visual approach is to plot the sequences as a "drunkards walk". Treat H as a step forwards and T as a step back and plot the sequences. One way in Python is:
import altair as alt
import pandas as pd
seq1 = "HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTHTHTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHHTTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTTTHTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTHHHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHHHTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT"
seq2 = "TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHHTTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHHTTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHTTHHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHTHTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTTHHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT"
def encode(stream):
    return [1 if c == "H" else -1 for c in stream]
df = pd.DataFrame({"seq1": encode(seq1), "seq2": encode(seq2)})
df_cumsum = pd.melt(
    df.cumsum(), var_name="sequence", value_name="cumsum", ignore_index=False
).reset_index()
chart = alt.Chart(df_cumsum).mark_line().encode(x="index", y="cumsum", color="sequence")
chart.show()
When comparing a truly random sequence to a human generated one, it is often easy to tell them apart based on inspection of the walk.
• This is a good approach. It was the first thing I tried. However, I created a context for evaluating these random walks: I also generated 18 more sequences using iid flips of a fair coin and plotted all 20. Generally, whenever I repeated this procedure, there was a truly random walk that looked like the orange one here, at least in terms of the relatively small range. Exactly which characteristic(s) suggest to you that the orange curve is not random but the blue is?
– whuber
May 8 at 12:53
• The orange one is what I think is the truly random one based on this visual. I would have guessed based on just the number of zero-crossings alone that blue is human-generated. But you're right, there's always going to be a chance that any sequence is randomly-generated, and the analytical methods are better for quantifying the odds of that. May 8 at 20:17
• Unfortunately for your theory, the orange one is the non-random one. It has an unusually low range; but more to the point, its thickened appearance is due to a huge overabundance of H-T-H oscillations.
– whuber
May 8 at 20:32
• Ahh! In that case, I'm going to rethink my approach of visualizing these things before looking at the statistics next time! May 8 at 22:29
• The thickened appearance is the feature analyzed in the answers by whuber, Dmetri Pananos and Paul. The distance from 0 in a simple random walk (which I think is what this answer is getting at) would be a different feature altogether. A (true) simple random walk will spend a lot of time away from zero until eventually returning (with probability 1). The sequences in this exercise though seem too short for this kind of argument. May 8 at 23:55
Looking at the HH, HT, TH, TT frequencies is probably the most straightforward way to approach the two series presented, given people's tendency to apply HT and TH more frequently when trying to appear random. More generally, however, that approach will fail to detect non-randomness even in sequences with obvious patterns. For instance, a repeating sequence of HHHTHTTT will produce balanced counts of pairs as well as triples (HHH, HHT, etc.).
Here's an idea for a more general solution that was initially inspired by answers discussing random walks. Start with the observation that for a random sequence, the number of heads in any given subsequence is binomially distributed. We can count the number of heads, $$n_{i,j}$$, between flip number $$i$$ and flip number $$j>i$$ for all values of $$i,j$$. Then compare these to the expected number of counts if all the $$n_{i,j}$$ were independent. Although they are obviously not independent, the comparison gives rise to a useful statistic: the maximum absolute difference between the observed counts and the expected counts.
Applying this approach to sequence 1 gives us 98.26 as our test statistic: there are 257 subsequences of length 44. If they were all composed of independent Bernoulli trials, the expected number of the 257 that contained exactly 22 heads is ~30.74, whereas sequence 1 contains 129 subsequences with exactly 22 heads (very underdispersed). 129 - 30.74 = 98.26, which is the maximum of these differences for sequence 1.
Performing the same calculations on sequence 2 gives a test statistic of 48.30: there are 197 subsequences of length 104. The expected number containing exactly 54 heads would be ~13.70. Sequence 2 contains 62, so the test statistic is 62 - 13.70 = 48.30.
The test statistics can be compared to those from a large number of random sequences of the same size. In this case, no samples are greater than the test statistic from sequence 1, and about 14% of samples are greater than the statistic from sequence 2.
Here it is all together with R code:
library(parallel)
set.seed(16)
seq1 <- utf8ToInt("TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHHTTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHHTTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHTTHHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHTHTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTTHHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT") == utf8ToInt("H")
seq2 <- utf8ToInt("HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTHTHTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHHTTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTTTHTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTHHHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHHHTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT") == utf8ToInt("H")
# function to calculate test statistic S
fS <- function(m, p = 0.5) {
  if (!is.matrix(m)) m <- matrix(m, ncol = 1)
  n <- nrow(m)
  steps <- rep.int(1:(n - 1L), 2:n)
  nEx <- dbinom(sequence(2:n, from = 0L), steps, p)*(n - steps)
  idx <- sequence((n - 1L):1)
  idx <- idx*(idx + 1L)/2L
  S <- numeric(ncol(m))
  for (i in 1:ncol(m)) S[i] <- max(abs(nEx - tabulate(idx + dist(cumsum(m[,i])), length(nEx))))
  S
}
(S1 <- fS(seq1))
#> [1] 98.26173
(S2 <- fS(seq2))
#> [1] 48.30014
# calculate S from 1e6 random sequences of length n (probably overkill)
n <- length(seq1)
cl <- makeCluster(detectCores() - 1L)
clusterExport(cl, list("fS", "n"))
system.time(simS <- unlist(parLapply(cl, 1:100, function(i) fS(matrix(sample(0:1, 1e4L*n, TRUE), n)))))
#> user system elapsed
#> 0.00 0.03 339.42
stopCluster(cl)
nsim <- 1e6L
# calculate approximate p-values
sum(simS > S1)/nsim
#> [1] 0
sum(simS > S2)/nsim
#> [1] 0.139855
max(simS)
#> [1] 91.01078
This problem specifies that we have the following information:
• We observed two coin flip sequences: sequence $$S_1$$ and sequence $$S_2$$.
• Each of these sequences could have been generated by either mechanism $$R$$ corresponding to independent flips of a fair coin or some other mechanism $$\bar{R}$$.
• Exactly one of these two sequences was generated by the mechanism $$R$$.
Since both sequences $$S_1$$ and $$S_2$$ could possibly have been generated by $$R$$ or $$\bar{R}$$, we cannot be sure which of these two sequences was generated by $$R$$. However, probability theory provides us with the tools to quantify our belief $$P(R_i|S_i)$$ that sequence $$S_i$$ was generated by mechanism $$R$$. Then, given that we have been provided with the information $$I$$ that exactly one of these two sequences has been generated by mechanism $$R$$, then the probability that sequence 1 was generated by mechanism $$\bar{R}$$ and sequence 2 was generated by mechanism $$R$$ is:
$$P(\bar{R_1}R_2|S_1 S_2 I) = \frac{P(R_2|S_2)}{P(R_1|S_1) + P(R_2|S_2)}$$
This problem explicitly specifies what it means for a sequence to have been generated by mechanism $$R$$: the sequence $$S_i$$ is described by independent flips $$y_{ij}$$ of a fair coin, such that the likelihood is:
$$P(S_i|R_i) = \prod_j \mathrm{Bernoulli}(y_{ij} | 0.5)$$
However, in order to determine the probability that a given sequence was generated by mechanism $$R$$, $$P(R_i|S_i)$$, we also need to specify alternative mechanisms $$\bar{R}$$ by which the sequences may have been generated. At this point we have to use our own creativity and external information to develop models that could reasonably describe how these sequences might have been generated, if they were not generated by mechanism $$R$$.
The other answers to this question do a great job at describing various mechanisms $$\bar{R}$$ that describe how sequences could be generated:
• Sequences are generated by sampling pairs of coin flips from a distribution where the probability of different pairs of coin flips deviates from uniform: 1 2 3
• Sequences are generated in a way such that they do not have long runs of only heads or tails: 1
• Sequences are generated in such a way so that a given compression algorithm results in a small compressed size: 1
• Sequences are generated by a random walk such that a given coin flip outcome is affected by recent coin flip outcomes: 1 2 3
Note: we can always develop a hypothesis that appears to result in our observed sequence having been "inevitable", according to our likelihood, and if our prior probability for that mechanism of sequence generation is high enough, then using this hypothesis can always result in a low probability that the sequence had been generated by mechanism $$R$$. This practice is also referred to as HARKing, and Jaynes refers to this as a "sure-thing hypothesis". In general, we may have multiple hypotheses that can describe how a sequence may be generated, but the more contrived hypotheses such as "sure-thing hypotheses" should generally have sufficiently low prior probabilities such that we generally would not infer that those hypotheses have the highest probability of having generated a given sequence.
Here's an example using this procedure with one possible model for $$\bar{R}$$ sequence generation to quantify the probability that sequence 2 was the sequence that was generated by mechanism $$R$$. This stan model defines model_r_lpmf as the log likelihood for sequence generation mechanism $$R$$, and model_not_r_lpmf as the log likelihood for sequence generation mechanism $$\bar{R}$$. In this case, we are testing a hypothesis $$\bar{R}$$ in which coin flips can be correlated, and there is a fairly well defined correlation length. Then, we are modeling the probability that the sequence was generated by model $$R$$ as p_model_r, such that our prior probability considers mechanisms $$R$$ and $$\bar{R}$$ sequence generation equally probable.
// model for hypothesis testing whether a sequence of coin flips is generated by either:
// - model R: independent flips of a fair coin, or
// - model not R: unfair coin with correlations
functions {
  // independent flips of a fair coin
  real model_r_lpmf(int[] sequence, int N) {
    return bernoulli_lpmf(sequence | 0.5);
  }
  // unfair coin with correlations
  real model_not_r_lpmf(int[] sequence, int N, int n, vector beta) {
    real tmp = 0;
    real offset = 0;
    for (i in 1:N) {
      offset = beta[1];
      for (j in 1:min(n - 1, i - 1)) {
        offset += beta[j + 1] * (2 * sequence[i - j] - 1);
      }
      tmp += bernoulli_logit_lpmf(sequence[i] | offset);
    }
    return tmp;
  }
}
data {
  int N; // the length of the observed sequence
  int<lower=0, upper=1> sequence[N]; // the observed sequence
}
transformed data {
  int n = N / 2 + 1; // number of correlation parameters to model
}
parameters {
  real<lower=0, upper=1> p_model_r; // probability that the sequence was generated by model R
  // not R model parameters
  real log_scale; // log of the scale of the correlation coefficients beta
  real log_decay_rate; // log of the decay rate for the correlation coefficients beta
  vector[n] alpha; // pretransformed correlation coefficients
}
transformed parameters {
  // transformed not R model parameters
  real scale = exp(log_scale); // typical scale of the correlation coefficients
  real decay_rate = exp(log_decay_rate); // typical decay rate of correlation coefficients
  vector[n] beta; // correlation coefficients
  for (j in 1:n) {
    beta[j] = alpha[j] * scale * exp(-(j - 1) * decay_rate);
  }
}
model {
  // uninformative Haldane prior for the probability that the sequence was generated by model R
  target += - log(p_model_r) - log1m(p_model_r);
  // priors for not R model parameters
  log_scale ~ normal(-1, 1);
  log_decay_rate ~ normal(-1, log(n + 1) / 2.0);
  alpha ~ normal(0, 1);
  // the sequence was generated by model R with probability p_model_r, otherwise it was generated by model not R
  target += log_mix(
    p_model_r,
    model_r_lpmf(sequence | N),
    model_not_r_lpmf(sequence | N, n, beta)
  );
}
Using pystan with this model, we can evaluate the probability p_model_r that each of the two provided sequences were generated by mechanism $$R$$.
import pystan

# MODEL_CODE is assumed to hold the Stan program listed above
model = pystan.StanModel(model_code=MODEL_CODE)
sequence_1 = "TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHHTTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHHTTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHTTHHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHTHTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTTHHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT"
samples_1 = model.sampling(
    data=dict(N=len(sequence_1), sequence=[int(c == "H") for c in sequence_1]),
    chains=16,
)
p_model_r_1 = samples_1["p_model_r"].mean()
# p_model_r_1 = 0.027
sequence_2 = "HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTHTHTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHHTTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTTTHTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTHHHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHHHTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT"
samples_2 = model.sampling(
    data=dict(N=len(sequence_2), sequence=[int(c == "H") for c in sequence_2]),
    chains=16,
)
p_model_r_2 = samples_2["p_model_r"].mean()
# p_model_r_2 = 0.790
Here we have quantified the probability that each sequence was generated according to the sequence generating model $$R$$ as the following:
\begin{aligned} P(R_1|S_1) &= 0.027 \\ P(R_2|S_2) &= 0.790 \\ P(\bar{R_1}R_2|S_1 S_2 I) &= 0.967 \\ \end{aligned}
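The last line is simply the combination formula stated earlier, applied to these two numbers:
$$P(\bar{R_1}R_2|S_1 S_2 I) = \frac{P(R_2|S_2)}{P(R_1|S_1)+P(R_2|S_2)} = \frac{0.790}{0.027+0.790} \approx 0.967$$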
So, we are fairly certain that sequence 2 was generated by independent coin flips and sequence 1 was not, relative to the model of sequence generation $$\bar{R}$$ that we defined, and we quantify this belief with a probability of 0.967. This is a high probability, but of course we could still be wrong -- both sequences could have been generated by either model. Additionally, we could have selected a model of sequence generation for $$\bar{R}$$ that cannot describe the procedure that was actually used to generate the sequences. This process of quantifying our belief that one of these two sequences was generated by a fair coin with independent flips is the piece of reasoning that appears to be missing from the other answers currently available.
• I have to reject your initial dichotomy, because it (a) is too vague to be actionable (what can one do, in any objective and quantifiable way, to characterize any "non-random process"?) and (b) ignores a huge set of other possible mechanisms: namely, random mechanisms that are not iid.. In fact, it is almost surely the case that whoever generated these sequences used a random mechanism for both--they are just different random mechanisms.
– whuber
May 31 at 12:00
• @whuber I framed this problem in terms of hypothesis testing between a model $R$ by which sequences are generated by independent flips of a fair coin, and models of other possible ways that we might believe that sequences could be generated $\bar{R}$. I used labels "random", and "not random" to describe these models, which appears to have been confusing, so I have edited the post to remove that terminology. Jun 8 at 3:18
Arrange first N natural numbers such that absolute difference between all adjacent elements > 1?
We have the first N natural numbers. Our task is to get one permutation of them where the absolute difference between every two consecutive elements is > 1. If no such permutation is present, return -1.
The approach is simple and greedy: arrange all odd numbers not exceeding N in descending order, then arrange all even numbers in descending order. Within each block, adjacent numbers differ by 2, and at the junction the difference between 1 and the largest even number is at least 3 whenever N ≥ 4, so every adjacent difference is greater than 1.
Algorithm
arrangeN(n)
Begin
if N is 1, then return 1
if N is 2 or 3, then return -1 as no such permutation is present
set even_max and odd_max to the largest even and odd numbers less than or equal to N
arrange all odd numbers in descending order
arrange all even numbers in descending order
End
Example
#include <iostream>
using namespace std;
void arrangeN(int N) {
   if (N == 1) { //if N is 1, only that will be placed
      cout << "1";
      return;
   }
   if (N == 2 || N == 3) { //for N = 2 and 3, no such permutation is available
      cout << "-1";
      return;
   }
   int even_max = -1, odd_max = -1;
   //find max even and odd which are less than or equal to N
   if (N % 2 == 0) {
      even_max = N;
      odd_max = N - 1;
   } else {
      odd_max = N;
      even_max = N - 1;
   }
   while (odd_max >= 1) { //print all odd numbers in decreasing order
      cout << odd_max << " ";
      odd_max -= 2;
   }
   while (even_max >= 2) { //print all even numbers in decreasing order
      cout << even_max << " ";
      even_max -= 2;
   }
}
int main() {
   int N = 8;
   arrangeN(N);
}
Output
7 5 3 1 8 6 4 2
http://mathhelpforum.com/number-theory/96768-solving-congruences.html | 1. ## Solving Congruences
If you are to solve for x in the congruence $7x=20(mod50)$
What is the easier way to do it without using a calculator or any other type of electronic device. I know you can add 7 to 20 until the number is divisible by 7 and then divide by 7 to reduce it to a solution but that takes a while. I need to figure out how to get it done in just a minute or two. Thanks.
2. Originally Posted by diddledabble
If you are to solve for x in the congruence $7x=20(mod50)$
What is the easier way to do it without using a calculator or any other type of electronic device. I know you can add 7 to 20 until the number is divisible by 7 and then divide by 7 to reduce it to a solution but that takes a while. I need to figure out how to get it done in just a minute or two. Thanks.
You mean 50 (the modulus).
For small moduli and $(a,m) = 1$, adding the modulus until cancellation is probably the best way to solve $ax \equiv b \ (\text{mod } m)$. For this particular congruence, add the modulus once to 20 to get: $7x \equiv 70 \ (\text{mod } 50)$
Since $(7, 50) = 1$, you can 'divide' by 7 to get your solution.
_____________
For a more general case where $(a,m) \neq 1$, as long as $(a,m) \mid b$, then we can always use the Euclidean algorithm to find one solution modulo $\tfrac{m}{(a,m)}$ and use it to find all solutions modulo m. Here's an example in this wiki entry: Linear congruence theorem
3. ## @dibbledabble
Well, I guess I have an easier method to solve this congruence equation.
7x=20(mod50)
The equation simply means that 7x, a multiple of 7, is 20 more than a multiple of 50 [By modulo logic]
If we list down numbers which are 20 more than a multiple of 50, we get
70, 120, 170, 220, 270, 320, 370, 420, and so on... From this which it is quite evident that 70, 420, 770, 1120... etc are multiples of 7 which are 20 more than multiples of 50.
So, the smallest positive number is 70. 7x = 70 which means x = 10.
In fact, 350n - 280 would be the general form for the number 7x.
So, the general solution of x would be 50n - 40 where n can range from 1,2,3,.. and so on.
Hope it helped,
MAX
4. ## General theory
An equation of the form
$ax\equiv b$ (mod n)
has a solution if and only if $gcd(a,n)|b$.
Here is how you do it in general when these requirements are met.
Let $d=gcd(a,n)$ and by assumption $d|b \Rightarrow b=dk$ for some integer $k$. Then by the Euclidean Algorithm, there exist integers $s$ and $t$ such that
$as+nt=d$.
We now multiply through by k to get.
$ask+ntk=dk=b$
reduce mod n to see
$a(sk)\equiv b$ (mod n).
There is your $x$, namely, $x=sk$ (mod n).
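As a small illustration of this recipe (my own sketch, not from the thread; ext_gcd and solve_congruence are made-up helper names), the whole procedure fits in a few lines of R, checked here against the two congruences discussed in this thread:
# extended Euclidean algorithm: returns c(gcd, s, t) with a*s + b*t = gcd
ext_gcd <- function(a, b) {
  if (b == 0) return(c(a, 1, 0))
  r <- ext_gcd(b, a %% b)
  c(r[1], r[3], r[2] - (a %/% b) * r[3])
}
# solve a*x = b (mod n); returns all solutions modulo n, or NULL if gcd(a, n) does not divide b
solve_congruence <- function(a, b, n) {
  g <- ext_gcd(a, n)
  if (b %% g[1] != 0) return(NULL)
  x0 <- (g[2] * (b %/% g[1])) %% n                 # one solution x = s*k (mod n)
  sort((x0 + (n %/% g[1]) * (0:(g[1] - 1))) %% n)  # all gcd(a, n) solutions modulo n
}
solve_congruence(7, 20, 50)    # 10
solve_congruence(35, 20, 50)   # 2 12 22 32 42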
5. 7x7=49=-1 (mod 50)
so -x=140 (mod 50)=-10 (mod 50)
hence x=...
6. Helo, diddledabble!
Solve: . $7x \:\equiv\:20\text{ (mod 50)}$
Your "brute force" method is tedious, but it is commendable.
It shows that you understand the congruence statement.
Here is a primitive method . . .
We have: . $7x \:\equiv\:20 \text{ (mod 50)}$
This means: . $7x - 20 \:=\:50a\;\text{ for some integer }a.$
Solve for $x\!:\quad x \:=\:\frac{50a+20}{7} \quad\Rightarrow\quad x \:=\:7a + 2 + \frac{a+6}{7}$
Since $x$ is an integer, $a+6$ must be divisible by 7.
The first time this happens is $a = 1.$
So we have: . $x \;=\;7(1) + 2 + \frac{1+6}{7}\;=\;10$
Therefore: . $x \:\equiv\:10\text{ (mod 50)}$
7. ## @dibbledabble
Hi,
I think the solution offered by Soroban is the closest to the perfect procedure for your question. I guess mine was a bit too illustrative.
8. So using Soroban's method and a different congruence 35x=20(mod50) I get x=2(mod50) right?
9. ## Yes
If you haven't checked already, values of 'x' such as 52 and 102 satisfy the equation as, in 52 * 35 = 1820
= 1800 + 20
= 36*50 + 20
Similarly, 102*35 = 3570
= 3550 +20
= 71*50 + 20
MAX
10. Given the congruence $ax \equiv b \ (\text{mod } m)$, if $(a,m) \mid b$, then there are $(a,m)$ solutions exactly.
So for the congruence $35x \equiv 20 \ (\text{mod } 50)$, we can see that there should be 5 solutions modulo 50.
___________
Reduce your congruence: $5 \cdot 7x \equiv 5 \cdot 4 \ (\text{mod } 50) \ \Rightarrow \ 7x \equiv 4 \ (\text{mod } \tfrac{50}{(5, 50)}) \ \Leftrightarrow \ 7x \equiv 4 \ (\text{mod } 10)$
Quickly by inspection, by adding the modulus to 4, we get: $7x \equiv 14 \ (\text{mod } 10)$
which is satisfied by all integers such that $x \equiv 2 \ (\text{mod } 10) \ \ (\star)$.
Returning to our original congruence, all 5 solutions are least residues that satisfy $(\star)$. Therefore, modulo 50: 2, 12, 22, 32, 42 are all solutions.
# Expected number of people sitting in the right seats.
There was a popular interview question from a while back: there are $n$ people getting seated an airplane, and the first person comes in and sits at a random seat. Everyone else who comes in either sits in his seat, or if his seat has been taken, sits in a random unoccupied seat. What is the probability that the last person sits in his correct seat?
The answer to this question is $1/2$ because everyone looking to sit on a random seat has an equal probability of sitting in the first person's seat as the last person's.
My question is: what is the expected number of people sitting in their correct seat?
My take: this would be $\sum_{i=1}^n p_i$ where $p_i$ is the probability that person $i$ sits in the right seat..
$X_1 = 1/n$
$X_2 = 1 - 1/n$
$X_3 = 1 - (1/n + 1/n(n-1))$
$X_4 = 1 - (1/n + 2/n(n-1) + 1/n(n-1)(n-2))$
Is this correct? And does it generalize to $X_i$ having an $\max(0, i-1)$ term of $1/n(n-1)$, a $\max(0, i-2)$ term of $1/n(n-1)(n-2)$ etc?
Thanks.
-
Could the general probability not be, for $m>1$, $X_m=1-\left(\sum_{k=0}^{m-2}\binom{m-2}{k}\frac{(n-k-1)!}{n!}\right)$? Perhaps prove by induction? – Alyosha Aug 10 '13 at 23:19
The MSE treatment of the original problem: math.stackexchange.com/questions/5595/taking-seats-on-a-plane – Byron Schmuland Aug 23 '13 at 17:41
Sorry for the second answer (I will delete the first one):
I will aim to show that the expected number of people sitting in their correct seats is given by:
$$n-1-\frac12-\frac13-\dots-\frac1{n-1}=n-H_{n-1}$$
To do this, we will first find the expected number of people in the wrong seats, which we shall call $s_n$.
Suppose passenger number $1$ sits in seat $i\ne1$. At this point, passengers $2,\dots,i-1$ all sit in their correct seats. We now have a situation where there are $n-i+1$ empty seats left, and passenger $i$ is going to sit in a random seat.
This is very similar to the situation we had at the start, just with fewer seats. The only difference is that passenger $i$'s seat is taken, and there's a seat (seat $1$) which belongs to none of the people standing up.
This is a bit of a problem, so we'll change things around a bit. We'll pretend that seat number $1$ doesn't, in fact, belong to any of the passengers. So wherever passenger $1$ sits, they're in the wrong seat. We'll use the letter $t_n$ to denote the expected number of people sitting in the wrong seat if seat $1$ doesn't belong to anybody.
What we now get, is that the situation after passenger $1$ has sat in seat $i\ne1$, and passengers $2,\dots,n-1$ have sat in their correct seats is exactly the same as the situation at the beginning, but with $n-i+1$ seats rather than $n$: there's a passenger about to choose a random seat which doesn't belong to him (I decided the sex of passenger $i$ by tossing a coin): there's a seat ($1$) which doesn't belong to anybody, and the rest of the seats belong to the remaining passengers. So at this point, the expected number of passengers sitting in the wrong seats is $1+t_{n-i+1}$: $1$ for passenger $1$, who's sat in the wrong seat, and $t_{n-i+1}$ because afterwards we're in exactly the same situation as before, but with $n-i+1$ seats.
What if passenger $1$ sits in seat $1$? Then all the remaining passengers sit in the right seats, so the expected number of people sitting in the wrong seats at the end is just $1$ (remember, passenger $1$ no longer owns seat $1$).
Passenger $1$ chooses between the $n$ seats at random, so this gives us the recurrence:
\begin{align} t_1 &= 1\\ t_n &= 1+\frac1n\sum_{i=2}^nt_{n-i+1} = 1+\frac1n\sum_{i=1}^{n-1}t_i \end{align}
We are now ready to prove the following:
Claim: $t_n=H_n$
Proof of claim: induction on $n$. $\mathbf{n=1}$ : $t_1=1=H_1$.
$\mathbf{n>1}$ : $t_n=1+\frac1n\sum_{i=1}^{n-1}t_i=1+\frac1n\sum_{i=1}^{n-1}H_i$ (by the inductive hypothesis). A well known identity involving harmonic numbers tells us that:
$$\sum_{i=1}^{n-1}H_i = nH_{n-1}-(n-1)$$
So $t_n=1+H_{n-1}-1+\frac1n=H_n$. $\Box$
How do we get from here to $s_n$? The difference now is that passenger $1$ does own seat $1$, which means that the answer will be smaller by $1$ if and only if passenger $1$ sits in seat $1$. Since passenger $1$ sits in seat $1$ with probability $\frac1n$, we need to subtract $\frac1n$ from $t_n$ to get $s_n$:
$$s_n=t_n-\frac1n=H_n-\frac1n=H_{n-1}$$
Finally, to get the expected number of people in the right seats, we subtract $s_n$ from $n$ to get:
$$n-H_{n-1}$$
Note 1: Since $H_n$ grows logarithmically, the proportion of people sitting in their correct seats converges to $1$ as $n\to\infty$.
Note 2: I still find this proof rather unsatisfying, since it uses the identity $\sum_{i=1}^{n-1}H_i = nH_{n-1}-(n-1)$, which I still don't really understand. I'm sure it's easy enough to prove by induction, but if someone could come up with a really nice explanation of why that works, it might yield an even slicker proof of this fact.
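A quick simulation sketch (mine, not part of the thread) that checks the $n-H_{n-1}$ value for a small $n$:
# simulate the boarding process and compare the average number of correctly
# seated passengers with n - H_{n-1}
simulate_correct <- function(n) {
  seats <- rep(NA_integer_, n)                # seats[j] = passenger sitting in seat j
  for (p in 1:n) {
    free <- which(is.na(seats))
    if (p == 1 || !is.na(seats[p])) {
      choice <- if (length(free) == 1) free else sample(free, 1)
    } else {
      choice <- p                              # own seat is free, so take it
    }
    seats[choice] <- p
  }
  sum(seats == 1:n)                            # passengers sitting in their own seat
}
n <- 10
mean(replicate(20000, simulate_correct(n)))    # close to...
n - sum(1 / (1:(n - 1)))                       # n - H_{n-1}, about 7.17 for n = 10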
-
There are a few things I do not follow in your explanation. Firstly, if passenger 1 sits in seat 1, the expected number of correct seats is n, because everyone sits in their correct seat if it is available, so excluding that possibility materially affects the answer. Secondly, it can be shown (see below where I answered the wrong question) that regardless of of the number of people, the probability that the last person sits in his or her proper seat is \frac{1}{2}. As any one person could be the last one, the long-term proportion should approach 1/2. In my attempt at a correct answer it does. – Avraham Aug 23 '13 at 19:19
What do you mean by 'any one person could be the last one'? And why does that mean that the long term proportion should approach $1/2$? – Donkey_2009 Aug 24 '13 at 10:54
If you read my answer carefully, you will see that I do not exclude the possibility that passenger $1$ sits in seat $1$; instead, I treat it separately. The long term proportion approaches $1$, so I think you might be answering a different question. – Donkey_2009 Aug 24 '13 at 10:55
Your overall approach looks technically correct, you are using linearity of expectation for counting and then writing down a formula for each $p_i$ (although it seems you switched notation and started using $X_i$ in your formulas instead of $p_i$). However I believe the equation for $X_4$ i.e. $p_4$ is actually
$$X_4 = 1 - (1/n + 1/n(n-1) + 1/n(n-2) + 1/n(n-1)(n-2))$$
and the generalization for $X_i$ for arbitrary $i$ is obtained by your basic reasoning, just writing down the probability for each way the $i$th person's seat might already be taken, taking into account each case e.g. for $X_4$ the term $1/n(n-2)$ corresponds to the case that the first person takes the third person's seat, and the third person takes the fourth person's seat. I'm not sure if/what an easy formula would be for when you add up all the $X_i$ terms to get the total answer. However it is easy to see that the general rule to get the equation for $X_i$ when $i > 1$ is to have $X_i = 1 - q_i$ where $q_i$ is the sum of all terms you can make of the form $1/n(n-k_1)(n-k_2)\ldots$ where you have $0$ or more distinct values $k_j$, and $k_1 < k_2 \ldots < i-1$.
-
Oops, sorry about the $p_i \rightarrow X_i$ change. And you are right about $X_4$. Thanks for your answer, that seems correct. – narcissa Aug 10 '13 at 23:59
I found this question and the answer might be relevant.
Seating of $n$ people with tickets into $n+k$ chairs with 1st person taking a random seat
The answer states that the probability of a person not sitting in his seat is $\frac{1}{k+2}$ where $k$ is the number of seats left after he takes a seat. This makes sense because for person $i$, if anyone sits in chairs $1, i+1, ... n$ then he must sit in his own seat, so the probability of that happening is $\frac{n-i+1}{n-i+2}$. So $k = 0$ for the last person and $k = n-1$ for the second person. The answer then should just be
$1/n + \sum_{i = 2}^{n} \frac{n-i+1}{n-i+2}$
-
The answer (see my post) is $n-H_{n-1}$, but this is probably the right way to go. – Donkey_2009 Aug 11 '13 at 0:56
The answer is $\frac{1}{2}$ as was said.
The general pattern is that for $n$ people, there is a $\frac{1}{n}$ probability of success, $\frac{1}{n}$ probability of failure, and an $\frac{n-2}{n}$ probability that the problem repeats itself on the $n-1$ scale.
Case $n=2$: The probability that the first picks the correct seat is $\frac{1}{2}$, and then the last person sits in proper seat with probability 1. The probability that the first picks the wrong seat is $\frac{1}{2}$, and then the last person sits in proper seat with probability 0. So the probability in total is: $$\frac{1}{2}\cdot1+\frac{1}{2}\cdot0=\frac{1}{2}$$
Case $n=3$: The probability that the first picks the correct seat is $\frac{1}{3}$, and then the last person sits in proper seat with probability 1. The probability that the first picks the last person's seat is $\frac{1}{3}$, and then the last person sits in proper seat with probability 0. The probability that the first picks the middle person's seat is $\frac{1}{3}$, and now there is a $\frac{1}{2}$ that the second person picks the last seat or the first seat (Case 2) since two non-last people switching seats means everyone else takes their own seat. So: $$\frac{1}{3}\cdot1+\frac{1}{3}\cdot0+\frac{1}{3}\cdot\frac{1}{2} = \frac{2}{6} + \frac{1}{6} = \frac{3}{6} = \frac{1}{2}$$
Proof by induction.
Assume it holds true for $n$ that the probability of the last person sitting in the proper seat is $\frac{1}{2}$. Now if there are $n+1$ people, we have a $\frac{1}{n+1}$ chance that the last-seat probability is 1 (correct seat), a $\frac{1}{n+1}$ chance that the last-seat probability is 0 (last seat), and an $\frac{n-1}{n+1}$ chance that the probability is $\frac{1}{2}$, since we know the $n$ case. $$\frac{1}{n+1} + \frac{0}{n+1} + \frac{n-1}{2(n+1)}\\ =\frac{2}{2(n+1)}+\frac{n-1}{2(n+1)}\\ =\frac{n+1}{2(n+1)}\\ =\frac{1}{2}$$ QED
-
I feel bad for all your work, but the question was the expected number of people in correct seats. – Lord_Farin Aug 23 '13 at 8:17
Now that I have cleaned my glasses, I'll try again. Thank you, Lord Farin.
Some observations.
The answer must be greater than 1. The probability that the first person, regardless of the number of people, sits in the proper seat is $\frac{1}{n}$, and there are $n$ people, so that expectation is 1. Even if the first person sits in the wrong seat, there is non-zero probability of other people sitting in the correct seat, so the answer must be greater than or equal to 1, and I'm pretty sure equality only exists in the case $n=2$.
There is some form of recurrence going on here. Enumerating the possibilities, I get (if I haven't erred): $$E_2 = \frac{1}{2}\cdot 2 + \frac{1}{2}\cdot 0 = 1\\ E_3 = \frac{1}{3}\cdot 3 + \frac{2}{3}\left[\frac{1}{2}\cdot 1 + \frac{1}{2}\cdot 0\right] = 1+\frac{1}{3}=\frac{4}{3}\\ E_4 = \frac{1}{4}\cdot 4 + \frac{3}{4}\left[\frac{1}{3}\cdot2 + \frac{2}{3}\left[\frac{1}{2}\cdot 1 + \frac{1}{2}\cdot 0\right]\right] = 1+\frac{3}{4}\cdot\left(\frac{2}{3}+\frac{1}{3}\right)=\frac{7}{4}\\ E_5 = \frac{1}{5}\cdot 5 + \frac{4}{5}\left[\frac{1}{4}\cdot3 + \frac{3}{4}\left[\frac{1}{3}\cdot2 + \frac{2}{3}\left[\frac{1}{2}\cdot 1 + \frac{1}{2}\cdot 0\right]\right]\right] = 1+\frac{4}{5}\left(\frac{3}{4} + \frac{3}{4}\cdot\left(\frac{2}{3}+\frac{1}{3}\right)\right)=\frac{11}{5}\\$$ Let $E_k$ be the expected number of people in the correct seats when the starting population is k. The relationship seems to be: $$E_k = 1 + \left(E_{k-1} - 1 + \frac{k-2}{k-1}\right)\frac{k-1}{k}$$ The first 1 is the expected value of person 1 taking the correct seat. The next term is broken into two parts. If the first person did not take the correct seat, then the second person can "fix" the error by swapping and taking the previous person's seat, leaving the remaining $n-2$ people their proper seats. If the second person also takes the wrong seat, the problem restarts on the $n-1$ scale. In both the latter two cases, there is "one less" correct seat than the initial, since the previous term used up a seat with an incorrect choice. If the above supposition is correct, the expected value would just be the sum of the recurrence relation applied to $n$ or $\sum_{k=1}^n E_k$.
Generating the first few terms using this relationship gives: $$1, 1, \frac{4}{3}, \frac{7}{4}, \frac{11}{5}, \frac{16}{6}, \frac{22}{7}, \frac{29}{8}, \frac{37}{9}, \frac{46}{10}, \ldots$$
The numerator is a quadratic and the denominator is just $n$ so the expected value should be: $$E_n = \frac{n^2-n+2}{2n}$$
Edit: The long-term proportion of people sitting in the proper seats intuitively would be $$\lim_{n \to \infty} \frac{E_n}{n}\\ =\lim_{n \to \infty} \frac{n^2-n+2}{2n^2}\\ =\frac{1}{2}$$ Which dovetails nicely with the long-term probability of the last person sitting in his or her proper seat.
-
Your recurrence is wrong, as are your values for $E_k$ where $k\ge3$. I'm not really sure what your argument is; are you free to take this to chat some time today? – Donkey_2009 Aug 24 '13 at 12:06
Sure. If I'm wrong, I'm eager to learn why and how to correct it. If you want, you can e-mail me directly as well; thank you. – Avraham Aug 26 '13 at 1:18
It's 1. The probability that the $i$th person is in the right seat is $1/n$ for every $i$. This is clearly true for $i=1$, for $i=2$ you get $$P(\text{1st person not in 2nd person's seat, 2nd person in right seat}) = \frac{n-1}{n} \frac{1}{n-1} = 1/n,$$ and so on down the line as you can check for yourself.
This answer is wrong. The probability that the second person is in the right seat is $1$ if the first person didn't sit in their seat. I think kilgol was assuming that people just sit in their seats randomly, whereas they in fact always sit in their own seat if it is free (and they're not the first person). – Donkey_2009 Aug 11 '13 at 0:49
# Why is the answer to this probability question not $\frac{1}{2}$?
I was trying to solve the following homework probability question which has the following setup:
We have $$2$$ dice: $$A$$ and $$B$$. Die $$A$$ has $$4$$ red faces and $$2$$ white faces, whereas die $$B$$ has $$2$$ red faces and $$4$$ white faces. On each turn, a fair coin is tossed. If the coin lands on heads then die $$A$$ is thrown, but if the coin lands on tails then die $$B$$ is thrown. After this the turn ends, and on the next turn the coin is once again tossed to determine the die (i.e. we throw the coin and the corresponding die on each turn).
From this game I'm asked to answer $$2$$ questions:
1. Show that the probability of obtaining a red face on any $$n$$-th throw is $$\frac{1}{2}$$.
2. If the first $$2$$ consecutive die throws result in red faces, what is the probability that the third throw is also red?
To answer part $$1$$ I used the law of total probability. Denoting obtaining a red face on the $$n$$-th die throw as $$P(R_n)$$ I get
\begin{align*} P(R_n) &= P(R_n \vert A) P(A) + P(R_n \vert B) P( B) \\ & = \left(\frac{4}{6}\right)\left(\frac{1}{2}\right) + \left(\frac{2}{6}\right)\left(\frac{1}{2}\right)\\ & = \frac{1}{2} \end{align*}
But on question $$2$$ is where I started running into trouble. Using the same notation, what I want to calculate is $$P(R_3 \vert R_2 R_1)$$. And recalling that for events $$E_1$$ and $$E_2$$ we can say that $$P(E_2 \vert E_1) = \frac{P(E_2E_1)}{P(E_1)}$$ I get that $$P(R_3 \vert R_2 R_1) = \frac{P(R_3 R_2 R_1)}{P(R_2 R_1)} \tag{1}$$ and from here I obtain 2 different solutions using $$2$$ distinct methods:
Using that on the first part of the question we showed that $$P(R_n) = \frac{1}{2}$$, and noticing that the die throws are independent since what I threw before does not affect how I throw the next coin toss or how I roll the next die, from equation $$(1)$$ I get \begin{align*} P(R_3 \vert R_2 R_1) &= \frac{P(R_3 R_2 R_1)}{P(R_2 R_1)}\\ &= \frac{P(R_3) P(R_2) P(R_1)}{P(R_2) P(R_1)}\\ & = P(R_3) = P(R_n) = \frac{1}{2} \end{align*}
Using the law of total probability on $$(1)$$ I get \begin{align*} P(R_3 \vert R_2 R_1) & = \frac{P(R_1 R_2 R_3 \vert A)P(A)+ P(R_1 R_2 R_3 \vert B)P(B)}{P(R_1 R_2 \vert A ) P(A)+ P(R_1 R_2 \vert B) P(B)}\\ &= \frac{\left[\left(\frac{2}{3}\right)\left(\frac{2}{3}\right)\left(\frac{2}{3}\right)\right]\frac{1}{2}+ \left[\left(\frac{1}{3}\right)\left(\frac{1}{3}\right)\left(\frac{1}{3}\right)\right]\frac{1}{2}}{\left[\left(\frac{2}{3}\right)\left(\frac{2}{3}\right)\right]\frac{1}{2}+\left[\left(\frac{1}{3}\right)\left(\frac{1}{3}\right)\right]\frac{1}{2}}\\ & = \frac{3}{5} \end{align*}
To me both of the previous answers seem to be following coherent logic, but since I didn't get the same answer I knew one of them was wrong. I decided to write a program to simulate the game and I found out that the correct solution was $$P(R_3 \vert R_2 R_1) = \frac{3}{5}$$. But even though I verified this answer to be correct I couldn't seem to understand what part of my analysis is wrong on Answer 1.
So my question is, why is $$P(R_3 \vert R_2 R_1) \neq \frac{1}{2}\quad ?$$
• I'm confused about the nature of this experiment. Are you repeatedly tossing a coin then rolling the corresponding die $n$ times (i.e. $n$ coin tosses and $n$ dice rolls), or are you tossing a coin once, then rolling the designated die $n$ times (i.e. $1$ coin toss, and $n$ die rolls)? – Theo Bendit Mar 23 at 5:56
• It's the first option. We have one coin toss and one dice roll per turn. I'll edit the problem to clarify this, thank you! – Robert Lee Mar 23 at 5:59
• In your computation, $A$ determines the successive $3$ roll of the die. – Oolong milk tea Mar 23 at 6:07
• If it's one coin toss and one dice roll per turn, then every (coin toss + dice roll) is independent of all the others, and the probability is $1/2$ every time. If one coin toss determined multiple dice rolls, then those dice rolls would not be mutually independent, and you would get something greater than $1/2$ (because there would be a positive correlation between rolls using the same coin toss). But if you're saying it's one coin toss and one dice roll per turn, then your simulation is wrong. – mjqxxxx Mar 23 at 6:24
• I don’t quite get the logic in answer 2. To get consecutively 2 times red, you have AA, BB, AB, BA 4 paths with their own probabilities, would expect 1/2x1/2x2/3x2/3 for AA path,... and we should have 4 terms in the denominator. Do I miss something? – BStar Mar 23 at 6:38
The flaw in your second answer is that it is not necessarily the same die that is thrown after each coin toss. The way you have written the law of total probability is acceptable in your first answer, but not in the second, because it is only in the second answer that you conditioned on the events $$A$$ and $$B$$, which represent the outcomes of the coin toss. Consequently, the result is incorrect because it corresponds to a model in which the coin is tossed once, and then the corresponding die is rolled three times.
First, let us do the calculation the proper way. We want $$\Pr[R_3 \mid R_1, R_2] = \frac{\Pr[R_1, R_2, R_3]}{\Pr[R_1, R_2]}$$ as you wrote above. Now we must condition on all possible outcomes of the coin tosses, of which there are eight: \begin{align} \Pr[R_1, R_2, R_3] &= \Pr[R_1, R_2, R_3 \mid A_1, A_2, A_3]\Pr[A_1, A_2, A_3] \\ &+ \Pr[R_1, R_2, R_3 \mid A_1, A_2, B_3]\Pr[A_1, A_2, B_3] \\ &+ \Pr[R_1, R_2, R_3 \mid A_1, B_2, A_3]\Pr[A_1, B_2, A_3] \\ &+ \Pr[R_1, R_2, R_3 \mid A_1, B_2, B_3]\Pr[A_1, B_2, B_3] \\ &+\Pr[R_1, R_2, R_3 \mid B_1, A_2, A_3]\Pr[B_1, A_2, A_3] \\ &+ \Pr[R_1, R_2, R_3 \mid B_1, A_2, B_3]\Pr[B_1, A_2, B_3] \\ &+\Pr[R_1, R_2, R_3 \mid B_1, B_2, A_3]\Pr[B_1, B_2, A_3] \\ &+ \Pr[R_1, R_2, R_3 \mid B_1, B_2, B_3]\Pr[B_1, B_2, B_3] \\ \end{align} and since each of the $$2^3 = 8$$ triplets of ordered coin tosses has equal probability of $$1/8$$ of occurring, $$\Pr[R_1, R_2, R_3] = \tfrac{1}{8}\left((\tfrac{2}{3})^3 + 3(\tfrac{2}{3})^2(\tfrac{1}{3}) + 3(\tfrac{2}{3})(\tfrac{1}{3})^2 + (\tfrac{1}{3})^3\right) = \tfrac{1}{8}.$$ A similar (but simpler) calculation for the denominator yields $$1/4$$, and the result follows.
Of course, none of this is necessary; it is only shown here to illustrate how the calculation would be done if it were to be done along the lines of your second answer.
• I think I'm starting to understand where my lack of understanding is. I believe I may not understand what $P(R_3 \vert R_2 R_1)$ is to begin with. Let's say I list all my throws as red or white accordingly. If I then count how many chains of 3 consecutive reds there are on my list, and I also count the number of times 2 reds are succeded by a white, is $$P(R_3 \vert R_2 R_1) \sim \frac{\text{#RRR}}{\text{#RRW} + \text{#RRR}}$$ in the sense that $\sim$ means the RHS approaches $P(R_3 \vert R_2 R_1)$ as I take more throws? Or does $P(R_3 \vert R_2 R_1)$ mean something else? – Robert Lee Mar 23 at 7:41
• I ran out of space in my previous comment, but #RRR stands for the number of chains of 3 consecutive reds on my list of throws, and #RRW is the number of chains where the first 2 places are red and the third consecutive throw is white, again on my list of throws. Just to clarify what I meant in the last comment. – Robert Lee Mar 23 at 7:46
• @RobertLee It is unnecessary and mathematically inappropriate to think of the problem in terms of long-run averages of more than three rolls, because the entire question can be answered precisely under a strict interpretation of the model, in which there are only ever three rolls, each of which is preceded by a coin toss. – heropup Mar 23 at 7:57
• I agree that it's not necessary to think of this mathematically. I only ask this because of my previously described attempt to verify the theoretical result with a simulation, which led me to an incorrect result. I'm trying to understand if what I programmed is in fact not trying to calculate the probability $P(R_3 \vert R_2 R_1)$ that I want, or if I'm still not understanding what the probability in itself means. – Robert Lee Mar 23 at 8:07
• @RobertLee The frequentist approach via simulation would be as follows. [1] Locate all occurrences of two consecutive red dice rolls. [2] For each such occurrence, look at the outcome of the die roll immediately after the pair of reds. [3] Tally the number of outcomes where the third die is red. Divide the number obtained in step 3 by the total number of red pairs observed in step 1. – heropup Mar 23 at 8:18
With the second approach:
To have the first two die throws resulting in red, there are 2x2 paths: AA, AB, BA, BB $$\to$$
$$P(R_1\cap R_2)=\left(\frac{1}{2}\right)^2\left(\frac{2}{3}\right)^2+2\left(\frac{1}{2}\right)^2\frac{2}{3}\cdot\frac{1}{3}+\left(\frac{1}{2}\right)^2\left(\frac{1}{3}\right)^2=\frac{1}{4}$$
To have three die throws resulting in red, there are 2x2x2 paths: AAA, BBB, AAB, ABA, BAA, BBA, BAB, ABB $$\to$$
$$P(R_1\cap R_2\cap R_3)=\left(\frac{1}{2}\right)^3\left(\frac{2}{3}\right)^3+3\left(\frac{1}{2}\right)^3\left(\frac{2}{3}\right)^2\frac{1}{3}+3\left(\frac{1}{2}\right)^3\left(\frac{1}{3}\right)^2\frac{2}{3}+\left(\frac{1}{2}\right)^3\left(\frac{1}{3}\right)^3=\frac{1}{8}$$
$$\frac{P(R_1\cap R_2\cap R_3)}{P(R_1\cap R_2)}=\frac{1}{2}$$
In this way, you get a consistent result compared to the first approach.
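The difference between the two computations above is exactly the difference between tossing the coin before every roll and tossing it once for all three rolls; a small simulation sketch (my own, not from any answer) recovers $$\frac{1}{2}$$ and $$\frac{3}{5}$$ respectively:
# estimate P(R3 | R1, R2) under the two designs discussed in the comments
set.seed(1)
p_red <- c(A = 4/6, B = 2/6)
n_sim <- 1e5
sim_turn_wise <- replicate(n_sim, {
  dice <- sample(c("A", "B"), 3, replace = TRUE)   # a new coin toss before every roll
  runif(3) < p_red[dice]                           # TRUE = red
})
sim_once <- replicate(n_sim, {
  die <- sample(c("A", "B"), 1)                    # one coin toss, same die rolled 3 times
  runif(3) < p_red[die]
})
cond_p <- function(sims) mean(sims[3, sims[1, ] & sims[2, ]])
cond_p(sim_turn_wise)   # close to 1/2
cond_p(sim_once)        # close to 3/5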
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/4072871/why-is-the-answer-to-this-probability-question-not-frac12",
"openwebmath_score": 0.9856621026992798,
"openwebmath_perplexity": 480.8887045995353,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9790357610169274,
"lm_q2_score": 0.8791467722591728,
"lm_q1q2_score": 0.8607161292243346
} |
# Number of bit strings of length four do not have two consecutive 1s
I came across following problem:
How many bit strings of length four do not have two consecutive 1s?
I solved it as follows:
Total number of bit strings of length: $$2^4$$
Total number of length 4 bit strings with 4 consecutive 1s: 1
Total positions for three consecutive 1s in length 4 bit string: 2 (111X, X111)
Number of bit strings for each of above positions: 2 (X can be 0 or 1)
Total positions for two consecutive 1s in length 4 bit string: 3 (11XX, X11X, XX11)
Number of bit strings for each of above positions: 4
By inclusion exlcusion principle, the desired count $$=2^4-3\times 4+2\times 2-1=16-12+4-1=7$$
However the correct solution turns out to be 8. It seems that I incorrectly applied inclusion exclusion principle. Where did I go wrong?
• You should be doing inclusion-exclusion on the number of pairs of consecutive ones, not on the length of a string of consecutive ones. – Gerry Myerson Sep 16 '19 at 6:27
If I were doing this by inclusion-exclusion, I'd go: $$16$$ strings of length four; $$12$$ with at least one pair of consecutive ones ($$11xx,x11x,xx11$$ with $$x$$s arbitrary); five with at least two pair of consecutive ones ($$111x,1111,x111$$); one with three pair of consecutive ones; so $$16-12+5-1=8$$.
To count the number of bit strings with $$2$$ consecutive one bits (bad strings), I would let \begin{align} S_1&=11xx&4\\ S_2&=x11x&4\\ S_3&=xx11&4\\ N_1&=&12 \end{align} Then \begin{align} S_1\cap S_2&=111x&2\\ S_1\cap S_3&=1111&1\\ S_2\cap S_3&=x111&2\\ N_2&=&5 \end{align} and \begin{aligned} S_1\cap S_2\cap S_3&=1111&1\\ N_3&=&1 \end{aligned} The count of bad strings is $$N_1-N_2+N_3=8$$.
The count of good strings is $$16-8=8$$.
Generating Functions
Let $$x$$ represent the atom '$$0$$' and $$x^2$$ represent the atom '$$10$$' and build all possible strings by concatenating one or more atoms and removing the rightmost '$$0$$'. \begin{align} \overbrace{\vphantom{\frac1x}\ \ \ \left[x^4\right]\ \ \ }^{\substack{\text{strings of}\\\text{length 4}}}\overbrace{\ \quad\frac1x\ \quad}^{\substack{\text{remove the}\\\text{rightmost '0'}}}\sum_{k=1}^\infty\overbrace{\vphantom{\frac1x}\left(x+x^2\right)^k}^\text{k atoms} &=\left[x^4\right]\frac{1+x}{1-x-x^2}\\ &=\left[x^4\right]\left(1+2x+3x^2+5x^3+8x^4+13x^5+\dots\right)\\[9pt] &=8 \end{align} Note that the denominator of $$1-x-x^2$$ induces the recurrence $$a_n=a_{n-1}+a_{n-2}$$ on the coefficients.
Recurrence
Good strings of length $$n$$ can be of two kinds: a good string of length $$n-1$$ followed by '$$0$$' or a good string of length $$n-2$$ followed by '$$01$$'. That is, $$a_n=a_{n-1}+a_{n-2}$$ Starting with
$$a_0=$$ the number of good strings of length $$0=1$$.
$$a_1=$$ the number of good strings of length $$1=2$$.
we get $$a_4=8$$.
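A brute-force check of both the count and the recurrence (a short R sketch of my own):
# enumerate all bit strings of length 4 and drop those containing "11"
strings <- do.call(expand.grid, rep(list(0:1), 4))
has_11 <- apply(strings, 1, function(s) any(s[-1] == 1 & s[-length(s)] == 1))
sum(!has_11)                                  # 8 good strings
# recurrence a_n = a_{n-1} + a_{n-2} with a_0 = 1, a_1 = 2 (a[k] holds a_{k-1})
a <- c(1, 2)
for (k in 3:5) a[k] <- a[k - 1] + a[k - 2]
a                                             # 1 2 3 5 8, so a_4 = 8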
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3358186/number-of-bit-strings-of-length-four-do-not-have-two-consecutive-1s",
"openwebmath_score": 0.929974377155304,
"openwebmath_perplexity": 965.1952862338449,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9790357579585025,
"lm_q2_score": 0.8791467738423873,
"lm_q1q2_score": 0.8607161280855538
} |
https://www.physicsforums.com/threads/when-is-a-function-bounded-using-differentiation.209189/ | # When is a function bounded using differentiation
1. Jan 16, 2008
### sara_87
1. The problem statement, all variables and given/known data
how do i determine whether a function is bounded using differentiation
eg: f(x)=x/(2^x)
2. Relevant equations
3. The attempt at a solution
i know it has something to do with maximums and minimums but i cant figure out how to do it.
any help would be appreciated. thank you
2. Jan 16, 2008
### Dick
You could look at the limits of the function as it approaches plus and minus infinity. If both exist and are finite, and if the function is defined and continuous for all x, then it is bounded.
3. Jan 16, 2008
### EnumaElish
You can use differentiation to investigate the behavior of f. Say, the function is f(x) = x/2^x on x > 0. Then f'(x) = 2^-x (1 - x Log[2]), which has roots 1/Log[2] and +infinity. At x = 1/Log[2], f''(x) = 2^-x Log[2] (-2 + x Log[2]) is < 0, so you have the maximum. Note that f(x) > 0 for x > 0 and f(0) = 0. As x --> +infinity, f(x) --> 0 from above; but f(0) = 0 so x = 0 is the minimum. Since you can "account for" both the maximum and the minimum, f is bounded on x > 0.
Last edited: Jan 16, 2008
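The calculus argument above can be checked by machine; here is a hypothetical SymPy sketch (purely illustrative) that locates the single critical point of $x/2^x$ on $x>0$ and confirms the finite limits at both ends.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = x / 2**x

# f'(x) = 2**(-x) * (1 - x*log(2)) vanishes only at x = 1/log(2)
critical_points = sp.solve(sp.diff(f, x), x)
print(critical_points)                              # [1/log(2)]
print(sp.simplify(f.subs(x, critical_points[0])))   # the maximum value, exp(-1)/log(2)

# finite limits at both ends of (0, oo), so f is bounded there
print(sp.limit(f, x, 0, '+'))                       # 0
print(sp.limit(f, x, sp.oo))                        # 0
```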
4. Jan 16, 2008
### sara_87
thank you very much.
what if we have:
f(x)=(-2x^2)/(4x^2-1)
i know that it's not bounded but i dont know why
5. Jan 16, 2008
### Tom Mattson
Staff Emeritus
The graph of that function has two vertical asymptotes. Functions don't get much more "unbounded" than that!
6. Jan 16, 2008
### sara_87
actually i think it is bounded because there's no value of x that would make that function greater than 1
or is there?
7. Jan 16, 2008
### Tom Mattson
Staff Emeritus
Sure there is. As I said, the graph of that function has 2 vertical asymptotes. You can find values of x for which the function blows up to infinity, and down to negative infinity.
Do you know what I mean when I say "vertical asymptote"?
8. Jan 16, 2008
### sara_87
yes i do know what vertical asymptotes are.
umm but i still didn't understand what you meant. you can find values of x for which the function blows down to -ve infinity but not up.
?
9. Jan 16, 2008
### Tom Mattson
Staff Emeritus
The function certainly does blow up to positive infinity, as you approach -1/2 from the right and as you approach +1/2 from the left.
10. Jan 16, 2008
### sara_87
oh thank u very much
that helps.
just one last question:
same question as before but with function:
sqrt(x)/1000
is it not bounded since n continues to increase to infinity?
11. Jan 17, 2008
### Tom Mattson
Staff Emeritus
You're right that it's not bounded (on $[0,\infty)$ that is--we really should be specifying an interval when making these statements).
But what's "n"?
12. Jan 17, 2008
### HallsofIvy
Staff Emeritus
Now, I'm confused as to what function you are talking about. The original function was f(x)= x/(2^x) which is definitely bounded on $[0, \infty)$. It is bounded "above" but not bounded "below" so is not bounded. I don't see any asymptotes when I graph it.
13. Jan 17, 2008
### Tom Mattson
Staff Emeritus
Posts 1 through 3 pertain to f(x)=x/(2^x).
Posts 4 through 9 pertain to f(x)=(-2x^2)/(4x^2-1).
Posts 10 and 11 pertain to f(x)=sqrt(x)/1000.
14. Jan 17, 2008
### EnumaElish
Post 3 pertains to f(x)=x/(2^x) for x > 0. | 2016-12-05T12:44:34 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/when-is-a-function-bounded-using-differentiation.209189/",
"openwebmath_score": 0.6157326102256775,
"openwebmath_perplexity": 1352.996698281987,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9808759671623987,
"lm_q2_score": 0.8774767954920547,
"lm_q1q2_score": 0.8606959004408314
} |
http://mathhelpforum.com/trigonometry/213130-integrate-cos-x-cos-4x-dx.html | # Math Help - Integrate cos(x)cos(4x) dx
1. ## Integrate cos(x)cos(4x) dx
I'm trying to integrate:
$\int\cos(x)\cos(4x) dx$
Could some one tell me what I should be seeing about this to help me solve it. I thought I needed to change the $4x$ into $x$ and tried doing that using the double angle formula, but I have to use it twice and end up with a integrand that looks too complicated. I'd like to know what a good first step would be.
Thank you.
2. ## Re: Integrate cos(x)cos(4x) dx
this is easy...but you have to use a formula that transforms the product cosxcos4x into a sum.
HERE cos(4x)cos(x)=(1/2)[cos(5x)-cos(3x)}
Minoas
3. ## Re: Integrate cos(x)cos(4x) dx
Originally Posted by Furyan
I'm trying to integrate:
$\int\cos(x)\cos(4x) dx$
Could some one tell me what I should be seeing about this to help me solve it. I thought I needed to change the $4x$ into $x$ and tried doing that using the double angle formula, but I have to use it twice and end up with a integrand that looks too complicated. I'd like to know what a good first step would be.
Thank you.
\displaystyle \begin{align*} I &= \int{\cos{(x)}\cos{(4x)}\,dx} \\ I &= \frac{1}{4}\cos{(x)}\sin{(4x)} - \int{ -\frac{1}{4}\sin{(x)}\sin{(4x)}\,dx } \\ I &= \frac{1}{4}\cos{(x)}\sin{(4x)} + \frac{1}{4}\int{\sin{(x)}\sin{(4x)}\,dx} \\ I &= \frac{1}{4}\cos{(x)}\sin{(4x)} + \frac{1}{4} \left[ -\frac{1}{4}\sin{(x)}\cos{(4x)} - \int{ -\frac{1}{4}\cos{(x)}\cos{(4x)} \,dx} \right] \\ I &= \frac{1}{4}\cos{(x)}\sin{(4x)} - \frac{1}{16}\sin{(x)}\cos{(4x)} + \frac{1}{16}I \\ \frac{15}{16}I &= \frac{1}{4}\cos{(x)}\sin{(4x)} - \frac{1}{16}\sin{(x)}\cos{(4x)} \\ I &= \frac{4}{15}\cos{(x)}\sin{(4x)} - \frac{1}{15}\sin{(x)}\cos{(4x)} + C \end{align*}
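For readers who want a quick machine check of this antiderivative, a hypothetical SymPy sketch: differentiating the by-parts answer should return the integrand, and the answer should agree with the product-to-sum form $\frac{\sin 3x}{6}+\frac{\sin 5x}{10}$ (variable names are illustrative only).

```python
import sympy as sp

x = sp.symbols('x')
by_parts = sp.Rational(4, 15)*sp.cos(x)*sp.sin(4*x) - sp.Rational(1, 15)*sp.sin(x)*sp.cos(4*x)
product_to_sum = sp.sin(3*x)/6 + sp.sin(5*x)/10

# both checks should print 0
print(sp.simplify(sp.diff(by_parts, x) - sp.cos(x)*sp.cos(4*x)))
print(sp.simplify(by_parts - product_to_sum))
```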
4. ## Re: Integrate cos(x)cos(4x) dx
Thank you Prove It,
I haven't seen that before, integrating by parts twice and then letting the second integrand be I and solving that way. That's a very useful way of solving difficult integration problems. Thank you very much for showing me that. I tried this method with a similar problem that I had solved another way and found that I had to be very careful with signs and factors. I kept getting the wrong answer because I was getting a sign and factor wrong during the second integration. So I'm going to have to work on it, but it's great to know that if I can't see what else to do I can try this method.
Thanks again for all your efforts
5. ## Re: Integrate cos(x)cos(4x) dx
Originally Posted by Furyan
I'm trying to integrate:
$\int\cos(x)\cos(4x) dx$
Could some one tell me what I should be seeing about this to help me solve it. I thought I needed to change the $4x$ into $x$ and tried doing that using the double angle formula, but I have to use it twice and end up with a integrand that looks too complicated. I'd like to know what a good first step would be.
Thank you.
Hi Furyan!
As you can see here, we have the identity:
$\cos \theta \cos \varphi = {{\cos(\theta - \varphi) + \cos(\theta + \varphi)} \over 2}$
In other words:
$\int\cos(4x)\cos(x) dx = \int{{\cos(4x - x) + \cos(4x + x)} \over 2} = \int{{\cos(3x) + \cos(5x)} \over 2}$
This is what Minoas already suggested (except for the typo ).
6. ## Re: Integrate cos(x)cos(4x) dx
Hi ILikeSerena
Thank you very much for that. I did, in fact, also solve this problem using the factor formula, as Minoanman suggested, and it was very much simpler. However, although there are only four factor formulae in my book I'm worried that I might not remember them in an exam. Although I found Prove It's method more difficult I feel I'm more likely to remember it and at least get some method marks, even if I make some mistakes.
I'm amazed how different the solutions look, depending on which method you use and yet they're equivalent. I solved one question using Prove It's method, the factor formula and a simple identity. All the solutions were equivalent, but they looked very different. I wouldn't have been able to rewrite any of them in terms of any of the others.
7. ## Re: Integrate cos(x)cos(4x) dx
Here's a trick that I like to use when I can't properly remember the formulas.
The only formula you need to remember is Euler's formula:
$e^{ix} = \cos x + i \sin x$
together with its rewritten forms (that you can both deduce from Euler's formula if you forget):
$\cos x = \dfrac 1 2 (e^{ix} + e^{-ix})$
$\sin x = \dfrac 1 {2i} (e^{ix} - e^{-ix})$
$\cos(4x)\cos(x) = \frac 1 2 (e^{i4x} + e^{-i4x}) \cdot \frac 1 2 (e^{ix} + e^{-ix})$
$= \frac 1 4 ((e^{i5x} + e^{-i5x}) + (e^{i3x} + e^{-i3x}))$
$= \frac 1 2 (\cos(5x) + \cos(3x)) \qquad \blacksquare$
8. ## Re: Integrate cos(x)cos(4x) dx
whoa! $\blacksquare$
That's a whole new level, but I'm liking it. I'm much better at deducing formulae than I am at remembering them. Euler's formula, eh! I'm sure I came across that earlier today, but I glazed over when I saw $e^{ix}$. Is that i, the i, $\sqrt{-1}$? I'm going to look that up and try and find out why it's so fundamental. It looks very interesting.
Thank you very much for that
9. ## Re: Integrate cos(x)cos(4x) dx
Originally Posted by Furyan
whoa!
That's a whole new level, but I'm liking it. I'm much better at deducing formulae than I am at remembering them. Euler's formula, eh! I'm sure I came across that earlier today, but I glazed over when I saw $e^{ix}$. Is that i, the i, $\sqrt{-1}$? I'm going to look that up and try and find out why it's so fundamental. It looks very interesting.
Thank you very much for that
Yep. It is that $i$.
According to wiki:
Euler's formula is ubiquitous in mathematics, physics, and engineering. The physicist Richard Feynman called the equation "our jewel" and "one of the most remarkable, almost astounding, formulas in all of mathematics."[2]
10. ## Re: Integrate cos(x)cos(4x) dx
Originally Posted by ILikeSerena
Yep. It is that $i$.
According to wiki:
Euler's formula is ubiquitous in mathematics, physics, and engineering. The physicist Richard Feynman called the equation "our jewel" and "one of the most remarkable, almost astounding, formulas in all of mathematics."[2]
Wow, it looks like a jewel. How exciting.
Thank you very much | 2015-03-27T20:58:05 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/trigonometry/213130-integrate-cos-x-cos-4x-dx.html",
"openwebmath_score": 0.9971106648445129,
"openwebmath_perplexity": 820.3832663129307,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9808759615719875,
"lm_q2_score": 0.8774767986961401,
"lm_q1q2_score": 0.8606958986781859
} |
https://math.stackexchange.com/questions/2402659/given-a-decreasing-function-s-t-int-0-infty-fx-dx-infty-prove-sum-n | # Given a decreasing function s.t. $\int_0^\infty f(x)\,dx<\infty,$ prove $\sum_{n=1}^\infty f(na)$ converges
Let $f\in C([0,\infty))$ be a decreasing function such that $\int_0^\infty f(x)\,dx$ converges.
Prove $\sum_{n=1}^\infty f(na)$ converges, $\forall a>0$
My attempt:
By the Cauchy criterion, there exists $M>0,$ such that for $t-1>M:$
$$f(t)=\int_{t-1}^t f(t) \, dx \leq \int_{t-1}^t f(x)\,dx\xrightarrow{t \to \infty} 0$$
Hence, $f$ is non-negative.
$f$ is decreasing $\implies f(na)\leq f(a), \forall a>0, n\in \mathbb{N}.$
By integral monotonicity and non-negativity of $f$:
$$\int_1^\infty f(nx)\,dx \leq \int_1^\infty f(x)\,dx \leq \int_0^\infty f(x)\,dx$$
Hence $\int_1^\infty f(nx)\,dx$ converges and therefore $\sum_{n=1}^\infty f(na)$ converges.
Is that correct? If so, why is continuity necessary ? Is there a simpler way to prove it?
Any help appreciated.
• $$\int_{(n-1)a}^{na}f(x)dx\geq af(na)$$ – MAN-MADE Aug 22 '17 at 18:33
Hint: The following thing you wrote is the key:
$$f(t)=\int_{t-1}^{t}f(t)dx \leq \int_{t-1}^{t}f(x)dx$$
After that, just pick $t=an$ and sum over all $n$.
PS: and no, the assumption on continuity is not needed. Just integrability for the well-definedness.
As I commented before:
Since $f$ is decreasing and continuous, $$\int_{(n-1)a}^{na}f(x)dx\geq [na-(n-1)a]\inf_{(n-1)a< x\leq na}\{f(x)\}=af(na)$$
Then $$\int_{0}^{\infty}f(x)dx=\sum_{n=1}^{\infty}\int_{(n-1)a}^{na}f(x)dx\geq a\sum_{n=1}^{\infty}f(na)$$
Hence, since $a\in\mathbb{R}^+,$ $\sum_{n=1}^{\infty}f(na)<\infty$
\begin{align} & af(a) + af(2a) + af(3a) +\cdots \\[10pt] \le {} & \int_0^a f + \int_a^{2a} f + \int_{2a}^{3a} f + \cdots <\infty. \end{align}
This works if $f \ge0$ everywhere. If $f<0$ somewhere, then an easy argument shows the integral does not converge.
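A quick numerical picture of the bound $a\sum_{n\ge 1} f(na)\le\int_0^\infty f(x)\,dx$, using the hypothetical choice $f(x)=1/(1+x)^2$, whose integral over $[0,\infty)$ equals $1$ (a minimal sketch, not part of any proof):

```python
f = lambda x: 1.0 / (1.0 + x) ** 2      # continuous, decreasing, integral over [0, oo) is 1
a = 0.5

partial = sum(f(n * a) for n in range(1, 200_000))
print(a * partial)                       # about 0.79, safely below the integral value 1.0
```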
This is the integral test for convergence of series. Do a change of variable $x \to ax$. Since the integral converges, so does the series $g(n)=f(na)$. | 2019-08-21T14:11:37 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2402659/given-a-decreasing-function-s-t-int-0-infty-fx-dx-infty-prove-sum-n",
"openwebmath_score": 0.9945245981216431,
"openwebmath_perplexity": 531.3235866032861,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9808759666033576,
"lm_q2_score": 0.8774767906859264,
"lm_q1q2_score": 0.8606958952360703
} |
https://math.stackexchange.com/questions/98409/integral-of-periodic-function-over-the-length-of-the-period-is-the-same-everywhe?noredirect=1 | # Integral of periodic function over the length of the period is the same everywhere
I am stuck on a question that involves the intergral of a periodic function. The question is phrased as follows:
Definition. A function is periodic with period $a$ if $f(x)=f(x+a)$ for all $x$.
Question. If $f$ is continuous and periodic with period $a$, then show that $$\int_{0}^{a}f(t)dt=\int_{b}^{b+a}f(t)dt$$ for all $b\in \mathbb{R}$.
I understand the equality, but I am having trouble showing that it is true for all $b$. I've tried writing it in different forms such as $F(a)=F(b+a)-F(b)$. This led me to the following, though I am not sure how this shows the equality is true for all $b$,
$$\int_{0}^{a}f(t)dt-\int_{b}^{b+a}f(t)dt=0$$ $$=F(a)-F(0)-F(b+a)-F(b)$$ $$=(F(b+a)-F(a))-F(b)$$ $$=\int_{a}^{b+a}f(t)dt-\int_{0}^{b+a}f(t)dt=0$$
So, this leaves me with
$$\int_{a}^{b+a}f(t)dt-\int_{0}^{b+a}f(t)dt=\int_{0}^{a}f(t)dt-\int_{b}^{b+a}f(t)dt$$
I feel I am close, and I've made myself a diagram of a sine function to visualize what each of the above integrals might describe, but the power to explain the above equality evades me.
• See here for some proofs. – t.b. Jan 12 '12 at 8:09
• (Voted to close as duplicate) Even though this says continuous and the other says integrable, the proofs are the same, i.e. every proof here would apply over there. – 6005 Jul 17 '16 at 16:25
Let $H(x)=\int_x^{x+a}f(t)\,dt$. Then $$\frac{dH}{dx}=f(x+a)-f(x)=0.$$ It follows that $H(x)$ is constant. In particular, $H(b)=H(0)$.
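A hypothetical numerical sanity check of the constancy of $H$: pick any continuous function with period $a$ and compare $H(b)$ for a few values of $b$ (the integrand and the values of $b$ below are arbitrary illustrative choices).

```python
import numpy as np
from scipy.integrate import quad

a = 2 * np.pi
f = lambda t: np.sin(t)**2 + np.cos(3*t)       # continuous with period 2*pi

H = lambda b: quad(f, b, b + a)[0]
print(H(0.0), H(1.7), H(-5.3))                  # all approximately pi
```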
We have $$\int_{0}^{a}f(t)\ dt+\int_{a}^{a+b}f(x)\ dx=\int_{0}^{b}f(y)\ dy+\int_{b}^{a+b}f(t)\ dt,$$ and setting $x=y-a$ turns the second integral into the third one.
No differentiation is needed:
Pick the unique integer $n$ such that $b\leqslant na\lt b+a$, decompose the integral of $f(t)$ over $t$ from $b$ to $b+a$ into the sum of the integrals from $b$ to $na$ and from $na$ to $b+a$, apply the changes of variable $t=x+(n-1)a$ in the former and $t=x+na$ in the latter, then the periodicity of $f$ implies that $f(x)=f(t)$, hence the result is the sum of the integrals of $f(x)$ over $x$ from $b-(n-1)a$ to $a$ and from $0$ to $b-(n-1)a$...
...Et voilà !
You have made various false steps in your four line block and should have ended up with $$\int_{a}^{b+a}f(t)dt-\int_{0}^{b}f(t)dt=0$$ but this does not take you much further forward.
Instead note that somewhere in the interval $[b, b+a]$ is an integer multiple of $a$, say $na$. Then using $f(t)=f(t+a)=f(t+na)$: $$\int_{b}^{b+a}f(t)dt = \int_{b}^{na}f(t)dt+\int_{na}^{b+a}f(t)dt = \int_{b+a}^{(n+1)a}f(t)dt+\int_{an}^{b+a}f(t)dt = \int_{na}^{(n+1)a}f(t)dt = \int_{0}^{a}f(t)dt.$$
\begin{align} \int_{b}^{a+b}f(x)\ dx&= \int_{a}^{a+b}f(x)\ dx +\int_{b}^{a}f(x)\ dx\\&\overset{y=x-a}{=} \color{red}{\int_{0}^{a}f(y+a)\ dx} +\int_{b}^{a}f(x)\ dx\\&\overset{periodic}{=} \color{red}{\int_{0}^{b}f(y)\ dx} +\int_{b}^{a}f(x)\ dx\\&=\int_0^af(x)\ dx. \end{align}
• why a down vote here – Guy Fsone Jan 18 '18 at 8:15
• The integral is obtained in the second line itself. You have done a calculation mistake. – Robin Feb 17 '18 at 8:56
• Furthermore, this plagiarizes simultaneously three other answers, posted six years before. – Did Feb 18 '18 at 15:23 | 2019-07-18T17:28:12 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/98409/integral-of-periodic-function-over-the-length-of-the-period-is-the-same-everywhe?noredirect=1",
"openwebmath_score": 0.9384185075759888,
"openwebmath_perplexity": 219.76854088854688,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9808759615719875,
"lm_q2_score": 0.877476784277755,
"lm_q1q2_score": 0.8606958845355384
} |
https://homework.cpm.org/category/CC/textbook/ccg/chapter/9/lesson/9.1.3/problem/9-35 | ### Home > CCG > Chapter 9 > Lesson 9.1.3 > Problem9-35
9-35.
Are $ΔEHF$ and $ΔFGE$ congruent? If so, explain how you know. If not, explain why not.
What do the markings on the figures mean?
What other parts do these triangles have in common?
Yes, they are congruent. What is the reason? | 2020-07-07T19:47:40 | {
"domain": "cpm.org",
"url": "https://homework.cpm.org/category/CC/textbook/ccg/chapter/9/lesson/9.1.3/problem/9-35",
"openwebmath_score": 0.3052932322025299,
"openwebmath_perplexity": 1769.8354963261709,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.9808759587767815,
"lm_q2_score": 0.8774767778695834,
"lm_q1q2_score": 0.8606958757971885
} |
https://mathematica.stackexchange.com/questions/138771/create-a-particular-x-y-list/138772 | # Create a particular {x,y} list [duplicate]
How is it possible to create a list of {x,y} points as the following one:
list={{1,1},{1,2},...,{1,100},{2,1},{2,2},...,{2,100},...,{100,1},{100,2},...,{100,100}}
Thanks
• Table[{i,j}, {i, 100}, {j, 100}]? What have you tried? Feb 27 '17 at 16:23
• Hi, I was using Range, but it doesn't work for 2D lists. However, your answer gives me some additional brackets that are unwanted, like this: {{{1,1},{1,2},...,{1,100},{2,1},{2,2},...,{2,100},...,{100,1},{100,2},...,{100,100}}}. I've found the way with Flatten[]. Thanks for your help! Feb 27 '17 at 16:45
• Not an exact duplicate but a more general topic. Let me know if you disagree with closing.
– Kuba
Feb 27 '17 at 16:46
• @Michele You need to Flatten the result; consider Flatten[Table[{i, j}, {i, 1, 100}, {j, 100}], 1]. More in general, also take a look at the proposed duplicate. Feb 27 '17 at 16:49
• Join @@ Array[List , {100, 100} ] Feb 27 '17 at 20:24
Tuples[Range[100], 2]
or using Table as mentioned in the comment
It is worth noting that the method relying on Tuples is 10-12 times faster than Table. | 2022-01-24T20:03:27 | {
"domain": "stackexchange.com",
"url": "https://mathematica.stackexchange.com/questions/138771/create-a-particular-x-y-list/138772",
"openwebmath_score": 0.2532903552055359,
"openwebmath_perplexity": 1966.799039697362,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9683812327313545,
"lm_q2_score": 0.8887587831798665,
"lm_q1q2_score": 0.8606573260565377
} |
http://perfectroc.com/post/dijkstras-algorithm/ | # Notes on Dijkstra's Algorithm
## Setup
We are given a directed graph $G=(V,E)$ with a designated start node $s$. Each edge $(u, v) \in E$ has a weight $w(u,v) \geq 0$ indicating its cost. Each vertex $v \in V$ has a predecessor attribute $v.\pi$ that is either another vertex or NIL.
## Goal
Find the shortest path from $s$ to every other vertex in the graph.
## Pseudocode
$Q$ is a min-priority queue of vertices, keyed by their $d$ values.
#### 1. DIJKSTRA$(G, s)$
INITIALIZE-SINGLE-SOURCE$(G, s)$
$S = \varnothing$
$Q = G.V$
while $Q \neq \varnothing$
$u =$ EXTRACT-MIN(Q)
$S = S \cup \{u\}$
for each vertex $v \in G.Adj[u]$
RELAX(u,v,w)
#### 2. INITIALIZE-SINGLE-SOURCE$(G,s)$
for each vertex $v \in G.V$
$v.d = \infty$
$v.\pi =$ NIL
$s.d = 0$
#### 3. RELAX$(u,v,w)$
if $v.d > u.d + w(u,v)$
$v.d = u.d + w(u,v)$
$v.\pi = u$
## Proof of correctness
We want to show that the algorithm terminates with $u.d$ = $\delta(s,u)$ for all $u \in V$, where $\delta(s,u)$ is the shortest path weight from $s$ to $u$, and is defined as
$\delta(s, u) = \begin{cases} min\{w(p):s\xrightarrow{\text{p}}u\} , & \quad \text{if there is a path from s to u}\\ \infty , & \quad \text{otherwise} \end{cases}$
$w(p)$ is the sum of weights of the path $p$
## We first show
At the start of each iteration of the while loop in Dijkstra’s algorithm, $v.d = \delta(s, v)$ for each vertex $v \in S$ (*)
1) Base Case:
$|S| = 0$, i.e. when $S = \varnothing$, the statement is vacuously true.
2) Inductive Step:
Suppose the claim holds when $|S| = k$ for some $k \geq 0$. Now, let $S$'s size grow to $k+1$ by adding vertex $v$. We want to show $v.d = \delta(s,v)$.
If $v=s$ then $s.d = \delta(s,s) =0$. If $v \neq s$, it must be that $S \neq \varnothing$. If there's no path from $s$ to $v$, then $v.d = \delta(s,v)= \infty$. If there is some path from $s$ to $v$, there must be a shortest path $p$ from $s$ to $v$.
Let us consider the first vertex $y$ on $p$ that is not in $S$, and let $x$ on $p$ be $y$'s predecessor. By the induction hypothesis, $x.d = \delta(s,x)$. Because edge $(x, y)$ was relaxed in the iteration when $x$ was added to $S$, by the convergence property (if $s\rightarrow u\rightarrow v$ is a shortest path in $G$ for some $u, v \in V$, and if $u.d = \delta(s,u)$ at any time prior to relaxing edge $(u,v)$, then $v.d = \delta(s,v)$ at all times afterwards), we have $y.d = \delta(s,y)$. Since $y$ is before $v$ on $p$ and all edge weights are nonnegative, we have $\delta(s,y) \leq \delta(s,v)$, and thus $y.d = \delta(s,y) \leq \delta(s,v)\leq v.d$.
Because both vertices $v$ and $y$ are in $V-S$ when $v$ is selected by the algorithm, $v.d \leq y.d$. Thus, $y.d = \delta(s,y) = \delta(s,v) = v.d$, and so $v.d = \delta(s,v)$, and so complete the proof of (*).
## Now
When the algorithm terminates, it means $Q = \varnothing$. Since the algorithm maintains the invariant that $Q = V - S$ (when initialization, $S = \varnothing$ and $Q$ contains all vertices of graph; for every time through while loop, a vertex is extracted from $Q$ and added to $S$, so the invariant is maintained), it means $S = V$, so $u.d = \delta(s,u)$ for all vertices $u \in V$
## Time Complexity
Using a priority queue implemented as a min-heap, the time to initialize the queue, setting all vertices' distances to $\infty$, is $O(V)$. Inside the while loop, we perform $V$ EXTRACT-MIN operations, each taking $O(\log V)$. Besides, we do at most $E$ updates of the distance keys of vertices in the queue, which cost $O(E\log V)$ in total. Hence the total running time is $O((E+V)\log V)$.
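To make this accounting concrete, here is a hypothetical Python sketch using a binary heap; it replaces DECREASE-KEY with lazy deletion (re-inserting a vertex and skipping stale heap entries), which gives the same $O((E+V)\log V)$-style bound. This is an illustrative sketch, not the project code shown below.

```python
import heapq

def dijkstra(adj, s):
    """adj maps u -> list of (v, w) with w >= 0; returns shortest-path distances from s."""
    dist = {u: float('inf') for u in adj}
    dist[s] = 0
    pq = [(0, s)]                              # min-priority queue keyed by tentative distance
    while pq:
        d, u = heapq.heappop(pq)               # EXTRACT-MIN, O(log V)
        if d > dist[u]:
            continue                           # stale entry left by lazy deletion
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:          # RELAX(u, v, w)
                dist[v] = dist[u] + w
                heapq.heappush(pq, (dist[v], v))   # at most E pushes overall
    return dist

graph = {'s': [('a', 2), ('b', 5)], 'a': [('b', 1)], 'b': []}
print(dijkstra(graph, 's'))                    # {'s': 0, 'a': 2, 'b': 3}
```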
## Java implementation
In the code below, vertices is the hashmap containing the mapping between GeographicPoint and MapNode. MapNode is made up of GeographicPoint and list of MapEdge. MapNodes in vertices are converted to DiNode in hashmap diPoints by using the method convertMap to facilitate implementation of the algorithm. The constructPath method will return the path found by the algorithm from start to goal.
See the full framework of this project here (incomplete). Note that the code may be ugly as I mainly pay attention to the algorithm itself.
public List<GeographicPoint> dijkstra(GeographicPoint start,
        GeographicPoint goal, Consumer<GeographicPoint> nodeSearched){
    if(!vertices.containsKey(start) || !vertices.containsKey(goal)){
        System.out.println("input location does not exist in graph");
        return null;    // nothing to search
    }
    HashMap<GeographicPoint, DiNode> diPoints = convertMap(vertices);
    HashMap<GeographicPoint, GeographicPoint> parentMap = new HashMap<>();
    Set<GeographicPoint> visitSet = new HashSet<>();
    PriorityQueue<DiNode> queue = new PriorityQueue<DiNode>();
    DiNode cur = diPoints.get(start);
    cur.sumDist = 0.0;
    queue.offer(cur);
    while(!queue.isEmpty()){
        cur = queue.poll();
        nodeSearched.accept(cur.mapNode.location);
        if(!visitSet.contains(cur.mapNode.location)){
            // mark the dequeued node as visited
            visitSet.add(cur.mapNode.location);
        }
        if(cur.mapNode.location.equals(goal)) {
            return constructPath(start, goal, parentMap);
        }
        for (MapEdge e : cur.mapNode.edges) {
            if(!visitSet.contains(e.end)){
                double nbSumDist = diPoints.get(e.end).sumDist;
                if(cur.sumDist + e.distance < nbSumDist){
                    // update cur as neighbor's parent in parent map
                    parentMap.put(e.end, cur.mapNode.location);
                    DiNode next = diPoints.get(e.end);
                    // update next neighbor's sum of distance
                    next.sumDist = cur.sumDist + e.distance;
                    // put neighbor with updated distance into priority queue
                    queue.offer(new DiNode(next.mapNode, next.sumDist));
                }
            }
        }
    }
    System.out.println("There is no such path");
    return null;    // goal was never reached
}
Above is one part of the back-end code of a project in UCSD’s Advanced Data Structure course. It utilizes the Google Map’s API, and can demonstrate route planning in a real world’s map as shown below. Note that the algorithm is a bit different from the one described above (instead of initializing the priority queue with inserting all vertices and later updating their sum of weighted distance, it enqueues the vertex after its distance is updated), but the general idea is the same.
## References
[1] T. Cormen, C. Stein, R. Rivest, and C. Leiserson. Introduction to Algorithms (3rd ed.). MIT press,2009.
[2] J. Kleinberg and E. Tardos. Algorithm Design. Addison-Wesley, 2005 | 2020-10-01T21:07:13 | {
"domain": "perfectroc.com",
"url": "http://perfectroc.com/post/dijkstras-algorithm/",
"openwebmath_score": 0.6448825597763062,
"openwebmath_perplexity": 987.3364295752979,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717460476701,
"lm_q2_score": 0.8723473846343394,
"lm_q1q2_score": 0.8606332824188186
} |
http://math.stackexchange.com/questions/81362/which-is-the-fastest-way-to-compute-sum-limits-i-110-frac10i-52i | # Which is the “fastest” way to compute $\sum \limits_{i=1}^{10} \frac{10i-5}{2^{i+2}}$?
I am looking for the "fastest" paper-pencil approach to compute $$\sum \limits_{i=1}^{10} \frac{10i-5}{2^{i+2}}$$
This is a quantitative aptitude problem and the correct/required answer is $3.75$
In addition, I am also interested to know how to derive a closed form for an arbitrary $n$ using mathematica I got $$\sum \limits_{i=1}^{n} \frac{10i-5}{2^{i+2}} = \frac{5 \times \left(3 \times 2^n-2 n-3\right)}{2^{n+2}}$$
Thanks,
-
Does someone feel like writing an "abstract duplicate" for this kind of question? I think I've seen several of them, but I don't know whether we have one that deals with this more generally and would serve as a good duplicate for all of them. – joriki Nov 12 '11 at 15:55
The actual correct answer to this question is $\approx 3.7219$. I do not know what "quantitative aptitude test" means, but would not 3.7 be a better answer than 3.75? – Aleksey Pichugin Nov 12 '11 at 16:36
We'll split the sum you're looking for up into
$$10 \sum_{i=1}^{10} {i \over 2^{i+2}} - 5 \sum_{i=1}^{10} {1 \over 2^{i+2}}.$$
Call this $10S_1 - 5S_2$.
We can write $$S_1 = \sum_{i=1}^{10} {i \over 2^{i+2}} = {1 \over 4} \sum_{i=1}^{10} {i \over 2^i}$$ and the result $\sum_{i=1}^\infty i/2^i = 2$ is well-known; thus $S_1 \approx 1/2$.
Similarly, $S_2 \approx \sum_{i=1}^\infty 1/2^{i+2} = 1/4$ by the usual sum of a geometric series.
So your sum is approximately $15/4$. In fact the infinite sum
$$\sum_{i=1}^\infty {10i-5 \over 2^{i+2}}$$
is exactly 15/4. We've left off some small positive terms so your sum is a bit less than $15/4$.
An exact form for the sum
As for getting an exact form for the sum: call it $f(n)$. Then we have
$$f(n) = {15 \over 4} - \left( 10 \sum_{i=n+1}^\infty {i \over 2^{i+2}} - 5 \sum_{i=n+1}^\infty {1 \over 2^{i+2}} \right).$$
Write this as $f(n) = 15/4 - g(n) + h(n)$.
$h(n)$ is easy -- it's the sum of a geometric series with its first term $5/2^{n+3}$ and ratio $1/2$, so $h(n) = 5/2^{n+2}$.
$g(n)$ is a bit harder. So that we don't have so many constants floating around, consider
$$G(n) = \sum_{i={n+1}}^\infty {i \over 2^i}$$
and you can see $g(n) = (5/2) G(n)$. Now you can write
$$G(n) = {(n+1) \over 2^{n+1}} + {{n+2} \over 2^{n+2}} + {{n+3} \over 2^{n+3}} + \cdots$$
and this is just
$$G(n) = \left( {n \over 2^{n+1}} + {n \over 2^{n+2}} + {n \over 2^{n+3}} + \cdots \right) + {1 \over 2^n} \left( {1 \over 2^1} + {2 \over 2^2} + {3 \over 2^3} + \cdots \right).$$
The first sum is a geometric series, summing to $n/2^n$; the second sum is $2$. Thus $G(n) = (n+2)/2^n$. Therefore you get
$$f(n) = {15 \over 4} - {5 \over 2} {(n+2) \over 2^n} + {5 \over 2^{n+2}}.$$
A sum that everybody should know, but lots of people don't
Finally, I used the result $\sum_{i=1}^\infty i/2^i$ twice here, both in getting the approximation and in getting the exact form. How can we prove that? One way is to write $${1 \over 2} + {2 \over 4} + {3 \over 8} + {4 \over 16} + \cdots$$ as a sum of one $1/2$, two $1/4$s, three $1/8$s, and so on; then regroup those terms as $$\left( {1 \over 2} + {1 \over 4} + {1 \over 8} + \cdots \right) + \left( {1 \over 4} + {1 \over 8} + {1 \over 16} + \cdots \right) + \left( {1 \over 8} + {1 \over 16} + {1 \over 32} + \cdots \right) + \cdots$$ Each pair of parentheses contains a geometric series; summing those gives $1 + 1/2 + 1/4 + 1/8 + \cdots$, another geometric series, which has sum $2$.
Alternatively, note that $${1 \over 1-z} = 1 + z + z^2 + z^3 + \cdots$$ and differentiating both sides gives $${1 \over (1-z)^2} = 1 + 2z + 3z^2 + \cdots$$. Multiply both sides by $z$ to get $${z \over (1-z)^2} = z + 2z^2 + 3z^3 + \cdots$$ and plug in $z = 1/2$ to get $${1/2 \over (1-1/2)^2} = {1 \over 2} + {2 \over 2^2} + {3 \over 2^3} + \cdots.$$
-
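A hypothetical SymPy check (purely illustrative) that the exact form $f(n)$ derived above agrees both with the direct sum and with the closed form quoted in the question:

```python
import sympy as sp

i, n = sp.symbols('i n', integer=True, positive=True)

direct = sp.summation((10*i - 5) / 2**(i + 2), (i, 1, 10))
exact_form = sp.Rational(15, 4) - sp.Rational(5, 2)*(n + 2)/2**n + 5/2**(n + 2)
closed_form = 5*(3*2**n - 2*n - 3) / 2**(n + 2)

print(direct, float(direct))                           # 15245/4096, about 3.7219
print(sp.simplify(exact_form.subs(n, 10) - direct))    # 0
print(sp.simplify(exact_form - closed_form))           # 0
```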
I would do it like this. Using $x \frac{\mathrm{d}}{\mathrm{d} x}\left( x^k \right) = k x^k$, and $\sum_{k=1}^n x^k = x \frac{x^n-1}{x-1}$. Then $$\begin{eqnarray} \sum_{k=1}^n \left( a k +b\right) x^k &=& \left( a x \frac{\mathrm{d}}{\mathrm{d} x} + b\right) \circ \sum_{k=1}^n x^k = \left( a x \frac{\mathrm{d}}{\mathrm{d} x} + b\right) \circ \left( x \frac{x^n-1}{x-1} \right) \\ &=& x \left( a x \frac{\mathrm{d}}{\mathrm{d} x} + a + b\right) \circ \left(\frac{x^n-1}{x-1} \right) \\ &=& x \left( (a+b) \frac{x^n-1}{x-1} + a x \frac{ n x^{n-1}(x-1) - (x^n-1) }{(x-1)^2} \right) \\ &=& x \left( (a+b) \frac{x^n-1}{x-1} + a x \frac{ (n-1) x^{n} - n x^{n-1} + 1 }{(x-1)^2} \right) \end{eqnarray}$$
Now applying this: $$\begin{eqnarray} \sum_{k=1}^n \frac{10 k -5}{2^{k+2}} &=& \frac{5}{4} \sum_{k=1}^n (2k-1)\left(\frac{1}{2}\right)^k \\ &=& \frac{5}{4} \frac{1}{2} \left( -(2^{1-n} - 2) + (n-1) 2^{2-n}- n 2^{3-n} + 4 \right) \\ &=& \frac{5}{4} \left( 3 - (2n+3) 2^{-n} \right) \end{eqnarray}$$
-
Thanks for doing this in general form. If you don't mind I'll use this question for duplicates in the future and point to this answer :-) – joriki Nov 13 '11 at 8:04
@joriki:Question is tagged as algebra-precalculus. – Quixotic Nov 18 '11 at 7:31
@MaX: I see. Thanks for pointing that out. Fortunately there are so many answers here, including N.S.' that shows an elementary way to sum $i/q^i$, that it will still be a good duplicate in many future circumstances. I wonder which answer you'll be accepting... – joriki Nov 18 '11 at 7:39
@joriki:You are welcome, I am accepting Michael Lugo's answer as it provides the fastest method to reach the solution. – Quixotic Nov 19 '11 at 14:17
You presented this beautifully. Thank you. – 000 Jul 14 '12 at 9:48
First note that $\displaystyle\frac{10i-5}{2^{i+2}} = \frac{5(2i-1)}{2^{i+2}}$
Secondly, using the sum of a geometric series you can show that $2^0 + 2^1 + 2^2 + ... + 2^k = 2^{k+1}-1$.
\displaystyle \begin{align*} \text{So } \sum \limits_{i=1}^{n} \frac{10i-5}{2^{i+2}} &= 5(\frac{2\times1-1}{2^{1+2}}+\frac{2\times2-1}{2^{2+2}}+\frac{2\times3-1}{2^{3+2}}+...+\frac{2\times n-1}{2^{n+2}})\\ &= 5(\frac{2^{n-1}(2\times1-1)+2^{n-2}(2\times2-1)+2^{n-3}(2\times3-1)+...+2^{0}(2\times n-1)}{2^{n+2}}) \\ &= 5(\frac{2(2^{n-1}\times1+2^{n-2}\times2+2^{n-3}\times3+...+2^{0}\times n)-(2^{n-1}+2^{n-2}+2^{n-3}+...+1)}{2^{n+2}})\\ &= 5(\frac{2(2^{n-1}\times1+2^{n-2}\times2+2^{n-3}\times3+...+2^{0}\times n)-(2^n-1)}{2^{n+2}})\\ &= 5(\frac{2((2^{n-1}+2^{n-2}+2^{n-3}+...+1)+(2^{n-2}+2^{n-3}+2^{n-4}+...+1)+...+(2+1)+(1))-(2^n-1)}{2^{n+2}})\\ &= 5(\frac{2((2^{n}-1)+(2^{n-1}-1)+(2^{n-2}-1)+...+(2^{2}-1)+(2-1))-(2^n-1)}{2^{n+2}})\\ &= 5(\frac{2((2^{n}+2^{n-1}+2^{n-2}+...+2) + (-1)\times n)-(2^n-1)}{2^{n+2}})\\ &= 5(\frac{2(2\times (2^n - 1))- n)-(2^n-1)}{2^{n+2}})\\ &= 5(\frac{4 \times 2^n - 4 - 2n -2^n + 1}{2^{n+2}})\\ &= 5(\frac{3 \times 2^n - 2n - 3}{2^{n+2}})\text{ which is the closed form you got from Mathematica}\end{align*}
-
The equations are a lot more readable in display style, which you get either by switching it on individually with \displaystyle or by making the entire formula a displayed one by enclosing it in double dollar signs. Also note that you can use the eqnarray and align environments to align consecutive equations with each other. – joriki Nov 13 '11 at 8:03
@joriki Thanks for the tips and I think my answer has been formatted correctly but I believe my 5th line is a tad too long? I had to trim is down a bit and it is still quite long... – Sp3000 Nov 13 '11 at 9:50
Yes, that's a problem actually, which I didn't foresee when I wrote my comment :-) You can alleviate it somewhat by not putting the "So" into the equation but on a line of its own. Also, though it looks nicer with a full equation in the first line, in this case you could break the line before the first equals sign; it will still look better with eqnarray since you can align the expression in the first line with the other expressions instead of with the equals signs. – joriki Nov 13 '11 at 10:03
The sum $S= \sum \limits_{i=1}^n \frac{1}{2^i}=1-\frac{1}{2^n}$ is geometric, thus easy to calculate.
Here is a simple elementary way of calculating
$$T=\sum_{i=1}^n \frac{i}{2^i} \,.$$:
$$T=\sum_{i=1}^n \frac{i}{2^i} =\frac{1}{2}+ \sum_{i=2}^n \frac{i}{2^i} =\frac{1}{2}-\frac{n+1}{2^{n+1}}+ \sum_{i=2}^{n+1} \frac{i}{2^i} \,.$$
Changing the index in the last sum yields:
$$T= \frac{2^n-n-1}{2^{n+1}}+\sum_{i=1}^{n} \frac{i+1}{2^{i+1}}=\frac{2^n-n-1}{2^{n+1}}+\sum_{i=1}^{n} \frac{i}{2^{i+1}}+\sum_{i=1}^{n} \frac{1}{2^{i+1}} \,.$$
Thus
$$T= \frac{2^n-n-1}{2^{n+1}}+\frac{1}{2}T+ \frac{1}{2}-\frac{1}{2^{n+1}}$$
Thus
$$\frac{1}{2}T=\frac{2^{n+1}-n-2}{2^{n+1}}\,.$$
Hence
$$\sum_{i=1}^n \frac{i}{2^i} =\frac{2^{n+1}-n-2}{2^{n}} \,.$$
-
$\sum \limits_{i=1}^{10} \frac{10i-5}{2^{i+2}} =\frac{10}{4} \sum_{i=1}^{10} \frac{i}{2^i}-\frac{5}{4}\sum_{i=1}^{10} \frac{1}{2^i}$
Second $\sum$ is obviously $1-\frac{1}{1024}$.
First $\sum$:
$\frac{1}{2} +$
$\frac{1}{4} + \frac{1}{4} +$
$\frac{1}{8} + \frac{1}{8} + \frac{1}{8} +$
$\vdots$
$\frac{1}{1024} + \frac{1}{1024} + \dots + \frac{1}{1024}$
Summing by columns, $(1-\frac{1}{1024})+(\frac{1}{2}-\frac{1}{1024})+\dots+(\frac{1}{512}-\frac{1}{1024})=(1+\frac{1}{2}+\dots+\frac{1}{512})-\frac{10}{1024}=\frac{1023}{512}-\frac{10}{1024}$
Rest is boring and easy.
- | 2015-01-26T12:53:58 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/81362/which-is-the-fastest-way-to-compute-sum-limits-i-110-frac10i-52i",
"openwebmath_score": 0.9958812594413757,
"openwebmath_perplexity": 578.1400014659729,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717448632122,
"lm_q2_score": 0.8723473829749844,
"lm_q1q2_score": 0.8606332797484871
} |
https://math.stackexchange.com/questions/1004341/ring-of-polynomials-as-a-module-over-symmetric-polynomials/1004579 | # Ring of polynomials as a module over symmetric polynomials
Consider the ring of polynomials $\mathbb{k} [x_1, x_2, \ldots , x_n]$ as a module over the ring of symmetric polynomials $\Lambda_{\mathbb{k}}$. Is $\mathbb{k} [x_1, x_2, \ldots , x_n]$ a free $\Lambda_{\mathbb{k}}$-module? Can you write down "good" generators explicitly? (I think that it has to be something very classical in representation theory).
Comment: My initial question was whether this module flat. But since all flat Noetherian modules over polynomial ring are free (correct me if it is wrong), it is the same question.
There is much more general question, which seems unlikely to have good answer. Let $G$ be a finite group and $V$ finite dimensional representation of G. Consider projection $p: V \rightarrow V/G$. When is $p$ flat?
• – anon Nov 3 '14 at 20:52
• Sorry. I really tried to find something before asking and did not manage. – quinque Nov 3 '14 at 21:56
Let $s_i= \sum x_1 \ldots x_i$ be the fundamental symmetric polynomials. We have a sequence of free extensions $$k[s_1, \ldots, s_n] \subset k[s_1, \ldots, s_n][x_1]\subset k[s_1, \ldots, s_n][x_1][x_2] \subset \cdots \\ \subset k[s_1,\ldots ,s_n] [x_1] \ldots [x_n] = k[x_1, \ldots ,x_n]$$ of degrees $n$, $n-1$, $\ldots$ ,$1$. At step $i$ the generators are $1, x_i, \ldots, x_i^{n-i}$. Therefore $$k[s_1, \ldots, s_n] \subset k[x_1, \ldots, x_n]$$ is free of degree $n!$ with generators $x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}$ with $0 \le a_i \le n-i$.
More generally for a finite reflection group of transformations $G$ acting on a vector space $V$ over a field $k$ of characteristic $0$ (to be safe) the algebra of invariants $k[V]^G$ is a polynomial algebra and $k[V]$ is a free $k[V]^G$ module of rank $|G|$ --see the answer of @stephen: .
• I think some more details are in order: why $1,x_1,\dots,x_1^{n-1}$ is a basis of the first ring extension? – user26857 Nov 3 '14 at 19:41
• And why don't satisfy any equation of degree $<n$? – user26857 Nov 3 '14 at 19:50
• @user26857: Another useful observation using just basic Galois theory is : $k(s_1, \ldots, s_n) \subset k(x_1, \ldots, x_n)$ is Galois with group $S_n$. – orangeskid Nov 3 '14 at 19:57
• Great answer! Just a remark: The basis orangeskid describes is different from the Schubert basis - geometrically, the images of the $x_i$ in the integral cohomology of the flag variety are the Chern classes of the $n$ tautological line bundles (and the iterated computation above has a geometric counteprart, by viewing the flat variety as an iterated projective bundle and using Leray-Hirsch at each step), while as I said above the Schubert polynomials correspond to Bruhat cells. – Hanno Nov 3 '14 at 20:02
• Regarding your last paragraph, it can certainly happen that in characteristic $p$ the ring of invariants is not polynomial even when the group is generated by reflection; on the other hand polynomials invariants is enough to imply that the group is generated by reflections in any characteristic. – Stephen Nov 3 '14 at 20:34
Yes, that's indeed a classical result of representation theory: ${\mathbb Z}[x_1,...,x_n]$ is graded free over ${\Lambda}_n$ of rank $n!$ (the graded rank is the quantum factorial $[n]_q!$), and a basis is given by Schubert polynomials defined in terms of divided difference operators.
See for example the original article of Demazure, in particular Theorem 6.2.
Passing to the quotient, one obtains the graded ring ${\mathbb Z}[x_1,...,x_n]/\langle\Lambda_n^+\rangle$ which is isomorphic to the integral cohomology ring of the flag variety of ${\mathbb C}^n$, and the ${\mathbb Z}$-basis of Schubert polynomials coincides with the basis of fundamental classes of Bruhat cells in the flag variety. This is explained in Fulton's book 'Young Tableaux', Section 10.4.
The answer by @orangeskid (+1) is the most classical and direct, and the answer by @Hanno connects your question to Schubert calculus and the geometry of flag varieties (+1). Hoping this isn't too self-promoting, you might also have a look at my paper Jack polynomials and the coinvariant ring of $G(r,p,n)$ (I worked in somewhat more generality)
http://tinyurl.com/l8u9wh5
where I showed that certain non-symmetric Jack polynomials give a basis as well (I was working with more general reflection groups and over the complex numbers, but a version should work over any field of characteristic $0$; I have not thought much about this in characteristic $p$). The point of my paper was really to connect the descent bases (yet another basis!) studied much earlier by Adriano Garsia and Dennis Stanton in the paper Group actions of Stanley-Reisner rings and invariants of permutation groups
http://tinyurl.com/lhhtney
to the representation theoretic structure of the coinvariant algebra as an irreducible module for the rational Cherednik algebra. Of course this structure becomes much more complicated in characteristic smaller than $n$; in particular the coinvariant algebra will in general not be irreducible as a module for the Cherednik algebra.
Towards your second question: by definition $p$ is flat if and only if $k[V]$ is a flat $k[V]^G$-module. This certainly holds if $k[V]$ is free over $k[V]^G$; a sufficient condition for this is given in Bourbaki, Theorem 1 of section 2 of Chapter 5 of Lie Groups and Lie algebras (page 110): in case the characteristic of $k$ does not divide the order of $G$, it suffices that $G$ be generated by reflections. This is false (in general) in characteristic dividing the order of the group. But see the paper Extending the coinvariant theorems of Chevalley, Shephard-Todd, Mitchell, and Springer by Broer, Reiner, Smith, and Webb available for instance on Peter Webb's homepage here
http://www.math.umn.edu/~webb/Publications/
for references and what can be said in this generality (this is an active area of research so you shouldn't expect to find a clean answer to your question).
Conversely, assuming $p$ is flat and examining the proof of the above theorem in Bourbaki, it follows that $k[V]$ is a free $k[V]^G$-module. Now Remark 2 to Theorem 4 (page 120) shows that $G$ is generated by reflections. So to sum up: if $p$ is flat then $G$ is generated by reflections; in characteristic not dividing the order of $G$ the converse holds.
• MO level, neat. +1! – orangeskid Nov 3 '14 at 20:00
• I have a problem about your links. Article by Webb and others is just about connected topics. They do not answer my question (or I just do not see it, then please explain). About Bourbaki book I have a deeper trouble. It sounds silly but I even can not find what is called theorem 1 in this section. Could you post here what it is about (proof is not necessary). – quinque Nov 3 '14 at 23:32
• @user167762, The statement is Theorem 1 on page 110 of my English edition---are you able to locate it now? I referenced the paper of of BRSW not because it answers your question, but because it is the most recent thing I can think of that deals with the subject in detail and I thought the references it contained might prove useful for you or other future readers. I will edit my answer a bit to give an indication of what I believe to be the state of the art. As far as I know, in positive characteristic there is no clean characterization of groups such that the projection is flat. – Stephen Nov 4 '14 at 12:26
Another perspective:
The ring extension $S=k[s_1,\dots,s_n]\subset k[x_1,\dots,x_n]=R$ is finite, and since $R$ is Cohen-Macaulay then it is free of rank say $m$. We thus have a Hironaka decomposition $R=\oplus_{i=1}^m S\eta_i$ for some homogeneous $\eta_i\in R$. Using this the Hilbert series of $R$ is $$H_R(t)=\sum_{i=1}^mH_S(t)t^{\deg\eta_i}=\sum_{i=1}^mt^{\deg\eta_i}/\prod_{i=1}^n(1-t^i).$$ On the other side, $H_R(t)=1/(1-t)^n$ and therefore $\sum_{i=1}^mt^{\deg\eta_i}=\prod_{i=1}^{n-1}(1+t+\cdots+t^i)$ which for $t=1$ gives $m=n!$.
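For small $n$ the graded count above can be checked by machine; a hypothetical SymPy sketch (names are mine) compares the quantum factorial $\prod_{i=1}^{n-1}(1+t+\cdots+t^i)$ with the ratio of Hilbert series $H_R(t)/H_S(t)=\prod_{i=1}^{n}(1-t^i)/(1-t)^n$ and evaluates it at $t=1$.

```python
import sympy as sp

t = sp.symbols('t')
n = 4

# quantum factorial [n]_t! = prod_{i=1}^{n-1} (1 + t + ... + t^i)
graded_rank = sp.Integer(1)
for i in range(1, n):
    graded_rank *= sum(t**j for j in range(i + 1))
graded_rank = sp.expand(graded_rank)

# ratio of Hilbert series: prod_{i=1}^{n} (1 - t^i) / (1 - t)^n
num = sp.Integer(1)
for i in range(1, n + 1):
    num *= (1 - t**i)
ratio = sp.cancel(num / (1 - t)**n)

print(graded_rank)                      # degrees of the n! = 24 basis elements
print(sp.expand(ratio - graded_rank))   # 0
print(graded_rank.subs(t, 1))           # 24
```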
• You used a theorem about Cohen-Macaulay. I do not know it. Could you please write it? – quinque Nov 3 '14 at 22:26 | 2020-04-08T16:16:02 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1004341/ring-of-polynomials-as-a-module-over-symmetric-polynomials/1004579",
"openwebmath_score": 0.8081624507904053,
"openwebmath_perplexity": 219.77270335062425,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9845754479181589,
"lm_q2_score": 0.8740772466456689,
"lm_q1q2_score": 0.8605949966312305
} |
http://math.stackexchange.com/questions/324048/equation-of-one-branch-of-a-hyperbola-in-general-position | # Equation of one branch of a hyperbola in general position
Given a generic expression of a conic:
$$Ax^2 + Bxy + Cy^2 + Dx + Ey + F=0$$
is there a way to write an expression for one of the branches as a function of the coefficients? I tried using the quadratic formula to get an expression for $y$: $$y=\frac{-(Bx+E)\pm \sqrt{(Bx+E)^2 - 4(C)(Ax^2 + Dx + F)}}{2C}$$
but this doesn't always work. Consider:
$$xy=1$$
Here, $A=0, B=1, C=0, D=0, E=0, F=0$, so $y=\frac{\cdot}{0}$, which isn't particularly helpful. In other cases it is not as bad, but still not what I'm looking for. E.g.
$$x^2 - y^2 - 1=0$$
Using the formula above, we get $$y=\pm \sqrt{x^2-1}$$ which seems nice, but $y=\sqrt{x^2-1}$ it is actually one half of each branch rather than one entire branch, as can be seen here.
http://www.wolframalpha.com/input/?i=plot%28x^2+-+y^2+-+1%3D0%29
http://www.wolframalpha.com/input/?i=plot%28y%3Dsqrt%28x^2-1%29%29
I am trying to draw one of these branches, so I need an ordered set of points along a predefined "grid" of either of the variables. Is it possible to do this?
-
Your equation doesn't work for all hyperbolas since you found it as a solution of quadratic equation. For $xy = 1$ it won't work since it's not quadratic equation. You can't solve $0x^2+2x+1=0$ as quadratic equation, since it's not. – Kaster Mar 7 '13 at 23:56
First of all, if you want to draw a portion of the hyperbola, you typically just need parametric equations of the form $x = x(t)$, $y = y(t)$; you don't necessarily need to get $y$ as a function of $x$.
But, ignoring that quibble, I'll try to answer the question you asked.
The key is to first eliminate the $xy$ term in the implicit equation. In effect, you do this by rotating the coordinate system. This is really the same idea as Will Jagy used in his answer, but it might be easier to understand if I don't mention eigenvectors.
Suppose we introduce a $uv$ cooordinate system that is rotated by an angle $\theta$ (counterclockwise) from the $xy$ one. Then we have
$$x = u\cos\theta - v\sin\theta \quad ; \quad y = u\sin\theta + v\cos\theta$$
You can plug these $x$ and $y$ expressions into your original equation, and you'll get
$$\bar A u^2 + \bar B uv + \bar C v^2 + \bar Du + \bar Ev + \bar F = 0$$
where
$$\bar A = A \cos^2\theta + B \sin \theta\cos\theta + C \sin^2\theta$$
$$\bar B = B \cos 2\theta - (A - C) \sin 2\theta$$
$$\bar C = A \sin^2\theta - B \sin\theta\cos\theta + C \cos^2\theta$$
and so on. Now we just have to cleverly choose $\theta$ so that the $uv$ term disappears. Clearly, we'll get what we want if we choose $\theta$ so that $\bar B = 0$, and this means
$$\tan2\theta = \frac{B}{A - C}$$
After using this technique, we can now assume that the equation has the form
$$a u^2 + c v^2 - 2adu - 2cev + f = 0$$
Note that I'm using $-2ad$ in place of $\bar D$ and $-2ce$ in place of $\bar E$, just to make the next step more convenient. And the next step is just some "complete the square" tricks. The equation can be written:
$$a (u^2 - 2du) + c (v^2 - 2ev) + f = 0$$
In other words:
$$a (u^2 - 2du + d^2) + c (v^2 - 2ev + e^2) + f - ad^2 - ce^2 = 0$$
which is
$$a (u - d)^2 + c (v - e)^2 = ad^2 + ce^2 - f$$
Now it should be clear that we have either an ellipse (if the signs of $a$ and $c$ are the same) or a hyperbola (if the signs are different). I'm ignoring the parabola case and several degenerate cases. The ellipse/hyperbola has its center at the point $(d,e)$ in the $uv$ coordinate system, and its axes of symmetry are parallel to the $u$ and $v$ axes.
At this point, if our curve turns out to be a hyperbola, you can use standard parametric equations (like the ones Wil Jagy gave) to trace out one branch or the other.
-
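A hypothetical NumPy sketch of this recipe — rotate to kill the $uv$ term, complete the square, then trace one branch — applied to the conic $x^2-y^2-1=0$ from the question. Variable names are mine, and the sketch assumes the rotated coefficients $a,c$ are nonzero and of opposite sign (i.e., a genuine hyperbola).

```python
import numpy as np

# general conic A x^2 + B x y + C y^2 + D x + E y + F = 0
A, B, C, D, E, F = 1.0, 0.0, -1.0, 0.0, 0.0, -1.0     # x^2 - y^2 - 1 = 0

theta = 0.5 * np.arctan2(B, A - C)                     # angle that removes the uv term
ct, st = np.cos(theta), np.sin(theta)

a = A*ct**2 + B*st*ct + C*st**2                        # u^2 coefficient
c = A*st**2 - B*st*ct + C*ct**2                        # v^2 coefficient
d = -(D*ct + E*st) / (2*a)                             # center (d, e) in the u-v frame
e = -(-D*st + E*ct) / (2*c)
rhs = a*d**2 + c*e**2 - F                              # a(u-d)^2 + c(v-e)^2 = rhs

t = np.linspace(-2, 2, 5)                              # one branch, assuming a*rhs > 0 > c*rhs
u = d + np.sqrt(rhs / a) * np.cosh(t)
v = e + np.sqrt(-rhs / c) * np.sinh(t)
x, y = u*ct - v*st, u*st + v*ct                        # back to the original frame

print(np.max(np.abs(A*x**2 + B*x*y + C*y**2 + D*x + E*y + F)))   # ~0
```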
I think you are going to have a more satisfactory experience doing this: find the center $\vec{P} = (x_0, y_0)$ of your hyperbola. Find the eigenvectors of the matrix $$\left( \begin{array}{cc} A & B/2 \\ B/2 & C \end{array} \right)$$ and normalize and choose order and $\pm$ so that with basis $\vec{u},\vec{v}$ you can then write your branch as $$g(t) = \vec{P} + \vec{u} \cosh t + \vec{v} \sinh t.$$ Note that while $\vec{u},\vec{v}$ are perpendicular to each other they are not particularly of any length, in the way I write it above. If you prefer an orthonormal basis you then just put one constant scalar multiplication in front of the $\cosh$ term and one in front of the $\sinh$ term. It will all work out if you actually do have a hyperbola, which happens when $B^2 > 4 A C.$
I did teach this about 20 years ago, a whole section on translations and rotations of conic sections for a engineering calculus course. The memories are dim, but it does appear that it is best to find an orthonormal basis for the above matrix first, then find the center expressed in that basis, and end up with constant scalar coefficients of the hyperbolic trig terms.
-
I posted an answer following your suggestion with an example. The only part I didn't follow was "and normalize and choose order and $\pm$ so that with the new basis you can then write your branch as...". What do you mean "choose order" and "choose $\pm$"? Also, how do you determine the values of the parameter that constitute each branch? – David Doria Mar 8 '13 at 13:31
Following Will Jagy's suggestion, here are some examples:
Example #1
Consider
$$x^2 - y^2 -1 = 0$$
($A=1$, $B=0$, $C=-1$, $D=0$, $E=0$, $F=-1$). From
$$p_c = \begin{pmatrix}x_c\\y_c\end{pmatrix} = \begin{pmatrix}\frac{BE-2CD}{4AC-B^2}\\ \frac{DB-2AE}{4AC-B^2}\end{pmatrix}$$
we have
$$p_c = \begin{pmatrix}0\\0\end{pmatrix}$$
Now, the eigenvalues/vectors of the matrix $$\begin{pmatrix}A & \frac{B}{2}\\\frac{B}{2} & C\end{pmatrix} = \begin{pmatrix}1 & 0 \\ 0 & -1 \end{pmatrix}$$ are $\lambda_1 = -1$, $\lambda_2 = 1$, $v_1 = \begin{pmatrix}0\\1\end{pmatrix}$, $v_2 = \begin{pmatrix}1\\0\end{pmatrix}$
From the parametric form
$$g(t)=p_c + v_1 cosh(t) + v_2 sinh(t)$$
we have
$$g(t) = \begin{pmatrix}0\\1\end{pmatrix} cosh(t) + \begin{pmatrix}1\\0\end{pmatrix} sinh(t)$$
which looks like
Now how would we draw the other branch? That is, how do you determine the values of the parameter that constitute each branch?
Example #2
Consider
$$7x^2 - 3y^2 - 25 = 0$$
($A=7$, $B=0$, $C=-3$, $D=0$, $E=0$, $F=-25$). From
$$p_c = \begin{pmatrix}x_c\\y_c\end{pmatrix} = \begin{pmatrix}\frac{BE-2CD}{4AC-B^2}\\ \frac{DB-2AE}{4AC-B^2}\end{pmatrix}$$
we have
$$p_c = \begin{pmatrix}0\\\frac{-25}{6}\end{pmatrix}$$
Now, the eigenvalues/vectors of the matrix $$\begin{pmatrix}A & \frac{B}{2}\\\frac{B}{2} & C\end{pmatrix} = \begin{pmatrix}7 & 0 \\ 0 & -3 \end{pmatrix}$$ are $\lambda_1 = -3$, $\lambda_2 = 7$, $v_1 = \begin{pmatrix}0\\1\end{pmatrix}$, $v_2 = \begin{pmatrix}1\\0\end{pmatrix}$
From the parametric form
$$g(t)=p_c + v_1 cosh(t) + v_2 sinh(t)$$
we have
$$g(t) = \begin{pmatrix}0\\ \frac{-25}{6}\end{pmatrix} + \begin{pmatrix}0\\1\end{pmatrix} cosh(t) + \begin{pmatrix}1\\0\end{pmatrix} sinh(t)$$
-
David, the other branch is $p_c -v_1 \cosh t + v_2 \sinh t$ – Will Jagy Mar 8 '13 at 20:51
David, please do another example, something along the lines of $x^2 - x y - y^2 =1,$ where the eigenvectors will not naturally come out length one, and if you choose an orthonormal basis you will need scalar multiples, $p_c + \alpha v_1 \cosh t + \beta v_2 \sinh t.$ Note that the bit about the scalar multipliers would apply even for, say, $7 x^2 - 3 y^2 = 25,$ where one branch is $$x = \frac{5}{\sqrt 7} \cosh t, y = \frac{5}{\sqrt 3} \sinh t.$$ – Will Jagy Mar 8 '13 at 21:04
@WillJagy How do you choose the eigenvector that goes with cosh vs sinh? Even in my Example #1, the hyperbola opens up/down (instead of left/right) if I use x=sinh(t) and y=cosh(t) as the eigenvectors they way I have written them imply (I guess I thought I had made a typo so I switched them (x=cosh(t) and y=sinh(t) when I produced that graph): wolframalpha.com/input/… – David Doria Mar 11 '13 at 12:04
@WillJagy I worked out the $7x^2 - 3y^2 - 25 = 0$ example in my answer. How did you get from where I am to the $\frac{5}{\sqrt{7}}$ and $\frac{5}{\sqrt{3}}$ coefficients? And did you just accidentally omit the $center + ...$ term? My comment above holds on this problem as well - how do you choose the eigenvector to associate with the cosh vs sinh term? – David Doria Mar 11 '13 at 12:11
The center is actually at $(0,0)$ because $D=0,E=0.$ If you choose unit length eigenvectors, then you expect scalar coefficients to be necessary. In particular, try your recipe at $t=0$ (after replacing the -25/6 by 0) and you will find the point does not lie on the actual hyperbola. How about if you try doing $xy = 1?$ You already know exactly how the graph looks. One test for correctness is, once you write $x = x_c + x_1 \cosh t + x_2 \sinh t,$ then $y = y_c + y_1 \cosh t + y_2 \sinh t,$ you must be able to plug those into the original equation and get truth. – Will Jagy Mar 11 '13 at 20:38 | 2014-07-30T13:37:53 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/324048/equation-of-one-branch-of-a-hyperbola-in-general-position",
"openwebmath_score": 0.9044473171234131,
"openwebmath_perplexity": 254.06414264158346,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9845754481444574,
"lm_q2_score": 0.8740772351648677,
"lm_q1q2_score": 0.8605949855253179
} |
https://math.stackexchange.com/questions/2260/proof-1234-cdotsn-fracn-timesn12?noredirect=1 | # Proof $1+2+3+4+\cdots+n = \frac{n\times(n+1)}2$
Apparently $$1+2+3+4+\ldots+n = \dfrac{n\times(n+1)}2$$.
How? What's the proof? Or maybe it is self apparent just looking at the above?
PS: This problem is known as "The sum of the first $$n$$ positive integers".
• This an important example of a finite integral, a good tutorial here: stanford.edu/~dgleich/publications/finite-calculus.pdf – anon Aug 12 '10 at 19:46
• Note that we do not define this sum to be (x+1)x/2, we prove it to be so. It is a very useful formula. You could check out en.wikipedia.org/wiki/Triangle_number, which unfortunately, does not prove the formula you cite. – Ross Millikan Apr 21 '11 at 22:14
• Although we already have three proofs of this here, I think it would be really interesting if we amassed more (perhaps excluding the typical intro-to-induction styled proof). – davidlowryduda Apr 22 '11 at 1:15
• It could be proved by mathematical induction – J. W. Tanner Sep 5 '19 at 1:10
• Not appropriate for an answer, but you've asked either a very easy or a very difficult question. If by "why" you mean, "Can I see a proof of this fact?" the question is fairly easy to answer. If by "why" you mean, "Why should this be true?" you've asked a very deep kind of question that mathematicians make entire careers out of. Sure it can be shown to be true, but how does it connect to other true results? Why is it a polynomial of degree $2$? What about the sum of $k^2$? And on and on... – Charles Hudgins Sep 5 '19 at 3:05
Let $$S = 1 + 2 + \ldots + (n-1) + n.$$ Write it backwards: $$S = n + (n-1) + \ldots + 2 + 1.$$ Add the two equations, term by term; each term is $$n+1,$$ so $$2S = (n+1) + (n+1) + \ldots + (n+1) = n(n+1).$$ Divide by 2: $$S = \frac{n(n+1)}{2}.$$
• Well, you lost me at "each term is n+1...". As far as I can see, if you add the two equations term by term it will be: n+n + (n-1)+(n-1) + ... + 2+2 + 1+1. How did you get (n+1) + (n+1) + ... + (n+1)? – b1_ Aug 12 '10 at 18:27
• I got it! It's 2S = (1+2+...+(n-1)+n) + (n+(n-1)+...+2+1) - so you write one backwards, then match up each term. 2S = (1+n) + (2+n-1)+...+(n-1+2)+n+1, and so 2S=(n+1)+(n+1)+...+(n+1)+(n+1) etc – b1_ Aug 12 '10 at 19:04
• This trick is usually attributed to Gauss (when he was a schoolboy... though it's unclear if the story is true or not). – Fixee Feb 27 '11 at 15:17
• This helped make it clear for me youtu.be/1wnIsgUivEQ – vexe Nov 27 '15 at 8:17
• @john this can be made rigorous using sigma notation. It's simply a change of index. – Brevan Ellefsen Jul 29 '18 at 1:34
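If you want to see the pairing argument run by machine, a tiny hypothetical check (the helper name is made up):

```python
def gauss_sum(n):
    """Pair 1..n with its reverse: each of the n pairs sums to n + 1."""
    forward = list(range(1, n + 1))
    pair_sums = [a + b for a, b in zip(forward, reversed(forward))]   # n copies of n + 1
    return sum(pair_sums) // 2

for n in (1, 5, 10, 100):
    assert gauss_sum(n) == n * (n + 1) // 2 == sum(range(1, n + 1))
print("all checks passed")
```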
My favourite proof is the one given here on MathOverflow. I'm copying the picture here for easy reference, but full credit goes to Mariano Suárez-Alvarez for this answer.
Takes a little bit of looking at it to see what's going on, but it's nice once you get it. Observe that if there are n rows of yellow discs, then:
1. there are a total of 1 + 2 + ... + n yellow discs;
2. every yellow disc corresponds to a unique pair of blue discs, and vice versa;
3. there are ${n+1 \choose 2} = \frac 12 n(n+1)$ such pairs.
• great proof!!!! – anon Aug 12 '10 at 21:22
• Maybe I lack imagination, but to me it's clearer to just make a square of yellow discs ($n^2$ of them), duplicate the diagonal ($n^2+n$), then cut in half to get the answer ($(n^2+n)/2$). This is how Knuth does this (and much more intricate) summations in "Concrete Mathematics." – Fixee Feb 27 '11 at 15:27
• @Fixee: I don't know why you're comparing them; this is an entirely different proof. Unlike the other proof (also good), this doesn't require computing areas, cutting, or duplicating -- in fact this doesn't even involve the number $n(n+1)/2$ directly; what this gives is a proof that $1 + 2 + \dots + n = \binom{n+1}{2}$, and it so happens that the latter is $n(n+1)/2$. It's a bijection proof, rather than an area proof (vaguely speaking). – ShreevatsaR Nov 25 '11 at 6:13
• @Vaughn Climenhaga - is there an analog of this in three dimensions and higher? – Vincent Tjeng Apr 1 '13 at 9:43
• Wow. That is simply amazing. I didn't get it at first glance and then it hit me. Ingenious. – Karl Feb 3 '16 at 20:30
What a big sum! This is one of those questions that have dozens of proofs because of their utility and instructional use. I present my two favorite proofs: one because of its simplicity, and one because I came up with it on my own (that is, before seeing others do it - it's known).
The first involves the above picture:
In short, note that we want to know how many boxes are in the outlined region, as the first column has 1 box, the second 2, and so on (1 + 2 + ... + n). One way to count this quickly is to take another copy of this section and attach it below, making a $n*(n+1)$ box that has exactly twice as many squares as we actually want. But there are $n*(n+1)$ little squares in this area, so our sum is half that: $$1 + 2 + ... + n = \dfrac{n(n+1)}{2}.$$
Second proof, same as the first but a little bit harder and a little bit worse:
Let us take for granted the finite geometric sum $1 + x + x^2 + ... + x^n = \dfrac{x^{n+1} - 1}{x-1}$ (If you are unfamiliar with this, comment and I'll direct you to a proof). This is a polynomial - so let's differentiate it. We get $$1 + 2x + 3x^2 + ... + nx^{n-1} = \dfrac{ (n+1)x^n (x-1) - x^{n+1} + 1}{ (x-1)^2 }$$ Taking the limit as x approaches 1, we get
$$\lim_{x \to 1} \dfrac{ (n+1)x^n (x-1) - x^{n+1} + 1}{ (x-1)^2} = \dfrac{ (n+1) [ (n+1)x^n - nx^{n-1} ] - (n+1)x^n }{2(x-1)} =$$ $$\lim_{x \to 1} \dfrac{ (n+1)[(n+1)(n)x^{n-1} - n(n-1)x^{n-2}] - (n+1)(n)x^{n-1} } {2}$$
where we used two applications of l'Hopital above. This limit exists, and plugging in x = 1 we see that we get $$\dfrac{1}{2} * (n+1)(n) [ (n+1) - (n-1) - 1] = \dfrac{ (n)(n+1)}{2}.$$
And that concludes the second proof.
• I came up with the first proof by myself a long time ago :). It was an awesome moment where the concept of "area" was redefined. – Jacob Apr 22 '11 at 1:50
• @mixedmath A bit overkill, but nice anyways. (+1) – Franklin Pezzuti Dyer Jan 20 '18 at 15:40
How many ways are there to choose a $2$-element subset out of an $n$-element set?
On the one hand, you can choose the first element of the set in $n$ ways, then the second element of the set in $n-1$ ways, then divide by $2$ because it doesn't matter which you choose first and which you choose second. This gives $\frac{n(n-1)}{2}$ ways.
On the other hand, suppose the $n$ elements are $1, 2, 3, ... n$, and suppose the larger of the two elements you choose is $j$. Then for every $j$ between $2$ and $n$ there are $j-1$ possible choices of the smaller of the two elements, which can be any of $1, 2, ... j-1$. This gives $1 + 2 + ... + (n-1)$ ways.
Since the two expressions above count the same thing, they must be equal. This is known as the principle of double counting, and it is one of a combinatorialist's favorite weapons. A generalization of this argument allows one to deduce the sum of the first $n$ squares, cubes, fourth powers...
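The double count is easy to verify mechanically; a small Python sketch (illustrative only):

```python
from itertools import combinations

# Count the 2-element subsets of {1, ..., n} in two ways:
#   (i)  directly: there are n(n-1)/2 of them;
#   (ii) by largest element j: j-1 choices of the smaller element, summed over j.
n = 12
direct = len(list(combinations(range(1, n + 1), 2)))
by_largest_element = sum(j - 1 for j in range(2, n + 1))
assert direct == by_largest_element == n * (n - 1) // 2
```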
Gathering as many proofs as we can? Write the series recursively:
$$S(n) = S(n - 1) + n \tag{1}$$
Substitute $n \to n+1$ :
$$S(n + 1) = S(n) + n + 1\tag{2}$$
Equation (2) subtract Equation (1):
$$S(n+1) - S(n) = S(n) + 1 - S(n - 1) \tag{3}$$
And write it up:
$$\begin{cases} S(n+1) &= 2S(n) -S(n-1) + 1 \\ S(n) &= S(n) \end{cases} \tag{4}$$
Which can now be written in matrix form:
$$\begin{bmatrix} S(n+1) \\ S(n) \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} S(n) \\ S(n-1) \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} \tag{5}$$
And then converting the affine equation (5) to the linear equation (6):
$$\begin{bmatrix} S(n+1) \\ S(n+0) \\ 1\end{bmatrix} = \begin{bmatrix} 2 & -1 & 1 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{bmatrix} \begin{bmatrix} S(n) \\ S(n-1) \\ 1\end{bmatrix} \tag{6}$$
And closing the equation:
$$\begin{bmatrix} S(n+1) \\ S(n) \\ 1\end{bmatrix} = \begin{bmatrix} 2 & -1 & 1 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{bmatrix}^n \begin{bmatrix} S(1) \\ S(0) \\ 1\end{bmatrix} \tag{6}$$
Then finding the Jordan form of the 3x3 matrix:
\begin{align} \begin{bmatrix} S(n+1) \\ S(n) \\ 1\end{bmatrix} &= \left(\begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{bmatrix} \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1\end{bmatrix} \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{bmatrix}^{-1}\right)^n \begin{bmatrix} S(1) \\ S(0) \\ 1\end{bmatrix} \\ &= \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{bmatrix} \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1\end{bmatrix}^n \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{bmatrix}^{-1} \begin{bmatrix} S(1) \\ S(0) \\ 1\end{bmatrix} \\ &= \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{bmatrix} \begin{bmatrix} 1 & \binom{n}{1} & \binom{n}{2} \\ 0 & 1 & \binom{n}{1} \\ 0 & 0 & 1\end{bmatrix} \begin{bmatrix} 0 & 1 & 0 \\ 1 & -1 & 0 \\ 0 & 0 & 1\end{bmatrix} \begin{bmatrix} S(1) \\ S(0) \\ 1\end{bmatrix} \end{align} \tag{7}
And multiplying the matrices out:
$$\begin{bmatrix} S(n+1) \\ S(n) \\ 1\end{bmatrix} = \begin{bmatrix} \frac{ (2n+2)S(1) - 2nS(0) + {n}^{2} + n}{2} \\ \frac{ 2nS(1) + (2 - 2n)S(0)+{n}^{2}-n}{2} \\ 1 \end{bmatrix} \tag{8}$$
And given that $S(0) = 0$ and $S(1) = 1$, we get that:
$$S(n) = \frac{n^2 + n}{2} \tag{9}$$
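A short Python sketch of this matrix recurrence (no libraries; the helper names are mine):

```python
# Raise the 3x3 matrix of equation (6) to the n-th power and read off S(n)
# from [S(n+1), S(n), 1]^T = M^n [S(1), S(0), 1]^T with S(1) = 1, S(0) = 0.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_pow(M, n):
    R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # identity
    for _ in range(n):
        R = mat_mul(R, M)
    return R

M = [[2, -1, 1],
     [1,  0, 0],
     [0,  0, 1]]

for n in range(30):
    Mn = mat_pow(M, n)
    S_n = Mn[1][0] * 1 + Mn[1][1] * 0 + Mn[1][2] * 1
    assert S_n == n * (n + 1) // 2
```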
For example, $$X = 1+2+3+4+5+6$$ Then twice $X$ is $$2X = (1+2+3+4+5+6) + (1+2+3+4+5+6)$$ which we can rearrange as $$2X = (1+2+3+4+5+6) + (6+5+4+3+2+1)$$ and add term by term to get $$2X = (1+6)+(2+5)+(3+4)+(4+3)+(5+2)+(6+1)$$ to get $$2X = 7+7+7+7+7+7 = 6*7 = 42$$
• Legend has it that Gauss used this method to find the sum of all numbers from 1 to 100 without actually summing them. – lhf Apr 21 '11 at 22:43
• In class, I think. A teacher set him in a corner because he was misbehaving (apparently having already finished that day's work), and told him to add these numbers. Almost immediately, Gauss went to play because he was done - and when the teacher got angry it took her nearly 10 minutes to verify that his answer was correct! Or at least, that's how the legend I heard went. – davidlowryduda Apr 21 '11 at 22:46
• @mixedmath, Gauss's Day of Reckoning, which was cited in wikipedia, discusses the legend. – lhf Apr 21 '11 at 23:01
• @mixedmath: I heard a different version - teacher asked all the kids to add the numbers from 1 to 1,000, because he (late 18th century Germany, therefore "he") needed some time to do some work. And CFG had the answer within 30 seconds... – gnasher729 Dec 22 '16 at 16:56
• Nobody should take nearly ten minutes. When you add up the single digits, you should notice that you are ten times adding 1+2+3+4+5+6+7+8+9 = 45. And the tens, you have ten 10s, ten 20s, ten 30s, ten 90s. Plus a hundred. All in all, should be done in a minute. – gnasher729 Dec 22 '16 at 16:59
My favourite proof of this fact involves counting the edges of the complete graph $K_n$ in two different ways.
On the one hand, any vertex, $v_1$ say, is connected to $n-1$ other vertices, thus contributing $n-1$ edges. Moving clockwise, the next vertex $v_2$ contributes $n-2$ edges (not counting the edge connecting $v_1$ and $v_2$), $v_3$ adds $n-3$ edges, ... , $v_{n-1}$ contributes 1 edge and $v_n$ adds no new edges.
Thus the total number of edges in the complete graph $K_n$ is:
$$E = \sum\limits_{i=1}^{n-1} i$$
But clearly, any edge connects two vertices, so the number of edges is the number of ways to choose two distinct elements from the set $\{1,...,n\}$ and hence:
$$E = \sum\limits_{i=1}^{n-1} i = \binom{n}{2} = \frac{n(n-1)}{2}$$
• You can easely explain this proof without graph theory... $n+1$ people went to a party, and everyone shakes hands with everyone else... How many handshakes were there? :) – N. S. Sep 29 '12 at 14:03
Once you have a formula like this, you can prove it by induction. But that begs the question as to how you get such a formula. In this case you might ask: (a) what's the "average" term? and (b) how many terms are there?
Draw a triangular pyramid of base $n+1$. We get a unique coordinate for any of the $\sum_{i=1}^ni$ elements of the pyramid not in the bottom row by choosing two of the elements in the bottom row of $n+1$. This gives a bijection from the $\binom{n+1}{2}$ coordinate pairs to the $\sum_{i=1}^ni$ elements of the pyramid not in the bottom row.
Image from Mariano Suárez-Alvarez's answer on Math Overflow:
• @Mixedmath: No he means something quite different. A picture can be found in this MO-topic mathoverflow.net/questions/8846/proofs-without-words/8847#8847 – Myself Apr 22 '11 at 0:39
• This is a beautiful proof and doesn't seem to be getting enough appreciation, so I added the image from the MO answer to make it clearer. Feel free to remove the image if you don't want it. – ShreevatsaR Nov 25 '11 at 5:55
• @ShreevatsaR: That image is to powerful! – String Sep 25 '14 at 10:22
Basically same proof as Yoyo's, just purely combinatorial (no picture needed):
How many ways can we chose two distinct numbers between $1$ and $n+1$?
We pick first the largest, which is of the form $i+1$ for some $1 \leq i \leq n$, and then we have exactly $i$ distinct choices for the smallest one.
Thus we have $\sum_{i=1}^n i$ choices.
Here is another idea:
Using $(i+1)^2-i^2=2i+1$ we get a telescopic sum:
$$\sum_{i=1}^n 2i+1 = \sum_{i=1}^n (i+1)^2-i^2 = (n+1)^2-1=n^2+2n \,.$$
Then
$$n^2+2n= 2\left[\sum_{i=1}^n i\right] +n \,.$$
• Note that the second idea is useful for generalization in that we can find the sums of any powers if we know the sums of the lower powers. It can take a lot of work but using this method, you would be able to find $1^{10} + 2^{10} + \dots + n^{10}$ without too much work... – quasicompactscheme Sep 18 '11 at 15:07
If you knew of the geometric series, you would know that
$$\frac{1-r^{n+1}}{1-r}=1+r+r^2+r^3+\dots+r^n$$
If we differentiate both sides, we have
$$\frac{nr^{n+1}-(n+1)r^{n}+1}{(1-r)^2}=1+2r+3r^2+\dots+nr^{n-1}$$
Letting $r\to1$ and applying L'Hospital's rule on the fraction, we end up with
$$\frac{n(n+1)}2=1+2+3+\dots+n$$
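The same manipulation can be checked symbolically for a concrete $n$ (a sketch assuming SymPy is available):

```python
from sympy import symbols, Rational, diff, limit

r = symbols('r')
n = 10                                       # a concrete value, just to illustrate

geometric_sum = (1 - r**(n + 1)) / (1 - r)   # equals 1 + r + ... + r**n
derivative = diff(geometric_sum, r)          # equals 1 + 2r + ... + n*r**(n-1)

# Letting r -> 1 should give 1 + 2 + ... + n = n(n+1)/2
assert limit(derivative, r, 1) == Rational(n * (n + 1), 2)
```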
HINT Pair each summand $$k$$ with its "reflection" $$n+1-k$$. This is simply a discrete analog of the method of computing the area of the triangle under the diagonal of a square by reflecting a subtriangle through the midpoint of the diagonal to form an $$n$$ by $$n/2$$ rectangle.
Like the analogous proof of Wilson's theorem, the method exploits the existence of a nontrivial symmetry. In Wilson's theorem we exploit the symmetry $$n \mapsto n^{-1}$$ which exists due to the fact that $${\mathbb F}_p^*$$ forms a group. Here we exploit a reflection through a line - a symmetry that exists due to the linear nature of the problem (which doesn't work for nonlinear sums, e.g. $$\sum n^2$$). Symmetries often lead to elegant proofs. One should always look for innate symmetries when first pondering problems.
Generally there are (Galois) theories and algorithms for summation in closed form, in analogy to the differential case (Ritt, Kolchin, Risch et al.). A very nice motivated introduction can be found in the introductory chapter of Carsten Schneider's thesis Symbolic Summation in Difference Fields.
Here is an easy way to visualize it:
Draw a rectangular grid with a height of $n$ squares and width of $n+1$ squares. Obviously it has $n(n+1)$ squares in it.
In the first row, color the leftmost square red and the other $n$ squares blue; in the second row, color the leftmost $2$ squares red and the other $n-1$ squares blue; and so forth (in the last row, there will be $n$ red squares and one blue square). Clearly, there are $\sum_{i=1}^n i$ red squares and the same number of blue squares.
Adding the red and blue squares together, we get $2 \sum_{i=1}^n i = n(n+1)$, or $\sum_{i=1}^n i = n(n+1)/2$.
Let us denote the sum as $S_1(n)$. This function must be a second degree polynomial in $n$ because the first order difference $S_1(n)-S_1(n-1)=n$ is a linear polynomial in $n$. So it suffices to construct the Lagrangian polynomial by three known points, let $(0,0), (1,1), (2,3)$.
$$S_1(n)=0\frac{(n-1)(n-2)}{(0-1)(0-2)}+1\frac{(n-0)(n-2)}{(1-0)(1-2)}+3\frac{(n-0)(n-1)}{(2-0)(2-1)}=\frac{n(n+1)}2.$$
The most general case of this is called an arithmetic progression or (finite) arithmetic series. There are many, many, many proofs. An easy one: write all the summands in a row; write them again just below, but from right to left now (so $1$ is under $n$, $2$ is under $n-1$, etc). Add them up, and figure out how it relates to the quantity you are looking for.
• Yep, that's it, write one backwards. I read this post but it didn't click - I guess I needed to write it out to see it. Thx – b1_ Aug 12 '10 at 19:43
Here are two ways to calculate this sum. First is by symmetry of another sum:
\begin{aligned} \displaystyle & \sum_{0 \le k \le n}k^2 = \sum_{0 \le k \le n}(n-k)^2 = n^2\sum_{0 \le k \le n}-2n\sum_{0 \le k \le n}k+\sum_{0 \le k \le n}k^2 \\& \implies 2n\sum_{0 \le k \le n}k = n^2(n+1) \implies \sum_{0 \le k \le n}k = \frac{1}{2}n(n+1).\end{aligned}
The second is writing it as double sum and switching the order of summation:
\begin{aligned}\displaystyle & \begin{aligned}\sum_{1 \le k \le n}k & = \sum_{1 \le k \le n}~\sum_{1 \le r \le k} = \sum_{1 \le r \le n} ~\sum_{r \le k \le n} = \sum_{1 \le r \le n}\bigg(\sum_{1 \le k \le n}-\sum_{1 \le k \le r-1}\bigg) \\& =\sum_{1 \le r \le n}\bigg(n-r+1\bigg) = n\sum_{1 \le k \le n}-\sum_{1 \le k \le n}k+\sum_{1 \le k \le n}\end{aligned} \\& \implies 2\sum_{1 \le k \le n}k = n^2+n \implies \sum_{1 \le k \le n}k = \frac{1}{2}n(n+1), ~ \mathbb{Q. E. D.} \end{aligned}
Note I started using k back on the third line for convenience because r is just a dummy variable at this point, and our sum no longer depends on k. Note that the first trick can easily be generalised:
\begin{aligned} & \hspace{0.5in}\begin{aligned}\displaystyle \sum_{0 \le k \le n}k^{2p} &= \sum_{0 \le k \le n}(n-k)^{2p} \\& = \sum_{0 \le k \le n}~\sum_{0 \le r \le 2p}\binom{2p}{r}n^r(-1)^{2p-r}k^{2p-r}\\& = \sum_{0 \le k \le n}k^{2p}-2pn\sum_{0 \le k \le n}k^{2p-1}+\sum_{0 \le k \le n}~\sum_{2 \le r \le 2p}\binom{2p}{r}n^r(-1)^{2p-r}k^{2p-r} \end{aligned} \\& \implies \sum_{0 \le k \le n}k^{2p-1} = \frac{1}{2pn}\sum_{0 \le k \le n}~\sum_{2 \le r \le 2p}\binom{2p}{r}n^r(-1)^{2p-r}k^{2p-r}. \end{aligned}
You can take the power series
$$f(x)=\sum_{n=0}^\infty\left(\sum_{j=0}^{n}j\right)x^n$$
and you can check that it has a positive radius of convergence, and by rearranging the order of summation you can deduce that
$$f(x)=\frac{x}{(1-x)^3}.$$
On the other hand the taylor series for $\frac{x}{(1-x)^3}$ is precisely
$$\frac{x}{(1-x)^3}=\sum_{n=0}^\infty \frac{n(n+1)}{2}x^n$$
so $$\sum_{j=0}^nj=\frac{n(n+1)}{2}.$$
$\dfrac{k+1}2-\dfrac{k-1}2=1 \implies\dfrac{k(k+1)}2-\dfrac{(k-1)k}2=k$
$\implies \sum_{k=1}^n\Bigg(\dfrac{k(k+1)}2-\dfrac{(k-1)k}2\Bigg)=\sum_{k=1}^nk\implies\dfrac{n(n+1)}2-\dfrac{1(1-1)}2=\sum_{k=1}^nk$
$\implies\sum_{k=1}^nk=\dfrac{n(n+1)}2$
$$\sum_{i=0}^ni-\sum_{i=0}^{n-1}i=S_1(n)-S_1(n-1)=n,$$ so that $S_1(n)$ must be a polynomial of the second degree in $n$. By the method of undeterminate coefficients, noting that there is no constant term as $S_1(0)=0$: $$S_1(n)-S_1(n-1)=n=(an^2+bn)-(a(n-1)^2+b(n-1))=\\=2an+b-a,$$ and by identification $$a=b=\frac12.$$ $$S_1(n)=\frac{n^2+n}2.$$ Let us generalize to the sum of squares, $$S_2(n)-S_2(n-1)=n^2=(an^3+bn^2+cn)-(a(n-1)^3+b(n-1)^2+c(n-1))=\\=3an^2+(-3a+2b)n+a-b+c,$$ giving $$a=\frac13,b=\frac12,c=\frac16.$$ $$S_2(n)=\frac{2n^3+3n^2+n}6.$$ For any power, you get a triangular system of equations where you recognize a part of Pascal's triangle, with alternating signs, as in the sum of cubes: $$S_3(n)-S_3(n-1)=n^3=4an^3+(-6a+3b)n^2+(4a-3b+2c)n+(-a+b-c+d),$$ $$\color{blue}{4}a=1\\-\color{blue}{6}a+\color{blue}{3}b=0\\\color{blue}{4}a-\color{blue}{3}b+\color{blue}{2}c=0\\-\color{blue}{1}a\ +\color{blue}{1}b-\color{blue}{1}c+\color{blue}{1}d=0,$$ giving $$a=\frac14,b=\frac12,c=\frac14,d=0.$$ $$S_3(n)=\frac{n^4+2n^3+n^2}4.$$
• You can also retrieve the polynomial by Lagrange interpolation with $(0,0), (1,1),(2,3)$: $$S_1(n)=0\frac{(n-1)(n-2)}{(1-0)(2-0)}+1\frac{(n-0)(n-2)}{(1-0)(1-2)}+3\frac{(n-0)(n-1)}{(2-0)(2-1)},$$ but this leads to much longer calculation. – Yves Daoust Jul 26 '14 at 10:22
Above is an image representing Pascal's Triangle. What I want to draw attention to is the hockey stick formation, particularly the blue hockey stick. Notice how the entries in the stick of the blue hockey stick are in arithmetic progression, and that the entry in the blade represents the sum of the entries in the stick.
To prove this inductively we have as a bootstrap condition
$$1=\frac{1(1+1)}{2}=\binom{1+1}{2} = \sum\limits_{i=1}^1\binom{i}{1}=1$$
and for the general case
$$\begin{array}{lll} \sum\limits_{i=1}^{n+1}i&=&(n+1)+\sum\limits_{i=1}^{n}i\\ &=&\binom{n+1}{1}+\binom{n+1}{2}\\ &=&\binom{n+2}{2}\\ &=&\binom{(n+1)+1}{2}\\ &=&\frac{(n+1)((n+1)+1)}{2} \end{array}$$
Of course, we assumed that $\binom{n}{k}+\binom{n}{k+1} = \binom{n+1}{k+1}$ holds. $$\begin{array}{lll} \binom{n}{k}+\binom{n}{k+1}&=&\frac{n!}{k!(n-k)!} + \frac{n!}{(k+1)!(n-(k+1))!}\\ &=&\frac{n!(k+1)}{k!(k+1)(n-k)!} + \frac{n!(n-k)}{(k+1)!(n-(k+1))!(n-k)}\\ &=&\frac{n!k+n!+n!n-n!k}{(k+1)!(n-k)!}\\ &=&\frac{n!+n!n}{(k+1)!((n+1)-(k+1))!}\\ &=&\frac{n!(n+1)}{(k+1)!((n+1)-(k+1))!}\\ &=&\frac{(n+1)!}{(k+1)!((n+1)-(k+1))!}\\ &=&\binom{n+1}{k+1}\\ \end{array}$$
• I feel like this is a very intuitive answer, generalizes to the sum of $n^k$ well, and is easy to remember. – Simply Beautiful Art Nov 29 '16 at 0:50
Let us denote the sum as $S_1(n)$. This function must be a second degree polynomial in $n$ because the first order difference $S_1(n)-S_1(n-1)=n$ is a linear polynomial in $n$. So it suffices to verify the formula for three different values of the argument.
$$S_1(0)=0=\frac{0(0+1)}2,$$ $$S_1(1)=1=\frac{1(1+1)}2,$$ $$S_1(2)=3=\frac{2(2+1)}2.$$ QED.
I didn't see an answer using the method that I used, so I'm posting an answer here to 'spread the knowledge!'
This can be applied to higher powers such as $1^2+2^2+\cdots+n^2$ or $1^3+2^3+\cdots+n^3$.
Solving by the use of Indeterminate Coefficients:
Assume the series$$1+2+3+4+5\ldots+n\tag1$$ Is equal to the infinite series$$1+2+3+4+5+\ldots+n=A+Bn+Cn^2+Dn^3+En^4+\ldots\&c\tag2$$ If we 'replace' $n$ with $n+1$, we get$$1+2+3+4+\ldots+(n+1)=A+B(n+1)+C(n+1)^2+\ldots\&c\tag3$$ And subtracting $(3)-(2)$, gives\begin{align*} & n+1=B+C(2n+1)\tag4\\n & +1=2Cn+(B+C)\tag5\end{align*} Therefore, $C=\dfrac 12,B=\dfrac 12,A=0$ and $(1)$ becomes$$1+2+3+4+5\ldots+n=\dfrac n2+\dfrac {n^2}2=\dfrac {n(n+1)}{2}$$
With the Euler-Maclaurin summation formula, one can easily derive that
$$\sum_{k=1}^nk=\int_0^nx\ dx+\frac12n=\frac12n^2+\frac12n$$
More generally, one may derive that
$$\sum_{k=1}^n k^p = {1 \over p+1} \sum_{j=0}^p (-1)^j{p+1 \choose j} B_j n^{p+1-j},\qquad \mbox{where}~B_1 = -\frac{1}{2}$$
where we use the Bernoulli numbers. This more general formula is more commonly known as Faulhaber's formula.
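A small sketch checking Faulhaber's formula with exact fractions (plain Python; the Bernoulli numbers are generated from the standard recurrence with the convention $B_1 = -\frac{1}{2}$ used above):

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(m):
    """B_0, ..., B_m with B_1 = -1/2, from sum_{j=0}^{k} C(k+1, j) B_j = 0."""
    B = [Fraction(1)]
    for k in range(1, m + 1):
        B.append(-sum(comb(k + 1, j) * B[j] for j in range(k)) / (k + 1))
    return B

def faulhaber(n, p):
    B = bernoulli_numbers(p)
    total = sum(Fraction((-1)**j * comb(p + 1, j)) * B[j] * n**(p + 1 - j)
                for j in range(p + 1))
    return total / (p + 1)

for p in range(1, 6):
    for n in range(1, 20):
        assert faulhaber(n, p) == sum(k**p for k in range(1, n + 1))
```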
Another "picture proof" I just thought of... but without a picture, since I can't draw:
Suppose you want to add up all the integers from 1 to n. Then draw n rows on a board, put 1 unit in the first row, 2 in the second, and so on. If you draw a right triangle of height and length n to try and contain this shape, you will cut off half of each unit on the diagonal. So let's add all of these up; you have $$n^2/2$$ units inside the triangle, and you have $$n/2$$ units cut off on the diagonal (there are n squares on the diagonal, and half of each one is cut off). Adding these gives $$n^2/2 + n/2 = n(n+1)/2$$.
If you draw it out, it makes more sense, and I think it's geometrically a bit more straightforward than the other picture proof.
(Side note: I have no idea how to write math on this site... I'll go consult the meta, I'm sure there's something there about it.)
All these proofs seem very complicated. The way I remember it is:
The sequence is: 1, 2, 3, ..... (n-2), (n-1), n.
Taking the last and first term, the 2nd and (n-1)th term, and so on, we form n/2 pairs, each summing to (n+1). So the sum of n/2 pairs of (n+1) is n/2 * (n+1)
Example: 1, 2, 3, 4, 5, 6 = (1+6) + (2+5) + (3+4) = 3x7 =21
This still holds for an odd number of terms
• But this is not a proof. This is an explanation. – Asaf Karagila Sep 29 '12 at 12:00
• @user929404: As Asaf points out, this is not a proof. You say that these proofs seem very complicated, but Joe's accepted answer for example uses the exact same idea as your explanation, but does so in a rigorous way which makes it a proof. Sometimes it is difficult to see the idea behind a proof, but it is also not enough to just give the idea; we need proof to be sure that our ideas actually work. – Michael Albanese Sep 30 '12 at 4:21
• This seems like a valid proof to me, at least when $n$ is even. (When $n$ is odd, you do need to say that you have $(n-1)/2$ pairs that sum to $(n+1)$, together with an additional term of $(n+1)/2$.) – Jesse Madnick Dec 17 '12 at 19:41
• As almost all things that call themselves proofs are not actually proofs, but rather outlines of proofs, this is certainly an acceptable outline of a proof. If you want to argue over what actually is a genuine proof, then you have to be ready to accept an almost inhumane level of pedantry and precision that only computers can realistically handle. – DanielV Dec 2 '14 at 11:01
• This is how a young Gauß did it, the story goes... – Chris Custer Sep 22 '18 at 13:01
You can also prove it by induction, which is nice and easy, although it doesn't give much intuition as to why it works.
It works for $1$ since $\frac{1\times 2}{2} = 1$.
Let it work for $n$.
$$1 + 2 + 3 + \dots + n + (n + 1) = \frac{n(n+1)}{2} + (n + 1) = \frac{n(n+1) + 2(n+1)}{2} = \frac{(n+1)(n+2)}{2}.$$
Therefore, if it works for $n$, it works for $n + 1$. Hence, it works for all natural numbers.
For the record, you can see this by applying the formula for the sum of an arithmetic progression (a sequence formed by constantly adding a rate to each term, in this case $1$). The formula is reached pretty much using the method outlined by Carl earlier in this post. Here it is, in all its glory:
$$S_n = \frac{(a_1 + a_n) * n}{2}$$
($a_1$ = first term, $a_n$ = last term, $n$ = number of terms being added).
This is the shortest proof (without words): the picture (not reproduced here) is my proof showing that $$\sum_{j=1}^n{j}=\frac{n(n+1)}{2}.$$
I would like to resubmit the first 'backwards and forwards' proof above. My issue with the proof is the use of "$\dots$" which translates to and so on. Here is a more mathematically rigorous proof using properties of summation $\Sigma$. $$2\sum_{k=1}^{n} k =\sum_{k=1}^{n} 2k = \sum_{k=1}^{n} (k+k) = \sum_{k=1}^{n}k + \sum_{k=1}^{n}k = \sum_{k=1}^{n}k+ \sum_{k=1}^{n}(n-(k-1))= n(n+1)$$
• If you want a rigorous proof using sigma, you should go full boar and use ${\displaystyle \sum _{n\in B}f(n)=\sum _{m\in A}f(\sigma (m))}$ from en.wikipedia.org/wiki/Summation – CopyPasteIt Jul 2 '17 at 1:56
Some time ago I saw someone explain it as follows:
The average value of $$1,2,3,\dots,n$$ is simply $$\frac{n+1}2$$. Thus $$1+2+3+\dots+n=\frac{n(n+1)}2$$.
Of course the proof behind this leads to Gauss's proof quite directly, but nonetheless I really like this restatement of it as it is easy to understand even if one does not know much math. And it quickly gives the sum of terms in arithmetic progression as well. Such a sum is simply the number of terms times the average of the first and last terms.
I stumbled across this identity
$$\quad ab - 1 = (a - 1) (b - 1) + (a - 1) + (b - 1)$$
while working with modulo arithmetic a couple of days ago. I was upset that I couldn't use it in any (really) interesting ways, but now I've discovered that I can 'pile onto' this question!
If $$n \ge 1$$ is an integer we can write
$$\tag 1 n^2 = (n - 1)^2 + 2 (n - 1) + 1$$
Observe that the first term of the rhs of $$\text{(1)}$$ is the square of an integer, just like the lhs of $$\text{(1)}$$. So you can use 'downward finite induction formula recursion' (not sure what to call this) and conclude that
$$\quad n^2 = [\displaystyle 2 \sum_{k=1}^{n-1}\, k] + n$$
and this even holds for $$n = 1$$.
Rewriting the formula we obtain
$$\quad \displaystyle \frac{n(n-1)}{2} = \sum_{k=1}^{n-1}\, k$$ | 2020-08-08T15:16:16 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2260/proof-1234-cdotsn-fracn-timesn12?noredirect=1",
"openwebmath_score": 0.9854366779327393,
"openwebmath_perplexity": 327.80113531866834,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9845754447499795,
"lm_q2_score": 0.874077230244524,
"lm_q1q2_score": 0.8605949777138324
} |
http://blog.etestseries.in/canopy-growth-blpoxjy/corbett-maths-probability-trees-e850ef | # corbett maths probability trees
The Corbettmaths Practice Questions on tree diagrams (with links to the related Venn diagrams questions and answers) go a little deeper into the understanding of probability. Tree diagrams are a way of showing combinations of two or more events: each branch is a possible outcome, labelled at the end with that outcome, and the probability is written alongside the line. The probabilities on each set of branches add up to 1, and answers can be written as fractions, decimals or percentages. To find the probability of a combined outcome, multiply the probabilities along the branches and add the results for the relevant paths. Typical examples include two tosses of a fair coin (Heads and Tails), picking counters without replacement from a bag containing red, yellow and blue counters, and independent events such as rolling a die twice, where rolling a 6 does not affect the probability of rolling a 6 the next time. The page also links to the medium term plan, videos, textbook exercises, exam-style questions, worked answers and 5-a-day practice.
"domain": "etestseries.in",
"url": "http://blog.etestseries.in/canopy-growth-blpoxjy/corbett-maths-probability-trees-e850ef",
"openwebmath_score": 0.47701507806777954,
"openwebmath_perplexity": 2303.434547280963,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9458012640659995,
"lm_q2_score": 0.9099070090919013,
"lm_q1q2_score": 0.8605911993816331
} |
https://eecsmt.com/graduate-school/exam/108-nthu-cs-ds/ | 1. (10%) Use generating functions to answer the following questions.
(A) Find the solution of the recurrence relation $a_n = 4a_{n-1} - 3a_{n-2} + 2^n + n + 3 \text{ with } a_0 = 1 \text{ and } a_1 = 4.$
(B) Find the coefficient of $x^{10}$ in the power series of $x^4 / (1 - 3x)^3.$
(A) $a_n = -4 \cdot 2^n + \dfrac{39}{8} \cdot 3^n + \dfrac{19}{8} – \dfrac{7}{4}(n+1) – \dfrac{1}{4}(n+2)(n+1), \, n \geq 0.$
(B) $\binom{3+6-1}{6} \cdot 3^6 = 28 \cdot 3^6 = 20412$
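A quick numerical check of both stated answers (Python sketch; purely illustrative):

```python
from fractions import Fraction
from math import comb

# (A) iterate the recurrence and compare with the stated closed form
def a_closed(n):
    return (-4 * 2**n
            + Fraction(39, 8) * 3**n
            + Fraction(19, 8)
            - Fraction(7, 4) * (n + 1)
            - Fraction(1, 4) * (n + 2) * (n + 1))

a = [Fraction(1), Fraction(4)]
for n in range(2, 25):
    a.append(4 * a[n - 1] - 3 * a[n - 2] + 2**n + n + 3)
assert all(a[n] == a_closed(n) for n in range(25))

# (B) coefficient of x^10 in x^4/(1-3x)^3 = coefficient of x^6 in (1-3x)^(-3)
assert comb(3 + 6 - 1, 6) * 3**6 == 20412
```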
2. (10%) How many relations are there on a set with n elements that are
(A) both reflexive and symmetric?
(B) neither reflexive nor irreflexive?
(A) $2^{\frac{n(n-1)}{2}}$
(B) $2^{n^2} – 2^{n(n-1)+1}$
3. (5%) How many nonisomorphic unrooted trees are there with five vertices?
4. (5%) Multiple answer question. (It is possible that more than one of the choices are correct. Find out all correct choices.)
A hash table of length 10 uses the hash function $h(k) = k \bmod 10$ and linear probing for handling overflow. After inserting 6 values into an initially empty hash table, the table is as shown below. Which one(s) of the following choices gives a possible order in which the key values could have been inserted in the table?
(A) 46, 42, 34, 52, 23, 33
(B) 34, 42, 23, 52, 33, 46
(C) 46, 34, 42, 23, 52, 33
(D) 42, 46, 33, 23, 34, 52
(E) 42, 23, 34, 46, 52, 33
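The exam's filled-in table is not reproduced above, but each ordering can be simulated directly; whichever orderings reproduce the pictured table are possible answers (Python sketch, names mine):

```python
def fill_table(keys, size=10):
    """Insert keys using h(k) = k mod 10 with linear probing; return the table."""
    table = [None] * size
    for k in keys:
        i = k % size
        while table[i] is not None:     # probe the next slot on a collision
            i = (i + 1) % size
        table[i] = k
    return table

candidates = {
    "A": [46, 42, 34, 52, 23, 33],
    "B": [34, 42, 23, 52, 33, 46],
    "C": [46, 34, 42, 23, 52, 33],
    "D": [42, 46, 33, 23, 34, 52],
    "E": [42, 23, 34, 46, 52, 33],
}
for label, keys in candidates.items():
    print(label, fill_table(keys))
```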
5. (5%) Fill in the six blanks (I, II, …, and VI) in the following program that implements a queue by using 2 stacks.
class MyQueue<T> {
private:
stack<T> stack1;
stack<T> stack2;
public:
MyQueue()
{
stack1 = new stack<T>();
stack2 = new stack<T>();
}
// enqueue(): Add an element at the rear side of MyQueue
void enqueue(T e)
{
stack1.push(e);
}
// dequeue(): Remove the front element from MyQueue
T dequeue()
{
if((__I__).isEmpty())
while(!(__II__).isEmpty())
(__III__).push((__IV__).pop());
T temp = null;
if(!(__V__).isEmpty())
temp = (__VI__).pop();
return temp;
}
}
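For reference, the standard two-stack queue that this exercise is based on can be sketched in runnable Python as follows (an illustration of the technique, not an official answer key):

```python
class MyQueue:
    """FIFO queue built from two LIFO stacks (Python lists used as stacks)."""

    def __init__(self):
        self.stack1 = []    # receives enqueued elements
        self.stack2 = []    # holds elements in reversed (dequeue) order

    def enqueue(self, e):
        self.stack1.append(e)

    def dequeue(self):
        if not self.stack2:                              # stack2 empty?
            while self.stack1:                           # move everything over,
                self.stack2.append(self.stack1.pop())    # reversing the order
        return self.stack2.pop() if self.stack2 else None

q = MyQueue()
for x in (1, 2, 3):
    q.enqueue(x)
assert [q.dequeue(), q.dequeue(), q.dequeue()] == [1, 2, 3]
```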
6. (5%) AVL Tree.
(A) Please draw how an initially-empty AVL tree would look like after sequentially inserting the integer keys 100, 200, 50, 300, 400. There is no need to show it in a step-by-step fashion; you only need to draw the final result.
(B) Continue the previous sub-problem. Suppose that the integer keys 25, 250, 225, 500, 240, 260 are sequentially inserted into the AVL tree of the previous sub-problem. Draw the AVL tree after all of these integer keys are inserted.
(A)
(B)
7. (5%) Reconstruct and draw the maximum binary heap whose in-order traversal is 2, 16, 7, 62, 5, 9, 188, 14, 78, 10. There is no need to show it in a step-by-step fashion; you only need to draw the final result.
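One way to do this mechanically: a binary heap with 10 keys has a fixed shape (a complete binary tree stored in an array), so matching the given in-order sequence against the in-order positions of that shape determines every key's slot. A Python sketch under that assumption:

```python
def inorder_positions(n, i=1):
    """In-order listing of the 1-based array indices of a complete binary tree with n nodes."""
    if i > n:
        return []
    return inorder_positions(n, 2 * i) + [i] + inorder_positions(n, 2 * i + 1)

inorder_values = [2, 16, 7, 62, 5, 9, 188, 14, 78, 10]
n = len(inorder_values)

heap = [None] * (n + 1)                     # 1-based array storage
for index, value in zip(inorder_positions(n), inorder_values):
    heap[index] = value

print(heap[1:])                             # the heap in level-order (array) form
assert all(heap[i // 2] >= heap[i] for i in range(2, n + 1))   # max-heap property
```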
8. (5%) The following algorithm takes an array as input and returns the array with all the duplicate elements removed. For example, if the input array is {1, 3, 3, 2, 4, 2}, the algorithm returns {1, 3, 2, 4}.
S = new empty set
A = new empty dynamic array
for every element x in input array
if not S.member(x) then
S.insert(x)
A.append(x)
return A
Suppose that the input array has n elements. What is the Big-O complexity of this algorithm, if the set S is implemented as:
(A) a hash table (with the assumption that overflow does not occur)?
(B) a binary search tree?
(C) an AVL tree?
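A runnable version of the pseudocode with S implemented as a hash set (Python sketch). With a hash set each membership test and insert is O(1) expected, so the loop is O(n); with an unbalanced binary search tree a lookup can degrade to O(n), giving O(n^2) in the worst case; with an AVL tree each lookup is O(log n), giving O(n log n).

```python
def remove_duplicates(values):
    seen = set()        # S: hash-based set, O(1) expected insert/membership
    result = []         # A: dynamic array
    for x in values:
        if x not in seen:
            seen.add(x)
            result.append(x)
    return result

assert remove_duplicates([1, 3, 3, 2, 4, 2]) == [1, 3, 2, 4]
```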
9. (10%) The recurrence $T(n) = 7T(\dfrac{n}{2}) + n^2$ describes the running time of an algorithm A. A competing algorithm A' has a running time of $T'(n) = aT'(\dfrac{n}{4}) + n^2.$ What is the largest integer value for $a$ such that A' is asymptotically faster than A?
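One way to reason about this (a sketch via the master theorem):

$$T(n)=\Theta\!\left(n^{\log_2 7}\right), \qquad T'(n)=\Theta\!\left(n^{\log_4 a}\right)\ \text{for } a>16,$$

$$\log_4 a<\log_2 7 \iff a<4^{\log_2 7}=7^2=49,$$

so the largest integer value is $a = 48$ (for $a \le 16$, $T'(n)$ is at most $\Theta(n^2\log n)$, which is also asymptotically faster).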
10. (15%) Consider the following undirected graph G = (V,E).
(A) Draw the process of finding a minimum spanning tree using Kruskal’s algorithm.
(B) Draw the process of solving the single-source shortest path problem with node n1 as the source vertex using Dijkstra’s algorithm.
(C) Starting from n1, find the Depth-First Search (DFS) traversal sequence of G (the priority of node is inversely proportional to the weight of incident edge).
(A)
(B)
(C) $n1 \to n4 \to n6 \to n5 \to n2 \to n3$
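Since the figure with graph G is not reproduced above, here is a generic sketch of Kruskal's algorithm with union-find on a made-up edge list (Python; the edge list is a placeholder, not the exam's graph):

```python
def kruskal(num_vertices, edges):
    """edges: iterable of (weight, u, v); returns the edges of a minimum spanning tree."""
    parent = list(range(num_vertices))

    def find(x):                          # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):         # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                      # keep the edge only if it joins two trees
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# Placeholder graph on vertices 0..4 (NOT the graph from the exam figure)
example_edges = [(10, 0, 1), (25, 0, 3), (22, 1, 2), (12, 1, 3), (14, 2, 4), (18, 3, 4)]
print(kruskal(5, example_edges))
```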
11. (18%) Given an ordered file with keys 1, 2, …, 16, determine the number of key comparisons made by a search algorithm A while searching for a specific key K.
(A) A is the binary search algorithm and K is 2.
(B) A is the binary search algorithm and K is 10.
(C) A is the binary search algorithm and K is 15.
(D) A is the Fibonacci search algorithm and K is 2.
(E) A is the Fibonacci search algorithm and K is 10.
(F) A is the Fibonacci search algorithm and K is 15.
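No counts are stated above, and the exact numbers depend on how a textbook counts comparisons (the comments at the end of this page make the same point for Fibonacci search), but the binary-search cases are easy to instrument (Python sketch):

```python
def binary_search_comparisons(keys, target):
    """Standard binary search; count each three-way key comparison as one."""
    low, high, comparisons = 0, len(keys) - 1, 0
    while low <= high:
        mid = (low + high) // 2
        comparisons += 1
        if keys[mid] == target:
            return comparisons
        if keys[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return comparisons

keys = list(range(1, 17))       # the ordered file 1, 2, ..., 16
for target in (2, 10, 15):
    print(target, binary_search_comparisons(keys, target))
```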
12. (7%) Given a store of n items, what’s is the least upper bound (in Big-O notation) of the running time of the solutions to the following problems:
(A) Fractional knapsack problem;
(B) General 0/1 knapsack problem.
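For the fractional problem the usual greedy solution is dominated by sorting, so it runs in O(n log n) time; the classic dynamic-programming solution of 0/1 knapsack runs in O(nW) time, which is pseudo-polynomial in the capacity W. A greedy sketch for the fractional case (Python, illustrative):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; greedy by value/weight ratio."""
    total = 0.0
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)          # take as much of the best item as fits
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0
```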
### 8 Comments
1. #### 流動性
Hello, I'd like to ask how you worked through parts (D), (E) and (F) of question 11 with Fibonacci search, because a discussion I found online gives answers different from the ones here. Thanks!
Here is the discussion I found:
• Reply from the post author
#### mt
There really are many versions of the Fibonacci search procedure online, and they produce slightly different answers. I used the method from this video, so the difference is probably just down to the method. To be safe, check which textbook that school's undergraduate course uses and follow the method in that book. Good luck!
2. #### w
Isn't the max heap diagram in question 7 wrong?
3. #### TNF
About option (A) of question 8:
The problem assumes that no overflow occurs,
but if almost every key collides and linear probing is used,
the number of comparisons needed to check whether an element is in the hash table becomes 1+2+…+(n-1).
Doesn't that make the worst-case time complexity O(n^2)?
Also, is O(n) the average case?
• Reply from the post author
#### mt
For this question I assumed that each bucket has only one slot, and the problem says there is no overflow (which means no collisions, because with a single slot a collision is the same as an overflow), so I wrote O(n). Even if there were collisions there would be no overflow, so which probing scheme is used doesn't matter. That said, you could also think of the question as using chaining, in which case O(n^2) is indeed right. Hope that answers your question!
4. #### aaa
Hi, in question 10(b), shouldn't node n3 be 75 in the second iteration? The first iteration picks node n4, so n3 is updated from infinity to 75.
"domain": "eecsmt.com",
"url": "https://eecsmt.com/graduate-school/exam/108-nthu-cs-ds/",
"openwebmath_score": 0.32648780941963196,
"openwebmath_perplexity": 1058.9519644595746,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9884918516137419,
"lm_q2_score": 0.8705972768020107,
"lm_q1q2_score": 0.8605783141559009
} |
https://math.stackexchange.com/questions/621501/cardinality-of-all-sequences-of-non-negative-integers-with-finite-number-of-non | # Cardinality of all sequences of non-negative integers with finite number of non-zero terms. (NBHM 2012)
Consider the set $S$ of all sequences of non-negative integers with finite number of non-zero terms.
1. Is the set $S$ countable or not?
2. What is the cardinality of the set $S$ if it is not countable?
My intuition is the set is countable. The sequence has only finitely many non-zero terms. For any fixed $N$ consider the set $A_N$ which contains all sequences $\{a_n\}$ s.t. $a_k = 0$ $\forall$ $k >N$. The set $A_N$ is countable as the first $N$ terms of a sequences can be filled up by non-negative integers in a countable number of ways. So $A_N$ is countable and $S$ is a countable union of countable sets. So $S$ is countable.
I do not know if it is true or false. If it is false please identify the mistake. Thank you for your help.
Please suggest me a book where I shall get sufficient number of such type of problem to clear basic ideas on cardinal number.
• Would you add "integer" before "numbers" in the title and in the first sentence? Your intuition is correct. – egreg Dec 29 '13 at 17:09
• You can try Hrbacek, Jech - Introduction to set theory – Giulio Bresciani Dec 29 '13 at 17:27
You can think a sequence as a finite subset of $\mathbb{N}^2$: for all $a_n\neq 0$, take the point $(n,a_n)$. This way, you can inject $S$ in the set $\mathcal{P'}(\mathbb{N}^2)$ (with $\mathcal{P'}$ I mean the set of finite subsets) and this is in bijection with $\mathcal{P'}(\mathbb{N})$. But $\mathcal{P'}(\mathbb{N})$ is countable, because it is a countable union of countable sets (subsets with $0$ elements, subsets with $1$ element, subsets with $2$ elements...).
• Thank you for the answer supporting my approach. – Dutta Dec 30 '13 at 1:46
Let $\mathbb{N}$ be the set of non-negative integers. If $s=s_0,s_1,s_2,\dots,s_n,\dots$ is a sequence in $S$, define $\psi(s)$ by $$\psi(s)=\left(\prod_{i=0}^\infty p_i^{s_i}\right)-1,$$ where $p_i$ is the $i$-th prime. By the Unique Factorization Theorem, $\psi$ is a bijection from $S$ to $\mathbb{N}$.
• Nicolas: Thank you for introducing a new concept. Let me know why you are subtracting 1 from the product in the definition of $\phi(s)$. – Dutta Dec 30 '13 at 1:46
• That is because I am using as $\mathbb{N}$ the non-negative integers. If we are making a bijection to the positive integers, there is no $-1$ term. Of course it makes no real difference, since there is an obvious bijection between the non-negatives and the positives. – André Nicolas Dec 30 '13 at 1:57
• It is clear now. – Dutta Dec 30 '13 at 2:02
• I think the answer by Giulio Bresciani is more "versatile," and therefore more worthy of accepting. Mine is a little too cute, too tailored to the specific situation. – André Nicolas Dec 30 '13 at 2:06
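A small sketch of this encoding and its inverse (Python, assuming SymPy is available for its prime utilities; the function names are mine):

```python
from sympy import prime, primepi, factorint

def psi(seq):
    """Encode a finitely-supported sequence (s_0, s_1, ..., s_k) as an integer."""
    value = 1
    for i, s in enumerate(seq):
        value *= prime(i + 1) ** s       # prime(1) = 2 plays the role of p_0, etc.
    return value - 1

def psi_inverse(m):
    """Decode by factoring m + 1 and reading off the exponents."""
    exponents = {int(primepi(p)) - 1: e for p, e in factorint(m + 1).items()}
    length = max(exponents, default=-1) + 1
    return [exponents.get(i, 0) for i in range(length)]

seq = [3, 0, 2, 1]                       # only finitely many nonzero terms
assert psi_inverse(psi(seq)) == seq      # the round trip recovers the sequence
```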
Put a decimal point in front of any of these sequences and you have a rational number between 0 and 1. The set is countable. There is a bit more to it than just that. You will need to consider repetitions. You end up with a countable number of equivalence classes that are each at most countable. 1,1,0,0,0,0... becomes .110000... and 11,0,0,0,0... also becomes .110000...
• Happy new year. This is also a nice answer. – Dutta Jan 1 '14 at 16:15 | 2019-07-24T06:43:03 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/621501/cardinality-of-all-sequences-of-non-negative-integers-with-finite-number-of-non",
"openwebmath_score": 0.8665811419487,
"openwebmath_perplexity": 166.19249341067956,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9770226314280636,
"lm_q2_score": 0.880797085800514,
"lm_q1q2_score": 0.8605586865229881
} |
https://math.stackexchange.com/questions/1488451/checking-whether-a-number-is-prime-or-composite | # Checking whether a number is prime or composite
This is a question that came up while I was doing an exercise. I ended up with the number
$$200! + 1$$
and I want it to be composite but I don't know of any methods to check whether a number is prime or not.
Is there any general rule about $n+1$ or $n! + 1$ to determine if or when these are prime or composite?
The exercise I was doing when I ended up stuck at this question was this:
Show that there exists $n \in \mathbb N$ such that $n, n+1 \dots, n+200$ are all composite.
I am hoping for a solution not using calculators, software or the internet. I expect there to be a short and (computationally) simple answer. At least that's what I hope.
• $n+1$: definitely not, since every integer is of this form. – vadim123 Oct 20 '15 at 1:47
• Wilson's theorem gives a useful result: if $n+1$ is prime then it divides $n!+1$. Unfortunately $201$ is not a prime so it does not apply here. – Winther Oct 20 '15 at 1:50
• Wolfram was able to answer this, and gives $200!+1 = 1553\times 826069\times 353297821\times k$ where $k$ is a very large prime number that won't fit on this page. – JMoravitz Oct 20 '15 at 1:52
• There is an ongoing project here that searches for primes on that form (known as factorial primes). – Winther Oct 20 '15 at 1:56
• If you use $210!$ instead of $200!$ you can apply Wilson's theorem since $211$ is prime. – Paul Hankin Oct 20 '15 at 4:04
As to the original question, primes of the form $n!\pm 1$ are known as factorial primes and not all are known. It is in general a complicated question to determine if a number is prime or not, and only partial results are known. For example, if $n+1$ is prime and $n \ge 3$, then $n!+1$ is not prime, since $n+1$ divides it by Wilson's theorem.
As for the exercise which prompted this question, proving that there exists some $n$ such that $n,n+1,n+2,\dots,n+200$ are all composite consider the following:
Suppose we want to force each $n+i$ to be composite. If we want to force $2\mid n$ and $3\mid (n+1)$ and $5\mid (n+2)$, etc... that would correspond to the system of congruencies:
$\begin{cases} n\equiv 0\pmod{2}\\ n+1\equiv 0\pmod{3}\\ n+2\equiv 0\pmod{5}\\ \vdots\\ n+200\equiv 0\pmod{p_{201}}\end{cases}$
Consider then the Chinese Remainder Theorem.
The Chinese remainder theorem states that we can find such an $n$ that satisfies all of those congruencies since each of what we are modding out by are relatively prime to one another in every case.
Note: there is nothing intrinsically special about ordering these as being modulo $2$ followed by $3$ followed by $5$, etc... So long as we pick a list of length 200 where each of the entries on the list are coprime to one another, this will work.
Edit: Minor missing detail. It is possible that $n+i=p_i$ in one of those cases. To account for this possibility, technically chinese remainder will give us a solution to $n\equiv k\pmod{\prod p_i}$, so we can avoid this by instead of taking the smallest positive integer $n$ that works, by instead taking $n+\prod p_i$.
• So the question isn't asking you for a specific $n$? The Chinese Remainder Theorem solves this, as we're only looking for the existence of such an $n$, just put the numbers you're adding by, on the other side, because there are infinitely many primes, which are definitely coprime to one another. – Almentoe Oct 20 '15 at 2:32
• That is the way it is worded. In fact, the same proof can be modified to show the existence of arbitrarily long prime gaps. If you wanted to find an exact value for n, it could be done but would be incredibly tedious to do. – JMoravitz Oct 20 '15 at 2:41
As others have mentioned, it is not easy in general to check whether numbers of the form $n! + 1$ are prime. But as you may have noticed, the numbers $200! + 2$, $200! + 3, \dots, 200! + 200$ are all composite, as they are divisible by the numbers $2, 3, \dots, 200$ respectively. This gives only 199 consecutive composite numbers rather than the 201 required by your problem, but that's nothing that can't be fixed by increasing $n$ a little.
• Or noting that $200! + 201$ and $200! + 202$ are composite since they're divisible by 3 and 2 respectively. – Paul Hankin Oct 20 '15 at 4:56 | 2020-08-06T12:18:59 | {
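A tiny check of this construction for small factorials (Python sketch; the brute-force primality test is just for illustration):

```python
from math import factorial

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# m! + k is divisible by k for every 2 <= k <= m, so each of these is composite.
for m in (5, 7, 10):
    block = [factorial(m) + k for k in range(2, m + 1)]
    assert all((factorial(m) + k) % k == 0 for k in range(2, m + 1))
    assert not any(is_prime(x) for x in block)
```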
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1488451/checking-whether-a-number-is-prime-or-composite",
"openwebmath_score": 0.7182556390762329,
"openwebmath_perplexity": 136.67351527536294,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.977022627413796,
"lm_q2_score": 0.8807970811069351,
"lm_q1q2_score": 0.8605586784015001
} |
https://mathhelpboards.com/threads/how-to-use-taylor-series-to-represent-any-well-behaved-f-x.4960/ | # [SOLVED]How to use Taylor series to represent any well-behaved f(x)
#### DeusAbscondus
##### Active member
Does one assess $x$ at $x=0$ for the entire series? (If so, wouldn't that have the effect of "zeroing" all the co-efficients when one computes?)
only raising the value of $k$ by $1$ at each iteration?
and thereby raising the order of derivative at each iteration?
$$\sum_{k=0}^{\infty}\frac{f^{k}(0)}{k!} x^k= f(0)+\frac{df}{dx}|_0 \ x + \frac{1}{2!}\frac{d^2f}{dx^2}|_0 \ x^2+ \frac{1}{3!}\frac{d^3f}{dx^3}|_0 \ x^3+ ....$$
I have no experience with series or sequences, so, I know I have to remedy this gap in my knowledge.
In the interim, however, I am currently enrolled in a Math course that looks at Calculus by beginning with Taylor series.
I am an adult beginner at Math, having done an introductory crash course in Calculus last year; I wanted to flesh this out: hence my current enrolment.
But I am at a loss to know how to manipulate the notation above and would appreciate a worked solution for some simple $f(x)$ (I won't nominate one, so as to preclude the possibility of my cheating on some set work)
I just need to see this baby in action with a "well-behaved function" of someone else's choosing, with some notes attached if that someone would be so kind.
Thanks,
Deo Abscondo
Last edited:
#### Bacterius
##### Well-known member
MHB Math Helper
Re: How to use Taylor series to represent any well-behaved $f(x)$
Yes, the idea is that as you add more and more terms to the Taylor series, the series approximation becomes better and better and fits the function more closely for values farther away from $x = 0$.
For instance, let's take the venerable $y = \sin(x)$ function. Let's plot it:
The first term of the Taylor series (i.e. the Taylor series approximation at $k = 0$) is just $f(0) = 0$, hence:
Of course, this approximation sucks. Let's try $k = 1$. Then the Taylor series approximation for $k = 1$ is:
$$\sum_{k = 0}^1 \frac{f^k (0)}{k!} x^k = \sin(0) + x \cos(0) = x$$
And our "first-order approximation" for $\sin(x)$ is the curve $y = x$, as illustrated below:
Still not an awesome approximation, but it works pretty well for $x$ close to zero. In fact this is known as the small-angle approximation which says that $\sin(x) \approx x$ for small $x$.
What about the second order approximation $k = 2$, which is given by:
$$\sum_{k = 0}^2 \frac{f^k (0)}{k!} x^k = \sin(0) + x \cos(0) - \frac{\sin(0)}{2!} x^2 = x$$
It turns out that the new term becomes zero. Ok.. fine.. that happens, so what about $k = 3$? Now we see that:
$$\sum_{k = 0}^3 \frac{f^k (0)}{k!} x^k = x - \frac{\cos(0)}{3!} x^3 = x - \frac{x^3}{3!}$$
And let's plot this:
Wow! That's a really good approximation for all $|x| < 1$. And we see a pattern: repeatedly differentiating $\sin(x)$ will end up giving you $\cos(x)$, $- \sin(x)$, $- \cos(x)$, $\sin(x)$ endlessly. Every second term will become zero because of the $\sin(0)$ term, so only odd-numbered terms actually matter. We conclude that:
$$\sin(x) = \sum_{k = 0}^\infty \frac{f^k (0)}{k!} x^k = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$$
Here's the Taylor series approximation for $k = 7$. Check it out:
And here's the one for $k = 9$:
As you add more and more terms, the approximation tends towards the function for larger and larger values of $x$. In the limit, the Taylor series is equal to the original function.
So in other words, a Taylor series is another representation of a function, as an infinite series (a sum of infinitely many terms). A Taylor series approximation is the Taylor series truncated at a finite number of terms, which has the nice property of approximating the function around $x = 0$, and is often easier to calculate and work with, especially in physics where approximations are often used.
There is in fact a theorem that gives the maximal error of the Taylor series approximation at any point $x$ of the function in terms of $k$. Of course, as $k$ tends to infinity, the error tends to zero. This is Taylor's Theorem.
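For anyone who wants to reproduce the pictures numerically, here is a small Python sketch (standard library only) of the partial sums used above:

```python
import math

def taylor_sin(x, order):
    """Partial Taylor sum of sin about 0, keeping terms up to x**order."""
    total = 0.0
    for k in range(1, order + 1, 2):                 # only odd powers survive
        total += (-1) ** (k // 2) * x**k / math.factorial(k)
    return total

for order in (1, 3, 7, 9):
    approx = taylor_sin(1.0, order)
    print(order, approx, abs(approx - math.sin(1.0)))   # error shrinks as order grows
```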
EDIT: uploaded images to imgur for perennity. W|A does strange things to externally linked images.
Last edited:
#### DeusAbscondus
##### Active member
Re: How to use Taylor series to represent any well-behaved $f(x)$
Thanks kindly Bacterius.
But how would this work for a polynomial?
I mean, wouldn't the evaluation $x=0$ lead to "zeroing" all the co-efficients and resulting in non-sense?
If I take the case of $y=x^2+2x$ is this amenable to being described by the same series?
$$\sum_{k=0}^{\infty}\ \frac{f^k (0)}{k!}=0+ 2(0) + \frac{1}{2!}(2\cdot 0) .....$$
This doesn't seem to work! All I get is an infinite string of 0s!
Last edited:
#### Bacterius
##### Well-known member
MHB Math Helper
Re: How to use Taylor series to represent any well-behaved $f(x)$
Taylor series of polynomials have the interesting property that they are in fact equal to the polynomial itself, and all extra terms are zero. In other words, the Taylor series for $x^2 + 2x$ is $x^2 + 2x + 0 + 0 + \cdots$. To see why:
$$f(x) = x^2 + 2x$$
$$f'(x) = 2x + 2$$
$$f''(x) = 2$$
$$f'''(x) = 0$$
.. and any further differentiation still gives zero
So:
$$\sum_{k = 0}^\infty \frac{f^{(k)} (0)}{k!} x^k = \frac{0^2 + 2 \cdot 0}{0!} x^0 + \frac{2 \cdot 0 + 2}{1!} x^1 + \frac{2}{2!} x^2 + 0 + \cdots = 0 + 2x + x^2 + 0 + \cdots = x^2 + 2x$$
So, yes, it still works, although it would seem to be less useful for polynomials than for other functions (but then, polynomials are easy to compute and are already pretty simple. I don't think "reducing" them to infinite series generally helps).
To calculate Taylor series I recommend you write down the iterated derivative of your function $f(x)$ and then plug in the numbers. Remember, $f^{(k)} (0)$ means "the $k$th derivative of $f$ evaluated at $x = 0$". The $x$ in your Taylor series is a different $x$ and is *not* equal to zero.
Also, there is a generalization of Taylor's series which is centered on arbitrary values of $x$ instead of $x = 0$. I'll let you work out the general expression, though; it's an interesting exercise.
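If it helps, here is a small sketch of exactly that recipe (write down the iterated derivatives, evaluate at $0$, plug into the sum), applied to the polynomial from the question. It assumes SymPy is available; the variable names are mine:

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 + 2*x                      # the polynomial discussed above

series, deriv = 0, f
for k in range(6):                  # a handful of terms is plenty here
    coeff = deriv.subs(x, 0) / sp.factorial(k)   # f^(k)(0) / k!
    series += coeff * x**k
    deriv = sp.diff(deriv, x)       # move on to the next derivative

print(sp.expand(series))            # prints x**2 + 2*x: the series reproduces the polynomial
```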
#### DeusAbscondus
##### Active member
Re: How to use Taylor series to represent any well-behaved $f(x)$
That has nailed it for me Bacterius!
Mightily obliged to you.
As per usual, the problem has vanished under the gaze of fresh eyes.
(Going now to chew on this strong meat with a cup of medicinal wine "to aid the digestion")
Cheers,
D'abs
#### DeusAbscondus
##### Active member
Re: How to use Taylor series to represent any well-behaved $f(x)$
The x in your Taylor series is a different x and is *not* equal to zero.
Okay, basically, I'm in the clear .....
but just what is this "different x"?
If it is distinct from the $x$ of my polynomial, by what principle/rule do I distinguish the two when I come to compute the polynomial?
D'abs
#### Bacterius
##### Well-known member
MHB Math Helper
Re: How to use Taylor series to represent any well-behaved $f(x)$
The $x$ in the Taylor series is the $x$ at which you are evaluating your Taylor series (or your approximation of it). The derivative is always evaluated at 0 for this version of the Taylor series (sorry, I agree it was a bit confusing: there is only one $x$; it's just that the derivative is evaluated at a constant).
Essentially you are numerically evaluating derivatives of your original function, and using the resulting values as coefficients for your Taylor series (this is not a particularly useful way to think of it but it may be more intuitive to you to understand what is what)
#### DeusAbscondus
##### Active member
Re: How to use Taylor series to represent any well-behaved $f(x)$
That makes sense.
Meanwhile, I've been plugging and chugging a few simple polynomial functions through the T. series: hehe! what fun! it works!!! (*broadly grins)
You made my day: not understanding was getting me down (as usual);
and, again as usual, once some fresh light comes and the aha! moment arrives, one feels exuberant rather than dispirited.
Have a great weekend Bacterius!
| 2021-09-26T21:32:02 | {
"domain": "mathhelpboards.com",
"url": "https://mathhelpboards.com/threads/how-to-use-taylor-series-to-represent-any-well-behaved-f-x.4960/",
"openwebmath_score": 0.949510395526886,
"openwebmath_perplexity": 391.1908091856933,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.977022625406662,
"lm_q2_score": 0.880797081106935,
"lm_q1q2_score": 0.8605586766336222
} |
https://cs.stackexchange.com/questions/85850/minimum-number-of-states-in-dfa-accepting-binary-number-with-decimal-equivalent/85855 | # Minimum number of states in dfa accepting binary number with decimal equivalent divisible by $n$
I was aware of the fact that a DFA accepting binary strings whose decimal equivalent is divisible by $n$ can be built with a minimum of $n$ states.
However recently came across following text:
• If $n$ is power of $2$
Let $n=2^m$, so number of minimum states $=m+1$. For $n=8=2^3$, we need $3+1=4$ states.
• Else If $n$ is odd
Number of states $=n$. For $n=5$, we need $5$ states.
• Else if $n$ is even
Keep dividing it by $2$ until you get an odd quotient. The result is the final odd quotient plus the number of divisions done. For $n=20$: $20/2=10$, $10/2=5$, so $5+2=7$ states
I was guessing:
Q1. From where all these facts came. (I know this must have come from general pattern that such DFAs follow, but then I have below doubts)
Q2. Are these points indeed correct?
Q3. If yes (for Q2), then what can be the reason for each point?
Q4. Do these three bullet points cover all cases? (I guess yes, since there is even and odd, but I am new to this, and I was wondering whether any further case is left unmentioned in my textbook, especially since I did not find any reference book (by Peter Linz or Hopcroft-Ullman) discussing this topic)
• There are a few things missing here. First, is the number input LSB to MSB or MSB to LSB? (Looks like LSB to MSB.) Second, are you allowing the empty string to represent zero? Dec 23 '17 at 22:41
• The way to prove such results is to twofold: first you prove an upper bound by constructing a DFA accepting the given language, then you prove a lower bound using Myhill–Nerode theory. Dec 23 '17 at 22:42
• For an example, see this: cs.stackexchange.com/questions/85785/…. Dec 23 '17 at 22:53
• Regarding Q4, yes, they cover all cases. I'm certain you can prove this in your own, even if you're new to "this" – this has nothing to do with automata theory. Dec 23 '17 at 22:55
• Can you provide any link discussing these two points: (1) LSB-to-MSB versus MSB-to-LSB number input, (2) representing zero by the empty string, and possibly these two points in the context of the "divisible by $n$" DFA. Also I guess the first point won't change the number of states in the minimal DFA. The second point might change it.
– anir
Dec 24 '17 at 7:02
## Case 1: $n$ is a power of 2
If you wanted to check whether a decimal number is divisible by some power of 10, you can just look at the number of trailing zeros. For example, all numbers that are divisible by $100 = 10^2$ end with 2 zeros (this is of course including numbers ending with more than 2 zeros). The same idea can be applied here for binary numbers and powers of 2. Specifically to check for multiples of $n = 2^m$, you can simply check if the string ends in $m$ zeros. For example, if $n = 16 = 2^4$, all multiples of $n$ will end in 0000 (4 zeros). Thus we can create the following DFA for $n = 16$:
This construction can be extended to all powers of 2, where you have $m+1$ states to ensure that your input ends in at least $m$ zeros (reading a 0 takes you one state forward, reading a 1 takes you back to the start). As suggested in the comments on your question, you can show that you can't do any better than this using a Myhill–Nerode argument.
## Case 2: $n$ is odd
The general idea here is that when a number is divided by $n$, the possible remainders are $0, 1, \ldots, n-2$ or $n-1$. We can give our DFA one state for each of these possible remainders so as we process a string, we're keeping track of the remainder of what we've read so far and transitioning to the appropriate remainder based on what character we read. Then, we accept if we finish in the remainder 0 state.
Design DFA accepting binary strings divisible by a number 'n' walks through a detailed method for constructing such a DFA, and you can find out more about this by searching for "DFA based division". This gives a total of $n$ states, which is optimal by Myhill-Nerode because these remainders are exactly the equivalence classes given by the relation $\equiv_L$ for the language $L = \{\text{binary strings divisible by }n\}$.
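A minimal sketch of that remainder-tracking idea, reading the string most-significant bit first (one of the two conventions raised in the comments; the function name is mine):

```python
def divisible_by(bits, n):
    """Simulate the n-state DFA: the state is the value of the prefix mod n."""
    state = 0
    for b in bits:
        state = (2 * state + int(b)) % n   # appending a bit doubles the value and adds the bit
    return state == 0                       # accept iff the remainder is 0

# quick check against ordinary integer arithmetic for n = 5
for value in range(64):
    assert divisible_by(format(value, 'b'), 5) == (value % 5 == 0)
print("remainder-tracking DFA agrees with % for n = 5")
```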
## Case 3: $n$ is even (and not a power of 2)
The technique in case 2 also works here but it isn't optimal. Every $n$ in this case can be expressed as $2^km$ where $m$ is odd (this is the division procedure described in your question). Thus, to check whether a binary string is divisible by $n$, we check whether it is divisible by both $2^k$ and $m$.
We know that it takes $k+1$ states to check divisibility by $2^k$ (case 1) and $m$ states to check divisibility by $m$ (case 2). We can take a DFA for divisibility by $m$, unmark the accepting state and make it the start state for a DFA for divisibility by $2^k$; since those two machines share that one state, this gives our final DFA $k+m$ states.
As an example, here's the DFA for $n = 6$:
The top three states ensure that the number is a multiple of 3, and the final accept state ensures that it is a multiple of 2.
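The same idea can be checked numerically: a string is divisible by $2^k m$ exactly when its value is divisible by $m$ and it ends in at least $k$ zeros. A small sketch (function name and test range are mine), shown here for $n = 6 = 2 \cdot 3$:

```python
def divisible_by_2k_m(bits, k, m):
    """Check divisibility by 2^k * m (m odd) in one left-to-right pass."""
    rem, trailing_zeros = 0, 0
    for b in bits:
        rem = (2 * rem + int(b)) % m                              # case 2: remainder mod m
        trailing_zeros = trailing_zeros + 1 if b == '0' else 0    # case 1: run of final zeros
    return rem == 0 and trailing_zeros >= k

for value in range(1, 200):
    assert divisible_by_2k_m(format(value, 'b'), 1, 3) == (value % 6 == 0)
print("combined check agrees with % 6")
```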
• Revisiting this problem, I don't get how to combine the two DFAs in case 3. You said "unmark the accepting state and make it the start state for a DFA for divisibility by $2^k$". But you didn't explain how you came up with the outgoing transition (labeled 1) from the final state of the last DFA. If we are to combine the "divisible by $2^k$" DFA with the "divisible by $m$ ($m$ being any odd number)" DFA, "all" outgoing transitions on reading "1" from "all" states of the divisible-by-$2^k$ DFA will go to the same second state of the divisible-by-$m$ DFA, (Q1.) right? [to next comment...]
– anir
Dec 3 '19 at 13:03
• [..from earlier comment] For divisible by $2^k$, we need string to end with $k$ zeroes. But, your case 3, divisible by 6 DFA will also let $k=1$ zeroes appear anywhere. (Q2.) How this fits in divisible by 6 string? For example in your divisible by 6 DFA, for 110110, first 11 moves through divisible by 3 DFA, next 01 moves through divisible by 2 DFA, next 1 moves through divisible by 3 DFA and final 0 moves through divisible by 2 DFA. (Q3.) How running through different component DFAs one after other still ensures that the whole string will still be divisible by $2^km$
– anir
Dec 3 '19 at 13:09 | 2021-09-24T20:47:47 | {
"domain": "stackexchange.com",
"url": "https://cs.stackexchange.com/questions/85850/minimum-number-of-states-in-dfa-accepting-binary-number-with-decimal-equivalent/85855",
"openwebmath_score": 0.6708168387413025,
"openwebmath_perplexity": 436.0348354667693,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9770226314280636,
"lm_q2_score": 0.8807970748488296,
"lm_q1q2_score": 0.8605586758229445
} |
http://mathhelpforum.com/geometry/227392-circle-questions-secants-chords-tangents.html | # Math Help - Circle questions -- Secants, chords and tangents
1. ## Circle questions -- Secants, chords and tangents
Hello again, just got a few questions pertaining to circles. I'll just post one to start with;
"The radius of the earth is approximately 6371 km. If the international space station (ISS) is orbiting 353 km above the earth, find the distance from the ISS to the horizon (x).
So solving this.. according to the segments of secants and tangents theorem...
$x^2 = (2r + 353)(353)$
$x^2 = (12742 + 353)(353)$
$x^2 = 4622535$
$x = \sqrt{4622535} \approx 2150$ km
Did I do that right?
2. ## Re: Circle questions -- Secants, chords and tangents
Originally Posted by StonerPenguin
Hello again, just got a few questions pertaining to circles. I'll just post one to start with;
"The radius of the earth is approximately 6371 km. If the international space station (ISS) is orbiting 353 km above the earth, find the distance from the ISS to the horizon (x).
So solving this.. according to the segments of secants and tangents theorem...
$x^2 = (2r + 353)(353)$
$x^2 = (12742 + 353)(353)$
$x^2 = 4622535$
$x = \sqrt{4622535} \approx 2150$ km
Did I do that right?
Computation looks right to me. But simply memorizing theorems does not promote understanding.
Let's see how we get that theorem. The radius through a point of tangency and the tangent at the same point are perpendicular.
$Let\ r = length\ of\ radius\ of\ given\ circle,$
$u + r = length\ from\ the\ given\ circle's\ center\ to\ a\ given\ point\ outside\ the\ circle,$
$x = length\ of\ the\ line\ that\ is\ tangent\ to\ the\ given\ circle\ and\ runs\ through\ the\ given\ point.$
By the Pythagorean Theorem, we have:
$(u + r)^2 = x^2 + r^2 \implies u^2 + 2ru + r^2 = x^2 + r^2 \implies x^2 = u^2 + 2ru = u(u + 2r) \implies x = \sqrt{u(u + 2r)}.$
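Plugging the numbers from this thread into that last formula (a quick sketch; the variable names are mine):

```python
import math

r = 6371.0   # Earth's radius in km, as given above
u = 353.0    # ISS altitude in km

x = math.sqrt(u * (u + 2 * r))    # tangent length from the external point
print(round(x))                   # 2150, matching the computation in the question
```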
3. ## Re: Circle questions -- Secants, chords and tangents
The ISS is 6724 km above the center of the earth. At that point the angle between straight down and the visible horizon satisfies $\sin\theta = 6371/6724$, so $\theta \approx 71.35$ deg.
$\cos 71.35^\circ = d/6724$
$d \approx 2150$ km to the horizon
4. ## Re: Circle questions -- Secants, chords and tangents
Hello, StonerPenguin!
The radius of the earth is approximately 6371 km.
If the international space station (ISS) is orbiting 353 km above the earth,
find the distance from the ISS to the horizon (x).
Code:
o
|\
| \
352 | \
| \ x
| \
* * * \
* | *\
* | o
* 6371| * *
| *6371
* | * *
* o *
* *
* *
* *
* *
* * *
Note the right triangle.
The equation is: $x^2 + 6371^2 \:=\: 6724^2$
5. ## Re: Circle questions -- Secants, chords and tangents
Thank you JeffM, bjhopper and Soroban! It's nice to see different perspectives and the image drawn in code is really cool.
Here's another question I've had trouble with:
"Explain how you know $\overline{AB}$ $\overline{CD}$ given E is the center of the circle. (Include theorem numbers.)"
And here's some pertinent theorems;
Theorem 10.4
If one chord is a perpendicular bisector of another chord, then the first chord is a diameter.
Theorem 10.5
If a diameter of a circle is perpendicular to a chord, then the diameter bisects the chord and its arc.
Theorem 10.6
In the same circle, or in congruent circles, two chords are congruent if and only if they are equidistant from the center.
Obviously from the diagram $\overline{AB} \cong \overline{CD}$ by theorem 10.6, but I can't really word this well. Any help? Theorems and proofs are my weakest areas :/
6. ## Re: Circle questions -- Secants, chords and tangents
t is a triangle, a is an angle
t AED congruent to t BEC (isosceles t's, same legs and equal altitudes)
a DEC = 180 - 2*(1/2)*a AED
a AEB = 180 - 2*(1/2)*a BEC, and a AED = a BEC
a AEB = a DEC
AB = DC, since equal arcs give equal chords | 2015-05-27T11:02:35 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/geometry/227392-circle-questions-secants-chords-tangents.html",
"openwebmath_score": 0.6830456852912903,
"openwebmath_perplexity": 3122.0125982177237,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9770226327661525,
"lm_q2_score": 0.8807970732843033,
"lm_q1q2_score": 0.8605586754729518
} |
https://math.stackexchange.com/questions/1752912/is-this-inequality-true-for-all-k-sum-n-kn-infty-frac1n4-leq | Is this inequality true for all k ? $\sum_{n=k}^{n=+\infty} \frac{1}{n^4} \leq (\sum_{n=k}^{n=+\infty} \frac{1}{n^2})^3$
Can it be generalized for other powers ? Wolfram seems to say it is true for k below 20000.
I stumbled upon it randomly when trying to approximate $\sum_{n=1}^{n=+\infty} \frac{1}{n^4}$.
My reasoning was :
$$\left(\sum_{n=k}^{n=+\infty} \frac{1}{n^2}\right)^2=\sum_{n=k}^{n=+\infty} \frac{1}{n^4} + (\text{double products}) \geq\sum_{n=k}^{n=+\infty} \frac{1}{n^4}$$
So
$$\sum_{n=1}^{n=+\infty} \frac{1}{n^4} \leq \sum_{n=1}^{n=k-1} \frac{1}{n^4}+\left(\sum_{n=k}^{n=+\infty} \frac{1}{n^2}\right)^2 \leq \left(\sum_{n=1}^{n=k-1} \frac{1}{n^4}\right)+\left(\frac{1}{k-\frac{1}{2}}\right)^2$$
where the last inequality comes from An inequality: $1+\frac1{2^2}+\frac1{3^2}+\dotsb+\frac1{n^2}\lt\frac53$.
Then I noticed that, perhaps, I could raise the last term to the power of 3 instead of just 2, making the inequality stronger.
• The LHS behaves like $\int_{k}^{+\infty}\frac{dx}{x^4}=\frac{1}{3k^3}$ while the RHS behaves like $\left(\int_{k}^{+\infty}\frac{dx}{x^2}\right)^3=\frac{1}{k^3}$, so that is not surprising. – Jack D'Aurizio Apr 21 '16 at 15:36
For $k > 1$, $$\sum_{n=k}^\infty \dfrac{1}{n^4} < \int_{k-1}^\infty \dfrac{dx}{x^4} = \dfrac{1}{3(k-1)^3}$$
$$\left(\sum_{n=k}^\infty \dfrac{1}{n^2}\right)^3 > \left(\int_{k}^\infty \dfrac{dx}{x^2}\right)^3 = \dfrac{1}{k^3}$$
$\dfrac{1}{k^3} > \dfrac{1}{3(k-1)^3}$ when $3(k-1)^3 > k^3$, which is true for $k > 3.2612$.
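A rough numerical check of the original inequality for a few values of $k$ (the tails are truncated at an arbitrary cutoff of mine, so the figures are approximations only):

```python
def tail(power, k, upper=200_000):
    """Truncated approximation of sum_{n=k}^infinity 1/n^power."""
    return sum(1.0 / n**power for n in range(k, upper))

for k in (2, 5, 10, 50, 100):
    lhs = tail(4, k)
    rhs = tail(2, k) ** 3
    print(f"k={k}: lhs={lhs:.3e}  rhs={rhs:.3e}  lhs <= rhs? {lhs <= rhs}")
```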
Let's see what we can say about comparing $s_1(k) =\sum_{n=k}^{\infty} \frac1{n^a}$ with $s_2(k) =\left(\sum_{n=k}^{\infty} \frac1{n^b}\right)^c$ for large enough $k$, where $a > 1$ and $b > 1$ so the sums converge.
Using the integral approximation, $s_1(k) =\sum_{n=k}^{\infty} \frac1{n^a} \approx \int_k^{\infty} \frac{dx}{x^a} =-\frac1{(a-1)x^{a-1}}\big|_k^{\infty} =\frac1{(a-1)k^{a-1}}$.
Therefore $s_2(k) =\left(\sum_{n=k}^{\infty} \frac1{n^b}\right)^c \approx \left(\frac1{(b-1)k^{b-1}}\right)^c =\frac1{(b-1)^c k^{c(b-1)}}$, so $\dfrac{s_1(k)}{s_2(k)} \approx \dfrac{\frac1{(a-1)k^{a-1}}}{\frac1{(b-1)^c k^{c(b-1)}}} = \dfrac{{(b-1)^c k^{c(b-1)}}}{{(a-1)k^{a-1}}} = \dfrac{{(b-1)^c }}{{(a-1)}}k^{c(b-1)-(a-1)}$.
Therefore if $c(b-1) > a-1$, then $s_1(k) > s_2(k)$ for all large enough $k$; if $c(b-1) < a-1$, then $s_1(k) < s_2(k)$ for all large enough $k$;
If $c(b-1) = a-1$, then $\dfrac{s_1(k)}{s_2(k)} \approx \dfrac{{(b-1)^c }}{{(a-1)}}$, so the result depends on this ratio.
For your case of $a=4$ and $b=2$, the key difference is $c(b-1)-(a-1) =c-3$.
If $c > 3$, then $s_1(k) > s_2(k)$ for large enough $k$; if $c < 3$, then $s_1(k) < s_2(k)$ for large enough $k$.
If $c=3$, which is your case, the ratio is $\dfrac{(b-1)^c}{(a-1)} = \dfrac{1}{3} < 1$, so $s_1(k) \approx \frac13 s_2(k) < s_2(k)$ for large enough $k$, which confirms Robert Israel's result (good thing too, because any result of mine that differs from a result of his is probably wrong). | 2019-09-23T13:35:44 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1752912/is-this-inequality-true-for-all-k-sum-n-kn-infty-frac1n4-leq",
"openwebmath_score": 0.9804154634475708,
"openwebmath_perplexity": 114.20214778844415,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9770226347732859,
"lm_q2_score": 0.880797068590724,
"lm_q1q2_score": 0.8605586726550958
} |
https://math.stackexchange.com/questions/3792683/for-which-values-of-alpha-is-z-n-a-bounded-sequence | # For which values of $\alpha$ is {$z_n$} a bounded sequence?
Where $$\alpha$$ is a real constant, consider the sequence {$$z_n$$} defined by $$z_n=\frac{1}{n^\alpha}$$. For which value of $$\alpha$$ is {$$z_n$$} a bounded sequence?
How do I start with this kind of question? I think that $$\forall\space \alpha\in\Bbb{R}_{\geq0}$$ the sequence is convergent and therefore bounded, but how do I write it out?
• In order for us to tell you how to write things out, it would be helpful if you explained why you believe that the answer is what it is. Why do you think that $\frac 1{n^{\alpha}}$ converges for $\alpha \geq 0$? Are you saying that $\frac 1{n^{\alpha}}$ is not bounded when $\alpha < 0$? If so, then you must say so explicitly in your answer. Also, why do you believe that this is the case? – Ben Grossmann Aug 16 '20 at 10:16
• Because it is clear that for $\alpha\geq0$ the sequence converges to 0. If $\alpha<0$ then the value of $\frac{1}{n^\alpha}$ will become very big unless $\alpha>-\frac{1}{n}$. I might be wrong, but this is what I think. I don't know how to approach this question. – Jess Aug 16 '20 at 10:20
• By giving the answer that you have given, you have not only "approached" the problem correctly, but also have given an almost complete answer. It seems that your only question, then, is how to write this up with sufficient "formality." – Ben Grossmann Aug 16 '20 at 10:25
## 2 Answers
If $$\alpha=0$$, $$(z_n)$$ is constant, hence bounded.
If $$\alpha>0$$, $$(z_n)$$ converges to 0 and is thus bounded.
If $$\alpha<0$$, $$(z_n)$$ diverges to $$+\infty$$ and is thus unbounded.
As I state in the comment, you have the correct answer. The only remaining task is to give a formal explanation of the answer. One way to write an answer is as follows:
First, we note that the function $f: [1,\infty) \to \Bbb R$ defined by $f(x) = x^{\beta}$ satisfies $$\lim_{x \to \infty}f(x) = \begin{cases} 0 & \beta < 0\\ 1 & \beta = 0\\ \infty & \beta > 0. \end{cases}$$ I suspect that you do not need to prove this statement formally: it is likely that there is a statement in the textbook that you can refer to.
With that established, address the problem in $3$ cases: in the case that $\alpha < 0$, conclude using the above fact that $\lim_{n \to \infty} z_n = \infty$, which means that the sequence is not bounded. In the case that $\alpha = 0$, conclude that $z_n \to 1$, which means that the sequence is convergent and is therefore bounded. Similarly, if $\alpha > 0$, conclude that $z_n \to 0$, which means that the sequence is convergent and therefore bounded.
Thus, we conclude that the sequence is bounded if and only if $$\alpha \geq 0$$.
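A quick numerical illustration of the three cases (the sample values of $\alpha$ and $n$ are my own choices):

```python
for alpha in (-1, -0.5, 0, 0.5, 2):
    print(alpha, [1 / n**alpha for n in (1, 10, 100, 1000)])
# alpha < 0: the values blow up (unbounded); alpha = 0: constant 1; alpha > 0: they shrink to 0
```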
• If the sequence is bounded, does that mean that the limit of the sequence exists? Because I just noticed that the next question asks me to find the values of $\alpha$ for which lim$_{n\to\infty}z_n$ exists. I assume the values of $\alpha$ are the same as which $z_n$ is a bounded sequence? – Jess Aug 16 '20 at 10:40
• It is not necessarily the case that a bounded sequence will have a limit. However, we can clearly see by exhausting the possibilities that for this problem, the sequence $z_n$ converges whenever it is bounded. – Ben Grossmann Aug 16 '20 at 10:42 | 2021-07-31T10:07:08 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3792683/for-which-values-of-alpha-is-z-n-a-bounded-sequence",
"openwebmath_score": 0.9731109738349915,
"openwebmath_perplexity": 97.92443506847972,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9752018383629826,
"lm_q2_score": 0.8824278788223264,
"lm_q1q2_score": 0.86054528965028
} |
https://math.stackexchange.com/questions/3743275/a-fair-coin-is-tossed-untill-head-appears-for-the-first-time-what-is-probabilit | # A fair coin is tossed untill head appears for the first time. What is probability that the number of tosses required is odd? [duplicate]
Q. A fair coin is tossed until a head appears for the first time. What is the probability that the number of tosses required is odd?
My work:
suppose that head comes in first toss so probability of getting head in the first toss $$=\dfrac{1}{2}$$
suppose that first & second tosses show tails & third toss shows head so probability of getting head in the third toss $$=(1-\dfrac12)(1-\dfrac12)\dfrac{1}{2}$$ $$=\dfrac1{2^3}$$
suppose that first 4 tosses show tails & fifth toss shows head so probability of getting head in the fifth toss $$=(1-\dfrac12)^4\dfrac{1}{2}$$ $$=\dfrac1{2^5}$$
suppose that first 6 tosses show tails & seventh toss shows head so probability of getting head in the seventh toss $=(1-\dfrac12)^6\dfrac{1}{2}$ $=\dfrac1{2^7}$
…………….
and so on
But I am not able to find the final probability of getting head first time so that the number of tosses required is odd. what should do I next to it? please help me.
• You're almost there. Just recognize that you have a geometric series and take its sum. To allow you to check your work, the answer will be $\frac 23$. – Robert Shore Jul 3 at 4:06
• This is an exact duplicate of math.stackexchange.com/q/834344/117057 , which unfortunately doesn't have an accepted answer. – shoover Jul 3 at 22:27
You have a geometric series,
$$\frac12+\frac1{2^3}+\frac1{2^5}+\frac1{2^7}+\ldots=\sum_{n\ge 0}\frac12\cdot\left(\frac14\right)^n=\frac{\frac12}{1-\frac14}=\frac23\;.$$
Alternatively, if $$p$$ is the desired probability, then $$p=\frac12+\frac14p$$: with probability $$\frac12$$ you get a head on the first toss, and with probability $$\frac14$$ you start with two tails and are now in exactly the same position that you were in at the beginning. Solving this for $$p$$ again yields $$p=\frac23$$.
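For readers who like to check such results by simulation, a small Monte Carlo sketch (the trial count is an arbitrary choice of mine):

```python
import random

def first_head_is_odd():
    toss = 1
    while random.random() < 0.5:   # tails: keep tossing
        toss += 1
    return toss % 2 == 1           # was the first head on an odd-numbered toss?

trials = 100_000
print(sum(first_head_is_odd() for _ in range(trials)) / trials)   # about 0.667, i.e. 2/3
```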
• Your second method is very simple. I have never thought of it. Hats off to you sir – user801303 Jul 3 at 4:44
• @PaulAldrin: You’re welcome. – Brian M. Scott Jul 3 at 4:44
• @TShiong: When you can find a recurrence like that, it usually makes things a lot simpler! – Brian M. Scott Jul 3 at 4:45
Your approach is good and will get you the right answer. Just realize that you're building a geometric series and you want its sum.
I'm drafting an answer to give you an alternative approach. Let $$p$$ be the probability you're looking for. Then your first flip will be heads with probability $$0.5$$. If it's tails, then you will solve your original problem (the first heads occurs on an odd toss) exactly when, from your new starting point, your first heads occurs on an even toss, which happens with probability $$1-p$$.
That means $$p = 0.5 + 0.5(1-p) \Rightarrow 1.5 p = 1 \Rightarrow p = \frac 23$$.
• haven't you used sum of infinite GP? – user805532 Jul 3 at 4:38
• No, I think I've provided an alternative approach that avoids the need, just as does the alternative solution provided by Brian M Scott. – Robert Shore Jul 3 at 8:35
Here's another way to solve the problem by considering pairs of tosses at a time instead of single tosses.
Let $$\sigma$$ be an arbitrary infinite sequence of heads and tails. $$\sigma$$ uses 1-based indexing.
$$\text{e.g.}\;\;\; \sigma = HTHTHTHTHTTTTTHHHH\cdots$$
Imagine grouping the elements of $$\sigma$$ into pairs.
$$\sigma = HT,HT,HT,HT,HT,TT,TT,HH,HH\cdots$$
Let's imagine we have three states, $$S$$, $$E$$, and $$O$$.
• $$S$$ is the start state, we haven't seen a head yet.
• $$E$$ is the state marking that we saw a head at an even index first.
• $$O$$ is the state marking that we saw a head at an odd index first.
$$E$$ and $$O$$ are both absorbing states. Once we enter one of those states, we are never going to leave it.
Our state at the beginning of our process is always $$S$$ because, initially, we haven't observed any tosses at all of our coin.
Next let's consider what happens when we read our first toss-pair from $$\sigma$$.
There are twice as many ways to transition from $$S$$ to $$O$$ than there are to transition from $$S$$ to $$E$$.
S --TT--> S
S --TH--> E
S --HT--> O
S --HH--> O
As the number of pairs processed approaches infinity, the probability that the current state is $$S$$ approaches zero.
However, the probability that the current state is $O$ is always twice the probability that the current state is $E$.
Therefore, the limiting probability that the state is $$O$$ is $$2/3$$
You are almost done. Add all the terms
$$\frac12+\frac1{2^3}+\frac1{2^5}+\frac1{2^7}+\ldots$$ The above series is an infinite GP with first term $a=\dfrac{1}{2}$ and common ratio $r=\dfrac{1}{4}$, so its sum is $\dfrac{a}{1-r}$
$$=\frac{\frac{1}{2}}{1-\frac{1}{4}}$$$$=\frac23$$
• does this really have infinite terms which can be added? – user805532 Jul 3 at 4:38
• Of course it has uncountable number of terms. I 've added them in my answer – user801303 Jul 3 at 4:40
• it's of course countable – RiaD Jul 3 at 12:58
• @PaulAldrin none of the terms are infinite in value, but they are infinitely many. Name a large odd number n and I can respond with a term $1/2^n$. – neptun Jul 3 at 13:09
• r should be 1/4, not 1/2 – Mark Pattison Jul 3 at 13:32
Another way to avoid summing a GP.
The probability of the first head happening on toss $$n$$ is $$(1/2)^{n}$$. Let $$A$$ be the event that the first head occurs on an odd toss, and $$E_k$$ be the event that it occurs either on toss $$2k+1$$ or $$2k+2$$. Now $$P(A\mid E_k)=\frac{(1/2)^{2k+1}}{(1/2)^{2k+1}+(1/2)^{2k+2}}=\frac{2}{3},$$ independently of $$k$$. Since exactly one of the $$E_k$$ will occur almost surely, we have $$P(A)=\sum_kP(A\mid E_k)P(E_k)=\sum_k\frac23P(E_k)=\frac23\sum_kP(E_k)=\frac23.$$ | 2020-08-04T08:40:40 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3743275/a-fair-coin-is-tossed-untill-head-appears-for-the-first-time-what-is-probabilit",
"openwebmath_score": 0.7943496704101562,
"openwebmath_perplexity": 236.79774041093626,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9752018398044144,
"lm_q2_score": 0.8824278741843884,
"lm_q1q2_score": 0.8605452863993139
} |
http://math.stackexchange.com/questions/14530/how-to-average-cyclic-quantities/17624 | # How to average cyclic quantities?
Looking on Internet, I mostly found this definition:
Given quantities on a cyclic domain D, first rescale the domain to [0;2$\pi$[, then, find $z_n$ the point on the unit circle corresponding to the $n$th value, and compute the average by:
$$z_m = \sum_{n=1}^N z_n$$
The average angle is then $\theta_m = \arg z_m$ and to obtain the average value you scale back to your original domain D.
I must say, I have problems with this definition. For simplicity, I will use oriented angles in degree for my examples (i.e. D = [0;360[). With this formula, having angles -90, 90 and 40 will give 40 as mean angle, when I would expect 13.33 as an answer (i.e. (90-90+40)/3).
For my own problems, I usually use:
$$v_m = \mathop{\rm arg\,min}_{v\in D} \sum_{n=1}^{N} d(v_n,v)^2$$
Where $d(x,y)$ is the distance in the cyclic domain, and $\{v_1, v_2, \ldots, v_n\}$ is the set of cyclic data I want to average of.
It has the advantage to work the same way whatever the domain (replace D by a non-cyclic domain and $d$ with the usual euclidean distance, and you find the usual definition of an average). However, it is expensive to compute and I don't know any exact method to do it in general.
So my question is: what is the appropriate way to deal with average of cyclic data? And do you have good pointers that explain the problem and its solutions?
-
en.wikipedia.org/wiki/Circular_mean – Rasmus Dec 16 '10 at 19:35
I know about this page. However, there is no justification for it. Also, I left a comment in the discussion of this page in the hope of understanding. But for now, I still disagree with this method to calculate the average. – PierreBdR Dec 17 '10 at 17:38
The choice of distance metric depends crucially on the application. Bearing data, for instance, might be derived from estimates of X and Y with normally distributed errors, and this leads naturally to the circular mean. For other cases this might not be a good choice. – wnoise Apr 4 '11 at 18:52
Like all averages, the answer depends upon the choice of metric. For a given metric $M$, the average of some angles $a_j \in [-\pi,\pi]$ for $j \in [1,N]$ is that angle $\bar{a}_M$ which minimizes the sum of squared distances $d^2_M(\bar{a}_M,a_j)$. For a weighted mean, one simply includes in the sum the weights $w_j$ (such that $\sum_j w_j = 1$). That is,
$$\bar{a}_M = \mathop{\rm arg\,min}_{x} \sum_{j=1}^{N}\, w_j\, d^2_M(x,a_j)$$
Two common choices of metric are the Frobenius and the Riemann metrics. For the Frobenius metric, a direct formula exists that corresponds to the usual notion of average bearing in circular statistics. See "Means and Averaging in the Group of Rotations", Maher Moakher, SIAM Journal on Matrix Analysis and Applications, Volume 24, Issue 1, 2002, for details. http://lcvmwww.epfl.ch/new/publications/data/articles/63/simaxpaper.pdf
-
Thank you, it is exactly the kind of reference I was looking for! – PierreBdR May 9 '11 at 13:54
The problem with expecting the mean of 90°, −90°, and 40° to be (90°−90°+40°)/3 = 13.33° is that you would then expect the mean of 10° and 350° to be (10°+350°)/2 = 180°, and not 0° which is the more reasonable answer. It only gets worse when you have more than two angles (What is the mean of 340°, 350°, 360°, 10°, and 20°? What about 340°, 350°, 0°, 10°, and 20°?). Essentially, what you're doing there is equivalent to setting $z_n = e^{i\theta_n}$ and computing $$\bar z = (z_1 z_2 \cdots z_N)^{1/N},$$ and the problem is of course that it's not obvious a priori which of the $N$ possible roots of that equation is the right one, if any.
The "circular mean" definition is not so bad. In fact, it corresponds to the point which minimizes the sum of its squared distances to the points corresponding to the data, $$\bar z = \underset{\lvert z \rvert = 1}{\arg\min} \sum_{n=1}^N \lvert z - z_n \rvert^2.$$ So this is almost the same as the formula you like to use; you only have to define the "distance" between angles as the distance between the corresponding points on the unit circle. That is, $d(\theta, \phi) = \sqrt{2 - 2\cos(\theta-\phi)} = 2 \sin(\lvert\theta-\phi\rvert/2)$. This metric is close to $\lvert\theta-\phi\rvert$ when $\theta$ and $\phi$ are close, and has the advantage of being really easy to find the solution to.
-
Of the $N$ possible roots, one will be the global minimum of his distance functions, so checking all of them is sufficient. (See the reference Rob Johnson gave). – wnoise Apr 4 '11 at 19:02
The angle is supposed to be the independent variable, not the dependent variable. If your function is cyclic (using degrees), $z(-90)=z(270)$ so it doesn't matter which you use. Then the average value of the function is $$z_m=\frac{1}{N}\sum_{n=1}^Nz(\theta_n)$$ You are right that if you average the angles, it matters which lap of the circle you use (-90 vs 270) because the difference of 360 gets divided by N.
-
This is what I am reading on Internet, but I have issues with this definition (as described in my post) and there is never a good explanation on why we should use this also for cyclic data. Could you expand? – PierreBdR Dec 16 '10 at 17:54
The definition of a cyclic function is that f(x+T)=f(x) for some period T. The recommendation you saw was to rescale the units of x so that the period is 2pi radians or 360 degrees. I don't think that is important, but was trying to stay with it. As the function is cyclic, it doesn't matter which cycle you take the data from. So if you want the average value over a cycle it would be (if we take T to be 360) $$\frac{1}{360}\int_a^{a+360}f(x)dx$$ You can then approximate this by a sum just by taking equally spaced steps like $$\frac{1}{n}\sum_{i=0}^{n-1}f(\frac{360i}{n})$$ – Ross Millikan Dec 16 '10 at 22:06
It looks like I am working on a different problem than you. Did you check the Wikipedia page that Rasmus suggested? Is that what you are after? – Ross Millikan Dec 16 '10 at 22:27
For angles, one can adapt the iterative way of computing means to angles, that is:
given angles v[1] .. v[n]
m[1] = v[1]
m[i] = remainder( m[i-1] + remainder( v[i]-m[i-1], C)/i, C) (i=2..n)
where remainder(x,y) gives the signed remainder on dividing x by y and C is the measure of a circle.
I suspect this gives your vm, but haven't been able to prove it.
-
A quick check indicates it might not work. Applying this formula to the same set of values, simply taking the elements in various order leads to different results. – PierreBdR Dec 17 '10 at 17:44 | 2014-03-11T21:38:31 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/14530/how-to-average-cyclic-quantities/17624",
"openwebmath_score": 0.8893794417381287,
"openwebmath_perplexity": 316.0803193454473,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9752018426872776,
"lm_q2_score": 0.8824278664544912,
"lm_q1q2_score": 0.8605452814050227
} |
https://math.stackexchange.com/questions/3495575/how-to-derive-the-identity-sin-x-x-prod-n-1-infty-cosx-2n-without-usi/3495595 | How to derive the identity $\sin x/x=\prod_{n=1}^\infty \cos(x/2^n)$ without using telescoping?
I am wondering how to derive the following equality $$\frac{\sin x}{x}=\prod_{n=1}^\infty \cos(x/2^n)\tag{1}$$ without using the method of telescoping. I know that there is already a question on deriving this infinite product representation of $$\sin x/x$$, but all the answers in the link telescope the product. Here is the method, for completeness.
First, we can use the trigonometric identity $$\sin x=2\cos (x/2)\sin(x/2)$$ to yield $$\cos(x/2)=\frac{\sin x}{2 \sin(x/2)}$$. More generally, this implies that $$\cos(x/2^{n})=\frac{\sin (x/2^{n-1})}{2 \sin(x/2^{n})}$$
Our infinite product is thus $$\prod_{n=1}^\infty\cos(x/2^n)=\frac{\sin (x)}{2 \sin(x/2)}\cdot \frac{\sin (x/2)}{2 \sin(x/4)} \cdot \frac{\sin (x/4)}{2 \sin(x/8)} \cdots$$ Treating the product as a limit of a finite product $$f_k(x)=\prod_{n=1}^k \cos(x/2^n)$$, we notice that $$f_k(x)=\frac{\sin(x)}{2^k\sin(x/2^k)},$$ with $$\lim_{k\to\infty} f_k(x)=\sin x/x$$. Thus, $$\frac{\sin x}{x}=\prod_{n=1}^\infty \cos(x/2^n).$$
Question:
How to show that $$(1)$$ is true without using telescoping?
• Why do you want to avoid telescoping? – Lucas Henrique Jan 2 at 23:33
• Note the Weierstrass factorization theorem:\begin{align*} \frac{\sin x}{x}&=\prod_{n=1}^\infty\left(1-\frac{x^2}{n^2\pi^2}\right)\\ \cos\frac{x}{2^n}&=\prod_{m=0}^\infty\left(1-\frac{x^2}{(2m+1)^2 2^{2(n-1)}\pi^2}\right) \end{align*} – Edward H Jan 2 at 23:44
• You could use the fact that $\sum_{n\ge1}\pm 2^{-n}$ (where the signs are chosen independently and uniformly) is uniformly distributed on $[-1,1]$; the desired equality relates two formulas for the characteristic function of such a random variable. – kimchi lover Jan 2 at 23:50
• @kimchilover very nice – Sandeep Silwal Jan 3 at 0:10
• @Lucas Henrique I am curious to know if there are other ways to evaluate this product. – Zachary Jan 3 at 2:13
You can do this by a trick that is essentially the same as looking at this product in frequency domain.
To avoid any analytical difficulty, let's examine finite products; we have $$\prod_{n=1}^{k}\cos(x/2^n)=\prod_{n=1}^k\left(\frac{e^{ix/2^n}+e^{-ix/2^n}}{2}\right)$$ We can expand the product on the right as $$\frac{1}{2^k}\sum_{\sigma\in\{-1,1\}^k}\exp\left(ix\cdot \left(\sigma_1\cdot \frac{1}2+\sigma_2\cdot \frac{1}{2^2}+\ldots+\sigma_k\cdot \frac{1}{2^k}\right)\right)$$ where $\sigma$ is a string of $k$ terms in $\{-1,1\}$ recording which of the two exponential terms was taken in each factor of the product.
One can see that for $$n=1$$, the angular frequencies encountered (i.e. the coefficient of $$ix$$) are $$1/2$$ and $$-1/2$$ . For $$n=2$$, the frequencies are $$-3/4,\,-1/4,\,1/4,\,3/4$$. We can prove via induction that the possible values of that coefficient are just the set of numbers of the form $$a/2^k$$ for odd integers $$a$$ between $$-2^k$$ and $$2^k$$. Thus, the partial sum works out to: $$\frac{1}{2^k}\cdot \sum_{\substack{a\text{ odd}\\ -2^k < a < 2^k}}\exp\left(ix \cdot \frac{-a}{2^k}\right)$$ We could bail out at this step and recognize that the sum is actually a geometric series (with ratio $$\exp\left(\frac{ix}{2^{k-1}}\right)$$), which would lead us back to the expression you derived for the partial sums. However, we could also recognize this an average of equally spaced evaluations of the function $$z\mapsto \exp(ix\cdot z)$$ over the interval $$[-1,1]$$ with more evaluations as $$k$$ increases; thus, in the limit, this product becomes an integral giving the average value of $$\exp(ixt)$$ over the interval $$[-1,1]$$: $$\lim_{k\rightarrow\infty}\prod_{n=1}^k\cos(x/2^n) = \frac{1}2\int_{-1}^1\exp(ixt)\,dt$$ Of course, this is just integrating an exponential function, which can be done easily, and works out to $$\frac{\sin(x)}x$$.
Here is a different packaging of the same basic argument presented in Milo Brandt's answer.
Let $$X_k=\sum_{j=1}^k \sigma_j 2^{-j}$$, where the $$\sigma_i$$ are iid $$\pm1$$ random variables. This has uniform distribution on the $$2^k$$ points uniformly spaced $$2^{1-k}$$ apart in the range from $$-1+2^{-k}$$ to $$1-2^{-k}$$, as can be seen from the binary expansions of the integers from $$0$$ to $$2^k$$. One can verify directly that $$X_k$$ converges in distribution to the continuous uniform distribution on $$[-1,1]$$.
The characteristic function of $$X_k$$, namely the function $$\phi_k(t)=E[\exp(itX_k)]$$ is given by $$\prod_{j=1}^k E[\exp(i \sigma_j 2^{-j})] = \prod_{j=1}^k \cos(t2^{-j})$$.
By Lévy's continuity theorem, for each $$t$$, one has $$\lim_{k\to\infty}\varphi_k(t)=\varphi(t)$$, where $$\varphi(t)$$ is the characteristic function of the uniform distribution on $$[-1,1]$$, which is $$\varphi(t)=\frac 12\int_{-1}^1 \exp(itx)\,dx = \frac{\sin(t)}t.$$ | 2020-08-12T23:48:42 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3495575/how-to-derive-the-identity-sin-x-x-prod-n-1-infty-cosx-2n-without-usi/3495595",
"openwebmath_score": 0.9623919725418091,
"openwebmath_perplexity": 121.44768834181576,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575147530352,
"lm_q2_score": 0.8757870046160257,
"lm_q1q2_score": 0.8605111027085272
} |
https://math.stackexchange.com/questions/1250112/number-of-times-2k-appears-in-factorial | # Number of times $2^k$ appears in factorial
For what $n$ does: $2^n | 19!18!...1!$?
I checked how many times $2^1$ appears:
It appears in, $2!, 3!, 4!... 19!$ meaning, $2^{18}$
I checked how many times $2^2 = 4$ appears:
It appears in, $4!, 5!, 6!, ..., 19!$ meaning, $4^{16} = 2^{32}$
I checked how many times $2^3 = 8$ appears:
It appears in, $8!, 9!, ..., 19!$ meaning, $8^{12} = 2^{36}$
I checked how many times $2^{4} = 16$ appears:
It appears in, $16!, 17!, 18!, 19!$ meaning, $16^{4} = 2^{16}$
In all,
$$2^{18} \cdot 2^{32} \cdot 2^{36} \cdot 2^{16} = 2^{102}$$
But that is the wrong answer, its supposed to be $2^{150}$?
• Note that, for example, $6!$ contributes 4 factors of 2 - one from 2, one from 6 and two from 4. You only count 3 of these. – Wojowu Apr 24 '15 at 16:37
A simple trick to compute $k$ such that $2^k|n!$ is to compute $\sum_{i=1}^\infty \left\lfloor\frac{n}{2^i}\right\rfloor$; this is because among $1,\dots,n$ there are $\left\lfloor n/2\right\rfloor$ numbers divisible by $2$, and if we pick out these numbers we find that $\left\lfloor n/4\right\rfloor$ of them are divisible by $4$, and so on. Continuing this procedure, we see that $$k=1\cdot\left(\left\lfloor\frac{n}{2}\right\rfloor-\left\lfloor\frac{n}{4}\right\rfloor\right)+2\left(\left\lfloor\frac{n}{4}\right\rfloor-\left\lfloor\frac{n}{8}\right\rfloor\right)+\ldots=\sum_{i=1}^\infty \left\lfloor\frac{n}{2^i}\right\rfloor.$$ In this case, we have to sum $$0+1+1+3+3+4+4+7+7+8+8+10+10+11+11+15+15+16+16=150.$$ Your mistake is that you did not count the contributions of the numbers that are not powers of $2$. For instance, $14$ contributes a factor of $2$ in $14!$.
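The formula is easy to check by machine; a short sketch (the function name is mine) that reproduces the list and the total above:

```python
def v2_factorial(n):
    """Exponent of 2 in n!, via Legendre's formula: sum of floor(n / 2^i)."""
    total, power = 0, 2
    while power <= n:
        total += n // power
        power *= 2
    return total

print([v2_factorial(n) for n in range(1, 20)])        # 0, 1, 1, 3, 3, 4, 4, 7, 7, ...
print(sum(v2_factorial(n) for n in range(1, 20)))     # 150
```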
How many times $2$ divides the product $\prod_{i=1}^{19}i!$ ?
Let's call each term inside a factorial $i$. That way, $i = 1$ occurs in 19 factorials, $i = 2$ occurs in 18 factorials, and $i = 3$ occurs in 17 factorials etc.
$i = 2$ occurs 18 times. $1 \times 18 = 18$
$i = 4$ occurs 16 times. $2 \times 16 = 32$
$i = 6$ occurs 14 times. $1 \times 14 = 14$
$i = 8$ occurs 12 times. $3 \times 12 = 36$
$i = 10$ occurs 10 times. $1 \times 10 = 10$
$i = 12$ occurs 8 times. $2 \times 8 = 16$
$i = 14$ occurs 6 times. $1 \times 6 = 6$
$i = 16$ occurs 4 times. $4 \times 4 = 16$
$i = 18$ occurs 2 times. $1 \times 2 = 2$
$$18 + 32 + 14 + 36 + 10 + 16 + 6 + 16 + 2 = 150$$
It might be more helpful to do this recursively.
Let $T(n) = \prod_{k=1}^n k!$.
We will use the notation: $2^{r} \| m$ to mean that $2^r$ is the largest power of $2$ that divides $m$.
Then we have $2 \| 2! = T(2)$. We also know that $2 \| 3!$, so $2^2 \| T(3) = 3! T(2)$. Continuing:
$$2^3 \| 4!$$ $$2^3 \| 5!$$ $$2^4 \| 6!$$ $$2^4 \| 7!$$ $$2^7 \| 8!$$ $$2^7 \| 9!$$ $$2^8 \| 10!$$ $$2^8 \| 11!$$ $$2^{10} \| 12!$$ $$2^{10} \| 13!$$ $$2^{11} \| 14!$$ $$2^{11} \| 15!$$ $$2^{15} \| 16!$$ $$2^{15} \| 17!$$ $$2^{16} \| 18!$$ $$2^{16} \| 19!$$
If we take the sum of all of those powers, $$2\cdot 1 + 2\cdot 3 + 2\cdot 4 + 2\cdot 7 + 2\cdot8 + 2 \cdot 10 + 2 \cdot 11 + 2 \cdot 15 + 2 \cdot 16$$
$$=2(1+3+4+7+8+10+11+15+16) = 2(75) = 150.$$ | 2021-05-09T23:43:57 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1250112/number-of-times-2k-appears-in-factorial",
"openwebmath_score": 0.8817852139472961,
"openwebmath_perplexity": 121.08447970920729,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575157745541,
"lm_q2_score": 0.8757870029950159,
"lm_q1q2_score": 0.8605111020104248
} |
https://www.physicsforums.com/threads/can-someone-please-help-me-factorise-a-fraction-u-2-u-2-3u-2.248655/ | 1. Aug 6, 2008
### laura_a
1. The problem statement, all variables and given/known data
I am working on a topic called differential equations, and Im stuck on some working out.
2. Relevant equations
I have to integrate it, and I know I can without factorising, but it is so messy; my professor said to factorise the fraction so it looks a bit like $a_1/(bu+c) + a_2/(du+e)$
Well that's what the professor said, I have no idea what it means... I can factorise as far as
= u/(-u^2 +3u +2) - 2/(-u^2 +3u +2)
And I know that -u^2 +3u +2 = (-u + 1)(u-2) +4
But not sure how to put it all together
Here is the working out for the whole question just in case you're interested
\begin{align*} y' &= \frac{y+2x}{y-2x} \\ \text{Let } y &= ux \\ \text{Then we have } y' &= \frac{ux + 2x}{ux - 2x} \\ &= \frac{u+2}{u-2} \\ \text{Now } y' &= \frac{dy}{dx} = \frac{d(ux)}{dx} = x \frac{du}{dx} + u \\ \text{So we can say that } x \frac{du}{dx} + u &= \frac{u+2}{u-2} \\ x \frac{du}{dx} &= \frac{u+2}{u-2} - u \\ x \frac{du}{dx} &= \frac{u+2}{u-2} - \frac{u^2-2u}{u-2} \\ x \frac{du}{dx} &= \frac{u+2- u^2+2u}{u-2} \\ x \frac{du}{dx} &= \frac{-u^2+3u+2}{u-2} \end{align*}
Then I have to integrate both sides of this equation...which is what I think I need to factorise in order to make it nice and neat
$$\frac{u-2}{-u^2 + 3u + 2}du = \frac{1}{x} dx$$
Last edited: Aug 6, 2008
2. Aug 6, 2008
### HallsofIvy
Staff Emeritus
That can't be factored using integer coefficients, but you know that if an expression factors as $(x-a)(x-b)$ then $a$ and $b$ must be roots of $(x-a)(x-b)= 0$. Use the quadratic formula to find the roots of $x^2- 3x- 2= 0$.
3. Aug 6, 2008
### Defennder
Try completing the square of $$-u^2+3u+2$$. Then use a simple algebraic identity to decompose it into partial fractions.
4. Aug 7, 2008
### laura_a
I think I've gotten a little closer. The question I'm trying to do is to solve a d.e. using change of variables, so once I get this part worked out I should be able to get it
So this is how far I've gone
Since $$u^2- 3 u -2= 0$$
has two roots
$$u_1 = (3+ 17^{0.5})/2, u_2= (3- 17^{0.5})/2$$
so
$$u^2- 3 u -2= (u - (3+ 17^{0.5})/2 ) (u- (3- 17^{0.5})/2 )$$
$$\frac{u-2}{-u^2+3u+2}du = \frac{1}{x}dx$$ - I'll call this equation (2)
So now I've got the LHS as
$$S = -u /(u - (3+ 17^{0.5})/2 ) +2 /(u- (3- 17^{0.5})/2 )$$
I changed the u-2 to -u+2 since I made a similar change on the denominator in order to solve it...
Anyhow, I integrated that and ended up with something really messy involving logs
$$-2(u + 17^{0.5} ln(u-17^{0.5}-3)+3ln(u-17^{0.5}-3)-17^{0.5}-3+4ln(u+17^{0.5}-3) = ln(x) + C$$
(I integrated both sides of equation (2) above)
Well the problem now is, I'm probably wrong anyway, but the next step is to solve for u, and then I have to sub back in the original change of variable which was y=xu and end up with an expression for y in terms of x and C... before I go on, can anyone tell me if I should bother working through those logs, or is it wrong? Thanks :)
5. Aug 8, 2008
### arildno
Let us take this as a GENERAL case, shall we?
We are to decompose:
$$\frac{u-u_{0}}{(u-u_{1})(u-u_{2})}=\frac{A}{u-u_{1}}+\frac{B}{u-u_{2}}$$
where the unindexed u is our variable, the indexed u's known numbers, and A and B constants to be determined.
We therefore must have:
$$u-u_{0}=A(u-u_{2})+B(u-u_{1})$$, or by comparing coefficients, we get the system of equations:
$$A+B=1$$
and
$$u_{2}A+u_{1}B=u_{0}$$
whereby we arrive at the solutions:
$$A=\frac{u_{0}-u_{1}}{u_{2}-u_{1}},B=\frac{u_{2}-u_{0}}{u_{2}-u_{1}}$$
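A small numerical sanity check of these formulas (the test values of $u_0, u_1, u_2$ and the evaluation point are arbitrary choices of mine):

```python
def partial_fraction_coeffs(u0, u1, u2):
    """A and B in (u - u0)/((u - u1)(u - u2)) = A/(u - u1) + B/(u - u2)."""
    A = (u0 - u1) / (u2 - u1)
    B = (u2 - u0) / (u2 - u1)
    return A, B

u0, u1, u2, u = 2.0, 3.6, -0.6, 1.7
A, B = partial_fraction_coeffs(u0, u1, u2)
lhs = (u - u0) / ((u - u1) * (u - u2))
rhs = A / (u - u1) + B / (u - u2)
print(A, B, abs(lhs - rhs) < 1e-12)   # the two sides agree
```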
This is much simpler than using specific numbers! | 2017-09-25T04:59:57 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/can-someone-please-help-me-factorise-a-fraction-u-2-u-2-3u-2.248655/",
"openwebmath_score": 0.9999754428863525,
"openwebmath_perplexity": 1179.0526619355828,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575121992375,
"lm_q2_score": 0.8757869997529962,
"lm_q1q2_score": 0.8605110956937382
} |
https://math.stackexchange.com/questions/1589868/epsilon-delta-definition-of-limit-set-an-upper-bound-to-delta | # Epsilon-delta definition of limit : set an upper bound to delta?
Am I allowed to set an upper bound to delta in the epsilon-delta definition of limit? Why would it still be equivalent to the original definition ? For example, if the function is defined for all $x$ in $\Bbb R$ :
$f : D(f) = \Bbb R \rightarrow \Bbb R$.
Let $\epsilon > 0, \exists \delta > 0 \text{ s.t. } \forall x : 0 < |x-x_o| < \delta < 3 \Rightarrow |f(x) - l| < \epsilon$
$\Leftarrow\text{?}\Rightarrow$
Let $\epsilon > 0, \exists \delta > 0 \text{ s.t. } \forall x : 0 < |x-x_o| < \delta \Rightarrow |f(x) - l| < \epsilon$.
• Let's suppose you found a $\delta$; then the definition still holds for any $\delta^* \le \delta$. Hence, you can choose your $\delta^* \le 3$ – XPenguen Dec 26 '15 at 20:16
• If you find an epsilon which beats a small delta, you've found an epsilon which beats a big one. – Mark Bennet Dec 26 '15 at 20:16
$$\forall\varepsilon > 0\ \exists \delta > 0\ \forall x \Big( 0 < |x-x_o| < \delta < 3 \Rightarrow |f(x) - \ell| < \varepsilon\Big)$$
I would phrase this differently:
$$\forall\varepsilon > 0\ \exists \delta \in(0,3)\ \forall x \Big( 0 < |x-x_o| < \delta \Rightarrow |f(x) - \ell| < \varepsilon\Big)$$
Now the question is whether that is equivalent to this:
$$\forall\varepsilon > 0\ \exists \delta>0\ \forall x \Big( 0 < |x-x_o| < \delta \Rightarrow |f(x) - \ell| < \varepsilon\Big)$$
If it is true that $\forall\varepsilon>0\ \exists\delta\in(0,3)\ \cdots\cdots$ then it is true that $\forall\varepsilon>0\ \exists\delta>0\ \cdots\cdots$, simply because every number in $(0,3)$ is $>0$.
Now the question is whether the converse holds. If it is true that $\forall\varepsilon>0\ \exists\delta>0\ \cdots\cdots$, does it necessarily follow that $\forall\varepsilon>0\ \exists\delta\in(0,3)\ \cdots\cdots\,{}$? Here the answer in general is “no”. I.e. there are some things you could put in place of $\text{“}\cdots\cdots\text{''}$ for which the first statement would be true and the second false. However, in the definition of "limit" the thing in place of $\text{“}\cdots\cdots\text{''}$ is “if a certain thing is $<\delta$, then a certain thing follows.” Given a certain $\varepsilon>0$ suppose we know there exists $\delta>0$ such that if a certain thing is less than $\delta$ then a certain conclusion follows. If the given value of $\delta$ is small enough, then the minimum of that value of $\delta$ and $2.9$ is small enough, because everything less than $\min\{\delta,2.9\}$ is less than $\delta$. So when the statement in place of $\text{“}\cdots\cdots\text{''}$ has the form “if a certain thing is $<\delta$, then a certain thing follows.”, then it is true that if $\forall\varepsilon>0\ \exists\delta>0\ \cdots\cdots$ then $\forall\varepsilon>0\ \exists\delta\in(0,3)\ \cdots\cdots$.
Yes, of course. If the first condition is satisfied, then clearly the second is too. Conversely, if the second condition is satisfied, then one can replace $\delta$ by $\min(\delta, 2)$, say (any positive value below $3$ works), and so the first condition is satisfied.
Yes: since you have to exhibit, for every $\epsilon$, a $\delta$ that works, you can be as artificial as you want if that works for you. The same rule applies in all existence proofs: construct what you need and show that it does work.
Yes you can. The idea is that some proposition $P(x,\epsilon)$ is true for all $x$ that are sufficiently close to $x_0$. So if it holds for all $x\in (-d+x_0,d+x_0)$ then it holds for all $x\in (-d'+x_0,d'+x_0)$ whenever $0<d'<d$. Sometimes it is useful to know how large $d$ can be, for a given $\epsilon.$ | 2019-08-26T02:32:52 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1589868/epsilon-delta-definition-of-limit-set-an-upper-bound-to-delta",
"openwebmath_score": 0.9341692328453064,
"openwebmath_perplexity": 94.55677299488865,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.9648551556203815,
"lm_q2_score": 0.8918110396870288,
"lm_q1q2_score": 0.8604684794812024
} |
https://www.teacherspayteachers.com/Product/Calculus-Study-Guide-for-AP-Calculus-ABBC-or-University-Calculus-2085731 | # Calculus : Study Guide for AP Calculus AB/BC or University Calculus
### PRODUCT DESCRIPTION
Calculus Study Guide suitable for AP Calculus or First Year College Calculus students. Main topics covered: Functions, Limits and Continuity, Derivatives, Applications of the Derivative, Indefinite Integration, Definite Integration, Techniques of Integration, Sequences and Series, L'Hopital's Rule, Improper Integrals, Taylor Series, Short Introduction to Differential Equations.
Calculus without Limits is a self-study guide that can be used as a textbook/lecture supplement by students, as a source of homework problems by the instructor or as a source of lecture material. Covers most topics discussed in a first year calculus course (or AP Calculus). The emphasis is on solved problems that show students how to attack the types of problems they will encounter in homework and on exams. Discussion of concepts is informal followed by example problems. Many practice problems are also included for students.
1 - Function Review
What is a function - Graphing - Even and Odd Functions - Increasing and Decreasing Functions - One to One Functions - Powers and Polynomials - Absolute Value Function - Exponential Function - Logarithm - Trigonometric Functions
2- Limits
Definition - Tricks for Doing Limits - Properties of Limits and Rules - Limits at Infinity - One Sided Limits - Continuity
3- Derivative
Notation and Definition in terms of limits - Slope of Tangent Line - Power Rule and Derivative of a constant - Chain Rule - Derivatives of Trig Functions - Product and Quotient Rules
4- Applications of the Derivative
Derivative of Inverse Function - Implicit Differentiation - Application: Rates of Change, Velocity and Acceleration - Finding Minima and Maxima - First and Second Derivative Test - Inflection Points - The Mean Value Theorem - Related Rates
5- The Integral
Notation - Power Rule of Integration - Integration of Trig Functions - Integration of inverse function - Integrating Exponential Function - The substitution technique - Integrating Hyperbolic Functions - Definite Integration - Sums and Area Under a Curve - Area Between Two Curves -
6- U-Substitution
Solved Problems
7- Integration by Parts
Definition and When to Use - Examples
8- Integration of Rational Functions
Introduction - Examples
9- Integration Using the Method of Partial Fractions
Discussion - Examples
10- Integrating Powers of Trig Functions
Examples
11- Trig Substitution
Three Cases - Basic Technique -Examples
12 - Improper Integrals
Definition - Examples
13 - Sequences and Infinite Series
Sequences - Limits - Infinite Series
14 Ratio, Comparison, and Integral Tests
Examples
15 - Alternating Series
16 - Power Series and Radius of Convergence
17- Taylor Series Expansions
18- L’Hôpital’s Rule
When to use and Definition - Examples
19- Introduction to Differential Equations
First Order Differential Equations - Second Order Differential Equations
Total Pages: 300
Teaching Duration: 1 Year
Ratings (1 rating): Overall Quality 4.0, Accuracy 4.0, Practicality 4.0, Thoroughness 4.0, Creativity 4.0, Clarity 4.0, Total 4.0 | 2017-01-22T06:17:42 | {
"domain": "teacherspayteachers.com",
"url": "https://www.teacherspayteachers.com/Product/Calculus-Study-Guide-for-AP-Calculus-ABBC-or-University-Calculus-2085731",
"openwebmath_score": 0.8662840723991394,
"openwebmath_perplexity": 1903.1238794883905,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9924227580598143,
"lm_q2_score": 0.8670357563664174,
"lm_q1q2_score": 0.8604660166696371
} |
https://artofproblemsolving.com/wiki/index.php?title=2006_AIME_I_Problems/Problem_10&diff=prev&oldid=135054 | # Difference between revisions of "2006 AIME I Problems/Problem 10"
## Problem
Eight circles of diameter 1 are packed in the first quadrant of the coordinate plane as shown. Let region $\mathcal{R}$ be the union of the eight circular regions. Line $l,$ with slope 3, divides $\mathcal{R}$ into two regions of equal area. Line $l$'s equation can be expressed in the form $ax=by+c,$ where $a, b,$ and $c$ are positive integers whose greatest common divisor is 1. Find $a^2+b^2+c^2.$ [Asymptote figure: the eight unit-diameter circles sit against the coordinate axes in columns of 3, 3 and 2.]
## Solutions
### Solution 1
The line passing through the tangency point of the bottom left circle and the one to its right and through the tangency of the top circle in the middle column and the one beneath it is the line we are looking for: a line passing through the tangency of two circles cuts congruent areas, so our line cuts through the four aforementioned circles splitting into congruent areas, and there are an additional two circles on each side. The line passes through $\left(1,\frac 12\right)$ and $\left(\frac 32,2\right)$, which can be easily solved to be $6x = 2y + 5$. Thus, $a^2 + b^2 + c^2 = \boxed{065}$.
### Solution 2
Assume that if unit squares are drawn circumscribing the circles, then the line will divide the area of the concave hexagonal region of the squares equally (as of yet, there is no substantiation that such would work, and definitely will not work in general). Denote the intersection of the line and the x-axis as $(x, 0)$.
The line divides the region into 2 sections. The left piece is a trapezoid, with its area $\frac{1}{2}((x) + (x+1))(3) = 3x + \frac{3}{2}$. The right piece is the addition of a trapezoid and a rectangle, and the areas are $\frac{1}{2}((1-x) + (2-x))(3)$ and $2 \cdot 1 = 2$, totaling $\frac{13}{2} - 3x$. Since we want the two regions to be equal, we find that $3x + \frac 32 = \frac {13}2 - 3x$, so $x = \frac{5}{6}$.
We have that $\left(\frac 56, 0\right)$ is a point on the line of slope 3, so $y - 0 = 3\left(x - \frac 56\right) \Longrightarrow 6x = 2y + 5$. Our answer is $2^2 + 5^2 + 6^2 = 65$.
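Before assessing that assumption, here is a quick numerical sanity check of the answer itself (a hypothetical Haskell sketch, not part of either solution; the circle centres and helper names below are assumptions read off from the 3-3-2 packing in the figure). Counting grid points of $\mathcal{R}$ on each side of $6x = 2y + 5$ should give two nearly equal halves:

-- assumed centres of the eight unit-diameter circles (columns of 3, 3, 2)
centers :: [(Double, Double)]
centers = [ (0.5,0.5),(0.5,1.5),(0.5,2.5)
          , (1.5,0.5),(1.5,1.5),(1.5,2.5)
          , (2.5,0.5),(2.5,1.5) ]

inR :: (Double, Double) -> Bool
inR (x,y) = any (\(cx,cy) -> (x-cx)^2 + (y-cy)^2 <= 0.25) centers

aboveFraction :: Double
aboveFraction = fromIntegral (length above) / fromIntegral (length pts)
  where h     = 0.005
        pts   = [ (x,y) | x <- [0,h..3], y <- [0,h..3], inR (x,y) ]
        above = filter (\(x,y) -> 6*x - 2*y - 5 > 0) pts

The fraction comes out very close to 0.5, consistent with the line bisecting the area of $\mathcal{R}$.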
We now assess the validity of our starting assumption. We can do that by seeing that our answer passes through the tangency of the two circles, cutting congruent areas, a result explored in solution 1. | 2021-01-17T12:30:25 | {
"domain": "artofproblemsolving.com",
"url": "https://artofproblemsolving.com/wiki/index.php?title=2006_AIME_I_Problems/Problem_10&diff=prev&oldid=135054",
"openwebmath_score": 0.7458829283714294,
"openwebmath_perplexity": 705.5428297619834,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9863631635159684,
"lm_q2_score": 0.8723473730188542,
"lm_q1q2_score": 0.8604513145357217
} |
https://math.stackexchange.com/questions/2172433/how-to-evaluate-int-0-infty-eitx-e-x-dx | # How to evaluate $\int_0^\infty e^{itx} e^{-x} dx$?
When calculating the characteristic function of the exponential distribution function, we need to evaluate the complex-integration: \begin{align*} \int_0^\infty e^{itx}e^{-x} dx \end{align*} for any $t \in \mathbb{R}$.
I understand how to evaluate this integral by treating the real part and imaginary part separately, but I am wondering is there any approach that uses the complex analysis theory? I found some answer uses the seemingly unjustified "fundamental theorem of calculus":
$$\int_0^\infty e^{(it - 1)x} dx = \frac{1}{it - 1}\int_0^\infty e^{(it - 1)x} d(it - 1)x = \frac{1}{1 - it}.$$
I think this solution is lacking any theoretic support (maybe I am wrong, please advise if there is any theorem that supports the above calculation). Specifically, can this integral be evaluated using residue calculus (contour integration)?
• Thanks, but for a complex-valued function, how to justify $\int_0^\infty f'(x) dx = f(x)|_0^\infty$? Could you please provide me some references? – Zhanxiong Mar 5 '17 at 4:01
• Integrating a complex-valued function is the same as a real-valued function. In complex analysis we look at $f(z)$ where $z$ is a complex variable, it is different because we integrate over curves in the complex plane, not over real intervals – reuns Mar 5 '17 at 4:07
You're right to demand justification beyond glib use of the FTC. But it is true that $$\int_0^\infty e^{-ax}dx = \frac{1}{a}$$ whenever $\Re(a)>0.$
In complex analysis, we can still affect the change of variables to $z=ax,$ but the resulting integral is along a ray in the complex plane in the direction of $a,$ not the positive real axis (unless $a$ is a positive real). We can write this (using somewhat bad notation) as $$\int_0^\infty e^{-ax}dx = \frac{1}{a} \int_0^{a\infty}e^{-z}dz.$$
(To see formally that the change of variables works, note that we have the parametrization $\gamma(t) = at$ for $0<t<\infty$ for the ray. We can write the integral of $e^{-z}/a$ along that path as $$\int_0^\infty \frac{e^{-\gamma(t)}}{a}\gamma'(t)dt = \int_0^\infty \frac{e^{-at}}{a}adt = \int_0^\infty e^{-at}dt$$).
Now, the difficult part is how we can justify saying $$\int_0^{a\infty}e^{-z}dz = \int_0^\infty e^{-x}dx = 1.$$
In other words we want to be able to rotate the contour back down to the real axis without changing the value of the integral.
To see why we can do this, imagine doing a integral around a large wedge-shaped contour. It goes out along the real axis to $R \gg 1$ and then goes along a circular path to $Re^{i\arg(a)}$ and then back into the origin along the ray $[0,a\infty)$ that our integral is taken over.
By Cauchy's theorem, the integral along this closed path is zero since $e^{-z}$ is analytic. The integral along the circular path goes to zero as $R\to \infty.$ We can see this cause the integrand decays like $e^{-R}.$ More formally the integral is $$\int_0^{\arg a} e^{-Re^{i\theta}}Re^{i\theta}id\theta$$ and we have $$\left|\int_0^{\arg a} e^{-Re^{i\theta}}Re^{i\theta}id\theta\right| \le \arg(a) \max_\theta|ie^{-Re^{i\theta}}Re^{i\theta}| = \arg(a)Re^{-R\cos(\arg(a))} \to 0$$ as $R\to\infty$
Thus as $R\to \infty,$ the integral along the real axis must cancel out the integral along the ray $[0,a\infty)$ in order that the integral along the closed path be zero as Cauchy's theorem demands. We have $$\int_0^{a\infty}e^{-z}dz = \int_0^\infty e^{-x}dx = 1$$ and therefore $$\int_0^\infty e^{-ax}dx = \frac{1}{a} \int_0^{a\infty}e^{-z}dz = \frac{1}{a}.$$
I've intentionally not said explicitly where $\Re(a)>0$ is used in the above argument. Of course, it's essential. See if you can find where it's used.
• Thanks for your answer, I found my own solution is very similar to yours! +1 – Zhanxiong Mar 5 '17 at 5:14
I figured out a rigorous proof by myself.
If $t = 0$, then it is an integration of a real-valued function, and clearly, $\int_0^\infty e^{-x} dx = 1$.
If $t \neq 0$, without losing of generality, assume $t > 0$. Consider the contour below:
In the picture, $n$ is a positive number that will be sent to $\infty$, the top line passes the origin and the point $(-1, t)$. And we set $f(z) = e^z, z \in \mathbb{C}$. By Cauchy's integration theorem, \begin{align} 0 = \int_\Gamma f(z) dz = \int_{\Gamma_1} e^z dz + \int_{\Gamma_2} e^z dz + \int_{\Gamma_3} e^z dz \end{align}
Let's denote the angle between $\Gamma_1$ and the real axis by $\theta_0$.
Clearly, $\int_{\Gamma_3}e^z dz = \int_{-n}^0 e^x dx = 1 - e^{-n}$.
On $\Gamma_1$, $z$ has the representation $z = (it - 1)x, x \in (0, n/|1 - it|)$, thus \begin{align*} \int_{\Gamma_1}e^z dz = \int_0^{n/\sqrt{1 + t^2}}e^{(it - 1)x}(it - 1) dx = (it - 1) \int_0^{n/\sqrt{1 + t^2}} e^{(it - 1)x} dx \end{align*}
To get the desired result, it remains to show $\int_{\Gamma_2} e^z dz \to 0$ as $n \to \infty$. Let $z = ne^{i\theta}$ with $\theta \in (\theta_0, \pi)$. It follows that \begin{align*} & \left|\int_{\Gamma_2} e^z dz\right| = \left|\int_{\theta_0}^\pi e^{ne^{i\theta}}nie^{i\theta} d\theta\right| \\ \leq & \int_{\theta_0}^\pi e^{n\cos\theta}n d\theta \\ \leq & ne^{n\cos{\theta_0}}(\pi - \theta_0) \to 0 \end{align*} as $n \to \infty$. Here we used the fact that $\pi/2 < \theta_0 < \pi$.
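For what it is worth, the closed form can also be checked numerically (a hypothetical Haskell snippet, not part of the original discussion; the names and the choice $t=2$ are mine). Truncating the integral at $x=40$, where $e^{-x}$ has long since killed the integrand, a midpoint Riemann sum should agree with $1/(1-it)$:

import Data.Complex

t :: Double
t = 2

integrand :: Double -> Complex Double
integrand x = exp (0 :+ t*x) * (exp (-x) :+ 0)   -- e^{itx} e^{-x}

approx, exact :: Complex Double
approx = sum [ integrand (h*(fromIntegral k + 0.5)) * (h :+ 0) | k <- [0 .. n-1] ]
  where n = 400000 :: Int
        h = 40 / fromIntegral n
exact  = 1 / (1 :+ (-t))                         -- 1/(1 - i t) = 0.2 + 0.4 i

The two values agree to several decimal places, as expected.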
The real analysis way : $$f(x) = \frac{e^{(it-1)x}}{it-1}, f'(x) = e^{(it-1)x}, \qquad \int_0^\infty f'(x)dx=f(\infty)-f(0) = \frac{1}{1-it}$$
The complex analysis way, with the (complex) change of variable $z = (it-1)u, dz = (it-1)du$ : $$\int_0^\infty e^{(it-1)u}du = \int_0^{(it-1) \infty } e^{z}\frac{dz}{it-1} = \left.\frac{e^{z}}{it-1}\right|_0^{(it-1)\infty} = \frac{1}{1-it}$$
where $\int_0^{(it-1) \infty }$ is a contour integral | 2019-04-23T11:58:53 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2172433/how-to-evaluate-int-0-infty-eitx-e-x-dx",
"openwebmath_score": 0.9921557307243347,
"openwebmath_perplexity": 135.0969315533633,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9863631663220389,
"lm_q2_score": 0.8723473663814338,
"lm_q1q2_score": 0.8604513104366829
} |
https://math.stackexchange.com/questions/1898858/xy-xy-w-in-mathbbr-is-xwyw-real | # $x+y=xy=w \in \mathbb{R}^+$. Is $x^w+y^w$ real?
Question: For $x,y \in \mathbb{C}$, suppose $x+y=xy=w \in \mathbb{R}^+$. Is $x^w+y^w$ necessarily real?
For instance, if $x+y=xy=3$, then one solution is $x = \frac{3 \pm i \sqrt{3}}{2}$, $y = \frac{3 \mp i \sqrt{3}}{2}$, but $x^3 + y^3 = 0$, which is real.
I've checked this numerically for many values of $w$ that give complex $x$ and $y$ (namely, $w \in (0,4)$.)
• @PatrickStevens Then $x+y$ isn't real. – wythagoras Aug 21 '16 at 10:37
• @PatrickStevens But he required that $x+y=xy\in\Bbb R$.. – BigbearZzz Aug 21 '16 at 10:37
• Sorry all. I wasn't paying attention, clearly. – Patrick Stevens Aug 21 '16 at 10:38
• The answer is yes; the essential idea is that $xy = x+y \in \mathbb{R}$ forces either that $x,y \in \mathbb{R}$, or that $x$ and $y$ are complex conjugates. – Drew N Aug 22 '16 at 2:22
Yes. Since $x + y \in \mathbb{R}$, $y = \overline{x} + r$ for some $r \in \mathbb{R}$. Then $xy = |x|^2 + xr \in \mathbb{R}$, implying that either $r = 0$ or $x \in \mathbb{R}$. Then we do casework:
• If $r = 0$, then $y = \overline{x}$; this leads to
$$x^w + y^w = x^w + \overline{x^w} \in \mathbb{R}.$$
Warning: for this to work, we had to pick the standard branch of the complex logarithm, specifically, the one undefined on the nonpositive real line, whose imaginary part is between $-\pi$ and $\pi$. Once we define $z^w := e^{w \ln z}$, ${(\overline{x})}^w = \overline{x^w}$ is true for this branch of $\ln$ (as $x$ is not nonpositive real), but might not be true for another branch.
This warning does not come into play when $w$ is an integer. But take, for example, $x = 1 + \frac{1 + i}{\sqrt{2}}$, $y = 1 + \frac{1 - i}{\sqrt{2}}$. Then $x + y = xy = 2 + \sqrt{2}$. If we picked a different branch of the complex logarithm, then we could have $x^w + y^w$ not real.
• On the other hand, if $x \in \mathbb{R}$, then $y \in \mathbb{R}$, so $x^w + y^w \in \mathbb{R}$. Since $x + y = xy > 0$, $x,y$ must both be positive, so we have no trouble with a negative base of the exponent.
Note there was nothing special about $w$: we could have reached the stronger conclusion that $x^a + y^a \in \mathbb{R}$ for all $a \in \mathbb{R}$.
• You do need $x,y\geq0$, for your last statement. – wythagoras Aug 21 '16 at 10:54
• @6005: Even with $a>0$, you still need $x,y\not\le 0$. Consider $x=y=-1\in\mathbb R$ and $a=1/2>0$. Then no matter where you do the branch cut, either $x^a=y^a=i$ or $x^a=y^a=-i$. Basically, the equation $\overline x^a = \overline{x^a}$ is nor true for all $x$ if $a$ is not integer. – celtschk Aug 21 '16 at 11:06
• As a second conjecture, I believe the casework depends on the value of $w$. That is, in fact $r = 0$ if and only if $0< w < 4$, and $x \in \mathbb{R}$ if and only if $w \geq 4$ ($w = 0$ is excluded by assumption). – Drew N Aug 21 '16 at 11:28
• @celtschk Thanks for your valuable feedback. I believe I fixed all the problems and provided all the necessary caveats. – 6005 Aug 21 '16 at 11:48
• @DrewN your second conjecture is correct. – 6005 Aug 21 '16 at 11:55
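To see the warning above in action numerically, here is a hypothetical GHCi-style illustration (not part of the thread; the bindings are mine) of the example with $w = 2+\sqrt 2$. Haskell's (**) on Complex Double uses exactly the principal branch discussed there:

import Data.Complex

x, y :: Complex Double
x = 1 + (1 :+ 1) / sqrt 2      -- x = 1 + (1+i)/sqrt 2
y = conjugate x                -- y = 1 + (1-i)/sqrt 2, so x+y = xy = 2+sqrt 2

w :: Double
w = 2 + sqrt 2

s :: Complex Double
s = x ** (w :+ 0) + y ** (w :+ 0)

Here imagPart s vanishes up to floating-point rounding, so the sum is real, as claimed.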
Yes. Write $x=a+bi$, $y=c-di$, then clearly $b=d$ because $x+y$ is real.
So $x=a+bi$, $y=c-bi$. Then $xy=ac-abi+cbi+b^2$, so $a=c$ or $b=0$.
• If $a=c$, then $y = \overline{x}$, i.e. the complex conjugate of $x$.
So $x^w+y^w = x^w+(\overline{x})^w = x^w+\overline{x^w}$, which is real.
• If $b=0$, $x$ and $y$ are real so $x^w+y^w$ is real as $w>0$.
• Thanks for your feedback on my answer earlier. In your answer, note that $(\overline{x})^w = \overline{x^w}$ is only true for a particular branch of complex log, and will never be true for negative real $x$. However, you can guarantee it for all $x$ which aren't negative real by picking the relevant branch of complex log to define your exponential. – 6005 Aug 21 '16 at 11:53
$x+y$ real implies that $\Im(x)=-\Im(y)$. Thus if $x=a+bi$ then $y=c-bi$.
$xy$ real implies that, because $xy=ac+b^2+ib(c-a)$, $b=0\lor c=a$.
If $b=0$ the result is trivial as $x,y\in\mathbb{R}$.
If $c=a$ then $x=\overline{y}$. But then clearly $$x^w+y^w=x^w+\overline{x}^w=\overline{x^w+\overline{x}^w}=\overline{x^w+y^w},$$ where the middle equality holds by symmetry.
But then it must be real, as the only complex numbers satisfying $z=\bar{z}$ are real.
Note: for why/when we are allowed to write $x^w=\overline{\bar{x}^w}$ refer to @6005's answer
• Actually, $x = \overline{(\overline{x})^w}$ is a bit more complicated; it's not sufficient that $w$ is real. We have to pick a definition of complex exponential that allows for the symmetry that you appeal to. But regardless of which definition we take (branch of complex log), $x^w = \overline{(\overline{x})^w}$ will be false for $x$ negative real. Consider $w = \tfrac12$. – 6005 Aug 21 '16 at 11:51
• Yeah. That is indeed true. I'll add in my answer to look at yours for a more detailed explanation on when/why this works. – b00n heT Aug 21 '16 at 12:17 | 2019-08-22T04:43:45 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1898858/xy-xy-w-in-mathbbr-is-xwyw-real",
"openwebmath_score": 0.9590098857879639,
"openwebmath_perplexity": 271.67130758092463,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.986363161511632,
"lm_q2_score": 0.8723473663814338,
"lm_q1q2_score": 0.8604513062403369
} |
https://math.stackexchange.com/questions/1971193/what-is-the-volume-of-the-3-dimensional-elliptope | # What is the volume of the $3$-dimensional elliptope?
### My question
Compute the following double integral analytically
$$\int_{-1}^1 \int_{-1}^1 2 \sqrt{x^2 y^2 - x^2 - y^2 + 1} \,\, \mathrm{d} x \mathrm{d} y$$
### Background
The $3$-dimensional elliptope is the spectrahedron defined as follows
$$\mathcal E_3 := \Bigg\{ (x_{12}, x_{13}, x_{23}) \in \mathbb R^3 : \begin{bmatrix} 1 & x_{12} & x_{13}\\ x_{12} & 1 & x_{23}\\ x_{13} & x_{23} & 1\end{bmatrix} \succeq 0 \Bigg\}$$
Using Sylvester's criterion for positive semidefiniteness (i.e., all $2^3-1 = 7$ principal minors are nonnegative), we obtain $1 \geq 0$ (three times), the three quadratic inequalities
$$1 - x_{12}^2 \geq 0 \qquad \qquad \qquad 1 - x_{13}^2 \geq 0 \qquad \qquad \qquad 1 - x_{23}^2 \geq 0$$
and the cubic inequality.
$$\det \begin{bmatrix} 1 & x_{12} & x_{13}\\ x_{12} & 1 & x_{23}\\ x_{13} & x_{23} & 1\end{bmatrix} = 1 + 2 x_{12} x_{13} x_{23} - x_{12}^2 - x_{13}^2 - x_{23}^2 \geq 0$$
Thus, $\mathcal E_3$ is contained in the cube $[-1,1]^3$. Borrowing the pretty figure in Eisenberg-Nagy & Laurent & Varvitsiotis, here is an illustration of $\mathcal E_3$
What is the volume of $\mathcal E_3$?
### Motivation
Why is $\mathcal E_3$ interesting? Why bother? Because $\mathcal E_3$ gives us the set of $3 \times 3$ correlation matrices.
### My work
For convenience,
$$x := x_{12} \qquad\qquad\qquad y := x_{13} \qquad\qquad\qquad z := x_{23}$$
I started with sheer brute force. Using Haskell, I discretized the cube $[-1,1]^3$ and counted the number of points inside the elliptope. I got an estimate of the volume of $\approx 4.92$.
I then focused on the cubic surface of the elliptope
$$\det \begin{bmatrix} 1 & x & y\\ x & 1 & z\\ y & z & 1\end{bmatrix} = 1 + 2 x y z - x^2 - y^2 - z^2 = 0$$
which I rewrote as follows
$$z^2 - (2 x y) z + (x^2 + y^2 - 1) = 0$$
Using the quadratic formula, I obtained
$$z = x y \pm \sqrt{x^2 y^2 - x^2 - y^2 + 1}$$
Integrating using Wolfram Alpha,
$$\int_{-1}^1 \int_{-1}^1 2 \sqrt{x^2 y^2 - x^2 - y^2 + 1} \,\, \mathrm{d} x \mathrm{d} y = \cdots \color{gray}{\text{(magic happens)}} \cdots = \color{blue}{\frac{\pi^2}{2} \approx 4.9348}$$
I still would like to compute the double integral analytically. I converted to cylindrical coordinates, but did not get anywhere.
### Other people's work
This is the same value Johnson & Nævdal obtained in the 1990s:
Thus, the volume is
$$\left(\frac{\pi}{4}\right)^2 2^3 = \frac{\pi^2}{2}$$
However, I do not understand their work. I do not know what Schur parameters are.
Here's the script:
-- discretization step
delta = 2**(-9)
-- discretize the cube [-1,1] x [-1,1] x [-1,1]
grid1D = [-1,-1+delta..1]
grid3D = [ (x,y,z) | x <- grid1D, y <- grid1D, z <- grid1D ]
-- find points inside the 3D elliptope
points = filter (\(x,y,z)->1+2*x*y*z-x**2-y**2-z**2>=0) grid3D
-- find percentage of points inside the elliptope
p = (fromIntegral (length points)) / (1 + (2 / delta))**3
*Main> delta
1.953125e-3
*Main> p
0.6149861105903861
*Main> p*(2**3)
4.919888884723089
Hence, approximately $61\%$ of the grid's points are inside the elliptope, which gives us a volume of approximately $4.92$.
### A new Buffon's needle
A symmetric $3 \times 3$ matrix with
• $1$'s on the main diagonal
• realizations of the random variable whose PDF is uniform over $[-1,1]$ on the entries off the main diagonal
is positive semidefinite (and, thus, a correlation matrix) with probability $\left(\frac{\pi}{4}\right)^2$. Estimating the probability, we estimate $\pi$. Using the estimate given by the Haskell script:
*Main> 4 * sqrt 0.6149861105903861
3.1368420058151125
### References
• This is an astoundingly good question! Nice! – clathratus Apr 16 at 1:08
The integral can be separated:
$$I = 2\int_{-1}^1 \sqrt{1-x^2} dx \cdot \int_{-1}^1 \sqrt{1-y^2} dy = 2\left(\int_{-1}^1 \sqrt{1-t^2} dt\right)^2$$
This integral is straight-forward using the substitution $$t=\sin\theta$$:
$$\int_{-1}^1 \sqrt{1-t^2} dt = \int_{-\pi/2}^{\pi/2} \sqrt{1-\sin^2\theta} \cos\theta d\theta = \int_{-\pi/2}^{\pi/2} |\cos\theta|\cos\theta d\theta$$
$$=\int_{-\pi/2}^{\pi/2} \cos^2\theta d\theta = \dfrac{1}{2}\int_{-\pi/2}^{\pi/2} (1+\cos2\theta) d\theta = \dfrac{1}{2}\left(\theta + \dfrac{1}{2}\sin 2\theta\right)\Big|_{-\pi/2}^{\pi/2} = \dfrac{\pi}{2}$$
Therefore
$$I = 2\left(\int_{-1}^1 \sqrt{1-t^2} dt\right)^2 = 2\left(\dfrac{\pi}{2}\right)^2 = \dfrac{\pi^2}{2}$$
• Thanks. Shame on me for not realising that $(1-x^2) (1-y^2) = 1 - x^2 - y^2 + x^2 y^2$. I never expected the double integral to be this easy. – Rodrigo de Azevedo Oct 16 '16 at 19:19
• easy to get lost in the bigger picture, sometimes fresh eyes can help – David Peterson Oct 16 '16 at 19:21
Then integrand factors as $\sqrt{(1-x^2)(1-y^2)}=\sqrt{(1-x^2)}\sqrt{(1-y^2)}$ and every factor can be integrated separately. But you recognize the integral for the area of a half circle of radius $1$, hence
$$I=2\left(\frac\pi2\right)^2.$$ | 2019-06-19T02:40:06 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1971193/what-is-the-volume-of-the-3-dimensional-elliptope",
"openwebmath_score": 0.8970597386360168,
"openwebmath_perplexity": 491.0368929822824,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9863631619124992,
"lm_q2_score": 0.8723473614033683,
"lm_q1q2_score": 0.8604513016798521
} |
https://math.stackexchange.com/questions/2340167/is-the-whole-set-mathbb-r-open/2340171 | # Is the whole set $\mathbb R$ open?
Is the whole set $\mathbb R$ open or closed? A lot of answers from the following link said that the whole set is closed.
Why is empty set an open set?
However varies notes said that $\mathbb R$ is open, for example:
https://www.math.cornell.edu/~hatcher/Top/TopNotes.pdf
• Open does not mean not closed. – Improve Jun 29 '17 at 2:58
• It is both... some say "clopen" the empty set is the another classic example of a set that is both open and closed. – Doug M Jun 29 '17 at 3:24
• "Is the whole set R open or closed?" both. "Said that the whole set is closed". That is true. "However varies notes said that R is open". That is also true. R is open. And R is closed. – fleablood Jun 29 '17 at 4:00
It's open and closed by definition.
In order to refer to open and closed sets, a topology must be made explicit. Given a set $X$ and a collection $\mathcal F$ of subsets of $X$, $\mathcal F$ is a topology on $X$ only if $X, \emptyset \in \mathcal F$. Thus $X$ is open by definition, and since closed sets are defined as sets whose complements are open, and $X^c=\emptyset$ is open, $X$ is closed.
In the context of a topological space $\mathbb R$ with collection $\mathcal F$, $\mathbb R$ must be open and closed. However if we consider $\mathbb R \times \mathbb R$ with the usual topology, $\mathbb R \simeq \{0\} \times \mathbb R$ is closed but not open. It is important to state context.
Indeed the observation by Michael Hardy does really make sense. Additionally the question is not well posed because it does not state which topology is to be considered. Let us assume that euclidean topology is meant. So the answer by Michael Rozenberg uses axiomatization via open or closed sets.
Using axiomatization via neighborhoods (read balls in this case, due to the euclidean topology of $\mathbb R$)
1. $\mathbb R$ is open because any of its points have at least one neighborhood (in fact all) included in it;
2. $\mathbb R$ is closed because any of its points have every neighborhood having non-empty intersection with $\mathbb R$ (equivalently punctured neighborhood instead of neighborhood).
Equivalently:
1. $\mathbb R$ is open because all its points are interior points of itself
2. $\mathbb R$ is closed because all its points are adherent points of itself (equivalently limit points instead of adherent points)
Using axiomatization via Moore-Smith net convergence (read sequence convergence in this case, due to the euclidean topology of $\mathbb R$),
1. $\mathbb R$ is closed because every point to which at least one net of its points converges belongs to it
2. $\mathbb R$ is open because its complement (the empty set) is closed: there are no nets of points of the complement converging to any of its points (in fact there are no nets of points of the complement at all).
The list can go on and on.
By the definition of a topology on a set, both the empty set and the entire set (which I'm assuming you're taking as $\mathbb R$) are open sets. Since the complement of an open set is a closed set, and $\mathbb R$ is the complement of the empty set, it is also closed.
Notice that "open" and "closed" are not mutually exclusive.
$A$ is open iff $A^c$ is closed.
So $\emptyset$ and $\Omega$ are both open and closed (clopen)
For general topology, it is guaranteed via definition of topology, i.e. $\emptyset, \Omega \in \mathscr T$.
By one commonplace definition of "open set", a set $A\subseteq\mathbb R$ is open if for every point $x\in A$ there is some open interval containing $x$ that is a subset of $A$. That clearly is true if $A=\mathbb R,$ so $\mathbb R$ is open.
By one commonplace definition of "closed set", a set $A\subseteq \mathbb R$ is closed if every limit point of $A$ is a member of $A$. That clearly is true if $A=\mathbb R$, so $\mathbb R$ is closed.
Only two subsets of $\mathbb R$ are both open and closed: $\mathbb R$ and $\varnothing.$ | 2019-10-16T07:05:22 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2340167/is-the-whole-set-mathbb-r-open/2340171",
"openwebmath_score": 0.871317446231842,
"openwebmath_perplexity": 215.2971024616699,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806506825456,
"lm_q2_score": 0.8774767842777551,
"lm_q1q2_score": 0.8604367560859089
} |
https://www.physicsforums.com/threads/integrating-arcsech-x.754086/ | # Integrating arcsech x
1. May 17, 2014
### sooyong94
1. The problem statement, all variables and given/known data
I was asked to prove the integral
$\int_{\frac{4}{5}}^{1} \textrm{arcsech}(x)\,dx =2\arctan 2-\frac{\pi}{2}-\frac{4}{5} \ln 2$
2. Relevant equations
Integration by parts
3. The attempt at a solution
Let $u=\textrm{arcsech} (x)$
$\textrm{sech u}=x$
$\textrm{cosh u}=\frac{1}{x}$
Differentiating implicitly,
$\textrm{sinh u} \frac{du}{dx}=\frac{-1}{x^{2}}$
$\frac{du}{dx}=\frac{-1}{x^{2}\textrm{sinh u}}$
Then I simplify it into
$\frac{-1}{x\sqrt{1-x^{2}}}$
Let $\frac{dv}{dx}=1$
$v=x$
Using integration by parts and evaluating the integral, I got
$\frac{\pi}{2}-\sin^{-1} \frac{4}{5} -\frac{4}{5} \ln 2$
Which is numerically correct. But how do I obtain $2\tan^{-1} 2$ as shown in the question above?
2. May 17, 2014
### Curious3141
Draw the standard 3-4-5 right triangle. Observe that $\frac{\pi}{2}-\sin^{-1} \frac{4}{5} = \tan^{-1}\frac{3}{4}$.
You now have to get $\tan^{-1}\frac{3}{4}$ into something with $\tan^{-1}2$ in it.
I found this a little tricky. The best solution I could find was to let:
$\tan^{-1}\frac{3}{4} = x$ so $\tan x = \frac{3}{4}$
Then let x = 2y so that $\tan 2y = \frac{3}{4}$
Solve for y in the form $y = \tan^{-1}z$, where z is something you have to find. Only one value is admissible. Express x as 2y = $2\tan^{-1}z.$
Now observe that $\frac{1 + \tan w}{1 - \tan w} = \tan(w + \frac{\pi}{4})$. Use that to find an alternative form for $\tan^{-1}z$, which will allow you to find x, in the form you need.
There might be a simpler way (indeed, it might start with an alternative solution of the integral), but I can't immediately find one.
Last edited: May 17, 2014
3. May 17, 2014
### AlephZero
To show the answers are the same, you have to show $2 \tan^{-1} 2 + \sin^{-1}\displaystyle\frac 4 5 = \pi$.
From a 3-4-5 triangle, $\sin^{-1}\displaystyle\frac 4 5 = \tan^{-1}\displaystyle\frac 4 3$.
From $\tan^{-1}a + \tan^{-1}b = \tan^{-1}\displaystyle\frac{a+b}{1-ab}$,
$2 \tan^{-1} 2 = \tan^{-1}\displaystyle\frac {-4} 3 = \pi - \tan^{-1}\displaystyle\frac 4 3$.
QED.
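A quick numerical cross-check of the stated value (hypothetical, not from the thread; the names are mine): using $\textrm{arcsech}\,x = \ln\frac{1+\sqrt{1-x^2}}{x}$, a midpoint sum over $[4/5,\,1]$ should match $2\arctan 2-\frac{\pi}{2}-\frac{4}{5}\ln 2$.

arcsech :: Double -> Double
arcsech x = log ((1 + sqrt (1 - x*x)) / x)

lhs, rhs :: Double
lhs = sum [ arcsech (0.8 + h*(fromIntegral k + 0.5)) * h | k <- [0 .. n-1] ]
  where n = 200000 :: Int
        h = 0.2 / fromIntegral n
rhs = 2 * atan 2 - pi/2 - 0.8 * log 2

Both come out as 0.08898..., so the identity checks out numerically.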
4. May 22, 2014
### sooyong94
I don't get it, but why
$2 \tan^{-1} 2 = \tan^{-1}\displaystyle\frac {-4} 3 = \pi - \tan^{-1}\displaystyle\frac 4 3$.
5. May 22, 2014
### CAF123
I think the equalities should be $$2 \tan^{-1} 2 = \tan^{-1} \left(-\frac{4}{3}\right) + \pi = \pi - \tan^{-1} \left(\frac{4}{3}\right)$$
Using Alephzero's formula with $a=b$ gives $$\tan(\tan^{-1} 2 + \tan^{-1} 2) = -\frac{4}{3} \Rightarrow 2\tan^{-1} 2 = \tan^{-1} \left(-\frac{4}{3}\right) + \pi$$
6. May 22, 2014
### SammyS
Staff Emeritus
Compare the graphs of $\ y=\text{arcsech}(x) \$ and $y=\text{sech}(x)\ .$
Integrate $y=\text{sech}(x)\$ to get an area related to that given by integrating $\ y=\text{arcsech}(x) \ .$
You will have to subtract the area of some rectangle.
7. May 23, 2014
### sooyong94
I plotted two graphs and yet I can't figure it out... :(
8. May 23, 2014
### SammyS
Staff Emeritus
The definite integral you are evaluating represents the area below the y = arcsech(x) graph which is between x = 4/5 and x = 1 . Notice that arcsech(4/5) = ln(2) .
That is the same as the area below the y = sech(x) graph and above y = 4/5, for x ≥ 0. Right?
| 2017-12-17T04:26:54 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/integrating-arcsech-x.754086/",
"openwebmath_score": 0.8789370656013489,
"openwebmath_perplexity": 928.686408494233,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806484125338,
"lm_q2_score": 0.8774767842777551,
"lm_q1q2_score": 0.8604367540940261
} |
https://mathhelpboards.com/threads/primitive-root-modulo-169.7398/ | # Number TheoryPrimitive root modulo 169
#### tda120
##### New member
How can I find a primitive root modulo 169?
I found the primitive roots mod 13 by testing 2, and then concluding that any 2^k with (k, 12)=1 would do. So that gave me 2, 6, 7 and 11. But modulo 169 I have no idea how to start. I’m sure there’s a smarter way than trying 2^(the orders that divide phi(13^2))?
#### Opalg
##### MHB Oldtimer
Staff member
How can I find a primitive root modulo 169?
I found the primitive roots mod 13 by testing 2, and then concluding that any 2^k with (k, 12)=1 would do. So that gave me 2, 6, 7 and 11. But modulo 169 I have no idea how to start. I’m sure there’s a smarter way than trying 2^(the orders that divide phi(13^2))?
Hi tda, and welcome to MHB. You might be interested in this link, which tells you that the answer to your question is either $2$ or $2+13=15$. That still leaves you with the work of testing to see if $2$ works. If it does not, then $15$ does.
#### mathbalarka
##### Well-known member
MHB Math Helper
tda120 said:
How can I find a primitive root modulo 169?
There is no simple method. You can cobble up together some basic theories on primitive roots, find a bit of a rough upperbound (although none is known to be useful for small cases) and some modular exponentiation to get a fast enough algorithm.
As Opalg there indicated, that either 2 or 15 is primitive root modulo 169, can easily be found for this case. Try proving the former and then the later if it doesn't work.
#### Klaas van Aarsen
##### MHB Seeker
Staff member
How can I find a primitive root modulo 169?
I found the primitive roots mod 13 by testing 2, and then concluding that any 2^k with (k, 12)=1 would do. So that gave me 2, 6, 7 and 11. But modulo 169 I have no idea how to start. I’m sure there’s a smarter way than trying 2^(the orders that divide phi(13^2))?
You don't have to check all the orders that divide $\phi(13^2)$.
It suffices to check each of the orders that are $\phi(13^2)$ divided by one of the distinct primes it contains.
$$\phi(13^2)=2^2\cdot 3 \cdot 13$$
So the orders to verify are:
$$2\cdot 3 \cdot 13,\quad 2^2 \cdot 13, \quad 2^2\cdot 3$$
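A hypothetical GHCi check along these lines (not part of the thread): $\phi(169)=156=2^2\cdot 3\cdot 13$, and $2$ is a primitive root mod $169$ precisely when none of the three powers below is $1$.

Prelude> map (\e -> 2^e `mod` 169) [156 `div` 2, 156 `div` 3, 156 `div` 13]
[168,146,40]

None of these is 1, so 2 has order 156 and is a primitive root modulo 169; there is then no need to test 15.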
#### Klaas van Aarsen
##### MHB Seeker
Staff member
Hi tda, and welcome to MHB. You might be interested in this link, which tells you that the answer to your question is either $2$ or $2+13=15$. That still leaves you with the work of testing to see if $2$ works. If it does not, then $15$ does.
Nice!
From that link we also get that since 2 is a primitive root mod 13, it follows that the order of 2 mod 169 is either (13-1) or 13(13-1).
So if $2^{13-1} \not\equiv 1 \pmod{169}$ that means that 2 has to be a primitive root mod 169. Or otherwise 15 has to be.
In other words, no need to check any of the other powers. | 2021-06-14T22:26:55 | {
"domain": "mathhelpboards.com",
"url": "https://mathhelpboards.com/threads/primitive-root-modulo-169.7398/",
"openwebmath_score": 0.8173380494117737,
"openwebmath_perplexity": 478.2527666349904,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9805806552225684,
"lm_q2_score": 0.8774767762675405,
"lm_q1q2_score": 0.8604367522150119
} |
http://math.stackexchange.com/questions/817189/finding-the-coefficient-on-the-x-term-of-prod-n-120x-n | # Finding the coefficient on the $x$ term of ${\prod_{n = 1}^{20}(x-n)}.$
I am trying to find the coefficient on the $x$ term of $\displaystyle{\prod_{n = 1}^{20}(x-n)}$. The issue is that the binomial theorem can't be applied since our $b$ value is changing from term to term. Is there any simple way to do this problem, perhaps a way to change the expression so that the binomial theorem applies? I've tried to do that, and tried looking for a pattern on similar expressions, but I haven't come up with anything. Any help you might have would be appreciated.
-
It will be the 19th elementary symmetric polynomial in your roots. See en.m.wikipedia.org/wiki/Elementary_symmetric_polynomial for symmetric polynomials. – Marc Jun 1 '14 at 17:11
A concept popularly known as sum of roots of a polynomial can be of help. – fermesomme Jun 1 '14 at 17:22
The coefficient is given by $$\sum_{k=1}^{20}\prod_{n\not=k\atop n=1,\dots,20}{(-n)}=-\sum_{k=1}^{20}\frac{20!}{k}=-20! H_{20}$$ where $H_k$ is the $k$-th Harmonic number. Those can be looked up in tables (see A001008 and A002805): $H_{20}=\frac{55835135}{15519504}$ and thus the coefficient is given by $-8752948036761600000$.
-
This answer just gets better and better on a moment-by-moment basis! Thanks for adding the value of $H_{20}$! – Robert Lewis Jun 1 '14 at 18:30
Just a question, is there a simple way to calculate the constant of the expanded form? – recursive recursion Jun 1 '14 at 19:47
@Dominik, Also, I'm not sure how you got your first expression. Could you please explain that – recursive recursion Jun 1 '14 at 19:51
You get terms with a single power in $x$ from the product $\prod_{n=1}^{20}(x-n)$ by taking $19$ non-$x$ factors and one $x$. The sum is the sum over the different choices of those non-$x$ and $x$ factors. More formally you could try to prove that $(-1)^{N+1} N! H_{N}$ is the $x$-coefficient of $\prod_{n=1}^N (x-n)$ by induction. – Dominik Jun 1 '14 at 20:14
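One can also just expand the product by machine and read the coefficient off. A hypothetical Haskell sketch (not from the thread; the representation and names are mine), storing a polynomial as its list of coefficients with the constant term first:

-- multiply two polynomials given as coefficient lists (constant term first)
polyMul :: [Integer] -> [Integer] -> [Integer]
polyMul p q = [ sum [ p !! i * q !! (k-i) | i <- [0..k], i < length p, k-i < length q ]
              | k <- [0 .. length p + length q - 2] ]

wilkinson :: [Integer]
wilkinson = foldl polyMul [1] [ [-n, 1] | n <- [1..20] ]   -- product of (x - n) for n = 1..20

Then wilkinson !! 1 is the coefficient of $x$, and it equals $-20!\,H_{20} = -8752948036761600000$, the value quoted above.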
This is one of Vieta's formulas.
-
The coefficient must be less than $-20!$, so this can't be correct? – copper.hat Jun 1 '14 at 17:22
Not to put too fine a point on it, but this answer is incorrect; the coefficient of $x^{19}$ is in fact $-\sum_{n = 1}^{20} n = -(20 \times 21)/2 = -210$; for the coefficient of $x$, see the answer given by Dominik. I do believe it is correct. – Robert Lewis Jun 1 '14 at 17:31
@RobertLewis: Unfortunately no 'Med' time for me this morning... – copper.hat Jun 1 '14 at 17:37
@Robert Lewis. Shoot! You're right! I'll remove everything but the vieta's formula link. – Avi Steiner Jun 1 '14 at 17:40
@Avi Steiner: well done! Glad we got it right! – Robert Lewis Jun 1 '14 at 19:40
One way is to just use Maclaurin's formula: $$f(x) = f(0) + \frac{f'(0)}{1!} x + \ldots$$ In this case: \begin{align} f'(x) &= \sum_{1 \le n \le m} \prod_{\substack{1 \le k \le m\\k \ne n}} (x - k) \\ f'(0) &= (-1)^{m - 1} m! H_m \end{align}
-
The question has been well answered by others. However, I would like to point out that this polynomial has a name---Wilkinson's polynomial---and its own Wikipedia article. It was put forward by Wilkinson as an example of an apparently innocuous polynomial with remarkable numerical-analytic properties.
-
This polynomial is a lot more interesting than I thought! Thanks for giving me a name to google. – recursive recursion Jun 1 '14 at 21:29
A generalization to other coefficients (and limits different from $20$) is given in the generating functions of Stirling numbers of the first kind, Pochammer symbols:
$$(x)_n:=\prod_{k=0}^{n-1}(x-k)=\sum_{k=0}^n s(n,k)x^k.$$
-
The coefficients are the Stirling numbers of the first kind :
- | 2016-02-08T13:14:08 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/817189/finding-the-coefficient-on-the-x-term-of-prod-n-120x-n",
"openwebmath_score": 0.9720078706741333,
"openwebmath_perplexity": 384.035142208471,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126488274566,
"lm_q2_score": 0.8791467627598857,
"lm_q1q2_score": 0.8604320568888113
} |
http://www.lofoya.com/Solved/1724/four-different-objects-1-2-3-4-are-distributed-at-random-in-four | # Moderate Probability Solved QuestionAptitude Discussion
Q. Four different objects 1, 2, 3, 4 are distributed at random in four places marked 1, 2, 3, 4. What is the probability that none of the objects occupy the place corresponding to its number?
✖ A. 17/24 ✔ B. 3/8 ✖ C. 1/2 ✖ D. 5/8
Solution:
Option(B) is correct
First of all, if we IGNORE the condition about where the objects can be placed, we can arrange the 4 different objects in 4! ways (= 24 ways).
So, we now must determine HOW MANY of those 24 arrangements are such that no objects occupy the location corresponding to its number.
A quick way to do this is to LIST acceptable outcomes.
IMPORTANT: We'll list each arrangement so that the first number represents the object that goes to location #1, the second number represents the object that goes to location #2, and so on.
So, for example, 3421 represents object #3 in location #1, object #4 in location #2, object #2 in location #3, and object #1 in location #4.
Let's be systematic:
Arrangements where object #2 is in location #1
The possible arrangements where NO object is in the correct location are as follows:
2143, 2341, 2413
Total number of arrangements $= 3$
Arrangements where object #3 is in location #1
The possible arrangements where NO object is in the correct location are as follows:
3142, 3412, 3421
Total # of arrangements $= 3$
Arrangements where object #4 is in location #1
The possible arrangements where NO object is in the correct location are as follows:
4123, 4312, 4321
Total # of arrangements $= 3$
Altogether, the number of arrangements where no object is in the correct location,
$= 3 + 3 + 3$
$= 9$
So, $P(\text{no object in correct location}) = \dfrac{9}{24}$
$= \dfrac{3}{8}$
Thus, option (B) is the correct choice.
Edit: Based on Vaibhav's comment, the solution has been updated.
Edit 2: For an alternate solution see Kartik's comment
Edit 3: Sheldon has provided a way to reach the correct answer without even solving the question
Edit 4: Suzen has given an exhaustive solution
Edit 5: For derangements formula, check Himanshu Singh's comment.
Note: For better understanding and different approaches do visit comment section.
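For readers who want to verify the count by brute force, here is a hypothetical Haskell snippet (not part of the original solution; the name is made up):

import Data.List (permutations)

-- arrangements of objects 1..4 in which no object sits in its own place
derangements4 :: [[Int]]
derangements4 = [ p | p <- permutations [1..4], and (zipWith (/=) p [1..4]) ]

Evaluating length derangements4 gives 9 out of 4! = 24 arrangements, i.e. the probability 9/24 = 3/8.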
## (14) Comment(s)
Gaurav Karnani
()
Total number of cases = 24
let us divide the case into categories where :
1) single number is in right position = 8 cases; two for each $1,2,3,4$
2) Two numbers are in right position = 6 cases $12,13,14,23,24,34$
3) No case for three digits as if three are in correct position fourth one will definitely be, so only one case of $1234$ .
total number of cases =15
Probability = $9/24$
Himanshu Singh
()
Please use this formula for derangements :
$!n = n!\left(1-\dfrac{1}{1!} + \dfrac{1}{2!} - \dfrac{1}{3!} + \dfrac{1}{4!} ... (-1)^n \times \dfrac{1}{n!} \right)$
Kartik
()
Total number of ways of placing objects into places randomly,
$=4 \times 3 \times 2 \times 1=4!=24$
No. of ways of placing them such that only one of them gets correct place,
\begin{align*} = & 4 \text{ (ways of choosing the correctly} \\ & \text{ placed objects)} \times \\ & \times 2 \text{ (to place next object wrongly)} \\ & \times 1 \text{ (to place next object wrongly)} \\ & \times 1 \text{ (to place next object wrongly)}\\ = & \textbf{8} \end{align*}
Number of ways of placing exactly two objects correctly,
\begin{align*} = & ^4C_2 \text{ (to identify the two correctly placed} \\ & \text{ objects their arrangment are unique}\\ & \text{ for them to be correct)} \times \\ & 1 \text{ (to arrange next object wrongly)} \times \\ & 1 \text{ (to arrange next object wrongly)}\\ = & \textbf{6} \end{align*}
Now,
Number of ways to place exctly 3 objects correct = ways to place all objects correct $= \textbf{1}$
Therefore, number of ways to place all digits wrongly,
$=24-(8+6+1)$
$=24-15$
$=9$
Thus, Required probability,
$=\dfrac{9}{24}$
$=\dfrac{3}{8}$
Raj
()
Since there's only one correct way of placing the objects as per the boxes- can't we work on the lines of using probability of an event happening = 1 - probability of that event not happening?
So therefore, can't the answer be 1 - (1/24) = 23/24?
Brent
()
That's a good idea, but it doesn't apply here.
If event A = NONE of the objects in the correct places, then the complement (event A NOT happening) will consist of any arrangement where some (perhaps all) of the objects are in their correct place(s).
Vaibhav
()
but this is the case for 2 at first position. it can occupy position 3 and 4 as well . Similar case with other digits will be there.
Deepak
()
You are right Vaibhav, updated the solution.
Suzen
()
Here's the full list:
1234
1243
1423
1432
1324
1342
2134
2143 OK
2431
2413 OK
2341 OK
2314
3124
3142 OK (Close to 1000pi, but no correlation really, or is there? Perhaps the nth position is not n for all decimal places. OK, an interesting idea to explore)
3421 OK
3412 OK
3214
3241
4123 OK
4132
4321 OK
4312 OK
4213
4231
Clearly $P = \frac{9}{24} = \frac{3}{8}$
I know it's long winded, but I'm glad it matched!
Tesla
()
Let a particular number (say) number 2 occupies position 1.
Then all possible arrangement are given as:
(2,1,3,4), (2,1,4,3), (2,3,4,1), (2,3,1,4), (2,4,1,3), (2,4,3,1).
Out of these six, three, namely (2,1,3,4), (2,3,1,4) and (2,4,3,1), are not acceptable because 3 or 4 occupies its own position.
So, given that 2 sits in position 1, the conditional probability of a derangement is 3/6 = 1/2; position 1 holds a wrong number with probability 3/4, and the same count applies when 3 or 4 sits there, giving (3/4)(1/2) = 3/8 overall.
Sheldon
()
I am not sure what difficulty level this would fall into.
If I see this question in the real exam, I would spend about a minute solving it.
After that, I would guess and move on. because it would be time consuming.
Looking at the question, it is more obvious that at-least 1 number would fall into its corresponding place.
So definitely, the probability that the number wouldn't fall into its numbered place should be less than half.
Looking at options:
A. 17/24
B. 3/8
C. 1/2
D. 5/8
Only option B is less than half. I would pick B and move on.
Brent
()
That's a great approach that allows you to minimize time spent on a question (that you feel is going nowhere) and maximize your guess.
Probability questions are perfect for this, because most people have a gut feeling about how likely something is. As you suggest, it does seem unlikely (probability less than 0.5) that every object would be out of place, so B is the perfect guess.
Steve
()
I agree with sam and abhishek
Abhishek
()
the correct answer will be $\dfrac{9}{24} =\dfrac{3}{8}$
out of
1234 1243 1324 1342 1423 1432
2341 2314 2143 2134 2413 2431
3124 3142 3412 3421 3214 3241
4123 4132 4213 4231 4312 4321
24 combinations
2341 2143 2413
3421 3412 3142
4123 4312 4321
are misplaced(dearranged)
So, the correct answer will be $\dfrac{9}{24}$ i.e. $\dfrac{3}{8}$. :)
Sam
()
I think Abhishek is correct...... answer should be 3/8 | 2016-10-23T22:04:57 | {
"domain": "lofoya.com",
"url": "http://www.lofoya.com/Solved/1724/four-different-objects-1-2-3-4-are-distributed-at-random-in-four",
"openwebmath_score": 0.964660108089447,
"openwebmath_perplexity": 1329.0917578973158,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978712645102011,
"lm_q2_score": 0.8791467643431002,
"lm_q1q2_score": 0.8604320551631099
} |
https://math.stackexchange.com/questions/2799739/limit-involving-series-and-greatest-integer-function | Limit involving Series and Greatest Integer Function
If $[$.$]$ denotes the greatest integer function, then find the value of $\lim_{n \to \infty} \frac{[x] + [2x] + [3x] + … + [nx]}{n^2}$
What I did was, I wrote each greatest integer function $[x]$ as $x - \{x\}$, where $\{.\}$ is the fractional part. Hence, you get
$\lim_{n \to \infty} \frac{\frac{n(n+1)}{2}(x-\{x\})}{n^2}$
The limit should then evaluate to $\frac{x-\{x\}}{2}$
But the answer given is $\frac{x}{2}$. What am I missing here?
$$\frac{\lfloor x\rfloor+\ldots+\lfloor nx\rfloor}{n^2}=\frac{x+2x+\ldots nx-\{x\}-\ldots-\{nx\}}{n^2}=$$
$$=\frac{n(n+1)}{2n^2}x-\frac{\{x\}+\ldots+\{nx\}}{n^2}\xrightarrow[n\to\infty]{}\frac12x-0=\frac x2$$
since the second addend above tends to zero:
$$\frac{\{x\}+\ldots+\{nx\}}{n^2}\le\frac n{n^2}=\frac1n\xrightarrow[n\to\infty]{}0$$
• I didn't really get how the second addend (fractional part thing) tends to zero, as n tends to infinity. Isn't it in an inderminate form as it is? Also, what has been done in the last line of the answer? – skb May 28 '18 at 20:20
• @skb Every term $\;\{kx\}\;$ is less than $\;1\;$ , so that whole sum's numerator is less that $\;1+1+\ldots+1=n\;$ ... – DonAntonio May 28 '18 at 20:22
• Have you used the sandwich theorem in the last line? But shouldn't there be another expression less than it for it to work? – skb May 28 '18 at 20:24
• @skb But isn't it obvious that the whole expression is greater than zero or equal to it? You can complete that argument... – DonAntonio May 28 '18 at 20:27
• Got it. Thanks a lot for the help! – skb May 28 '18 at 20:27
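A quick numerical illustration of the limit (hypothetical, not from the page; the function name is made up): for a fixed $x$ the quotient is already close to $x/2$ for moderately large $n$.

quotient :: Int -> Double -> Double
quotient n x = sum [ fromIntegral (floor (fromIntegral k * x) :: Integer) | k <- [1..n] ]
               / fromIntegral n ^ 2

For example, quotient 10000 2.7 comes out near 1.3501 (and $x/2 = 1.35$), while quotient 10000 0.4 comes out near 0.2.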
Note that by Stolz-Cesaro
$$\lim_{n \to \infty} \frac{\lfloor x\rfloor + \lfloor 2x \rfloor + \lfloor3x\rfloor + … + \lfloor nx]}{n^2}=\lim_{n \to \infty} \frac{\lfloor(n+1)x\rfloor}{(n+1)^2-n^2}=\lim_{n \to \infty} \frac{\lfloor(n+1)x\rfloor}{2n+1}=$$ $$=\lim_{n \to \infty} \frac{(n+1)x}{2n+1}-\frac{\{(n+1)x\}}{2n+1}$$ | 2019-10-20T16:11:08 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2799739/limit-involving-series-and-greatest-integer-function",
"openwebmath_score": 0.8791330456733704,
"openwebmath_perplexity": 310.73487589094714,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978712645102011,
"lm_q2_score": 0.8791467643431002,
"lm_q1q2_score": 0.8604320551631099
} |
http://mathhelpforum.com/geometry/46622-pyramid-print.html | # a pyramid
• August 24th 2008, 01:57 AM
perash
a pyramid
Consider a pyramid with a square base. The side length of the
base is 2 units and the height of the pyramid is 1 unit. Imagine
placing a cube inside this pyramid (resting on the base of the
pyramid) such that each of the four top corners of the cube is
touching each of the 4 slanted edges of the pyramid .
Find the dimensions of the cube. That is, find the value, in
units, of the length of one edge of the cube.
• August 24th 2008, 03:27 AM
ticbol
Quote:
Originally Posted by perash
Consider a pyramid with a square base. The side length of the
base is 2 units and the height of the pyramid is 1 unit. Imagine
placing a cube inside this pyramid (resting on the base of the
pyramid) such that each of the four top corners of the cube is
touching each of the 4 slanted edges of the pyramid .
Find the dimensions of the cube. That is, find the value, in
units, of the length of one edge of the cube.
So the the top four corners of the cube are along the slanted edges of the pyramid. Then the four edges of the base of the cube are parallel to the edges of the base of the pyramid each to each.
View the figure from the top, or on the top. We will get a vertical cross-section along one of the equal diagonals of the base of the pyramid.
The diagonal of the pyramid is 2sqrt(2) units long...by Pythagorean theorem.
The diagonal of the cube, whose edge is, say, x units long, is x*sqrt(2) units long.
Now view that said cross-section vertically, or from the side, or from one of the un-cut corner of the base of the pyramid.
The figure is that of an isosceles triangle whose base is 2sqrt(2) units long, and whose height is 1 unit long.
Inside it is a rectangle whose base is x*sqrt(2) units long, and whose height is x units long.
Above the rectangle is a smaller isosceles triangle whose base is x*sqrt(2) and whose height is (1-x) units long.
The two isosceles triangles are similar, and so, proportional.
By proportion,
2sqrt(2) /1 = x*sqrt(2) /(1-x)
2 = x /(1-x)
2(1-x) = x
2 -2x = x
2 = x +2x
x = 2/3 unit long ---------------answer
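As a sanity check on this (a hypothetical snippet, not part of the thread; the names are made up): put the apex above the centre of the base. At height z a slant edge is at horizontal distance sqrt(2)*(1 - z) from the axis, while the top corners of a centred cube of side s sit at height s and distance s/sqrt(2) from the axis. Equating the two forces s = 2/3.

edgeDist, cornerDist :: Double -> Double
edgeDist z   = sqrt 2 * (1 - z)   -- slant edge of the pyramid, at height z
cornerDist s = s / sqrt 2         -- top corner of a cube of side s, at height s

edgeDist (2/3) and cornerDist (2/3) agree (both are about 0.4714), confirming the answer above.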
• August 24th 2008, 05:16 AM
Soroban
Hello, perash!
I used the same approach as ticbol, but got a different answer.
Quote:
Consider a pyramid with a square base. The side length of the base is 2 units
and the height of the pyramid is 1 unit. Imagine placing a cube inside this pyramid
(resting on the base of the pyramid) so that each of the four top corners of the cube
is touching each of the 4 slanted edges of the pyramid.
Find the dimensions of the cube.
That is, find the value, in units, of the length of one edge of the cube.
Looking down on the pyramid, we see:
[ASCII sketch: the square base of side 2 seen from above, with both diagonals drawn in.]
The diagonal of this square is $2\sqrt{2}$
Slice the pyramid along a diagonal and we have this cross-section:
[ASCII sketch: the diagonal cross-section, an isosceles triangle of height 1 on a base of length 2√2, with an inscribed rectangle of height 2x whose right edge sits x from the centre, leaving √2 - x to the corner.]
The lower-right right triangle is similar to the largest right triangle.
We have: . $\frac{2x}{\sqrt{2}-x} \:=\:\frac{1}{\sqrt{2}} \quad\Rightarrow\quad 2\sqrt{2}x \:=\:\sqrt{2} - x$
. . $2\sqrt{2}x + x \:=\:\sqrt{2} \quad\Rightarrow\quad (2\sqrt{2}+1)x \:=\:\sqrt{2} \quad\Rightarrow\quad x \:=\:\frac{\sqrt{2}}{2\sqrt{2}+1}
$
The side of the cube is: . $2x \;=\;\frac{2\sqrt{2}}{2\sqrt{2} + 1} \;=\;\frac{2(4-\sqrt{2})}{7}$
• August 24th 2008, 04:58 PM
ticbol
Quote:
Originally Posted by Soroban
Hello, perash!
I used the same approach as ticbol, but got a different answer.
Looking down on the pyramid, we see:
[ASCII sketch: the square base of side 2 seen from above, with both diagonals drawn in.]
The diagonal of this square is $2\sqrt{2}$
Slice the pyramid along a diagonal and we have this cross-section:
[ASCII sketch: the diagonal cross-section, an isosceles triangle of height 1 on a base of length 2√2, with an inscribed rectangle of height 2x whose right edge sits x from the centre, leaving √2 - x to the corner.]
The lower-right right triangle is similar to the largest right triangle.
We have: . $\frac{2x}{\sqrt{2}-x} \:=\:\frac{1}{\sqrt{2}} \quad\Rightarrow\quad 2\sqrt{2}x \:=\:\sqrt{2} - x$
. . $2\sqrt{2}x + x \:=\:\sqrt{2} \quad\Rightarrow\quad (2\sqrt{2}+1)x \:=\:\sqrt{2} \quad\Rightarrow\quad x \:=\:\frac{\sqrt{2}}{2\sqrt{2}+1}
$
The side of the cube is: . $2x \;=\;\frac{2\sqrt{2}}{2\sqrt{2} + 1} \;=\;\frac{2(4-\sqrt{2})}{7}$
I'm sorry to comment, since perash or anybody did not comment, but the cube should appear as a rectangle, not a square, in your diagram. The length of the base now of the rectangle should be (2x)sqrt(2). Not 2x as is in your diagram. You sliced along a diagonal of the base of the pyramid, remember.
• August 26th 2008, 03:17 AM
Soroban
Hello, ticbol!
Another blunder . . . *blush*
Quote:
The cube should appear as a rectangle, not a square, in your diagram.
The length of the base now of the rectangle should be (2x)sqrt(2).
Not 2x as is in your diagram.
You sliced along a diagonal of the base of the pyramid, remember.
Absolutely right!
I'll try to correct my work and get back soon . . . | 2015-05-29T00:36:17 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/geometry/46622-pyramid-print.html",
"openwebmath_score": 0.9177049398422241,
"openwebmath_perplexity": 8432.509098182818,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126475856415,
"lm_q2_score": 0.8791467564270271,
"lm_q1q2_score": 0.8604320495990248
} |
https://math.stackexchange.com/questions/2115891/how-to-know-if-the-angle-is-positive-or-negative-in-inverse-trigonometry | # How to know if the angle is positive or negative in inverse trigonometry
I have been doing this problem this problem:
$$Cos[Tan^{-1}(-\frac{2}{3})]$$ So I was instructed to draw a triangle to guide me so I did
Now once I drew my triangle I found the hypotenuse, which is $$\sqrt{13}$$
And then I was able to obtain the answer to this expression which I got:
$$\frac{2\sqrt{13}}{13}$$
However, I am told I drew the triangle wrong, it is actually -2 (negative) and (3) positive. Why is it that the triangle is wrong? I was told the actual answer is
$$\frac{3\sqrt{13}}{13}$$
Arctan has a range of $\frac{-\pi}{2}\le{y}\le\frac{\pi}{2}$. Now let $arctan\frac{-2}{3}=y$. This implies $tany=\frac{-2}{3}$. Because tan is negative, we know y must lie in quadrant IV. It cannot lie in quadrant I because tan is positive in quadrant I. Therefore we draw our angle as you have above. Now, note that $tan\theta=\frac{opposite}{adjacent}$. Therefore, you should have $-2$ where $3$ is in your picture, and you should have $3$ where $-2$ is. Your line should be drawn to the coordinate $(3,-2)$. Now, we find the hypotenuse as you have already done using the Pythagorean Theorem. We find that it is $\sqrt{13}$ as you've noted. Now, because we are finding $cos(arctan\frac{-2}{3})$, we will use the fact that $cos\theta=\frac{adjacent}{hypotenuse}$. So, this gives $\frac{3}{\sqrt{13}}$. Rationalizing we have $\frac{3\sqrt{13}}{13}$
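A one-line numerical confirmation (hypothetical, not part of the original answer; the binding is made up): the principal value $\arctan(-2/3)$ lies in quadrant IV, so its cosine is positive.

check :: (Double, Double)
check = (cos (atan (-2/3)), 3 / sqrt 13)

Both components are about 0.83205, i.e. $\frac{3\sqrt{13}}{13}$.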
• Yes, tangent is negative in quadrant IV and II, but the range for arctan is limited to $\frac{-\pi}{2}\le{y}\le\frac{\pi}{2}$ so our angle cannot lie in quadrant II. It must lie in either quadrant I or IV. And since it is negative, it must lie in quadrant IV. – MathGuy Jan 27 '17 at 3:04
• Yes, the ranges are different though depending on the inverse trig function. For arcsin the range is $\frac{-\pi}{2}\le{y}\le\frac{\pi}{2}$, for arccos the range is $0\le{y}\le\pi$ and for arctan the range is $\frac{-\pi}{2}\le{y}\le\frac{\pi}{2}$ – MathGuy Jan 27 '17 at 3:12 | 2019-08-25T17:56:56 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2115891/how-to-know-if-the-angle-is-positive-or-negative-in-inverse-trigonometry",
"openwebmath_score": 0.9125866293907166,
"openwebmath_perplexity": 239.84451636178318,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.9883127399388852,
"lm_q2_score": 0.8705972768020108,
"lm_q1q2_score": 0.8604223800195274
} |
https://mathhelpboards.com/threads/further-question-on-my-fractions-problem.9489/ | # Further question on My Fractions problem
#### tmt
##### Active member
I have a separate question on the same problem from my prior post.
I need an equation for a tangent which has a slope of 5/6 and passes through (1,3)
y-3 = 5/6(x-1)
I simplify this to
y= 5/6x +13/6
However the answer given is y = 7/6(x) + 13/6
Where am I going wrong?
Yours,
Timothy
#### MarkFL
Staff member
Your line has the required slope, while the given answer has the wrong slope. It is most likely a typo somewhere, either in the statement of the problem or the given answer. Can you post the original problem in its entirety?
#### tmt
##### Active member
PROBLEM 11: Find an equation of the line tangent to the graph of $x^2 + (y-x)^3 = 9$ at $x=1$.
This is the end of the answer:
Thus, the slope of the line tangent to the graph at (1, 3) is
$m = y' = \displaystyle{ 3 (3-1)^2 - 2(1) \over 3 (3-1)^2 } = \displaystyle{ 10 \over 12 } = \displaystyle{ 5 \over 6 }$ ,
and the equation of the tangent line is
y - ( 3 ) = (5/6) ( x - ( 1 ) ) ,
or
y = (7/6) x + (13/6) .
I suspect it is a typo in the answer. Here is the link for the full answer.
https://www.math.ucdavis.edu/~kouba...soldirectory/ImplicitDiffSol.html#SOLUTION 11
#### MarkFL
Staff member
Okay, I see now...I assumed the slope was given as 5/6. Let's take a look at the problem. We are given the curve:
$$\displaystyle x^2+(y-x)^3=9$$
So, implicitly differentiating with respect to $x$, we find:
$$\displaystyle 2x+3(y-x)^2\left(\frac{dy}{dx}-1 \right)=0$$
Solving for $$\displaystyle \frac{dy}{dx}$$, we find:
$$\displaystyle \frac{dy}{dx}=1-\frac{2x}{3(y-x)^2}$$
Now, when $x=1$, we find from the original curve:
$$\displaystyle 1^2+(y-1)^3=9$$
$$\displaystyle y=3$$
And so we find the slope at the given point is:
$$\displaystyle \left.\frac{dy}{dx} \right|_{(x,y)=(1,3)}=1-\frac{2(1)}{3(3-1)^2}=1-\frac{2}{12}=\frac{5}{6}$$
Hence, using the point-slope formula, we obtain the tangent line:
$$\displaystyle y-3=\frac{5}{6}(x-1)$$
$$\displaystyle y=\frac{5}{6}x+\frac{13}{6}$$
Here is a plot of the curve and the tangent line:
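A small finite-difference cross-check (hypothetical, not from the thread; names are made up): near $(1,3)$ the curve can be solved explicitly for $y$, and the numerical slope reproduces $5/6$.

-- solve x^2 + (y - x)^3 = 9 for y on the branch through (1, 3)
yOnCurve :: Double -> Double
yOnCurve x = x + (9 - x*x) ** (1/3)

slopeAt1 :: Double
slopeAt1 = (yOnCurve (1 + h) - yOnCurve 1) / h  where h = 1e-6

Here yOnCurve 1 is (numerically) 3 and slopeAt1 is about 0.8333, i.e. $5/6$.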
#### Deveno
##### Well-known member
MHB Math Scholar
PROBLEM 11: Find an equation of the line tangent to the graph of $x^2 + (y-x)^3 = 9$ at $x=1$.
This is the end of the answer:
Thus, the slope of the line tangent to the graph at (1, 3) is
$m = y' = \displaystyle{ 3 (3-1)^2 - 2(1) \over 3 (3-1)^2 } = \displaystyle{ 10 \over 12 } = \displaystyle{ 5 \over 6 }$ ,
and the equation of the tangent line is
y - ( 3 ) = (5/6) ( x - ( 1 ) ) ,
or
y = (7/6) x + (13/6) .
I suspect it is a typo in the answer. Here is the link for the full answer.
https://www.math.ucdavis.edu/~kouba...soldirectory/ImplicitDiffSol.html#SOLUTION 11
I concur that both you and MarkFL are correct, the link you provided has a typo in the very last line (it is correct until that), and the correct tangent line has the equation:
$y = \dfrac{5}{6}x + \dfrac{13}{6}$ | 2021-08-05T05:23:58 | {
"domain": "mathhelpboards.com",
"url": "https://mathhelpboards.com/threads/further-question-on-my-fractions-problem.9489/",
"openwebmath_score": 0.8594733476638794,
"openwebmath_perplexity": 499.87169727565464,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.988312741660069,
"lm_q2_score": 0.870597265050901,
"lm_q1q2_score": 0.8604223699042137
} |
https://math.stackexchange.com/questions/2845813/find-all-values-of-k-for-kx2k2x-3-0-with-positive-roots | Find all values of $k$ for $kx^2+(k+2)x-3=0$ with positive roots.
$$kx^2+(k+2)x-3=0$$
This quadratic has roots which are real and positive.
Find all possible values of $k$.
$$Δ = (k+8)^2-60$$ $$==> (k+8)^2-60>0$$ $$k>2\sqrt{15}\ - 8$$ and $$k<-2\sqrt{15}\ -8$$
However this didn't look right. Any suggestions?
Edit: I think I might have figured it out.
Since we know,
$$k\not= 0$$ $$\frac{-3}{k}>0$$ $$\therefore k<0$$
What I got earlier was not wrong, but rather incomplete.
$$k<-2\sqrt{15} -8$$ This can be ruled out since, $$k<0$$ $$\therefore 2\sqrt{15} -8 < k < 0$$
If someone could please still check my work that would be nice.
• I appreciate the advice, I had already tried something but was not sure if it was the correct "path" for this problem so I didn't show it. – StrBoP Jul 9 '18 at 18:48
• You're welcome. Note that positive numbers are always real, so the "real" in "real and positive" is redundant. – Shaun Jul 9 '18 at 18:54
• Do you mean $(k+2)^2$? – Chris2018 Jul 9 '18 at 18:59
• No, I can show more work if you'd like. But after simplifying to a greater extent you get a quadratic, Since that quadratic wasnt factorable I completed the square and then I got that. Sorry if I confused you. – StrBoP Jul 9 '18 at 19:02
Hint:
Using Vieta's formulas, you can see that the multiplication of the roots of a quadratic $ax^2+bx+c$ is given by $\frac{c}{a},$ which is equal to $\frac{-3}{k}$ in your case.
Since both roots are positive, this means that $\frac{-3}{k}$ must be positive and $k$ must be negative. Also, the sum of the roots (given by $-\frac{b}{a}$) must be positive too: $-\frac{k+2}{k} > 0$.
Since we already know that $k$ is negative, we get $-(k+2) < 0 \iff -2 < k$. Hence, $-2<k<0$ so far.
Now in order for this quadratic to have real roots, its discriminant must be non-negative. After you write it down, you see that you should solve the inequality $$(k+2)^2-4k(-3) =(k+2)^2+12k \geq 0$$ to find the range of values for $k,$ and then intersect it with $$-2 < k <0$$ at the end.
$$(k+2)^2+12k \geq 0 \iff k\geq \sqrt{60}-8 \text{ or } k\leq -8 -\sqrt{60}$$
Intersecting this range with $-2 < k < 0$ gives $\sqrt{60}-8 \leq k<0$.
• Might also be helpful to quote Vieta's formulas. – TheSimpliFire Jul 9 '18 at 18:47
• A positive product of the roots is not enough. – Bernard Jul 9 '18 at 18:51
• @stressed out , Thank you for the help – StrBoP Jul 9 '18 at 18:59
• How do you get $-(k+2) < 0 \iff 2 < k$? $$-(k+2) < 0 \implies -2 < k$$ From this it does not follow that $2 < k$ – gd1035 Jul 9 '18 at 19:05
• @stressedout Sorry for bothering once again, but I checked at the back of the book and the answer read: -8 +√60 </ k < 0 – StrBoP Jul 9 '18 at 19:05
You have three conditions to check:
1. This equation has real roots. It means it is a quadratic equation ($k\ne 0$) and its discriminant should be positive: $$\Delta=(k+2)^2+12k=k^2+16k+4=(k+8)^2-60>0,$$ so either $k<-8-2\sqrt{15}\:$ or $\:k>-8+2\sqrt{15}$.
2. The roots must have the same sign, i.e. their product $-\dfrac 3k>0$, which means $\:\color{red}{k<0}$.
3. This sign must be positive. If the roots have the same sign, this sign is also the sign of their sum $\:-\dfrac{k+2}k$, which is the sign of the product $-k(k+2)$. To sum it up, we need to have $k(k+2)<0$, which happens if and only if $\: \color{red}{-2<k<0}$.
Now note that$\:-8-2\sqrt{15}<-2$, and as $\;3<\sqrt{15}<4$ we have $\:-2<-8+2\sqrt{15}<0\:$.
Thus eventually the solutions are $$-8+2\sqrt{15}<k<0.$$
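As a quick numerical cross-check of this interval (an added sketch, not part of the original answers; the sample values of $k$ are arbitrary):

```python
import numpy as np

lo = -8 + 2 * np.sqrt(15)             # left endpoint, about -0.254

for k in [-1.0, lo - 0.05, lo + 0.05, -0.1, 0.5]:
    r = np.roots([k, k + 2, -3])      # roots of k x^2 + (k + 2) x - 3
    ok = bool(np.all(np.isreal(r)) and np.all(r.real > 0))
    print(round(k, 4), np.round(r, 3), ok)

# Only the two sample values inside (-8 + 2*sqrt(15), 0) report True.
```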
• The maniac downvoter struck again! – Bernard Jul 9 '18 at 23:10 | 2019-08-20T16:54:17 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2845813/find-all-values-of-k-for-kx2k2x-3-0-with-positive-roots",
"openwebmath_score": 0.8804914355278015,
"openwebmath_perplexity": 292.1029525472093,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9883127433812521,
"lm_q2_score": 0.8705972600147106,
"lm_q1q2_score": 0.8604223664253399
} |
https://math.stackexchange.com/questions/1852540/find-sum-m-0n-1m-mn-n-choose-m | # Find $\sum_{m=0}^n\ (-1)^m m^n {n \choose m}$
I'm going to university in October and thought I'd have a go at a few questions from one of their past papers. I have completed the majority of this question but I'm stuck on the very last part. In honesty I've been working on this paper a while now and I'm a bit tired so I'm probably giving up earlier than I usually would.
I won't write out the full question, only the last part:
Let $$S_r(n) = \sum_{m=0}^n\ (-1)^m m^r {n \choose m}$$ where r is a non-negative integer . Show that $S_r(n)=0$ for $r<n$. Evaluate $S_n(n)$.
I have shown that $S_r(n)=0$ for $r<n$ by taking $(1+z)^n= \sum_{m=0}^n\ z^m {n \choose m}$, letting $D_r(f(z))=z\frac d{dz}(z\frac d{dz}...(\frac d{dz}(f(z)))...)$ where $z\frac d{dz}$ is applied $r$ times and applied it to both sides. The left hand side gives a polynomial, degree n which has factor $(1+z)$ for all $r<n$ and the right hand side gives $\sum_{m=0}^n\ z^m (m)^r {n \choose m}$. Setting $z=-1$ yields the required result.
There is some build up to this, so I'm fairly certain that this was the intended method.
I'm stuck however on the very last part. I have tried finding a form for $D_r((1+z)^n)$, but I'm fairly sure that this isn't the correct approach, as the wording of the question implies that $S_n(n)$ needs to be considered separately.
I'm surprised that I didn't find that this question had already been asked, so apologies if it has been.
Thank you.
• Looks like a close relative of Stirling numbers of the second kind. Please see Wikipedia. – André Nicolas Jul 7 '16 at 22:01
• Ok thanks, will do – Aka_aka_aka_ak Jul 7 '16 at 22:12
• You are welcome. Note the close relationship to the number of onto functions. The calculation you carried out successfully (and that I would have trouble with) can be bypassed once we note there are $0$ onto functions from an $r$-element set to an $n$-element set if $r\lt n$. – André Nicolas Jul 7 '16 at 22:23
The following relation encapsulates the Stirling number semantics:
$$m^r = r! [z^r] \exp(mz).$$
$$S_r(n) = r! [z^r] \sum_{m=0}^n {n\choose m} (-1)^m \exp(mz) = r! [z^r] (1-\exp(z))^n.$$
Now observe that
$$1-\exp(z) = - z - \frac{z^2}{2} - \frac{z^3}{6} -\cdots$$
which means that $(1-\exp(z))^n$ starts at $[z^r]$ where $r=n$ with coefficient $(-1)^n$, producing for the sum the value
$$(-1)^n\times n!$$
and the coefficients on $[z^r]$ with $r\lt n$ are zero.
The general form of the summation
$$\sum_{m=0}^n(-1)^mm^n\binom{n}m\;,\tag{1}$$
with the alternating sign and the binomial coefficient $\binom{n}m$, suggests that it can be interpreted as an inclusion-exclusion calculation. However, the $m=0$ term is $0$, from which we begin by subtracting a positive quantity $\binom{n}1=n$, which is a bit odd for such a calculation. This suggests letting $k=n-m$ and rewriting as
$$\sum_{k=0}^n(-1)^{n-k}(n-k)^n\binom{n}{n-k}=(-1)^n\sum_{k=0}^n(-1)^k\binom{n}k(n-k)^n\;,$$
where the righthand side has been arranged to look like a more or less typical inclusion-exclusion calculation. The factor $(n-k)^n$ is the one that isn’t part of the inclusion-exclusion machinery. It has a natural interpretation as the number of functions from $[n]$ to $[n]$ whose ranges are disjoint from some $k$-element subset of $[n]$. If for each $k\in[n]$ we let $F_k$ be the set of functions from $[n]$ to $[n]$ whose ranges do not contain $k$, we have
$$\left|\bigcap_{k\in I}F_k\right|=(n-|I|)^n$$
whenever $\varnothing\ne I\subseteq[n]$, so
\begin{align*} \left|\bigcup_{k=1}^nF_k\right|&=\sum_{\varnothing\ne I\subseteq[n]}(-1)^{|I|-1}\left|\bigcap_{k\in I}F_k\right|\\ &=\sum_{\varnothing\ne I\subseteq[n]}(-1)^{|I|-1}(n-|I|)^n\\ &=\sum_{k=1}^n(-1)^{k-1}\binom{n}k(n-k)^n\;. \end{align*}
This is the number of functions from $[n]$ to $[n]$ that are not surjective, i.e., the number that aren’t bijections. It follows that the number of bijections from $[n]$ to $[n]$ is
$$n^n-\sum_{k=1}^n(-1)^{k-1}\binom{n}k(n-k)^n=\sum_{k=0}^n(-1)^k\binom{n}k(n-k)^n\;.$$
On the other hand, we know that there are $n!$ bijections from $[n]$ to $[n]$, so
$$\sum_{m=0}^n(-1)^mm^n\binom{n}m=(-1)^n\sum_{k=0}^n(-1)^k\binom{n}k(n-k)^n=(-1)^nn!\;.$$
Note that the same basic argument handles the previous part of the problem. The factor $(n-k)^n$ is replaced by $(n-k)^r$, the number of functions from $[r]$ to $[n]$ whose ranges are disjoint from some specified $k$-element subset of $[n]$. The expression
$$\sum_{k=0}^n(-1)^k\binom{n}k(n-k)^r$$
counts the surjective functions from $[r]$ to $[n]$, and if $r<n$, this is of course $0$.
All of this is very closely related to the Stirling numbers of the second kind.
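A short numerical confirmation of both statements, added for illustration (not part of the original answer); the function name is mine:

```python
from math import comb, factorial

def S(r, n):
    # S_r(n) = sum_{m=0}^{n} (-1)^m m^r C(n, m)
    return sum((-1) ** m * m ** r * comb(n, m) for m in range(n + 1))

for n in range(1, 8):
    assert all(S(r, n) == 0 for r in range(n))     # S_r(n) = 0 for r < n
    assert S(n, n) == (-1) ** n * factorial(n)     # S_n(n) = (-1)^n n!
print("checked n = 1..7")
```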
• @BrianMScott: Very nice and clearly written exposition. (+1) – Markus Scheuer Jun 30 '18 at 16:51 | 2019-08-20T11:43:52 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1852540/find-sum-m-0n-1m-mn-n-choose-m",
"openwebmath_score": 0.903042197227478,
"openwebmath_perplexity": 110.59646952772967,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9883127430370154,
"lm_q2_score": 0.8705972600147105,
"lm_q1q2_score": 0.8604223661256483
} |
http://khhk.agence-des-4-fontaines.fr/least-squares-solver.html | # Least Squares Solver
Define least squares. In order for the solution to represent sensible pixel values, restrict the solution to be from 0 through 1. 1 Introduction. Students, teachers, parents, and everyone can find solutions to their math problems instantly. The numerical instability and performance are issues of larger problems and general setting. This example shows how to solve a nonlinear least squares problem in two ways. While symbolically correct, using the QR decomposition instead is numerically more robust. - linear_least_squares. Solve least-squares (curve-fitting) problems. Sitio Espejo para América Latina. Linear regression calculator Two-dimensional linear regression of statistical data is done by the method of least squares. Linear Regression calculator uses the least squares method to find the line of best fit for a sets of data X and Y or the linear relationship between two dataset. Let us understand What is Linear Regression and how to perform it with the help Ordinary Least Squares (OLS) estimator with an example. 1 Solving Least Squares Systems: SVD Approach One way to solve overdetermined systems is to use the Singular Value Decomposition of a matrix. solver to vary the values for A, C and k to minimize the sum of chi squared. LMS incorporates an. Solve any equations from linear to more complex ones online using our equation solver in just one click. Linear vs. From the geometric perspective, we can deal with the least squares problem by the following logic. Enter the statistical data in the form of a pair of numbers, each pair is on a separate line. Algorithm 1 Least-squares sub-problem input: H 2 CN ⇥N, q 2 CN, P 2 RM ⇥N, d 2 CM for each receiver (j )(rowinP) in parallel do H⇤ w j = p⇤ j {solve 1 PDE} end for W =[w 1 w 2w m] {distributed matrix} S =(I M + 2 W⇤ W)1 {adjust using Algorithm 2 (optional)} for source (i) in parallel do y i =(I N 2 WSW⇤)(q i + 2 Wd i) Hu i = y i {solve 1 PDE} end for output: u. The main purpose is to provide an example of the basic commands. Recall the formula for method of least squares. Microsoft Excel provides a tool called Solver that handles this prob-lem in a manner that is transparent to the user. Now click on fiSolvefl. Find a linear least squares fit for a set of points in Visual Basic. In this section w e brie y presen t the most cited w orks in ellipse tting and its closely related problem, conic tting. Least squares fit is a method of determining the best curve to fit a set of points. This article demonstrates how to generate a polynomial curve fit using. When we pass this (near) optimal solution to NL2SOL it will have an easy task. Nonlinear Least Squares Data Fitting D. 1 Introduction More than one explanatory variable In the foregoing chapter we considered the simple regression model where. An apparatus is available that marks a strip of paper at even intervals in time. Most math majors have some exposure to regression in their studies. Least-squares imaging and deconvolution using the hybrid norm conjugate-direction solver Yang Zhang and Jon Claerbout ABSTRACT To retrieve a sparse model, we applied the hybrid norm conjugate-direction (HBCD) solver proposed by Claerbout to two interesting geophysical problems: least-squares imaging and blind deconvolution. Learn more about least squares, curve fitting, optimization, nonlinear, fitting. A number of methods may be employed to solve this problem. In some applications, it may be necessary to place the bound constraints $$l \leq x \leq u$$ on the variables $$x$$. 
solve_least_squares_lm This is a function for solving non-linear least squares problems. First, least squares is a natural approach to estimation, which makes explicit use of the structure of the model as laid out in the assumptions. lstsq for the "direct" appraoch (as far as I know this uses SVD by standard, but I also tried all the other LINPACK options that scipy offers) $\endgroup$ – Bananach Oct 25 '16 at 19:14. Linear vs. C2, and D2) and then use Solver to find the least-squares parameters A, B, and C. MATH 3795 Lecture 9. You can perform least squares fit with or without the Symbolic Math Toolbox. This is a solved. You can vote up the examples you like or vote down the ones you don't like. The following is a sample implementation of simple linear regression using least squares matrix multiplication, relying on numpy for heavy lifting and matplotlib for visualization. Least squares regression analysis or linear regression method is deemed to be the most accurate and reliable method to divide the company’s mixed cost …. A least squares model contains a dummy objective and a set of linear equations: sumsq. 3 The Role of The quantities generated by the Lanczos process from (2. If it is not in the range, then it is the least squares solution. Loading Least-Squares Regression Line. The nonlinear problem is usually solved by iterative. SPGL1: A solver for sparse least squares. This is a mean estimated from a linear model. An issue came up about whether the least squares regression line has to pass through the point (XBAR,YBAR), where the terms XBAR and YBAR represent the arithmetic mean of the independent and dependent variables, respectively. This x is called the least square solution (if the Euclidean norm is used). Nonlinear least-squares solves min(∑||F(x i ) – y i || 2 ), where F(x i ) is a nonlinear function and y i is data. Answer to 4. The most expensive phase is the LSQR phase. solve a non-linear least squares problem. This page gathers different methods used to find the least squares circle fitting a set of 2D points (x,y). The equation for least squares solution for a linear fit looks as follows. Yanbo Liang (JIRA) Sun, 03 Jan 2016 18:05:30 -0800. In fact, I used this kind of solution in some situations. Define least squares. Factoring-polynomials. LEAST SQUARES and NORMAL EQUATIONS Background Overdetermined Linear systems: consider Ax = b if A is m n, x is n 1, b is m 1 with m > n. The least squares approach to regression is based upon minimizing these difference scores or deviation scores. Polynomials Least-Squares Fitting: Polynomials are one of the most commonly used types of curves in regression. For a general problem you wouldn't use this, of. so somewhere I'm doing something wrong. Let [] ∀k∈ℕ be a dispersion point in. Triangle Calculator. Nonlinear Least-Squares Fitting. Now that we have determined the loss function, the only thing left to do is minimize it. Given a matrix equation Ax=b, the normal equation is that which minimizes the sum of the square differences between the left and right sides: A^(T)Ax=A^(T)b. Least-Squares Line Least-Squares Fit LSRL The linear fit that matches the pattern of a set of paired data as closely as possible. Introduction¶. This influence is exaggerated using least squares. solver to vary the values for A, C and k to minimize the sum of chi squared. Therefore the least squares solution to this system is: xˆ = (A TA)−1A b = −0. 00000 Covariance matrix of Residuals 0. solve a non-linear least squares problem. Dr Gregory Reeves 26,616 views. 
To solve least squares problems based on PDE models requires sophisticated numerical techniques but also great attention with respect to the quality of data and identifiability of the parameters. 33 so this is our prediction. ‘huber’ : rho(z) = z if z <= 1 else 2*z**0. But how does this relate to the least-squares problem, where there are multiple measurements? Is the problem I am trying to solve essentially the same, except that the number of measurements is one? And in that case, is using Ceres Solver's non-linear least squares solver really necessary? Thanks!. The following is a sample implementation of simple linear regression using least squares matrix multiplication, relying on numpy for heavy lifting and matplotlib for visualization. Also tells you if the entered number is a perfect square. It estimates the value of a dependent variable Y from a given independent variable X. An online LSRL calculator to find the least squares regression line equation, slope and Y-intercept values. least squares solution). Node 13 of 18 Node 13 of 18 The Mixed Integer Linear Programming Solver Tree level 1. Order fractions from least to greatest or from greatest to least. The computational burden is now shifted, and one needs to solve many small linear systems. I'd like to know how to solve the least squares non linear regression in java only by passing a matrix A and a vector b like in python. Heh--reduced QR left out the right half of Q. SOLVING DIFFERENTIAL EQUATIONS WITH LEAST SQUARE AND COLLOCATION METHODS by Katayoun Bodouhi Kazemi Dr. 00004849386 0. The Excel Solver can be easily configured to determine the coefficients and Y-intercept of the linear regression line that minimizes the sum of the squares of all residuals of each input equation. Certain types of word problems can be solved by quadratic equations. In this lesson, we will explore least-squares regression and show how this method relates to fitting an equation to some data. There are many possible cases that can arise with the matrix A. The best-fit line, as we have decided, is the line that minimizes the sum of squares of residuals. Anyway, if you want to learn more about the derivation of the normal equation, you can read about it on wikipedia. The solve() method finds a vector x such that Σ i [f i (x)] 2 is minimized. Severely weakens outliers influence, but may cause difficulties in optimization process. When we used the QR decomposition of a matrix to solve a least-squares problem, we operated under the assumption that was full-rank. We have our explanatory variable x, that gets multiplied by this slope beta 1, and we also have an intercept where the line intersects the y axis. solve a non-linear least squares problem. A linear fit matches the pattern of a set of paired data as closely as possible. Given a set of samples {(x i,y i)}m i=1. lstsq in terms of computation time and memory. Triangle Calculator. Square of Matrix Calculator is an online tool programmed to calculate the square of the matrix A. This is a standard least squares problem and can easily be solved using Math. There are many possible cases that can arise with the matrix A. Since this system usually does not have a solution, you need to be satisfied with some sort of approximate solution. For any given. The GLS estimator can be shown to solve the problem which is called generalized least squares problem. y_i=A{x_i}^b When I solve for A two different ways I am getting different answers. 
The Least-Squares Method requires that the estimated function has to deviate as little as possible from f(x) in the sense of a 2-norm. Find answers to Weighted least Squares Excel from the expert community at Experts Exchange Also, when I load the solver case from R77:R96, the resulting. Method of Least Squares. ) Developer's Guide to Excelets/Sinex. Least–squares Solution of Homogeneous Equations supportive text for teaching purposes Revision: 1. The most widely used approximation is the least squares solution, which minimizes. The data, the interpolating polynomial (blue), and the least-squares line (red) are shown in Figure 1. * odinsbane/least-squares-in-java * NonLinearLeastSquares (Parallel Java Library Documentation) * NonlinearRegression (JMSL Numerical Library) Some related discussion here: Solving nonlinear equations. 5 Example 3: The orbit of a comet around the sun is either elliptical, parabolic, or hyperbolic. This is the general form of the least squares line. The calculator will generate a step by step explanation along with the graphic representation of the data sets and regression line. LLS is actively maintained for the course EE103, Introduction to Matrix Methods. Least Squares Regression Line of Best Fit. Other JavaScript in this series are categorized under different areas of applications in the MENU section on this. Octave also supports linear least squares minimization. Since it's a sum of squares, the method is called the method of least squares. They are connected by p DAbx. Lecture 11, Least Squares Problems, Numerical Linear Algebra, 1997. We present an algorithm for adding rows with a single nonzero to A to improve its conditioning; it attempts to add as few rows as possible. 00000241437 0. 1 Introduction A nonlinear least squares problem is an unconstrained minimization problem of the form minimize x f(x)= m i=1 f i(x)2, where the objective function is defined in terms of auxiliary functions {f i}. To perform WLS in EViews, open the equation estimation dialog and select a method that supports WLS such as LS—Least Squares (NLS and ARMA), then click on the Options tab. Least Squares with Examples in Signal Processing1 Ivan Selesnick March 7, 2013 NYU-Poly These notes address (approximate) solutions to linear equations by least squares. Last edited by shg; 10-23-2017 at 01:01 PM. 20 - PhET: Free online. R factor can be used in LSQR (an iterative least-squares solver [29]) to effi-ciently and reliably solve a regularization of the least-squares problem. The CVX Users’ Guide, Release 2. I am using python linalg. The results showed. Which Matlab function should I use?. A linear fit matches the pattern of a set of paired data as closely as possible. Least Squares Optimization The following is a brief review of least squares optimization and constrained optimization techniques,which are widely usedto analyze and visualize data. For treatment A, the LS mean is (3+7. Because nonlinear optimization methods can be applied to any function, for the relation between two variables, it finds functions that best fit a given set of data points from a list of more than 100 functions, which include most common and interesting functions, like gaussians, sigmoidals, rationals. AutoCorrelation (Correlogram) and persistence - Time series analysis. NET: Description: This example shows how to find a linear least squares fit for a set of points in Visual Basic. Least Squares. 
But, this OLS method will work for both univariate dataset which is single independent variables and single dependent variables and multi-variate dataset. 7 Least squares approximate solutions. Quadratic Regression Calculator. * odinsbane/least-squares-in-java * NonLinearLeastSquares (Parallel Java Library Documentation) * NonlinearRegression (JMSL Numerical Library) Some related discussion here: Solving nonlinear equations. In the process of solving a mixed integer least squares problem, an ordinary integer least squares problem is solved. Free Modulo calculator - find modulo of a division operation between two numbers step by step This website uses cookies to ensure you get the best experience. For details, see First Choose Problem-Based or Solver-Based Approach. Next, we develop a distributed least square solver over strongly connected directed graphs and show that the proposed algorithm exponentially converges to the least square solution provided the step-size is sufficiently small. Let [] ∀k∈ℕ be a dispersion point in. This is why some least-squares solvers do not use the normal equations under the hood (they instead use QR decomposition). See Input Data for the description of how to enter matrix or just click Example for a simple example. Spark MLlib currently supports two types of solvers for the normal equations: Cholesky factorization and Quasi-Newton methods (L-BFGS/OWL-QN). The method of iteratively reweighted least squares (IRLS) is used to solve certain optimization problems with objective functions of the form of a p-norm: ∑ = | − |, by an iterative method in which each step involves solving a weighted least squares problem of the form: (+) = ∑ = (()) | − |. This function outperforms numpy. BLENDENPIK: SUPERCHARGING LAPACK'S LEAST-SQUARES SOLVER 5 de ned in the prof. R factor can be used in LSQR (an iterative least-squares solver [29]) to effi-ciently and reliably solve a regularization of the least-squares problem. A necessary and sufficient condition is established on the graph Laplacian for the continuous-time distributed algorithm to give the least squares solution in the limit, with an exponentially fast convergence rate. Quadratic regression is a type of a multiple linear regression. The use of linear regression, or least squares method, is the most accurate method in segregating total costs into fixed and variable components. The least squares regression line ; The least squares regression line whose slope and y-intercept are given by: where , , and. Subsequently, Avronetal. This is done by finding the partial derivative of L, equating it to 0 and then finding an expression for m and c. Since this thesis is closely related to the least-squares adjustment problem and will actually present a new approach for solving this problem, let us first have a closer look at the classical approach. Least Squares with Examples in Signal Processing1 Ivan Selesnick March 7, 2013 NYU-Poly These notes address (approximate) solutions to linear equations by least squares. e the sum of squares of residuals is minimal under this approach. Ordinary Least Squares (OLS) regression (or simply "regression") is a useful tool for examining the relationship between two or more interval/ratio variables. This problem is called a least squares problem for the following reason. On input, the field x must be filled in with an initial estimate of the solution vector, and the field tol must be set to the desired tolerance. Solve word problems involving quadratic equations. 
find_min_box_constrained (using lbfgs_search_strategy(10)) performed poorly as it can be trapped on a boundary. where A is an m x n matrix with m > n, i. So really, what you did in the first assignment was to solve the equation using LSE. On a similar note,. This influence is exaggerated using least squares. i) (circles) and least-squares line (solid line) but we will see that the normal equations also characterize the solution a, an n-vector, to the more general linear least squares problem of minimizing kAa ykfor any matrix Athat is m n, where m n, and whose columns are linearly independent. Enter your data as (x,y) pairs, and find the equation of a line that best fits the data. Scramble Squares® Puzzle. least_squares(). Suppose that a matrix A is given that has more rows than columns, ie n, the number of rows, is larger than m, the number of columns. Quadratic Regression Calculator. Nonlinear Least Squares Data Fitting D. Main ideas 2. TI-89 graphing calculator program for calculating the method of least squares. This document is intended to clarify the issues, and to describe a new Stata command that you can use (wls) to calculate weighted least-squares estimates for problems such as the Strong interaction'' physics data described in Weisberg's example 4. Also lets you save and reuse data. 00097402530 0. Line of Best Fit (Least Square Method) A line of best fit is a straight line that is the best approximation of the given set of data. Constructing a Least-Squares Graph Using Microsoft Excel. Part of our free statistics site; generates linear regression trendline and graphs results. To compare fractions the calculator first finds the least common denominator (LCD), converts the fractions to equivalent fractions using the LCD, then. By using this website, you agree to our Cookie Policy. Visit Stack Exchange. Type doc lsqnonlin for more details. The method of least squares - using the Excel Solver Michael Wood 5 advertising. For this, we're going to make use of the property that the least squares line always goes through x bar, y bar. The method of least squares calculates the line of best fit by minimising the sum of the squares of the vertical distances of the points to th e line. 0 released December 2019 This latest release of SPGL1 implements a dual root-finding mode that allows for increased accuracy for basis pusuit denoising problems. The Linear Least Squares Regression Line method is the accurate way of finding the line of best fit in case it's presumed to be a straight line that is the best approximation of the given set of data. since gradient descent is a local optimizer and can get stuck in local solution we need to use. Added Dec 13, 2011 by scottynumbers in Mathematics. When we used the QR decomposition of a matrix to solve a least-squares problem, we operated under the assumption that was full-rank. The center of the part and center of rotation are offset. Question: 4. How to Calculate Absolute Value. powered by $$x$$ y $$a 2$$ a b . "Solver" is a powerful tool in the Microsoft Excel spreadsheet that provides a simple means of fitting experimental data to nonlinear functions. The Least Squares Regression Line is the line that makes the vertical distance from the data points to the regression line as small as possible. Number of Data Points: X Data Points:. solve a non-linear least squares problem. Spark MLlib currently supports two types of solvers for the normal equations: Cholesky factorization and Quasi-Newton methods (L-BFGS/OWL-QN). 
The full documentation is available online. In the present instance solving for the node weights is not really viable for the following reason: in the actual, real-life setting the only decision variable (which I have control over & need to solver for) are the instrument weights = units of instruments (cells I11:I15). Algorithm 1 Least-squares sub-problem input: H 2 CN ⇥N, q 2 CN, P 2 RM ⇥N, d 2 CM for each receiver (j )(rowinP) in parallel do H⇤ w j = p⇤ j {solve 1 PDE} end for W =[w 1 w 2w m] {distributed matrix} S =(I M + 2 W⇤ W)1 {adjust using Algorithm 2 (optional)} for source (i) in parallel do y i =(I N 2 WSW⇤)(q i + 2 Wd i) Hu i = y i {solve 1 PDE} end for output: u. Effective use of Ceres requires some familiarity with the basic components of a non-linear least squares solver, so before we describe how to configure and use the solver, we will take a brief look at how some of the core optimization algorithms in Ceres work. LinearLeastSquares. It can be manually found by using the least squares method. That is, Octave can find the parameter b such that the model y = x*b fits data (x,y) as well as possible, assuming zero-mean Gaussian noise. The least-squares line or regression line can be found in the form of y = mx + b using the following formulas. When we pass this (near) optimal solution to NL2SOL it will have an easy task. The Least Squares Regression Calculator is biased against data points which are located significantly away from the projected trend-line. 1 Linear Least Squares Problem. You must select the Solver Add-in and then press the OK button. Use the EXCEL SOLVER program to minimise S by varying the paramters "a" and "b" This will produce estimates of a and b that give the best fitting straight line to the data. It will b e sho wn that the direct sp eci c least-square tting of ellipses. (A for all ). This page allows performing nonlinear regressions (nonlinear least squares fittings). Let [] ∀k∈ℕ be a dispersion point in. Nonlinear Least Squares Data Fitting D. By using this website, you agree to our Cookie Policy. The generalized least squares problem. It is used to study the nature of the relation between two variables. Let , , and be defined as previously. This article introduces the method of fitting nonlinear functions with Solver. Used to determine the “best” line. The least squares criterion is a formula used to measure the accuracy of a straight line in depicting the data that was used to generate it. Least-Squares Fitting of Data with Polynomials Least-Squares Fitting of Data with B-Spline Curves. In the present instance solving for the node weights is not really viable for the following reason: in the actual, real-life setting the only decision variable (which I have control over & need to solver for) are the instrument weights = units of instruments (cells I11:I15). LEAST MEAN SQUARE ALGORITHM 6. Use our online quadratic regression calculator to find the quadratic regression equation with graph. LinearAlgebra namespace in C#. The applications of the method of least squares curve fitting using polynomials are briefly discussed as follows. 0 released December 2019. 00000088820 0. A least squares model contains a dummy objective and a set of linear equations: sumsq. Least squares means are adjusted for other terms in the model (like covariates), and are less sensitive to missing data. The CVX Users’ Guide, Release 2. Given a set of data, we can fit least-squares trendlines that can be described by linear combinations of known functions. 
com A collection of really good online calculators for use in every day domestic and commercial use!. Lecture 11, Least Squares Problems, Numerical Linear Algebra, 1997. Trouble may also arise when M = N but the matrix is singular. The first step. using least squares minimization. To show the powerful Maple 10 graphics tools to visualize the convergence of this Polynomials. The best-fit line, as we have decided, is the line that minimizes the sum of squares of residuals. Wow, there's a lot of similarities there between real numbers and matrices. CPM Student Tutorials CPM Content Videos TI-84 Graphing Calculator Bivariate Data TI-84: Least Squares Regression Line (LSRL). Regression Using Excel's Solver. However, if users insist on finding the total least squares fit then an initial approximation is still required and the linear least squares approach is recommended. That closed-form solution is called the normal equation. overdetermined system, least squares method The linear system of equations A =. For details, see First Choose Problem-Based or Solver-Based Approach. Question: 4. Least squares means are adjusted for other terms in the model (like covariates), and are less sensitive to missing data. Enter two data sets and this calculator will find the equation of the regression line and corelation coefficient. Sum of squares is used in statistics to describe the amount of variation in a population or sample of observations. 1 Introduction. , there are more equations than unknowns, usually does not have solutions. Click the button labeled “Click to Compute”. They are connected by p DAbx. Linear least-squares solves min||C*x - d|| 2 , possibly with bounds or linear constraints. Each node has access to one of the linear equations and holds a dynamic state. When radians are selected as the angle unit, it can take values such as pi/2, pi/4, etc. The method of least squares - using the Excel Solver Michael Wood 5 advertising. Years after 1900 50 60 70 80 90 100 Percentage 29. Estimating an ARMA Process Overview 1. Loading Unsubscribe from Adrian Lee? Least Squares Linear Regression - EXCEL - Duration: 10:55. No Bullshit Guide To Linear Algebra, 2017. Added Dec 13, 2011 by scottynumbers in Mathematics. (We use the squares for much the same reason we did when we defined the variance in Section 3. Alternative solution methods. See LICENSE_FOR_EXAMPLE_PROGRAMS. The data, the interpolating polynomial (blue), and the least-squares line (red) are shown in Figure 1. LMS incorporates an. First, least squares is a natural approach to estimation, which makes explicit use of the structure of the model as laid out in the assumptions. The solutions. org are unblocked. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. Example 2 in the KaleidaGraph Quick Start Guide shows how to apply a Linear curve fit to a Scatter plot. Check Minitab for definition of influential points. Linear regression calculator Two-dimensional linear regression of statistical data is done by the method of least squares. Quadratic Regression Calculator. Trouble may also arise when M = N but the matrix is singular. The method of iteratively reweighted least squares (IRLS) is used to solve certain optimization problems with objective functions of the form of a p-norm: ∑ = | − |, by an iterative method in which each step involves solving a weighted least squares problem of the form: (+) = ∑ = (()) | − |. 
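The passages above repeatedly mention fitting a straight line by least squares and solving the normal equations $A^TAx=A^Tb$; as a concrete, self-contained illustration (an added sketch with made-up data, not taken from the page):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])           # roughly y = 2x + 1 with noise

A = np.column_stack([x, np.ones_like(x)])          # design matrix [x, 1]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # minimizes ||A c - y||_2
print(coef)                                        # slope ~ 2, intercept ~ 1

# Same answer from the normal equations A^T A c = A^T y; fine for small,
# well-conditioned problems, but QR/lstsq is numerically more robust.
coef_ne = np.linalg.solve(A.T @ A, A.T @ y)
print(np.allclose(coef, coef_ne))                  # True
```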
To approximate a Points Dispersion through Least Square Method using a Quadratic Regression Polynomials and the Maple Regression Commands. In fact, I used this kind of solution in some situations. lsqnonneg applies only to the solver-based approach. Number of Data Points: X Data Points:. The help qr command in Matlab gives the following information: >> help qr QR Orthogonal-triangular decomposition. lstsq in terms of computation time and memory. I tried the following set-up: - Given is a vector of original exposure across a range of seven nodes (C8:I2) - The aim is to replicate this exposure at each point as close as possible from a set of 5 instruments. solve public void solve() Solve this nonlinear least squares minimization problem. 4 Linear Least Squares. Least squares regression analysis or linear regression method is deemed to be the most accurate and reliable method to divide the company’s mixed cost …. In this case, solving the normal equations (5) is equivalent to. If the system matrix is rank de cient, then other methods are. Enter the fraction separated by comma and press the calculate button. solve a non-linear least squares problem. Regression Using Excel's Solver. powered by. Given a set of data, we can fit least-squares trendlines that can be described by linear combinations of known functions. Check Minitab for definition of influential points. to solve multidimensional problem, then you can use general linear or nonlinear least squares solver. (3) Solve the diagonal system Σˆw = Uˆ∗b for w. The original domain is. It is called “least squares” because we are minimizing the sum of squares of these functions. I If m= nand Ais invertible, then we can solve Ax= b. The limitations of the OLS regression come from the constraint of the inversion of the X’X matrix: it is required that the rank of the matrix is p+1, and some numerical problems may arise if the matrix is not well behaved. Free online LCM calculator. Leykekhman - MATH 3795 Introduction to Computational MathematicsLinear Least Squares { 11. For this purpose, the initial values of A, B, and C in cells F2-F4 should be those found by Solver in the previous run. 1 Linear Least Squares Problem. If you're seeing this message, it means we're having trouble loading external resources on our website. For details, see First Choose Problem-Based or Solver-Based Approach. Note: this method requires that A not have any redundant rows. Note: Be sure that your Stat Plot is on and indicates the Lists you are using. Nonlinear Regression. Linear Least Squares Regression Line Calculator - v1. Contribute to kashif/ceres-solver development by creating an account on GitHub. Linear regression line calculator to calculate slope, interception and least square regression line equation. The second one is the Levenberg-Marquardt method. Enter the number of data pairs, fill the X and Y data pair co-ordinates, the least squares regression line calculator will show you the result. | 2020-03-28T08:54:44 | {
"domain": "agence-des-4-fontaines.fr",
"url": "http://khhk.agence-des-4-fontaines.fr/least-squares-solver.html",
"openwebmath_score": 0.4954856038093567,
"openwebmath_perplexity": 651.6307704769378,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9902915215128426,
"lm_q2_score": 0.8688267660487572,
"lm_q1q2_score": 0.8603917800815063
} |
https://cs.stackexchange.com/questions/79263/count-arrays-with-size-n-sum-k-and-largest-element-m | # Count arrays with size n, sum k and largest element m
I'm trying to solve a pretty complex combinatorics problem.
Namely, we are given three numbers N, K, M. Now we want to count how many different arrays of integers there are with length N, sum K and all elements in the range [1, M].
Constraints:
• 1 <= N <= 100
• 1 <= K <= 100
• 1 <= M <= 100
Example
Let's say N = 2, K = 5, M = 3. This means that we want to count arrays of integers of size 2 with the sum of all elements equal to 5 and elements in range [1, 3]. There are a total of 2 arrays: {2, 3} and {3, 2}. Please note that the order of the elements also matters: {2, 3} is not equal to {3, 2}
Second example: N = 4, K = 7, M = 3. We want to count arrays of length 4, sum of 7 and elements in range [1, 3].
There are a total of 16 possible arrays: (1,1,2,3), (1,1,3,2), (2,1,1,3), (3,1,1,2), (2,3,1,1), (3,2,1,1), (1,2,3,1), (1,3,2,1), (1,2,1,3), (1,3,1,2), (2,1,3,1), (3,1,2,1), (1,2,2,2), (2,1,2,2), (2,2,1,2), (2,2,2,1)
What I have tried
I know that one solution is to generate all possible arrays, but such an algorithm takes exponential time, which is far too slow for N = 100. I started thinking about solving this with three-dimensional dynamic programming, but I cannot find the relations between the states.
I'm thinking about it this way: Let f(i, j, l) be the number of arrays of length i, sum j, and largest element l. We can see that for i = 0, f(i,j,l) = 0, so I think this is the base case. Also, f(1, 1, 1) = 1 is another base case.
Now I cannot find the relations between the states. Can you give me some hints on how to find the relations between the states? Thanks in advance.
• Nice problem. Can you credit the source where you encountered the problem? – D.W. Jul 25 '17 at 16:11
• The problem is from one macedonian site for training, and the problem in original is in macedonian, but i can give you the test cases if you want them – someone12321 Jul 25 '17 at 16:30
You can use dynamic programming. For each $0 \leq i \leq N$ and $0 \leq s \leq K$, count the number of arrays of length $i$, consisting of numbers in the range $\{1,\ldots,M\}$, which sum to exactly $s$. The running time is $O(NK)$.
Explicitly, denoting the array by $a$, we have $a(0,0) = 1$, $a(0,s) = 0$ otherwise, and $$a(i,s) = \sum_{t=1}^M a(i-1,s-t),$$ where $a(i-1,r) = 0$ if $r < 0$.
As Hendrik Jan mentions in the comments, we can improve on this $O(NKM)$ algorithm by using the recursion $$a(i,s) = a(i,s-1) - a(i-1,s-1-M) + a(i-1,s-1),$$ with suitable base cases.
Alternatively, we can obtain an explicit expression: the answer is the coefficient of $x^K$ in the generating function $$(x + \cdots + x^M)^N = x^N \left(\frac{1-x^M}{1-x}\right)^N.$$ In other words, we are looking for the coefficient of $x^{K-N}$ in $$\left( \sum_{i=0}^N (-1)^i \binom{N}{i} x^{iM} \right) \left( \sum_{j=0}^\infty \binom{j+N-1}{N-1} x^j \right),$$ which has the closed form $$\sum_{i=0}^{\min(N,\lfloor (K-N)/M\rfloor)} (-1)^i \binom{N}{i} \binom{K-iM-1}{N-1}.$$
• The numbers $a(i,s)$ represent a matrix of solutions for "fixed" $M$. The numbers $a(i,s)$ are a "running sum" of $M$ consecutive elements from the previous row. So, can't the complexity be improved to $O(NK)$, as $a(i,s) = a(i,s-1) + a(i-1,s-1) - a(i-1,s-M-1)$? give or take typo's. – Hendrik Jan Jul 25 '17 at 1:22
• @HendrikJan Yes, that's right! I missed this somehow. – Yuval Filmus Jul 25 '17 at 5:12
You actually don't need the largest element l in your recursive function. It doesn't make much sense to use it, since the largest number in an array has no influence on the other numbers.
In most (all?) dynamic programming problems you have to think about the last step / the last part. For f(i, j) you have an array with i numbers. You can imagine, that in the last step you added the last number to the array. So before you added the last number, the array only consists of i-1 numbers.
Now, if the added number was 1, then there are f(i-1, j-1) many arrays. If the added number was 2, then there are f(i-1, j-2) many arrays. And so on...
Add all those possibilities up, and you end up with f(i, j).
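A minimal Python sketch of this dynamic program (the O(NKM) version), together with the closed form from the earlier answer as a cross-check; this is an added illustration, not part of the original answers, and the function names are mine:

```python
from math import comb

def count_arrays(N, K, M):
    # a[i][s] = number of length-i arrays with entries in 1..M summing to s
    a = [[0] * (K + 1) for _ in range(N + 1)]
    a[0][0] = 1
    for i in range(1, N + 1):
        for s in range(1, K + 1):
            a[i][s] = sum(a[i - 1][s - t] for t in range(1, min(M, s) + 1))
    return a[N][K]

def count_closed_form(N, K, M):
    # sum_i (-1)^i C(N, i) C(K - iM - 1, N - 1) over 0 <= i <= min(N, (K-N)//M)
    if K < N:
        return 0
    return sum((-1) ** i * comb(N, i) * comb(K - i * M - 1, N - 1)
               for i in range(min(N, (K - N) // M) + 1))

print(count_arrays(2, 5, 3), count_closed_form(2, 5, 3))   # 2 2
print(count_arrays(4, 7, 3), count_closed_form(4, 7, 3))   # 16 16
```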
I don't pretend this is the most efficient solution, nor does it have strong theoretical merit, but you may still find it useful for verifying your results. Here's a small program in Prolog (using SWI-Prolog for the constraint satisfaction part) which can calculate or verify your guess:
% -*- mode: prolog; prolog-system: "swi" -*-
:- use_module(library(clpfd)).
% sums(Acc, List, Total): Total is Acc plus the sum of the elements of List.
sums(K, [], K).
sums(A, [X | Xs], K) :-
    indomain(A),
    indomain(X),
    B is A + X,
    sums(B, Xs, K).

% sums(List, K): the elements of List sum to K.
sums([X | Xs], K) :- sums(X, Xs, K).

% nmk_problem_helper(N, K, M, List): List has length N, entries in 1..M,
% and its elements sum to K.
nmk_problem_helper(N, K, M, List) :-
    indomain(N),
    indomain(M),
    indomain(K),
    length(List, N),
    List ins 1..M,
    sums(List, K).

% Collect all solutions for the given N, K, M.
nmk_problem(N, K, M, Arrays) :-
    findall(X, nmk_problem_helper(N, K, M, X), Arrays).

nmk_problem_all_helper(List) :-
    [N, M, K] ins 1..100,
    indomain(N),
    indomain(M),
    indomain(K),
    length(List, N),
    List ins 1..M,
    sums(List, K).

nmk_problem_all(Arrays) :-
    findall(X, nmk_problem_all_helper(X), Arrays).
Example usage:
?- nmk_problem(4, 7, 3, X).
X = [[1, 1, 2, 3], [1, 1, 3, 2], [1, 2, 1, 3], [1, 2, 2, 2], [1, 2, 3, 1], [1, 3, 1|...], [1, 3|...], [2|...], [...|...]|...]. | 2020-01-22T13:05:49 | {
"domain": "stackexchange.com",
"url": "https://cs.stackexchange.com/questions/79263/count-arrays-with-size-n-sum-k-and-largest-element-m",
"openwebmath_score": 0.681304931640625,
"openwebmath_perplexity": 521.7804452486056,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9843363512883316,
"lm_q2_score": 0.8740772335247532,
"lm_q1q2_score": 0.8603859947919545
} |
https://math.stackexchange.com/questions/1679302/multiple-answers-to-int-sqrt4t-t2-textrmd-t | # Multiple answers to $\int \sqrt{4t - t^2} \, \textrm{d} t$
I'm trying to understand why I'm getting different answers when taking different approaches to integrating
$$\int \sqrt{4t - t^2} \, \textrm{d} t$$
First, I tried substituting $\sqrt t = 2 \sin \theta$:
$$\begin{eqnarray} \int \sqrt{t}\sqrt{4 - t} \, \textrm{d} t &=& \int 2 \sin \theta \cdot \sqrt{4 - 4 \sin^2 \theta} \cdot 8 \sin \theta \cos \theta \, \textrm{d} \theta \\ &=& 32 \int \sin^2 \theta \cos^2 \theta \, \textrm{d} \theta \\ &=& 4 \int 1 - \cos 4 \theta \, \textrm{d} \theta \\ &=& 4 \theta - \sin 4 \theta + C \\ &=& 4 \theta - 4 \sin \theta \cos^3 \theta + 4 \sin^3 \theta \cos \theta + C \\ &=& 4 \arcsin \frac{\sqrt{t}}{2} + \frac{1}{2} (t - 2)\sqrt{4t - t^2} + C \\ \end{eqnarray}$$
Second, I tried completing the square and substituting $t - 2 = 2 \sin \theta$:
$$\begin{eqnarray} \int \sqrt{4 - (t^2 - 4t + 4)} \, \textrm{d} t &=& \int \sqrt{4 - (t - 2)^2} \, \textrm{d} t \\ &=& \int \sqrt{4 - 4 \sin^2 \theta} \cdot 2 \cos \theta \, \textrm{d} \theta \\ &=& 4 \int \cos^2 \theta \, \textrm{d} \theta \\ &=& 2 \int (1 + \cos 2 \theta) \, \textrm{d} \theta \\ &=& 2 \theta + \sin 2 \theta + C \\ &=& 2 \theta + 2 \sin \theta \cos \theta + C \\ &=& 2 \arcsin \left(\frac{t - 2}{2}\right) + \frac{1}{2}(t - 2)\sqrt{4t - t^2} + C \\ \end{eqnarray}$$
The second answer is the same as in the book but I don't understand why the first approach gives the wrong answer.
• Here's one sticking point: $1-\cos 4\theta = 8\sin^2\theta\cos^2\theta.$ – Cameron Williams Mar 2 '16 at 0:52
• @CameronWilliams, good catch, I'll update my first answer. – Chewers Jingoist Mar 2 '16 at 0:56
• Both answers are correct: if you let $f(t)=4\sin^{-1}\frac{\sqrt{t}}{2}-2\sin^{-1}\frac{t-2}{2}$, then $f^{\prime}(t)=0$ so $f(t)$ is a constant. – user84413 Mar 2 '16 at 1:08
• @user84413, when I differentiate $f(t)$ I get $\frac{4}{\sqrt{4 - t}} - \frac{2}{\sqrt{4 - (t - 2)^2}} = \frac{4}{\sqrt{4 - t}} - \frac{2}{\sqrt{4t - t^2}}$. Could you elaborate on how you get $0$? – Chewers Jingoist Mar 2 '16 at 1:21
• In the first term, I think you will get $4\frac{1}{\sqrt{1-t/4}}\frac{1}{4\sqrt{t}}=\frac{2}{\sqrt{4t-t^2}}$ – user84413 Mar 2 '16 at 1:33
$$\arcsin\left(\frac t2-1\right) = 2\left(\arcsin\frac{\sqrt t}2 -\frac\pi4\right).\tag{*}$$
Proof: Write $\theta:=\arcsin\frac{\sqrt t}2$. Then $\sin^2\theta=\frac t4$ and $\cos^2\theta=1-\frac t4$, and using the angle-difference identities, $$\sin\left(\theta-\frac\pi4\right)=\frac1{\sqrt 2}(\sin\theta-\cos\theta)\tag1$$ while $$\cos\left(\theta-\frac\pi4\right)=\frac1{\sqrt 2}(\sin\theta+\cos\theta).\tag2$$ Therefore $$\sin2\left(\theta-\frac\pi4\right)=2\sin\left(\theta-\frac\pi4\right)\cos\left(\theta-\frac\pi4\right) \stackrel{(1),(2)}=\sin^2\theta-\cos^2\theta =\frac t4-\left(1-\frac t4\right)=\frac t2-1.$$ Therefore both sides of (*) have the same sine. Similarly you can show that both sides have the same cosine.
• Where does $2$ come from at the start of your last line in $sin 2(\theta - \frac{\pi}{4})$? And what does the $(1),(2)$ mean? – Chewers Jingoist Mar 2 '16 at 1:31
• @ChewersJingoist, the $2$ in $sin2(\theta - \frac{\pi}{4})$ comes from the identity $sin2x=2sinxcosx$. – Alexander Maru Mar 2 '16 at 1:39
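A quick numerical check, added for illustration (not part of the original answers), that the two antiderivatives from the question differ only by a constant (namely $\pi$), exactly as the identity above predicts:

```python
import numpy as np

t = np.linspace(0.05, 3.95, 9)        # sample points inside (0, 4)
F1 = 4 * np.arcsin(np.sqrt(t) / 2) + 0.5 * (t - 2) * np.sqrt(4 * t - t ** 2)
F2 = 2 * np.arcsin((t - 2) / 2) + 0.5 * (t - 2) * np.sqrt(4 * t - t ** 2)
print(F1 - F2)                        # every entry equals pi = 3.14159...
```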
If $$\sin\theta = \dfrac{\sqrt t}2,$$ then $$\sin\left(\dfrac\pi2-2\theta\right) = \cos2\theta = 1-2\sin^2\theta = \dfrac{2-t}2,$$ $$2\theta = \dfrac\pi2-\arcsin\dfrac{2-t}2 = \dfrac\pi2+\arcsin\dfrac{t-2}2,$$ $$\theta = \dfrac\pi4+\dfrac12\arcsin\dfrac{t-2}2,$$ $$\boxed{\arcsin\dfrac{\sqrt t}2 = \dfrac12\arcsin\dfrac{t-2}2+\dfrac\pi4}$$ So the first answer is correct too | 2019-07-23T09:15:03 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1679302/multiple-answers-to-int-sqrt4t-t2-textrmd-t",
"openwebmath_score": 0.9391178488731384,
"openwebmath_perplexity": 266.935391337121,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9732407175907054,
"lm_q2_score": 0.8840392939666335,
"lm_q1q2_score": 0.860383036838467
} |
https://gmatclub.com/forum/if-the-curve-described-by-the-equation-y-x2-bx-c-cuts-the-x-axis-272937.html | GMAT Question of the Day: Daily via email | Daily via Instagram New to GMAT Club? Watch this Video
It is currently 06 Apr 2020, 08:05
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# If the curve described by the equation y = x^2 + bx + c cuts the x-axis
If the curve described by the equation $$y = x^2 + bx + c$$ cuts the $$x$$-axis at $$-4$$ and $$y$$ axis at $$4$$, at which other point does it cut the $$x$$-axis?
A. -1
B. 4
C. 1
D. -4
E. 0
+1 for A? I plugged in the other points to come up with an equation...not sure if I went about it the right way
If the curve described by the equation y = x^2 + bx + c cuts the x-axis at -4 and y axis at 4, at which other point does it cut the x-axis?
A. -1
B. 4
C. 1
D. -4
E. 0
We are given that $$y = x^2 + bx + c$$ cuts the x-axis at two points. One intersection point with the x-axis is given. We need to find the other point of intersection. In other words, one root of the quadratic equation is given; what is the value of the other root?
a) At (-4,0), $$0=(-4)^2+b*(-4)+c$$ Or, 4b-c=16
b) At (0,4), $$4=0^2+b*0+c$$ Or, c=4
So, 4b-4=16 Or, b=5.
Now, we have the equation of the curve, $$y=x^2+5x+4$$, which has the roots: -4 and -1.
So, the other root is -1.
Ans. (A)
If the curve described by the equation y = x^2 + bx + c cuts the x-axis at -4 and y axis at 4, at which other point does it cut the x-axis?
A -1
B 4
C 1
D -4
E 0
y = x^2 + bx + c is a quadratic equation and the equation represents a parabola.
The curve cuts the y axis at 4.
The x coordinate of the point where it cuts the y axis = 0.
Therefore, (0, 4) is a point on the curve and will satisfy the equation.
4 = 0^2 + b(0) + c
Or c = 4.
The product of the roots of a quadratic equation is c/a
In this question, the product of the roots = 4/1 = 4.
The roots of the quadratic equation are the points where the curve cuts the x-axis.
The question states that one of the points where the curve cuts the x-axis is -4.
So, -4 is one of roots.
Let r2 be the second root of the quadratic equation.
So, -4 * r2 = 4
or r2 = -1.
The second root is the second point where the curve cuts the x-axis, which is -1.
If you liked the question and explanation, please do hit the kudos button
When x=-4 y=0
so 16 -4b +c=0
When x=0, y = 4
so c=4
16-4b+c=20-4b=0
b= 5
the equation can be written as y=(x+4)(x+1)
y is equal to zero when x=-1 (Answer A)
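A quick symbolic cross-check of the work above (an added sketch, not part of the thread):

```python
import sympy as sp

x, b, c = sp.symbols('x b c')
sol = sp.solve([(-4)**2 + b*(-4) + c,      # the curve passes through (-4, 0)
                0**2 + b*0 + c - 4],       # the curve passes through (0, 4)
               [b, c])
print(sol)                                 # {b: 5, c: 4}

print(sp.solve(x**2 + sol[b]*x + sol[c], x))   # [-4, -1], so the other root is -1
```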
Math Expert
Joined: 02 Sep 2009
Posts: 62542
Re: If the curve described by the equation y = x2 + bx + c cuts the x-axis [#permalink]
### Show Tags
03 Jan 2019, 02:15
cfc198 wrote:
If the curve described by the equation y = x^2 + bx + c cuts the x-axis at -4 and y axis at 4, at which other point does it cut the x-axis?
A -1
B 4
C 1
D -4
E 0
y = x^2 + bx + c is a quadratic equation and the equation represents a parabola.
The curve cuts the y axis at 4.
The x coordinate of the point where it cuts the y axis = 0.
Therefore, (0, 4) is a point on the curve and will satisfy the equation.
4 = 0^2 + b(0) + c
Or c = 4.
The product of the roots of a quadratic equation is c/a
In this question, the product of the roots = 4/1 = 4.
The roots of the quadratic equation are the points where the curve cuts the x-axis.
The question states that one of the points where the curve cuts the x-axis is -4.
So, -4 is one of roots.
Let r2 be the second root of the quadratic equation.
So, -4 * r2 = 4
or r2 = -1.
The second root is the second point where the curve cuts the x-axis, which is -1.
If you liked the question and explanation, please do hit the kudos button
_______________
Merging topics.
_________________
Non-Human User
Joined: 09 Sep 2013
Posts: 14467
Re: If the curve described by the equation y = x2 + bx + c cuts the x-axis [#permalink]
### Show Tags
22 Feb 2020, 15:03
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
_________________
Re: If the curve described by the equation y = x2 + bx + c cuts the x-axis [#permalink] 22 Feb 2020, 15:03
| 2020-04-06T16:05:56 | {
"domain": "gmatclub.com",
"url": "https://gmatclub.com/forum/if-the-curve-described-by-the-equation-y-x2-bx-c-cuts-the-x-axis-272937.html",
"openwebmath_score": 0.697464108467102,
"openwebmath_perplexity": 1222.06844689677,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.9732407183668539,
"lm_q2_score": 0.8840392848011833,
"lm_q1q2_score": 0.8603830286044234
} |
https://mathematica.stackexchange.com/questions/224729/laplace-equation-with-mixed-boundary-conditions | # Laplace equation with mixed boundary conditions
I am trying to solve the Laplace equation in 2D on the square [2,3]x[2,3] with mixed boundary conditions. I did:
ClearAll[y, x1, x2];
pde = Laplacian[y[x1, x2], {x1, x2}];
bc = {y[x1, 2] == 2 + x1, y[x1, 3] == 3 + x1};
sol = NDSolve[{pde ==
NeumannValue[-1, x1 == 2] + NeumannValue[1, x1 == 3], bc},
y, {x1, 2, 3}, {x2, 2, 3}]
Plot3D[Evaluate[y[x1, x2] /. sol], {x1, 2, 3}, {x2, 2, 3},
PlotRange -> All, AxesLabel -> {"x1", "X2", "y[x1,x2]"},
BaseStyle -> 12]
The exact solution is y = x1 + x2; the problem is that the result is not very accurate when I evaluate the error.
• The exact solution is y=x1+x2 Are you sure about this? How does this solution satisfy the Neumann boundary conditions? – Nasser Jun 25 at 20:09
• @Nasser Erm. The function does satisfy the Neumann boundary condition: Its derivative in x1-direction is 1 and the sign flops stems from the fact that Neumann conditions are phrased in terms of outward normals... No? – Henrik Schumacher Jun 25 at 23:34
• @user62716 Using NeumannValue requires one to do integration by parts and one has to be careful about the signs. Try switching the sign of the Laplacian to pde = -Laplacian[y[x1, x2], {x1, x2}];. Then it should work. – Henrik Schumacher Jun 25 at 23:40
• @HenrikSchumacher is NeumannValue[-1, x1 == 2] different from saying that $\frac{\partial y}{\partial x_1}$ evaluated at $x_1=2$ is $-1$? And since the claim is that the solution is $y=x_1+x_2$ then $\frac{\partial y}{\partial x_1}=1$ this is evaluated at $x=2$ is $1$ and not $-1$?. How do you translate NeumannValue[-1, x1 == 2] to normal derivative then? I just did direct translation. May be we need a whole new topic on this. On top of all of this, moving NeumannValue from RHS to LHS changes the solution. I never liked NeumannValue and prefer to use normal derivatives... – Nasser Jun 26 at 0:46
• @Nasser $\frac{\partial y}{\partial \nu} (2,x_2) = - \frac{\partial y}{\partial x_1} (2,x_2)$ because the outward normal at the point $(2,x_2)$ is $\nu = (-1 , 0)$. But I agree that NeumannValue is a bit counter intuitive, but it makes perfect sense in regard of the weak formulation that is used in FEM. – Henrik Schumacher Jun 26 at 4:59
Relatively recently, Wolfram has created a nice Heat Transfer Tutorial and a Heat Transfer Verification Manual. I model with many codes and I usually start the Verification and Validation manual and build complexity from there. It is always embarrassing to build a complex model and find that your setup does not pass verification.
The Laplace equation is a special case of the heat equation, so we should be able to use a verified example as a template for a properly constructed model.
For NeumannValue's, if the flux is into the domain, it is positive. If the flux is out of the domain, it is negative.
At the tutorial link, they define a function HeatTransferModel to create operators for a variety of heat transfer cases that I shall reproduce here:
ClearAll[HeatTransferModel]
HeatTransferModel[T_, X_List, k_, ρ_, Cp_, Velocity_, Source_] :=
Module[{V, Q, a = k},
V = If[Velocity === "NoFlow",
Q = If[Source === "NoSource", 0, Source];
If[FreeQ[a, _?VectorQ], a = a*IdentityMatrix[Length[X]]];
If[VectorQ[a], a = DiagonalMatrix[a]];
a = PiecewiseExpand[Piecewise[{{-a, True}}]];
Inactive[Div][a.Inactive[Grad][T, X], X] + V - Q]
If we follow the recipe of the tutorial, we should be able to construct and solve a PDE system free of sign errors, as I show in the following workflow.
(* Create a Domain *)
Ω2D = Rectangle[{2, 2}, {3, 3}];
(* Create parametric PDE operator *)
pop = HeatTransferModel[y[x1, x2], {x1, x2}, k, ρ, Cp, "NoFlow",
"NoSource"];
(* Replace k parameter *)
op = pop /. {k -> 1};
(* Setup flux conditions *)
nv2 = NeumannValue[-1, x1 == 2];
nv3 = NeumannValue[1, x1 == 3];
(* Setup Dirichlet Conditions *)
dc2 = DirichletCondition[y[x1, x2] == 2 + x1, x2 == 2];
dc3 = DirichletCondition[y[x1, x2] == 3 + x1, x2 == 3];
(* Create PDE system *)
pde = {op == nv2 + nv3, dc2, dc3};
(* Solve and Plot *)
yfun = NDSolveValue[pde, y, {x1, x2} ∈ Ω2D]
Plot3D[Evaluate[yfun[x1, x2]], {x1, x2} ∈ Ω2D,
PlotRange -> All, AxesLabel -> {"x1", "x2", "y[x1,x2]"},
BaseStyle -> 12]
You can test that the solution matches that exact solution over the entire range:
Manipulate[
Plot[{x1 + x2, yfun[x1, x2]}, {x1, 2, 3}, PlotRange -> All,
AxesLabel -> {"x1", "y[x1,x2]"}, BaseStyle -> 12,
PlotStyle -> {Red,
Directive[Green, Opacity[0.75], Thickness[0.015], Dashed]}], {x2,
2, 3}, ControlPlacement -> Top]
• Dear Tim Laska, thank you for your great help, can we evaluate the error and plot it? – user62716 Jun 26 at 9:47
• I did it plot = Plot3D[ Abs[yfun[x1, x2] - (x1 + x2)], {x1, x2} [Element] [CapitalOmega]2D, PlotRange -> All, AxesLabel -> {"x1", "x2", "y[x1,x2]"}, PlotLabel -> err] – user62716 Jun 26 at 10:03
• Dear Tim Laska, I have other problem, Poisson equation with variable coefficients,shall post it in new question or here? – user62716 Jun 26 at 11:48
• @user62716 You should open a new question as it appears that you have. I will try to take a look at your other question when I can. – Tim Laska Jun 26 at 13:45
• Thank you Tim, I will be waiting. Best regards – user62716 Jun 26 at 13:50
By reversing the sign of the derivative on the left side from that given in NeumannValue, this can be solved by Mathematica analytically as well.
ClearAll[y, x1, x2];
pde = Laplacian[y[x1, x2], {x1, x2}] == 0;
bc = {y[x1, 2] == 2 + x1,
y[x1, 3] == 3 + x1,
Derivative[1, 0][y][2, x2] == 1,
Derivative[1, 0][y][3, x2] == 1};
solA = DSolve[{pde, bc}, y[x1, x2], {x1, x2}];
solA = solA /. {K[1] -> n,Infinity -> 20};
solA = Activate[solA];
Plot3D[y[x1, x2] /. solA, {x1, 2, 3}, {x2, 2, 3}, PlotRange -> All,
AxesLabel -> {"x1", "X2", "y[x1,x2]"}, BaseStyle -> 12]
The BC as given above are correct, and Mathematica's analytical solution is correct also, but I agree it can be simpler.
There might be a way to simplify the infinite Fourier sum given, but I could not find it.
To show the above formulation is correct, here is Maple's solution, using the same B.C. as above. Maple gives the simpler form of the solution, which is $$y=x_1+x_2$$.
restart;
pde:=VectorCalculus:-Laplacian(y(x1,x2),[x1,x2])=0;
bc:=y(x1,2)=2+x1,y(x1,3)=3+x1,D[1](y)(2,x2)=1,D[1](y)(3,x2)=1;
sol:=pdsolve([pde,bc],y(x1,x2))
We just have to remember that a negative NeumannValue on the left edge means a positive derivative on that edge.
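Not part of the original answers: a small SymPy (Python) sketch checking that y = x1 + x2 satisfies the equation and all of the stated boundary data, which is the point of the sign discussion above.

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
y = x1 + x2                                   # candidate exact solution

# Laplace equation
print(sp.diff(y, x1, 2) + sp.diff(y, x2, 2))  # 0

# Dirichlet data on x2 = 2 and x2 = 3
print(sp.simplify(y.subs(x2, 2) - (2 + x1)))  # 0
print(sp.simplify(y.subs(x2, 3) - (3 + x1)))  # 0

# dy/dx1 is +1 on both vertical edges; with outward normals (-1, 0) at x1 = 2
# and (1, 0) at x1 = 3, this matches NeumannValue[-1, x1 == 2] and NeumannValue[1, x1 == 3].
print(sp.diff(y, x1).subs(x1, 2), sp.diff(y, x1).subs(x1, 3))  # 1 1
```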
• Dear Nasser, thank you for your comments, the normal derivative at left side is -1 not 1, the above analytic solution is complicated since the exact is just y=x1+x2....thanks – user62716 Jun 26 at 9:38
• the normal derivative at left side is -1 not 1 no. It is +1. you set NeumannValue to be -1. Since NeumannValue points outwards, then this means the deivative is +1. Since -1 outwards, means +1 inwards. In addition, if you change the derivative (not NeumannValue) in the code I posted from +1 to -1 you will see the solution is no longer y=x1+x2 but becomes non-linear. You can compare this solution with the numerical solution. Do you see any difference? I agree the solution has complicated Fourier series sum, but this is what Mathemtica gave for the analytical solution. – Nasser Jun 26 at 10:01
• Dear Nasser, I still can not understand you, @Nasser ∂y∂ν(2,x2)=−∂y∂x1(2,x2) because the outward normal at the point (2,x2) is ν=(−1,0) so it is -1 on the left outwards. – user62716 Jun 26 at 11:16
• Dear Nasser, the code of Tim Laska is working, I highly appreciate you and you always help me and provide perfect answer. – user62716 Jun 26 at 11:19
• @user62716 you can see from the solution itself, i.e. from just looking at the plot, that the derivative is positive on the left edge. No math is needed if we look at the solution. The slope is moving upwards. So positive slope. You can also see from Maple solution I posted, that I used positive derivative to get same solution $x_1+x_2$ right there. NeumannValue is not the same as derivative. That is what the whole confusion was about. – Nasser Jun 26 at 11:33 | 2020-09-25T10:01:41 | {
"domain": "stackexchange.com",
"url": "https://mathematica.stackexchange.com/questions/224729/laplace-equation-with-mixed-boundary-conditions",
"openwebmath_score": 0.4856671988964081,
"openwebmath_perplexity": 2496.7887469288025,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9615338101862455,
"lm_q2_score": 0.8947894590884704,
"lm_q1q2_score": 0.8603703179118266
} |
http://slideplayer.com/slide/772358/ | # Scientific Notation ... is a way to express very small or very large numbers. ... is most often used in "scientific" calculations where the analysis.
## Presentation on theme: "Scientific Notation ... is a way to express very small or very large numbers. ... is most often used in "scientific" calculations where the analysis."— Presentation transcript:
Scientific Notation ... is a way to express very small or very large numbers. ... is most often used in "scientific" calculations where the analysis must be very precise. ... consists of two parts*: (1) a number between 1 and 10 and (2) a power of 10. *a large or small number may be written as any power of 10; however, CORRECT scientific notation must satisfy the above criteria.
3.2 x 10^13 is correct scientific notation. Remember that the first number MUST BE greater than or equal to one and less than 10. 23.6 x 10^-8 is not correct scientific notation.
To Change from Standard Form to Scientific Notation:
1. Place the decimal point such that there is one non-zero digit to the left of the decimal point.
2. Count the number of decimal places the decimal has "moved" from the original number. This will be the exponent of the 10.
3. If the original number was less than 1, the exponent is negative; if the original number was greater than 1, the exponent is positive.
Examples: Given: 4,750,000 use: 4.75 (moved 6 decimal places)
answer: 4.75 x 10^6. The original number was greater than 1, so the exponent is positive. Given: 0.000789 use: 7.89 (moved 4 decimal places) answer: 7.89 x 10^-4. The original number was less than 1, so the exponent is negative.
To Change from Scientific Notation to Standard Form:
1. Move the decimal point to the right for a positive exponent of 10.
2. Move the decimal point to the left for a negative exponent of 10.
Examples: Given: 1.015 x 10^-8 answer: 0.00000001015 (8 places to left)
Negative exponent: move the decimal to the left. Given: 5.024 x 10^3 answer: 5,024 (3 places to right) Positive exponent: move the decimal to the right.
To Multiply and/or Divide using Scientific Notation:
1. Multiply/divide the decimal numbers with each other.
2. Use exponent rules to "combine" the powers of 10.
3. If the result is not "correct" scientific notation, change it accordingly.
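Not part of the original slides: a small Python sketch of these procedures. The helper names to_sci and sci_multiply are invented for the illustration.

```python
import math

def to_sci(value):
    """Split a nonzero number into (mantissa, exponent) with 1 <= |mantissa| < 10."""
    exponent = math.floor(math.log10(abs(value)))
    mantissa = value / 10**exponent
    return mantissa, exponent

def sci_multiply(a, b):
    """Multiply two numbers and return the product in correct scientific notation."""
    (m1, e1), (m2, e2) = to_sci(a), to_sci(b)
    m, e = m1 * m2, e1 + e2        # multiply mantissas, add exponents
    if abs(m) >= 10:               # renormalize if the mantissa left the [1, 10) range
        m, e = m / 10, e + 1
    return m, e

print(to_sci(4_750_000))              # (4.75, 6)
print(to_sci(0.000789))               # (7.89, -4), up to float rounding
print(sci_multiply(4.75e6, 7.89e-4))  # (3.74775, 3), i.e. 3.74775 x 10^3
```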
Given: Method: Answer: Correct Scientific Notation
Part 1: Express in correct scientific notation: 1. 61,500 2. 3. 321 4. 64,960,000 5.
Part 2: Express in standard form: 1. 1.09 x 10^3 2. x 10^8 3. x 10^-4 4. x 10^-2 5. x 10^2
Part 3: Multiply or divide as indicated and express in correct scientific notation: 1. 2. 3. 4. 5.
| 2017-08-17T03:55:46 | {
"domain": "slideplayer.com",
"url": "http://slideplayer.com/slide/772358/",
"openwebmath_score": 0.8087553977966309,
"openwebmath_perplexity": 2609.7140696126407,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9416541643004809,
"lm_q2_score": 0.913676530465412,
"lm_q1q2_score": 0.8603673097363703
} |
https://math.stackexchange.com/questions/2854507/function-transformations-question-vertical-or-horizontal-transformation/2854515#2854515 | # Function transformations question - vertical or horizontal transformation
I have got a very simple problem. I have an exercise:
If $\ f(x) = 2x^2 − 4$, give the function which shows the graph of $\ f(x)$ after vertical stretch of scale factor $\ 0.5$ followed by a translation $\binom{-4}{0}$
The answer that I get is $\ f(x)=x^2+8x+14$, but the answer given is $\ f(x)=8x^2+64x+124$. In my opinion, the answer that is given can certainly be achieved, but using horizontal translation instead of vertical. After drawing a graph of my function and the given function I noticed that in my case the function is compressed (its "branches" are closer to the x - axis than the original one) - as it should be, as scale factor is less than 1.
Am I wrong there, or is something wrong with answers? I would not have asked the question, but I noticed that there is at least one more question about which I am uncertain as much as about this one, thus, I need to find out the real answer.
• $x^2+8x+14=((x+4)^2-4)/2$ is correct. Reusing the symbol $f(x)$ to mean different things leads to confusion.
– user574889
Jul 17, 2018 at 13:49
• @cactus, sorry, did not think about changing it to something else. Thanks for the answer! That means that there are 3 mistakes in a row in the exercises... Jul 17, 2018 at 13:52
You are right.
We are looking for the function \begin{align}\frac12f(x+4)&=\frac12(2(x+4)^2-4)\\&=(x+4)^2-2\\ &=x^2+8x+14 \end{align}
Of course, there is a possibility that your course is asking the wrong question as well.
• Thanks for the answer! That means that there are 3 mistakes in a row in those exercises... I will accept the answer soon. Jul 17, 2018 at 13:53
$$f(x)=2x^2-4$$ Vertical stretching with scale $1:2$: $$f_1(x)=0.5 f(x)=x^2-2$$ Translation by a vector $[-4,0]$: $$f_2(x)=f_1(x-(-4))+0=(x+4)^2-2=x^2+8x+14$$
Thus your solution seems to be fine.
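Not part of the original answer: a quick SymPy (Python) check of the computation above, and of one transformation that reproduces the textbook's number.

```python
import sympy as sp

x = sp.symbols("x")
f = 2*x**2 - 4

# vertical stretch by 0.5, then translation by (-4, 0)
print(sp.expand(sp.Rational(1, 2) * f.subs(x, x + 4)))   # x**2 + 8*x + 14

# one transformation that produces the book's 8x^2 + 64x + 124:
# stretch by 4, then translate by (-4, 12)
print(sp.expand(4 * f.subs(x, x + 4) + 12))              # 8*x**2 + 64*x + 124
```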
The textbook solution might be created by taking a stretching factor $4$ and translation $[-4,12]$ | 2022-10-07T06:19:00 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2854507/function-transformations-question-vertical-or-horizontal-transformation/2854515#2854515",
"openwebmath_score": 0.8057282567024231,
"openwebmath_perplexity": 267.5890928341676,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9553191335436405,
"lm_q2_score": 0.900529778109184,
"lm_q1q2_score": 0.8602933273535125
} |
https://mathhelpboards.com/threads/what-is-1-21-2-21-3-21-18-21-in-mode-19.25200/ | # what is 1^21 + 2^21 + 3^21 + ....... + 18^21 in mode 19?
#### ketanco
##### New member
what is 1^21 + 2^21 + 3^21 + ....... + 18^21 in mod 19?
I can only think about individually calculating equivalents mod 19 and then adding them up, but there must be a better way than finding equivalents of exponentials of the numbers from 1 to 18, as this question is expected to be solved in around 2 minutes or less...
#### Olinguito
##### Well-known member
By Fermat’s little theorem, $1^{18},2^{18},\ldots,18^{18}\equiv1\pmod{19}$. Hence
$$\begin{array}{rcl}1^{21}+\cdots+18^{21} &\equiv& 1^3+\cdots+18^3\pmod{19} \\\\ {} &=& (1+\cdots+18)^2\pmod{19} \\\\ {} &=& \left(\dfrac{18}2\cdot19\right)^2\pmod{19} \\\\ {} &\equiv& 0\pmod{19}.\end{array}$$
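A brute-force check in Python (not in the original thread) confirms both the sum itself and the reduction to cubes:

```python
total = sum(pow(i, 21, 19) for i in range(1, 19)) % 19
print(total)    # 0

# the reduction via Fermat's little theorem: i^21 = i^18 * i^3 ≡ i^3 (mod 19)
cubes = sum(pow(i, 3, 19) for i in range(1, 19)) % 19
print(cubes)    # 0
```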
#### Olinguito
##### Well-known member
Hey Olinguito , how does it follow that:
$1^3+\cdots+18^3\pmod{19} = (1+\cdots+18)^2\pmod{19}$
Doesn’t $a=b$ imply $a\pmod n=b\pmod n$ (taking the modulus to be between $0$ and $n-1$)?
Last edited:
#### Klaas van Aarsen
##### MHB Seeker
Staff member
Doesn’t $a=b$ imply $a\pmod n=b\pmod n$ (taking the modulus to be between $0$ and $n-1$)?
After staring at the following picture long enough, I finally realized why $a=b$.
I think it's a different theorem though (Nicomachus's theorem).
Btw, couldn't we instead observe that:
$$1^3+2^3+...+17^3+18^3\equiv 1^3+2^3+...+(-2)^3+(-1)^3 \equiv 0 \pmod{19}$$
#### Olinguito
##### Well-known member
Btw, couldn't we instead observe that:
$$1^3+2^3+...+17^3+18^3\equiv 1^3+2^3+...+(-2)^3+(-1)^3 \equiv 0 \pmod{19}$$
That’s an excellent observation!
#### ketanco
##### New member
After staring at the following picture long enough, I finally realized why $a=b$.
I think it's a different theorem though (Nicomachus's theorem).
Btw, couldn't we instead observe that:
$$1^3+2^3+...+17^3+18^3\equiv 1^3+2^3+...+(-2)^3+(-1)^3 \equiv 0 \pmod{19}$$
yes this is how they expected us to solve it, I think... this must be the answer. thanks | 2020-09-29T08:23:43 | {
"domain": "mathhelpboards.com",
"url": "https://mathhelpboards.com/threads/what-is-1-21-2-21-3-21-18-21-in-mode-19.25200/",
"openwebmath_score": 0.8644099235534668,
"openwebmath_perplexity": 2358.100933422333,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9822876997410348,
"lm_q2_score": 0.8757870029950159,
"lm_q1q2_score": 0.8602748006350689
} |
http://math.stackexchange.com/questions/79658/w-bot-is-a-subspace-of-u-bot | # $W^{\bot}$ is a subspace of $U^{\bot}$?
Let $U$ and $W$ be subspaces of an inner product space $V$. If $U$ is a subspace of $W$, then $W^{\bot}$ is a subspace of $U^{\bot}$?
I don't find the above statement intuitively obvious. Could someone provide a proof?
-
## 2 Answers
If you're orthogonal to everything in a set, then you're also orthogonal to everything in every subset of that set.
Put another way: Elements of $W^\perp$ have to be orthogonal to more vectors than elements of $U^\perp$; they have to be orthogonal to the vectors in $U$ and the vectors in $W\setminus U$ (if any). Therefore there are fewer of them than if we took the vectors that only have to satisfy the property of being orthogonal to vectors in $U$. (More restrictions $\implies$ fewer vectors satisfying the restrictions.)
-
And that since you know that $U^\perp,W^\perp$ are subspaces (hopefully) of $V$ then it suffices to show containment. – Alex Youcis Nov 6 '11 at 22:30
So shouldn't it be $U^{\bot}$ is a subspace of $W^{\bot}$ ? That is my confusion. – Mark Nov 6 '11 at 22:33
@Mark: No. If you're orthogonal to everything in $W$, i.e., if you're in $W^\perp$, then you're also orthogonal to everything in a subset of $W$ such as $U$, i.e., you're in $U^\perp$. For a concrete example, think of $\mathbb R^3$ with standard dot product, let $U$ be the span of $(1,0,0)$, and let $W$ be the span of $\{(1,0,0),(0,1,0)\}$. If you have to be orthogonal to both $(1,0,0)$ and $(0,1,0)$, then you are in particular orthogonal to $(1,0,0)$, so $W^\perp\subset U^\perp$. – Jonas Meyer Nov 6 '11 at 22:35
Oh right, I had this at the back of my head, but was unable to explain it clearly. Now I understand, thanks. – Mark Nov 6 '11 at 22:54
It should be intuitive, already at the level of logic:
To be in $W^\perp$, you have to satisfy a certain condition $P(w)$ (namely: 'be orthogonal to $w$') for each and every element $w\in W$.
So given a subset $U\subseteq W$, to be in $U^\perp$ means you have to satisfy $P(u)$ merely for all $u\in U$.
Thus you have to satisfy fewer properties to be in $U^\perp$, thus it is easier to be in $U^\perp$, thus $U^\perp$ is larger: $W^\perp\subseteq U^\perp$.
It should also be intuitive geometrically: consider $\mathbb{R}^3$, let $U$ be the $x$-axis, and $W$ the $x,y$-plane. Then $U^\perp$ is the $y,z$-plane, and $W^\perp$ is the $z$-axis.
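For the same $\mathbb{R}^3$ example, here is a small NumPy/SciPy sketch (not part of the original answer) that makes the containment concrete; the orthogonal complement of a span is computed as a null space.

```python
import numpy as np
from scipy.linalg import null_space

# U = span{e1} (the x-axis), W = span{e1, e2} (the xy-plane)
U = np.array([[1.0, 0.0, 0.0]])
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# The orthogonal complement of a span is the null space of the matrix
# whose rows are the spanning vectors.
U_perp = null_space(U)   # columns span U-perp (the yz-plane), dimension 2
W_perp = null_space(W)   # columns span W-perp (the z-axis),  dimension 1

print(U_perp.shape[1], W_perp.shape[1])   # 2 1

# Every vector in W-perp is also orthogonal to everything in U,
# i.e. W-perp is contained in U-perp.
print(np.allclose(U @ W_perp, 0))         # True
```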
//Edit: I was slow so I missed Jonas Meyer's edit, which kind of makes my answer redundant.
- | 2016-05-01T06:10:09 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/79658/w-bot-is-a-subspace-of-u-bot",
"openwebmath_score": 0.9439272880554199,
"openwebmath_perplexity": 228.43185217032388,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877002595527,
"lm_q2_score": 0.8757870013740061,
"lm_q1q2_score": 0.8602747994968822
} |
https://stats.stackexchange.com/questions/297685/pearson-correlation-between-a-variable-and-its-square | # Pearson correlation between a variable and its square
Here is my R code to get familiarised with Pearson's correlation. I generate values of $X$ from 1 to 100, then find the correlation between $X$ and $X^2$:
x=1:100
y=x
for(i in 1:100) {y[i]=x[i]*x[i]}
cor.test(x,y, type="pearson")
I get this result :
Pearson's product-moment correlation
data: x and y
t = 38.668, df = 98, p-value < 2.2e-16
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.9538354 0.9789069
sample estimates:
cor
0.9687564
$r$ seems high to me.
My question is: what exactly does the $r$ coefficient quantify? Does it only quantify the closeness of the relationship between $X$ and $Y$ variable to a linear relationship ?
Or is it also suited to quantify the intensity of a relationship between $X$ and $Y$ broadly speaking (whether this relationship is close to linearity or not)?
My last question is: are there other correlation test better suited than Pearson's test to quantify the intensity of the relationship between two given variables when the kind (linear, quadratic, exponential, etc.) of this relationship is not known a priori or is Pearson's test sufficient to do this kind of job?
• Welcome to our site! You don't need to put "thanks in advance" comments at the end of your questions - in fact we prefer it if you don't, since it means more for future readers to read through. – Silverfish Aug 13 '17 at 9:28
• Not an answer to your statistical question, but you mind find it helpful to know that you don't need to use a for loop in your R code, you can just do x=1:100; y=x^2; cor(x,y) or even cor(1:100, (1:100)^2) – Silverfish Aug 13 '17 at 9:29
• @Silverfish indeed, this is illustrated in the first two lines of code in my answer as well – Glen_b -Reinstate Monica Aug 13 '17 at 9:50
You are curious about whether your value of $r$ is "too high" — it seems you think that, as $X$ and $X^2$ do not have an exactly linear relationship, then the Pearson's $r$ should be rather low. The high $r$ is not telling you that the relationship is linear, but it is telling you that the relationship is rather close to being linear.
If you are specifically interested in the case where $X$ is uniform, you might want to look at this thread on Math SE on the covariance between a uniform distribution and its square. You are using discrete uniform distribution $1,2,\dots,n$ but if you rescaled $X$ by a factor of $1/n$, and hence rescaled $X^2$ by a factor $1/n^2$, the correlation would be unchanged (since correlation is not affected by rescaling by a positive scale factor). You would now have a discrete uniform distribution with equal probability masses on $\frac{1}{n}, \frac{2}{n}, \dots, \frac{n-1}{n}, 1$. For large values of $n$, this approximates a continuous uniform distribution (also called "rectangular distribution") on $[0,1]$.
By an argument analogous to that on the Math SE thread, we have:
$$\operatorname{Cov}(X,X^2) = \mathbb{E}(X^3)-\mathbb{E}(X)\mathbb{E}(X^2) = \int_0^1 x^3 dx - \int_0^1 x dx \cdot \int_0^1 x^2 dx$$
This integrates to $\frac{1}{4} - \frac{1}{2} \cdot \frac{1}{3} = \frac{1}{12}$.
We also have $\operatorname{Var}(X) = \mathbb{E}(X^2)-\mathbb{E}(X)^2 = \frac{1}{3} - \left(\frac{1}{2}\right)^2 = \frac{1}{12}$.
Similarly we find $\operatorname{Var}(X^2) = \mathbb{E}(X^4)-\mathbb{E}(X^2)^2 = \frac{1}{5} - \left(\frac{1}{3}\right)^2 = \frac{4}{45}$.
Hence, if $X \sim U(0,1)$, then:
$$\operatorname{Corr}(X,X^2) = \frac{\operatorname{Cov}(X,X^2)}{\sqrt{\operatorname{Var}(X) \cdot \operatorname{Var}(X^2)}} = \frac{\frac{1}{12}}{\sqrt{{\frac{1}{12}}\cdot{\frac{4}{45}}}} = \frac{\sqrt{15}}{4}$$
To seven decimal places, this is $r = 0.96824583$, even though the relationship is quadratic rather than linear. Now you have taken a discrete uniform distribution on $1, 2, \dots, n$ rather than a continuous one, but for the reasons explained above, increasing $n$ will produce a correlation closer to the continuous case, so that $\sqrt{15}/4$ will be the limiting value. Let us confirm this in R:
corn <- function(n){
x = 1:n
cor(x,x^2)
}
> corn(2)
[1] 1
> corn(3)
[1] 0.9897433
> corn(4)
[1] 0.984374
> corn(5)
[1] 0.9811049
> corn(10)
[1] 0.9745586
> corn(100)
[1] 0.9688545
> corn(1e3)
[1] 0.9683064
> corn(1e6)
[1] 0.9682459
> corn(1e7)
[1] 0.9682458
That correlation of $r=0.9682458$ may sound surprisingly high, but if we inspected a graph of the relationship between $X$ and $X^2$ it would indeed appear approximately linear, and this is all that the correlation coefficient is telling you. Moreover, we can see from our table of output from the corn function that increasing the value of $n$ makes the linear correlation smaller (note that with two points, we had a perfect linear fit and a correlation equal to one!) but that although $r$ is falling, it is bounded below by $\sqrt{15}/4$. In other words, increasing the length of your sequence of integers makes the linear fit somewhat worse, but even as $n$ tends to infinity your $r$ never becomes worse than $0.9682\dots$.
x=1:100; y=x^2
plot(x,y)
abline(lm(y~x))
Perhaps visually you are still not convinced that the correlation looks as strong as the calculated coefficient suggests — clearly the points are below the line of best fit for low and high values of $X$, and above it for intermediate $X$. If it can't capture this quadratic curvature, is the line really such a good fit to the points?
You may find it helpful to compare the overall variation of the $Y$ coordinates about their own mean (the "total variation") to how much the points vary above and below the regression line (the "residual variation" that the regression line was unable to explain). The fraction of the residual variation over the total variation tells you what proportion of the variation was not explained by the regression line; the proportion of variation that is explained by the regression line is then one minus this fraction, and is called the $R^2$. In this case, we can see that the variation of points above and below the line is relatively small compared to the variation in their $Y$ coordinates, and so the proportion unexplained by the regression is small and the $R^2$ is large. It turns out that for a simple linear regression, $R^2$ is equal to the square of the Pearson correlation. In fact $r=\sqrt{R^2}$ if the regression slope is positive (an increasing relationship) or $r=-\sqrt{R^2}$ if the slope is negative (decreasing).
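The same check can be done outside R; a small NumPy sketch (not from the original answer) confirms that $r^2$ equals the regression $R^2$ for the $X$, $X^2$ example:

```python
import numpy as np

x = np.arange(1, 101, dtype=float)
y = x**2

r = np.corrcoef(x, y)[0, 1]                  # Pearson correlation

# Simple linear regression y ~ x, then R^2 = 1 - SS_resid / SS_total
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
R2 = 1 - resid.var() / y.var()

print(r, r**2, R2)                           # r ≈ 0.9689, r^2 ≈ R2 ≈ 0.9387
```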
We had a large $R^2$ so our correlation is large also. This is the sense we mean when we state that "a Pearson correlation near $\pm 1$ indicates the linear fit is good" — not that our straight regression line captures the true nature of the relationship between $X$ and $Y$, and so there is no curvature and no discernible pattern in the residual variation, but instead that the line provides a good approximation to the true relationship, and that the proportion of residual variation (i.e. that part left unexplained by the linear model) is small.
Note that had you chosen a discrete uniform on e.g. $-100, -99, \dots, 99, 100$ and rescaled that to being between $[-1,1]$, you would have found a covariance and correlation of zero, as happens in the linked Math SE thread. There is neither an increasing nor decreasing relationship.
x=-100:100; y=x^2
plot(x,y)
abline(lm(y~x))
As an exercise to think through, what would be the correlation between $-1, -2, -3, \dots, -n$ and its squares? You can easily write some R code to confirm your guess.
If all you care about is the existence of an increasing or decreasing relationship, rather than the extent to which it is linear, you can use a rank-based measure such as Kendall's tau or Spearman's rho, as mentioned in Glen_b's answer. For my first graph, which had a perfectly monotonic increasing relationship, both methods would have given the highest possible correlation (one). For the second graph, which is neither increasing nor decreasing, both would give a correlation of zero.
The Pearson correlation measures the closeness to a linear relationship. If $X$ is positive, then the correlation between $X$ and $X^2$ is often fairly close to 1.
If you want to measure the strength of monotonic relationship, there are a number of other choices, of which the two best known are the Kendall correlation (Kendall's tau), and the Spearman correlation (Spearman's rho)
x=1:100
cor(x,x^2,method="pearson")
[1] 0.9688545
cor(x,x^2,method="kendall")
[1] 1
cor(x,x^2,method="spearman")
[1] 1
I'd add that looking at the correlation of non-random values isn't necessarily where I'd start - it can be useful when exploring edge cases, however.
For the Pearson correlation you may find it useful to consider playing about with the rho and n values here:
n=100
rho=0.6
x=rnorm(100)
z=rnorm(100)
y=rho*x + sqrt(1-rho^2)*z
plot(x,y)
cor(x,y)
(In particular, you might try varying rho from close to -1 up to close to 1)
You may also find these discussions of correlation useful for getting a handle on what correlations do and don't do:
Why zero correlation does not necessarily imply independence
Does the correlation coefficient, r, for linear association always exist?
If A and B are correlated with C, why are A and B not necessarily correlated?
How would you explain covariance to someone who understands only the mean?
Pearson's or Spearman's correlation with non-normal data
How to choose between Pearson and Spearman correlation?
Kendall Tau or Spearman's rho?
If linear regression is related to Pearson's correlation, are there any regression techniques related to Kendall's and Spearman's correlations? | 2020-01-22T07:51:01 | {
"domain": "stackexchange.com",
"url": "https://stats.stackexchange.com/questions/297685/pearson-correlation-between-a-variable-and-its-square",
"openwebmath_score": 0.7718861699104309,
"openwebmath_perplexity": 398.3522745880172,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877002595527,
"lm_q2_score": 0.8757869997529962,
"lm_q1q2_score": 0.8602747979045842
} |
https://www.physicsforums.com/threads/differentiation-under-integral-sign.370218/ | # Differentiation under integral sign
1. Jan 17, 2010
### neelakash
1. The problem statement, all variables and given/known data
I have to evaluate the numerical value of the derivative of the following integral for x=1
$$\int_{0}^{\ ln\ x}\ e^{\ -\ x\ (\ t^2\ -\ 2)}\ dt$$
2. Relevant equations
The formula for differentiation under the integral sign.
3. The attempt at a solution
The upper limit term is straightforward: it is
$$\frac{\ 1}{\ x}\ e^{\ -\ x[\ (\ ln\ x)^{\ 2}\ -\ 2]}$$
The other part is
$$\int_0^{\ ln\ x}\frac{\partial}{\partial\ x}\ e^{\ -\ x(\ t^2\ -2)}\ dt\ =\ -\ e^{\ -\ 2\ x}\ [\int_0^{\ ln\ x}\ t^2\ e^{\ -\ x\ t^2}\ dt\ -\ 2\int_0^{\ ln\ x}\ e^{\ -\ x\ t^2}\ dt\ ]$$
The later can be evaluated and I got the following:
$$\ -\ e^{\ -\ 2\ x}\ [\frac{\ -(\ ln\ x)\ e^{\ -\ x(\ ln\ x)^2}}{\ 2\ x}\ +\int_0^{\ x(\ ln\ x)^2}\frac{\ e^{\ -\ u}}{4x\sqrt{ux}}\ du\ -\int_0^{\ x(\ ln\ x)^2}\frac{\ e^{\ -\ u}}{\sqrt{ux}}\ du}]$$
I found the result as above. However, the two integrals neither cancel with each other nor can be evaluated. Can anyone please check and tell what should be done further?
Neel
Last edited: Jan 17, 2010
2. Jan 17, 2010
### rasmhop
Let,
$$F(x) = \int_{0}^x e^{-e^x (t^2 - 2)} \text{ d}t$$
so you want to find the derivative of F(ln(x)) which you can do using the chain rule.
3. Jan 18, 2010
### neelakash
Does not help; it ultimately reduces to what I have got...
The thing lies in putting in the limits without explicitly solving the final two integrals. They give zero.
4. Jan 18, 2010
### D H
Staff Emeritus
I helped neelakash with this problem on another forum (http://www.sciforums.com/showthread.php?t=99010). As he arrived at the correct answer there, I have no qualms posting the solution here for future reference by others.
neelakash did use the appropriate technique for differentiating under the integral sign, the Leibniz Integral Rule:
$$\frac{d}{dx}\int_{a(x)}^{b(x)} f(t,x)\,dt = \int_{a(x)}^{b(x)} \frac{\partial} {\partial x} f(t,x)\,dt + f(b(x),x)\frac{db(x)}{dx} - f(a(x),x)\frac{da(x)}{dx}$$
In this particular problem,
$$f(t,x) = \exp\left(-x(t^2-2)\right),\quad a(x)=0, \quad b(x)=\ln x$$
The partial derivative of f(t,x) wrt x is
$$\frac{\partial}{\partial x}f(t,x) = -(t^2-2) \exp\left(-x(t^2-2)\right)$$
Applying the Leibniz Integral Rule,
$$\frac{d}{dx}\left(\int_0^{\ln x} \exp\left(-x(t^2-2)\right)\,dt\right) = -\left(\int_0^{\ln x} (t^2-2) \exp\left(-x\bigl(t^2-2)\right) \,dt\right) + \exp\left(-x(\ln^2x-2)\right)/x$$
There is a sign error in the original post (that exp(-2x) should be an exp(2x)). Additionally, neelakash carried the integration a step too far. That integral on the right-hand side is evaluable in terms of the error function erf(x).
However, there is no reason to do this. neelakash finally saw the "Oh, SNAP!" light that makes this problem particularly easy. From that other forum,
5. Jan 18, 2010
### rasmhop
You're right, I'm sorry for the wrong suggestion. Anyway try substituting x=1 in the terms you found. The integrals should disappear due to ln(x)=0 and the upper limit term should become e^2.
6. Jan 18, 2010
### HallsofIvy
When x = 1, ln(x) = ln(1) = 0, so x ln(x) = 1(0) = 0. Both integrals are from 0 to 0 and so are equal to 0.
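A quick numerical cross-check (not part of the original thread), using SciPy in Python: a central-difference derivative of the integral at x = 1 matches e^2.

```python
import numpy as np
from scipy.integrate import quad

def F(x):
    """F(x) = integral from 0 to ln(x) of exp(-x*(t^2 - 2)) dt."""
    val, _ = quad(lambda t: np.exp(-x * (t**2 - 2)), 0.0, np.log(x))
    return val

h = 1e-6
deriv = (F(1 + h) - F(1 - h)) / (2 * h)   # central difference at x = 1
print(deriv, np.exp(2))                   # both approximately 7.389056
```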
7. Jan 18, 2010
### neelakash
Yes, we need not carry out the integral explicitly as the answer comes from observation. | 2018-03-18T19:37:04 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/differentiation-under-integral-sign.370218/",
"openwebmath_score": 0.9087260961532593,
"openwebmath_perplexity": 1593.9571984399427,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.98228770337066,
"lm_q2_score": 0.8757869916479466,
"lm_q1q2_score": 0.8602747926677609
} |
https://math.stackexchange.com/questions/4302370/how-do-you-find-good-rational-approximations-to-a-decimal-number | How do you find "good" rational approximations to a decimal number?
When presented with a real number as a decimal, are there any methods for finding "good" rational approximations $$a/b$$ to that number? By "good" I mean that $$a$$ and $$b$$ are reasonably small integers. For example suppose you're handed the number $$1.7320508075688772935274463415058723 \dots$$ An obvious way to rationally approximate this number is to truncate it after the $$n$$th decimal place and place it over $$10^n$$. So $$\frac{173}{100} \quad\text{or}\quad \frac{1732050807}{1000000000} \quad\text{or}\quad \frac{1732050807568877}{1000000000000000}$$ are increasingly good rational approximations. A way I can see to improve this is that if you choose to truncate at a multiple of $$2$$ or $$5$$, your rational approximation will reduce to one that is more "good". For example $$\frac{1732}{1000} = \frac{433}{250} \qquad\text{and}\qquad \frac{173205}{100000} = \frac{34641}{20000}$$ Is there a clever way to see which multiples of $$2$$ or $$5$$ to truncate after to get a rational approximation that reduces a lot? This works because we express the decimal in base $$10$$, so are there any tricks from considering the number in a different base? Is there an idea that's not even on my radar?
There's an obvious algorithmic iterative "bottom-up" way to find a good rational approximation. It's not terribly clever though. I can type it up as an answer in a second if no one else wants to.
• Try the continued fraction Nov 10, 2021 at 16:25
• Yes. Here is a link. This is exactly what continued fractions are good at. Nov 10, 2021 at 16:26
• The usual way to find the best rational approximations is continued fractions. But are you looking to have the best approximation overall or specifically one with a "nice" denominator? Nov 10, 2021 at 16:26
• See also Dirichlet's approximation theorem which essentially says that "good" rational approximations exist, in the sense of small denominator compared to precision, and the Thue-Siegel-Roth theorem (or whatever you want to call it) on how algebraic irrational numbers can't be approximated too well in some technical sense. Nov 10, 2021 at 16:30
• @MikePierce The notion for "best" people use in practice is to minimise various types of heights. The measure you suggest is essentially the naive height, where $H(p/q) = \max\{|p|,|q|\}$ for a rational number $p/q$ written in reduced form. Nov 10, 2021 at 16:33
This is exactly what continued fractions are for.
The continued fraction of $$\sqrt 3=1.7320508075688772935274463415058723\ldots$$ is periodic: $$[1; 1, 2, 1, 2, 1, 2, 1, 2, \ldots]$$. The sequence of approximants is $$2,\frac53,\frac74,\frac{19}{11},\frac{26}{15},\frac{71}{41},\ldots$$ ; each of these is the best possible approximant for its denominator. For instance, $$\frac{19}{11}$$ is the best approximant with a denominator $$\le 11$$.
This is the usual criterion for goodness of approximation by rational numbers: $$\frac{p}{q}$$ is a good approximation to a real number $$\alpha$$ if it minimises $$|\alpha-\frac{p}{q}|$$ over all rationals with denominator $$\le q$$. Powers of ten shouldn't come into it at all.
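Not part of the original answer: a short Python sketch that generates these convergents and, for comparison, uses the standard library's Fraction.limit_denominator, which returns the closest fraction with a bounded denominator.

```python
from fractions import Fraction
from math import sqrt

def convergents(cf_terms):
    """Yield the convergents of a (finite) continued fraction [a0; a1, a2, ...]."""
    h_prev, k_prev = 1, 0          # p_{-1}, q_{-1}
    h, k = cf_terms[0], 1          # p_0, q_0
    yield Fraction(h, k)
    for a in cf_terms[1:]:
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
        yield Fraction(h, k)

terms = [1] + [1, 2] * 4           # sqrt(3) = [1; 1, 2, 1, 2, ...], truncated
print(list(convergents(terms)))    # 1, 2, 5/3, 7/4, 19/11, 26/15, 71/41, 97/56, 265/153

# The standard library gives best bounded-denominator approximations directly:
print(Fraction(sqrt(3)).limit_denominator(11))   # 19/11
print(Fraction(sqrt(3)).limit_denominator(41))   # 71/41
```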
I looked into this topic too, since I wanted a way to list all fractions in a small sub-interval of $$[0,1]$$. If you just want to look at an implementation that does this, I attached some TypeScript code. My approach came from looking into the Farey sequence $$\cal F_n$$. One can show that if $$a/b$$ and $$c/d$$ are fractions such that no other fraction $$e/f$$ with $$a/b < e/f < c/d$$ and $$f \leq \max(b,d)$$ exists, then $$\frac{a+c}{b+d}$$ is the fraction with the smallest denominator between $$a/b$$ and $$c/d$$.
For our case we choose a number $$v \in (0,1)$$ that we want to approximate. Formulating this as an algorithm, we can start with the lower bound $$a / b = 0/1$$ and upper bound $$c/d = 1/1$$. Next we define $$\frac{e}{f} = \frac{a+c}{b+d}.$$ Now 3 possible cases can happen. If $$f$$ is too large, the best approximation for $$v$$ is either $$a/b$$ or $$c/d$$. Otherwise we have $$e/f < v$$ or $$e/f \geq v$$. If $$e/f < v$$, replace $$a/b$$ by $$e/f$$ and repeat the steps. In the other case replace $$c/d$$ by $$e/f$$. More formally:
Choose: A value $$v$$ to approximate and the maximum denominator $$Q$$.
Initialize: $$a/b := 0/1$$ and $$c/d := 1/1$$.
Iterate: for $$a/b < v \leq c/d$$:
• Evaluate: $$e/f := (a+c)/(b+d)$$.
• If $$f \geq Q$$: Return $$a/b$$ or $$c/d$$.
• If $$e/f < v$$: Replace $$a/b := e/f$$ and begin next iteration.
• If $$v \leq e/f$$: Replace $$c/d := e/f$$ and begin next iteration.
Once all the steps are finished, you not only have the best approximation for a value $$v \in (0,1)$$ but also the closest two fractions $$a/b$$ and $$c/d$$ with $$\frac{a}{b} < v \leq \frac{c}{d}$$.
This algorithm is what I started out with. It has worst case complexity $$O(Q)$$ time and $$O(1)$$ space. One can improve the algorithm by batching up multiple steps. I did not prove the complexity, but I think the improved version has complexity $$O(\log Q)$$ time and $$O(1)$$ memory. I will attach some Typescript code for this improved algorithm.
/**
* Returns the two fractions closest to $$v$$ with $$a/b <= v <= c/d$$. One always has $$a/b \neq c/d$$.
*
* # The Algorithm:
*
* Chooses $$\frac{a^+}{b^+} = \frac{a + e_1 c}{b + e_1 d} <= v$$ with $$b + e_1 d <= Q$$ in the first,
* and $$\frac{c^+}{d^+} = \frac{e_2 a^+ + c}{ e_2 b^+ + d} >= v$$ with $$e_2 b + d <= Q$$ in the second step.
* Iterate until nothing happens anymore.
*
* If $$0 < v < 1$$, the final fractions will be consecutive elements in the Farey sequence $$F_Q$$.
*/
function findLowerAndUpperApproximation(v: number, Q: number) {
// Ensure v in [0,1)
let v_int = Math.floor(v);
v = v - v_int; // only use fractional part of $$v$$ for iteration
let a = 0,
b = 1,
c = 1,
d = 1;
while (true) {
let _tmp = c - v * d; // Temporary denominator for 0 check
// Batch e1 steps in direction c/d
let e1: number | undefined;
if (_tmp != 0) e1 = Math.floor((v * b - a) / _tmp);
if (e1 === undefined || b + e1 * d > Q) e1 = Math.floor((Q - b) / d);
a = a + e1 * c;
b = b + e1 * d;
_tmp = b * v - a; // Temporary denominator for 0 check
// Batch e2 steps in direction a/b
let e2: number | undefined;
if (_tmp != 0) e2 = Math.floor((c - v * d) / _tmp);
if (e2 === undefined || b * e2 + d > Q) e2 = Math.floor((Q - d) / b);
c = a * e2 + c;
d = b * e2 + d;
// If both steps do nothing we are done
if (e1 == 0 && e2 == 0) break;
}
return [v_int*b + a, b, v_int*d + c, d];
}
In each step the denominator of $$c/d$$ should at least double. This would result in complexity $$O(\log Q)$$. However, I haven't worked out the details here. For me the algorithm is good enough to find the best fractions for $$Q = 10^9$$. Going past that my code seems to hit machine precision problems, which most likely affect the divisions in the algorithm. When you use a language with higher precision integer division you should be fine to go almost as far as you want. | 2023-01-31T06:36:01 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/4302370/how-do-you-find-good-rational-approximations-to-a-decimal-number",
"openwebmath_score": 0.7420727610588074,
"openwebmath_perplexity": 515.2579696190897,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877012965886,
"lm_q2_score": 0.8757869900269367,
"lm_q1q2_score": 0.860274789259018
} |
http://www.oopsconcepts.com/freight-train-qpjqp/1wm0k.php?15357d=sss-postulate-examples | # SSS Postulate Examples
A postulate is a hypothesis advanced as an essential presupposition, condition, or premise of a train of reasoning. Postulates are also called axioms, and they are used to derive other logical statements when solving a problem.
SSS Congruence Postulate: if the three sides of one triangle are congruent to the three sides of another triangle, then the two triangles are congruent. As a consequence, their corresponding angles will be the same. This is the only congruence postulate that does not deal with angles.
Example: given ZX = CA (side), XY = AB (side), and YZ = BC (side), the two triangles are congruent by the Side-Side-Side Postulate.
Example: given that in ΔABC, AD is a median on BC and AB = AC, prove that ΔABD ≅ ΔACD.
Example: given AB ≅ BC and that BD is a median of side AC, prove that ΔABD ≅ ΔCBD.
Related congruence and similarity statements:
- Two triangles are congruent if any of the SSS, SAS, ASA, or AAS postulates is satisfied.
- AAS: if two angles and a non-included side of one triangle are congruent to two angles and the corresponding non-included side of another triangle, then the two triangles are congruent.
- Hypotenuse-Leg (HL) Theorem: if the hypotenuse and one leg of a right triangle are congruent to the hypotenuse and corresponding leg of another right triangle, then the two triangles are congruent.
- Angle-Angle (AA) Similarity Postulate: if two angles of one triangle are congruent to two angles of another triangle, then the two triangles are similar.
- Two triangles are said to be similar if they have the same shape.
Other postulates given as examples:
- Addition Postulate: if equal quantities are added to equal quantities, the sums are equal.
- Substitution Postulate: a quantity may be substituted for its equal in any expression.
- Multiplication Postulate: if x = y, then x * 3 = y * 3.
- Division Postulate: if x = y, then x / 7 = y / 7.
- Area Postulate: to every polygonal region there corresponds a unique positive real number.
- Parallel Postulate: through a given external point, there is at most one line parallel to a given line. | 2021-09-22T20:37:08 | {
"domain": "oopsconcepts.com",
"url": "http://www.oopsconcepts.com/freight-train-qpjqp/1wm0k.php?15357d=sss-postulate-examples",
"openwebmath_score": 0.4419773519039154,
"openwebmath_perplexity": 1223.4144715405948,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.9822877007780707,
"lm_q2_score": 0.8757869900269366,
"lm_q1q2_score": 0.8602747888049067
} |
https://math.stackexchange.com/questions/2554808/finding-approximation-of-largest-eigenvalue | Finding approximation of largest eigenvalue
Given the following matrix, find an approximation of the largest eigenvalue. $$A = \begin{bmatrix} 3 & 2 \\ 7 & 5 \\ \end{bmatrix}$$
And I was also given $$\vec x= \begin{bmatrix} 1 \\ 0 \\ \end{bmatrix}$$
How my professor solves this is by calculating the slopes of $A\vec x = \vec b_1$, $A^2 \vec x = \vec b_2$, $A^3 \vec x = \vec b_3$ and so on, until the slopes of the $\vec b_i$ converge to the same value. Then, once we have the approximate $\vec b$, he plugs it into $A \vec b = \lambda \vec b$, and the corresponding $\lambda$ is the largest eigenvalue.

Since slope is $\frac yx$, this works fine for a $2 \times 2$ matrix. But how do I apply this method to a bigger matrix?
• I've given you a full explanation and representation of the method used down below, make sure to check it out ! Dec 7, 2017 at 1:23
• It's very helpful thank you! Dec 7, 2017 at 1:28
What you mention is a known numerical analysis method for the approximation of the largest (by absolute value) eigenvalue of a matrix.

Let $$A\in \mathbb R^{n\times n}$$ have $$n$$ linearly independent eigenvectors $$\{ u_i \}_{i=1}^n$$ as well as a unique dominant eigenvalue $$λ_1$$, that is $$|λ_1| > |λ_2| \geq \dots \geq |λ_n|,$$ where $$λ_1 \in \mathbb R$$ and $$u_1 \in \mathbb R^n$$ satisfy $$Au_1=λ_1u_1.$$
The method "Power Iteration" :
$$\begin{cases} x_k= Ax_{k-1} \to x_k = A^kx_0 \quad k=1,2,\dots \\ x_0 \end{cases}$$
Theorem : The method "Power Iteration" converges $$\forall x_0$$ (that is adequate for the problem given) and it holds that :
$$\lim_{k\to \infty} ε_k\frac{x_k}{||x_k||_2}=u_1$$
$$\lim_{k \to \infty} \frac{x_{k,i}}{x_{k-1,i}}=λ_1 \quad \forall \space i=1,2,\dots,n \quad \text{with} \quad u_{1,i} \neq 0$$
where each $$ε_k \in \{ \pm1\}$$ and $$u_1$$ is an eigenvector of $$A$$ with $$||u_1||_2=1$$.
I've given you a formal explanation of the method according to my old notes and my knowledge, for more, check here.
Here's a hint: You want to determine when $\mathbf{b}_n$ is a near-scalar multiple of $\mathbf{b}_{n-1}$. In $\mathbb{R}^2$, (nonzero) vectors are scalar multiples of one another iff their slopes are equal. A possibly more useful definition is that two vectors $\mathbf{v}$ and $\mathbf{w}$ are scalar multiples of one another if and only if
$$\hat{\mathbf{v}} = \pm\hat{\mathbf{w}},$$
where
$$\hat{\mathbf{v}} = \frac{\mathbf{v}}{|\mathbf{v}|},$$
which extends more nicely to multiple dimensions.
• I'll try to apply this to the problem and see how it works out. Thank you ! Dec 7, 2017 at 1:21
• @dembrownies Great. Let me know if you have any more questions. Dec 7, 2017 at 2:13
You normalize the vector at each iteration by dividing by its length, and wait until the resulting sequence of unit vectors has gotten close enough to converging for your purpose. (This is called the power method. It's pretty much the worst iterative method for eigenvalues there is, but it is the theoretical basis for lots of better methods.)
This is similar to what you're doing when you measure the slope in the 2D case, except in that case you divide by $x$ instead of dividing by $\sqrt{x^2+y^2}$. The point is that all that matters is the direction, not the magnitude.
• that makes more sense. thank you! Dec 7, 2017 at 1:20
The method you are thinking of is called Power Iteration. More formally, take any vector $x_0\ne 0$. (In practice, it is good to choose $x_0$ randomly.) Then define
$$x_k := \frac{Ax_{k-1}}{\|Ax_{k-1}\|_2}, \quad k\in\{1,2,\ldots\}$$
With high probability, $x_k$ will converge to an eigenvector of $A$ corresponding to the largest eigenvalue $\lambda_1$ of $A$ with
$$\lambda_1 = \lim_{k\to\infty} x_k^T Ax_k.$$
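As an illustration (an editorial addition, not part of the original answer), here is a minimal Python sketch of this normalized iteration applied to the $2\times 2$ matrix from the question; the fixed iteration count is an arbitrary choice for the example.

```python
import numpy as np

# Power iteration: repeatedly apply A and renormalize, so only the
# direction of the iterate survives; it aligns with the dominant eigenvector.
A = np.array([[3.0, 2.0],
              [7.0, 5.0]])
x = np.array([1.0, 0.0])          # the starting vector from the question

for _ in range(50):               # a fixed number of steps is enough here
    x = A @ x
    x = x / np.linalg.norm(x)     # normalize so the iterate does not blow up

# Rayleigh quotient x^T A x (with ||x|| = 1) approximates the largest eigenvalue.
lam = x @ A @ x
print("approximate dominant eigenvalue:", lam)   # about 4 + sqrt(15) = 7.873...
print("exact eigenvalues:", np.linalg.eigvals(A))
```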
In practice, in computer arithmetic, this method is numerically stable. For a better method, see for example, this. | 2022-10-01T20:51:39 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2554808/finding-approximation-of-largest-eigenvalue",
"openwebmath_score": 0.8366015553474426,
"openwebmath_perplexity": 214.46545664401265,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9861513926334712,
"lm_q2_score": 0.8723473746782093,
"lm_q1q2_score": 0.8602665783990686
} |
https://math.stackexchange.com/questions/1627453/why-can-we-convert-a-base-9-number-to-a-base-3-number-by-simply-converting-e/1627475 | # Why can we convert a base $9$ number to a base $3$ number by simply converting each base $9$ digit into two base $3$ digits?
Why can we convert a base $9$ number to a base $3$ number by simply converting each base $9$ digit into two base $3$ digits ?
For example $813_9$ can be converted directly to base $3$ by noting
\begin{array} \space 8_9&=22_3 \\ \space 1_9 &=01_3 \\ \space 3_9 &=10_3 \\ \end{array}
Putting the base digits together ,we get $$813_9=220110_3$$
I know it has to do with the fact that $9=3^2$ but I am not able to understand this all by this simple fact...
• This is actually useful (but not with $9$ and $3$). We can very cheaply go back and forth between hexadecimal (base $16$) and binary. – André Nicolas Jan 26 '16 at 8:30
• This also applies to converting between: binary, octal, hexadecimal systems – Max Payne Jan 26 '16 at 8:31
Consider $N$ in base 3. For simplicity, we can assume that $N_3$ has an even number of digits: if it doesn't, just tack on a leftmost $0$. So let: $$N_3 = t_{2n+1} t_{2n}\dotsc t_{2k+1} t_{2k} \dotsc t_1 t_0.$$ What this positional notation really means is that: $$N = \sum_{i = 0}^{2n+1} t_i 3^i,$$ which we can rewrite as: \begin{align} N &= \sum_{k = 0}^{n} (t_{2k+1} 3^{2k+1} + t_{2k} 3^{2k}) \\ &= \sum_{k = 0}^{n} (3 t_{2k+1} + t_{2k}) 3^{2k} \\ &= \sum_{k = 0}^{n} (3 t_{2k+1} + t_{2k}) 9^{k}. \\ \end{align}
But now, note that for each $k$, $3 t_{2k+1} + t_{2k}$ is precisely the base-9 digit corresponding to the consecutive pair of base-3 digits $t_{2k+1} t_{2k}$.
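To make the digit-pairing concrete, here is a small Python sketch (an editorial addition, not part of the original answer) that converts a base-3 digit string to base 9 by grouping pairs, and back again; it reproduces $220110_3 = 813_9$.

```python
def base3_to_base9(digits3):
    """Group base-3 digits in pairs (from the right); each pair 't1 t0'
    becomes the single base-9 digit 3*t1 + t0."""
    if len(digits3) % 2 == 1:           # pad on the left so pairs line up
        digits3 = "0" + digits3
    out = []
    for i in range(0, len(digits3), 2):
        t1, t0 = int(digits3[i]), int(digits3[i + 1])
        out.append(str(3 * t1 + t0))
    return "".join(out)

def base9_to_base3(digits9):
    """Each base-9 digit d expands to the two base-3 digits of d = 3*t1 + t0."""
    return "".join(f"{int(d) // 3}{int(d) % 3}" for d in digits9)

print(base3_to_base9("220110"))   # -> 813
print(base9_to_base3("813"))      # -> 220110
```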
• Altough I've already accepted an answer,I thought to let you know that this answer has been so insightful.Thanks for your time ! – Mr. Y Jan 26 '16 at 9:18
• How nice — thank you:) and you're welcome. – BrianO Jan 26 '16 at 9:24
• That is the real good prove – JnxF Jan 26 '16 at 9:57
• Oh my - thanks doubly. – BrianO Jan 26 '16 at 10:05
• yes this is the proof. My answer is just a visual. +1 – Max Payne Jan 26 '16 at 10:16
This has more to do with place value.
Consider the image:
The key is the fact that $3^2 = 9$
• All the value below $3^2$ ($3^0$ and $3^1$) has to be accomodated in $9^0$
that is
• all the values in $9^0$ have to be accommodated in the two places below $3^2$ ($3^0$ and $3^1$)
Similarly for any other places.
• This rule also applies to conversion between binary and octal, or binary and hexadecimal, or conversion between any base $k$ and base $k^n$
In general, for conversion of a number $N$ in any base $k$ to base $k^n$, each digit of $N_{k^n}$ get converted to n digits of $N_k$
• Thanks for the answer. One question: in general, if we have some number $c$ (I am considering everything in base $10$) such that $c=b^n$, does this mean that I can convert the number $c$ to base $b$ by simply converting each base $10$ digit of $c$ into $n$ base $b$ digits? Or is this something that works only for these special cases? – Mr. Y Jan 26 '16 at 8:56
• @Mr.Y, yes, you can. An even stronger statement would be: given $a^n=b^m$, we could convert a base-$a$ number, say $k$, to base $b$ by converting every $n$ base-$a$ digits of $k$ to $m$ base-$b$ digits. – vrugtehagel Jan 26 '16 at 9:02
I'm not sure whether you're asking for a proof, or some intuition. The proof is fairly mechanical, so I'll explain why you might expect this to be true.
Putting your information theory hat on, two base 3 digits carry exactly as much information as one base 9 digit. That is, suppose you know I'm going to pass you two digits, each from 0,1 and 2. Then you know there are $3 \cdot 3 =9$ possibilities for what information I could pass.
On the other hand, if I passed you a single digit from 0,1,...,8, then there are again 9 possibilities.
Continuing in this fashion, $n$ digits of base 9 pass as much information as $2n$ digits of base 3.
This isn't exactly a proof yet. You'd need to prove that you don't skip any values. For instance, perhaps the computation you describe can never return the value $102_3$; thankfully, it can. But you'd need to think about why this will always work.
Let's look at what you base $9$ number actually means. $$813_9=8\cdot 9^2+1\cdot 9^1+3\cdot 9^0$$ If we wish to write this as powers of $3$ with coefficients between $0$ and $2$, we can simply do \begin{align} 3\cdot 9^0&=1\cdot 3^1+0\cdot 3^0\\ 1\cdot 9^1&=0\cdot 3^3+1\cdot 3^2\\ 8\cdot 9^2&=2\cdot 3^5+2\cdot 3^4\\ \end{align}
Why can we do this?
Why can we write $$a_n\cdot 9^n=b_{2n+1}\cdot 3^{2n+1}+b_{2n}\cdot3^{2n}?$$ First note that $9^n=3^{2n}$, so dividing the equation by $3^{2n}$ we get $$\frac{a_n\cdot 9^n}{3^{2n}}=a_n=3b_{2n+1}+b_{2n}=\frac{b_{2n+1}\cdot 3^{2n+1}+b_{2n}\cdot3^{2n}}{3^{2n}}$$ So actually, the problem comes down to writing a number $0\leq a_n<9$ as $3b_{2n+1}+b_{2n}$, where $0\leq b_{2n},b_{2n+1}<3$. This is obviously possible.
Why does this work for binary and hexadecimal?
Actually, the answer is fairly similar. The problem can be reduced equivalently, resulting in the equation $$a_n=8b_{4n+3}+4b_{4n+2}+2b_{4n+1}+b_{4n}$$ where $0\leq a_n<16$ and $0\leq b_{4n+3},b_{4n+2},b_{4n+1},b_{4n}<2$. The solvability of this equation is in my opinion a little less obvious, but still quite understandable. For the sake of a more thorough understanding, we could look at hexadecimal-to-base-4 conversion. This comes down to the easy equation $a_n=4b_{2n+1}+b_{2n}$, where $0\leq a_n<16$ and $0\leq b_{2n+1},b_{2n}<4$. This is clearly solvable. Doing the same for base-4-to-binary conversion shows, with a recurrence-like approach, that this indeed works for hexadecimal-to-binary conversion.
Hope this helped!
Hint:
$$\color{blue}1\cdot3^5+\color{blue}0\cdot3^4+\color{green}2\cdot3^3+\color{green}2\cdot3^2+\color{red}0\cdot3^1+\color{red}1\cdot3^0 =(\color{blue}1\cdot3^1+\color{blue}0)3^4+(\color{green}2\cdot3^1+\color{green}2)3^2+(\color{red}0\cdot3^1+\color{red}1)3^0\\ =\color{blue}3\cdot9^2+\color{green}8\cdot9^1+\color{red}1\cdot9^0$$
$$\color{blue}{10}\color{green}{22}\color{red}{01}_3=[\color{blue}{10_3}|\color{green}{22_3}|\color{red}{01_3}]_9=\color{blue}3\color{green}8\color{red}1_9$$ | 2020-01-17T16:18:36 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1627453/why-can-we-convert-a-base-9-number-to-a-base-3-number-by-simply-converting-e/1627475",
"openwebmath_score": 0.987087607383728,
"openwebmath_perplexity": 386.82093710414586,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9861513897844354,
"lm_q2_score": 0.8723473730188542,
"lm_q1q2_score": 0.8602665742773444
} |
http://math.stackexchange.com/questions/38010/intuition-for-a-relationship-between-volume-and-surface-area-of-an-n-sphere/38017 | # Intuition for a relationship between volume and surface area of an $n$-sphere
The volume of an $n$-sphere of radius $R$ is
$$V_n(R) = \frac{\pi^{n/2} R^n}{\Gamma(\frac{n}{2}+1)}$$
and the surface area is
$$S_n(R) = \frac{2\pi^{n/2}R^{n-1}}{\Gamma(\frac{n}{2})} = \frac{n\pi^{n/2}R^{n-1}}{\Gamma(\frac{n}{2}+1)} = \frac{d V_n(R)}{dR}$$
What is the intuition for this relationship between the volume and surface area of an $n$-sphere? Does it relate to the fact that the $n$-sphere is the most compact shape in $n$ dimensions, or is that merely a coincidence?
Are there other shapes for which it holds, or is this the limiting case of a relationship of inequality? For example, an $n$-cube of side $R$ has volume $V^c_n(R)=R^n$ and surface area $S^c_n(R)=2nR^{n-1}$, so that
$$\frac{dV^c_n(R)}{dR} = n R ^{n-1}$$
and we have $S^c_n = 2 dV^c_n/dR$.
Edit: As pointed out by Rahul Nahrain in the comments below, if we define $R$ to be the half-side length of the unit cube rather than the side length, then we have the relationship $S^c_n(R)=\frac{d}{dR} V^c_n(R)$, exactly as for the sphere. Is there a sense in which relationships like this can be stated for a large class of shapes?
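Before turning to the answers, here is a quick numerical sanity check of the sphere formulas above (an editorial addition, not part of the original question). The snippet evaluates the two formulas with the gamma function and compares $S_n(R)$ against a finite-difference derivative of $V_n(R)$; the radius $R=1.7$ is an arbitrary test value.

```python
from math import pi, gamma

def V(n, R):
    """Volume formula V_n(R) from the question."""
    return pi ** (n / 2) * R ** n / gamma(n / 2 + 1)

def S(n, R):
    """Surface-area formula S_n(R) from the question."""
    return n * pi ** (n / 2) * R ** (n - 1) / gamma(n / 2 + 1)

R, h = 1.7, 1e-6
for n in range(1, 8):
    dVdR = (V(n, R + h) - V(n, R - h)) / (2 * h)   # central difference in R
    print(n, S(n, R), dVdR)                        # the two columns agree
```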
-
See this MO-question and also the isoperimetric problem. – t.b. May 9 '11 at 10:52
The main takeaway seems to be that this relationship is coincidental and based on using $R$ as the linear measure of distance. Were we to use $D=2R$ instead, we would get $S_n(D)=2\frac{d}{dD}V_n(D)$. However, there is still the interesting (to my mind) question of why the ratio of $S_n(x)$ to $\frac{d}{dx}V_n(x)$ (for some linear measure $x$) is independent of $n$ for the $n$-sphere but seemingly not for other shapes - no? – Chris Taylor May 9 '11 at 11:04
Sure! I also voted the question up, by the way. I didn't mean to imply that these links answer the question completely. I just wanted you to see them because they certainly give some food for thought. – t.b. May 9 '11 at 11:06
Isn't the surface area of an $n$-cube $2nR^{n-1}$, not $6R^{n-1}$? Then $S_n^c = 2dV_n^c/dR$ for all $n$. – Rahul May 9 '11 at 11:31
In fact, if you let the parameter be the "radius" of the cube, i.e. half of its side length, then $V_n^c = (2R)^n$ and $S_n^c = 2n(2R)^{n-1}$, and $S_n^c = dV_n^c/dR$ for all $n$, just like the sphere... – Rahul May 9 '11 at 11:39
There is a very simple geometric explanation for the fact that the constant of proportionality is 1 for the sphere's radius and the cube's half-width. In fact, this relationship also lets you define a sensible notion of a "half-width" of an arbitrary $n$-dimensional shape.
Pick an arbitrary shape and a point $O$ inside it. Suppose you enlarge the shape by a factor $1+\alpha$, with $\alpha \ll 1$, keeping $O$ fixed. Each surface element with area $dA$ at a position $\vec r$ relative to $O$ gets extruded into an approximate prism shape with base area $dA$ and offset $\alpha \vec r$. The corresponding additional volume is $\alpha \vec r\cdot \vec n \, dA$, where $\vec n$ is the normal vector at the surface element.
Now the quantity $\vec r \cdot \vec n$, call it the projected distance, has a natural geometric interpretation. It is simply the distance between $O$ and the tangent plane at the surface element. (Observe that for a sphere with $O$ at the center, it is always equal to the radius, and for a cube with $O$ at the center, it is always equal to the distance from the center to any face.)
Let $\hat r = A^{-1} \int \vec r \cdot \vec n dA$ be the mean projected distance over the surface of the shape. Then the change in volume by a scaling of $\alpha$ is simply $\delta V = \alpha \int \vec r\cdot \vec n dA = \alpha \hat r A$. In other words, a change of $\alpha \hat r$ in $\hat r$ corresponds to a change of $\alpha \hat r A$ in $V$. So if you use $\hat r$ as the measure of the size of a shape, you find that $dV/d\hat r = A$. And since $\hat r$ equals the radius of a sphere and the half-width of a cube, the observation in question follows. This also implies the distance-to-face measure for regular polytopes that user9325 mentioned, but generalizes to other polytopes and curved shapes. (I'm not completely happy with the definition of $\hat r$ because it's not obvious that it is independent of the choice of $O$. If someone can see a more natural definition, please let me know.)
-
Some remarks:
1. It is not necessary to regard solids that generalize to $n$ dimensions, so one can start with shapes in 2 dimensions.
2. It is very natural to always use a radius-type parameter to scale the figure because the intuition is that the figure gets growth rings that have the size of the surface.
3. Unfortunately, the property is no longer true if you replace a square with a rectangle or a circle with an ellipse.
4. For a regular polygon, you can always take as parameter the distance to an edge. This will work similarly for Platonic solids or any polygons/polyhedra that contain a point that is equidistant to all faces.
5. This argument does not look good for general curves, because intuitively the edges of a smooth curve are infinitesimally small, so the center should be equidistant to all points, but of course, we could identify cases where the changing thickness of the growth ring averages out.
-
+1. Point 2 is the intuitive explanation, while points 3 and 5 are the essential caveats. – Henry May 9 '11 at 12:32
What you're seeing with spheres is an instance of something really important called the coarea formula. You're looking at a function like $f(x,y,z) = r = \sqrt{x^2 + y^2 + z^2}$ and you're comparing the volumes of the lower contour sets $f(x,y,z) \leq R$ (in this case they are balls) to the areas of the level sets $f(x,y,z) = R$.
In the instance with cubes, you're considering a different function whose level sets are cubes.
The coarea formula states that (subject to some assumptions that cause both sides to make sense),
$Vol(f(x, y, z) \leq R) = \int_{-\infty}^R \left[ \int_{f(x,y,z) = t} |\nabla f|^{-1} d\sigma \right] dt$
You can also differentiate this formula with respect to $R$ to get the differential form you were observing.
Just as a dummy check, note that $t$ has the same units as $f$, so if you think of $f$ and the coordinates as having units, the dimensional analysis checks out and both sides have units of "length^3". (The formula is also valid in higher dimensions.)
In the examples you gave, the gradient of $f$ has length $1$, so when you integrate $1$ over the level sets $\{f(x, y, z) = t \}$ you're just getting the surface area.
The proof of this formula boils down to asking "what's the difference between $Vol( f(x,y,z) \leq R )$ and $Vol(f(x,y,z) \leq R + h$ if $h$ is small?". The two regions are very similar, differing only around the boundary $f(x,y,z) = R$. This is particularly clear in the case of a sphere. If you think about a small portion of that boundary of "width" $d\sigma$ in the directions of constant $f$, then the difference of the two regions has height $~ \frac{h}{|\nabla f|}$ in the direction of increasing $f$.
Another dummy check: The inverse makes sense because if $f$ is increasing extremely quickly, then increasing the value of $f$ will hardly increase the region $\{ f(x,y,z) \leq R \}$.
Note, the statement of the coarea formula is really a statement about the function and not just about the shapes of the level sets -- you are asking about the parameter $R$ by which these level sets are labeled, but you could pick a different function (like $\tilde{f} = e^f$) which has the same level sets but labels them differently.
- | 2015-08-02T06:40:10 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/38010/intuition-for-a-relationship-between-volume-and-surface-area-of-an-n-sphere/38017",
"openwebmath_score": 0.8902303576469421,
"openwebmath_perplexity": 162.8361176625972,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9861513881564148,
"lm_q2_score": 0.8723473730188543,
"lm_q1q2_score": 0.860266572857145
} |
https://math.stackexchange.com/questions/3158493/easiest-way-to-show-positive-semi-definite-equivalence | # Easiest way to show positive semi-definite equivalence
For an $$x \in \mathbb{R}^n$$, and $$n$$-by-$$n$$ identity matrix $$I_n$$, we are given that $$\begin{pmatrix} I_n & x \\ x^T & 1 \end{pmatrix} \succeq 0.$$
What is the easiest way to show that $$\begin{pmatrix} 1 & x^T \\ x & I_n \end{pmatrix} \succeq 0$$ holds?
• Symmetric permutation? – user251257 Mar 22 at 18:37
• seems so. is there such a theory which concludes? – independentvariable Mar 22 at 18:40
• Schur complement? – user251257 Mar 22 at 19:02
• They dont reduce to the same condition, do they? – independentvariable Mar 22 at 19:22
This is due to the identity
$$\underbrace{\begin{pmatrix} 0_{1,n} & 1 \\ I_n & 0_{n,1} \end{pmatrix}}_{J^T} \underbrace{\begin{pmatrix} I_n & x \\ x^T & 1 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} 0_{n,1} & I_n \\ 1 & 0_{1,n} \end{pmatrix}}_{J} = \underbrace{\begin{pmatrix} 1 & x^T \\ x & I_n \end{pmatrix}}_{B}$$
(notation $$0_{m,n}$$ is for a zero block with $$m$$ lines and $$n$$ columns).
Indeed, $$J$$ being a permutation matrix, it is an orthogonal matrix, with $$J^T=J^{-1}$$. We can conclude that $$A$$ and $$B$$ are similar, thus have the same spectrum (Similar matrices have the same eigenvalues with the same geometric multiplicity); since $$A$$ is positive semi-definite, these common eigenvalues are all nonnegative, and therefore $$B$$ is positive semi-definite as well.
Besides, $$A$$ being symmetric, one can conclude from $$J^TAJ=B$$ that $$B$$ is symmetric as well.
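A quick numerical illustration of this identity (an editorial sketch, not part of the original answer): build $A$, $B$ and the permutation matrix $J$ above for a random $x$, and check that $J^TAJ=B$ and that the spectra coincide.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
x = rng.standard_normal(n)        # an arbitrary x for the check

A = np.block([[np.eye(n),        x.reshape(-1, 1)],
              [x.reshape(1, -1), np.ones((1, 1))]])
B = np.block([[np.ones((1, 1)),  x.reshape(1, -1)],
              [x.reshape(-1, 1), np.eye(n)]])

# J = [[0_{n,1}, I_n], [1, 0_{1,n}]] as in the answer.
J = np.zeros((n + 1, n + 1))
J[:n, 1:] = np.eye(n)             # the I_n block in the top-right
J[n, 0] = 1.0                     # the lone 1 in the bottom-left corner

print(np.allclose(J.T @ A @ J, B))        # True: the identity holds
print(np.sort(np.linalg.eigvalsh(A)))     # same spectrum ...
print(np.sort(np.linalg.eigvalsh(B)))     # ... as B
```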
Appendix : There is a pending question : is there a criteria on $$x$$ for positive semi-definiteness of $$A$$. ? The answer is yes :
$$A$$ is semi-definite positive iff $$\|x\| \leq 1$$.
This is due, as we are going to see it, to an analysis of the rather particular spectrum of $$A$$. Let us obtain it explicitly.
First of all, let us establish that $$A$$ (which is a $$(n+1) \times (n+1)$$ matrix) has eigenvalue $$1$$ with order of multiplicity at least $$n-1$$.
Consider hyperplane $$x^{\perp}$$ of $$\mathbb{R}^n$$ defined as the set of vectors $$y$$ that are orthogonal to $$x$$. Let $$(y_1,y_2,\cdots y_{n-1})$$ be a basis of $$x^{\perp}$$ ; then,
$$\underbrace{\begin{pmatrix} I_n & x \\ x^T & 1 \end{pmatrix}}_{A}\underbrace{\begin{pmatrix} y_k\\ 0 \end{pmatrix}}_{V_k}=1\underbrace{\begin{pmatrix} y_k\\ 0 \end{pmatrix}}_{V_k} \ \ \ \text{for} \ \ k=1,2, \cdots (n-1),$$
proving that $$V_k$$ is an eigenvector associated with eigenvalue $$1$$.
Due to the fact that trace$$(A)=n+1$$, the two remaining eigenvalues are of the form $$\alpha$$ and $$\beta:=2-\alpha$$. We can assume, WLOG that $$\alpha \leq 1 \leq \beta$$.
Besides, using the so-called Schur determinant identity (Eigenvalues of a Block Matrix from Schur Determinant Identity) for the computation of the determinant of a $$2 \times 2$$ block matrix, we obtain :
$$\det(A)=1-x^Tx$$
As the determinant is also the product of eigenvalues, we get the following identity :
$$\det(A)=1-\|x\|^2=\alpha(2-\alpha)\tag{1}$$
Thus, one can compute explicitly the two remaining eigenvalues by solving quadratic equation (1), with the following explicit solutions (if we assume that $$\alpha$$ is the smallest eigenvalue)
$$\alpha=1 - \|x\| \ \ \ \implies \ \ \ \beta:=2-\alpha=1 + \|x\|\tag{2}$$
As the criterion for a symmetric matrix to be positive semi-definite is that all its eigenvalues are $$\geq 0$$, this criterion becomes $$\alpha \geq 0$$, i.e., $$\|x\| \leq 1$$. $$\square$$
Remark : eigenvalues $$\alpha$$ and $$\beta$$ can be associated with eigenvectors $$\begin{pmatrix} x\\ -\|x\| \end{pmatrix}$$ and $$\begin{pmatrix} x\\ \|x\| \end{pmatrix}$$ resp.
Let us take an example in the case $$n=4$$ ; let $$m=1/n$$ ; consider matrix :
$$A:=\left(\begin{array}{rrrr|r} 1 & & & & m \\ & 1 & & & m \\ & & 1 & & m\\ & & & 1 & m\\ \hline m & m & m & m & 1 \\ \end{array}\right)$$
One can check, using (2), that the spectrum of $$A$$ is
$$(\tfrac12, 1 , 1, 1, \tfrac32).$$
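A direct numerical check of this example (an editorial addition, not part of the original answer):

```python
import numpy as np

m = 1 / 4                        # the answer's m = 1/n with n = 4
A = np.eye(5)
A[:4, 4] = m                     # border column
A[4, :4] = m                     # border row
print(np.round(np.linalg.eigvalsh(A), 6))   # [0.5, 1, 1, 1, 1.5]
```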
Just now, I "googled" with keywords "bordered identity matrix" : I found in (Eigenvalues of a certain bordered identity matrix) a somewhat similar computation that I did in the Appendix.
• Thank you for your answer! My main purpose was showing $||x||_2 \leq 1$ holds iff the matrices I gave are Psd. I proved one by Schur complement theory on PSD matrics, but for the equivalence I just followed the definition of PSD and by contradiction showed that if one holds, the other one should hold etc.. – independentvariable Mar 23 at 19:28
• But yours seem the better way, not the 'dirty' $a^T X a \geq 0$ for all $a$ approach... I don't like it, it seems too manual – independentvariable Mar 23 at 19:35 | 2019-04-23T16:30:13 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3158493/easiest-way-to-show-positive-semi-definite-equivalence",
"openwebmath_score": 0.9334749579429626,
"openwebmath_perplexity": 230.35230850436702,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9861513918194609,
"lm_q2_score": 0.8723473680407889,
"lm_q1q2_score": 0.8602665711434675
} |
https://math.stackexchange.com/questions/1687015/is-the-cartesian-product-of-two-countably-infinite-sets-also-countably-infinite | Is the Cartesian product of two countably infinite sets also countably infinite?
I am trying to determine and prove whether the set of convergent sequences of prime numbers is countably or uncountably infinite.
It is clear that such a sequence must 'terminate' with an infinite repetition of some prime $p$. So for example $$2, 3, 7, 5, 5, 5, 5, \ldots$$
My idea is to break up the problem into two sub-sequences. The first is a finite subsequence consisting of the first however many terms before it begins the infinite repetition of $p$. Then the second being the infinite set with elements being only $p$.
So in the above example, the two sub-sequences will be $$\{2,3,7\}, \quad \{5,5,5,\ldots\}$$
The collection of possible infinite sub-sequences is countably infinite, since there is one such tail for each prime and the set of primes is countably infinite.

There are infinitely many possible finite sub-sequences, since such a prefix can have any number of terms. However, I think this collection is also countably infinite: for each possible length $1$, $2$, $3$, etc. there are only countably many finite sequences of primes, and a countable union of countable sets is countable.
This is where the title of the question comes in. There are a countably infinite number of these finite sub-sequences say $F$. We also have a countably infinite number of the infinite subsequence $P$, with all elements being some prime $p$.
So in some sense, the set of all convergent sequences of primes is the cartesian product $F\times P$.
As they are both countably infinite, will their cartesian product also be countably infinite? I am thinking yes, as the rationals are countably infinite and they are in some sense a cartesian product of the numerator and denominator, each having cardinality equivalent to the naturals.
If so, and everything is correct, I think I will have then completed the proof. Otherwise, I'd love to have errors pointed out or even perhaps a better solution.
Your thinking is correct. In fact, what you have argued is that there exists a bijection between the set of all convergent sequences of primes and the set $F\times P$. If this is your first example of doing this kind of task I suggest you try to actually write down this bijection. You have argued quite well that it must exist, but it's good exercise to actually do it at least once.
Now, you also rightly say that $F\times P$ has a bijection to $\mathbb N^2$. As before, I suggest you actually write down this bijection.
Now, you simply need to see if $\mathbb N^2$ is countable, that is:
Does there exist a bijection between $\mathbb N$ and $\mathbb N^2$
To that end, I suggest you look into Cantor's pairing function.
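For concreteness, here is a small Python sketch of the Cantor pairing function and its inverse (an editorial addition, not part of the original answer); it gives an explicit bijection between $\mathbb N^2$ and $\mathbb N$, with $\mathbb N$ taken to include $0$.

```python
from math import isqrt   # requires Python 3.8+

def pair(i, j):
    """Cantor pairing: enumerate N^2 along anti-diagonals i + j = const."""
    return (i + j) * (i + j + 1) // 2 + j

def unpair(n):
    """Inverse of the Cantor pairing function."""
    w = (isqrt(8 * n + 1) - 1) // 2      # index of the anti-diagonal
    t = w * (w + 1) // 2                 # first value on that diagonal
    j = n - t
    i = w - j
    return i, j

# Round-trip checks: the map hits 0, 1, 2, ... exactly once.
assert sorted(pair(i, j) for i in range(20) for j in range(20) if i + j < 20) \
       == list(range(210))
assert all(unpair(pair(i, j)) == (i, j) for i in range(50) for j in range(50))
print(pair(3, 4), unpair(pair(3, 4)))    # 32 (3, 4)
```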
• or you might just observe that $(i,j)\mapsto 2^i3^j$ is an injection. This idea extends easily to $\mathbb N^k$. For example, $(i,j,k)\mapsto 2^i3^j5^k$. Mar 7, 2016 at 16:06
• @Chilango Of course. Though I think that for a "first time user", the actual bijection is more informative than just the construction of an injection (which, I admit, implies a bijection, but also a theorem)
– 5xum
Mar 7, 2016 at 16:32
• these maps are not surjective, but that's no problem. An injection into a countable set is enough to say that the domain is countable. All you need is the fact that a subset of a countable set is countable. I agree though, that the pairing function is more interesting and has more applications. Mar 7, 2016 at 16:50 | 2022-05-26T23:57:05 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1687015/is-the-cartesian-product-of-two-countably-infinite-sets-also-countably-infinite",
"openwebmath_score": 0.943945050239563,
"openwebmath_perplexity": 105.60574821025583,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9921841109796002,
"lm_q2_score": 0.8670357563664174,
"lm_q1q2_score": 0.8602591011179391
} |
https://math.stackexchange.com/questions/2373028/x5-y2-z3 | # $x^5 + y^2 = z^3$
While waiting for my döner at lunch the other day, I noticed my order number was $343 = 7^3$ (surely not the total for that day), which reminded me of how $3^5 = 243$, so that $$7^3 = 3^5 + 100 = 3^5 + 10^2.$$ Naturally, I started wondering about nontrivial integer solutions to $$x^5 + y^2 = z^3 \tag{*}$$ ("nontrivial" meaning $xyz \ne 0$). I did not make much progress, though apparently there are infinitely many solutions: this was Problem 1 on the 1991 Canadian Mathematical Olympiad. The official solutions (at the bottom of this page) only go back to 1994. A cheap answer is given by taking $x = 2^{2k}$ and $y = 2^{5k}$ so that the l.h.s. is $2^{10k + 1}$. This is a cube iff $10k + 1 \equiv 0 \,(3)$ i.e. $k \equiv 2\,(3)$ thus giving an arithmetic progression's worth of solutions, starting with $$(x, y, z) = (16, 1024, 128)$$ corresponding to $k = 2$ and $$(x, y, z) = (1024, 33554432, 131072)$$ coming from $k = 5$.
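A quick computational check of the solutions listed above (an editorial aside, not part of the original post):

```python
# Verify the solutions mentioned above with exact integer arithmetic.
solutions = [
    (3, 10, 7),                 # 3^5 + 10^2 = 343 = 7^3
    (16, 1024, 128),            # the k = 2 member of the 2-power family
    (1024, 33554432, 131072),   # the k = 5 member
]
for x, y, z in solutions:
    assert x**5 + y**2 == z**3, (x, y, z)
print("all listed solutions check out")
```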
What else is known about the equation $(*)$? In particular, are there infinitely many solutions with $x$, $y$, $z$ relatively prime? The one that caught my attention was $(x, y, z) = (3, 10, 7)$. Another one is $(-1, 3, 2)$ because $-1 + 9 = 8$. By Catalan's conjecture (now a theorem), this is the only solution with $x = \pm 1$ or $y = \pm 1$ or $z = 1$. Are there any solutions with $z = -1$? In this case, $(*)$ reduces to $x^5 + y^2 = -1$ and Mihăilescu's theorem does not apply.
Update. This question was essentially already asked here, since the equation $a^2 + b^3 = c^5$ is equivalent to $(-c)^5 + a^2 = (-b)^3$.
• we can rewrite this as $x^5+y^2\equiv 0 \pmod z$ could that help you ?
– user451844
Jul 27 '17 at 1:09
• You might look at OEIS sequences A070065, A070066 and A070067. Jul 27 '17 at 2:08
• Note that if $(x,y,z)$ is a solution, then so is $(c^6 x, c^{15} y, c^{10} z)$ for integers $c$. Jul 27 '17 at 2:41
• quora.com/… Aug 3 '17 at 11:45
Yes, there are infinitely many solutions. In fact, there are many parametrizations of the solutions.
According to a book${}^{\color{blue}{[1]}}$ on my bookshelf,
Up to changing $y$ into $-y$, there are exactly 27 distinct parametrizations of the equations $x^5 + y^2 = z^3$.
One of the simplest parametrizations is given by the following formula.
\begin{align} x =&\; 12st(81s^{10}-1584t^5s^5-256t^{10})\\ y =&\; \pm (81s^{10} + 256t^{10})\\ &\;\;\times (6561s^{20} - 6088608t^5s^{15} - 207484416t^{10}s^{10} + 19243008t^{15}s^5 + 65536t^{20})\\ z =&\; 6561s^{20}+2659392t^{5}s^{15}+10243584t^{10}s^{10} - 8404992t^{15}s^5 + 65536t^{20} \end{align}
For example, following two random choices of $s,t$ give you two sets of relative prime solutions.
• $(s,t) = (1,1) \leadsto (x,y,z) = (-21108,-65464918703,4570081)$
• $(s,t) = (1,2) \leadsto (x,y,z) = (-7506024,127602747389962225,-196120763999)$
The book I have is actually quoting result from a thesis${}^{\color{blue}{[2]}}$ by J. Edwards. Consult that if you really want to get into the details.
References
• $\color{blue}{[1]}$ Henri Cohen, Number Theory, Volume II: Analytic and Modern Tools,
$\S 14.5.2$ The Icosahedron Case $(2,3,5)$.
• $\color{blue}{[2]}$ J. Edwards, Platonic solids and solutions to $x^2+y^3 = dz^r$, Thesis, Univ. Utrecht (2005).
• Nice answer, especially for the first reference. +1
– Xam
Jul 27 '17 at 3:38
• @achille: We can also use the icosahedral equation and scale it appropriately. Kindly see answer below. Jul 27 '17 at 12:00
• @achille: The parameterizations for $x^5+y^3=z^2$ depend on the icosahedron while $x^4+y^3=z^2$ depend on the tetrahedron. Does the book you cite say how many distinct parameterizations are there for the latter? This page gives $7$, but I just found an $8$th one, with no scaling involved. Jul 28 '17 at 5:53
• @TitoPiezasIII In $\S 14.4.1$ of that book, Cohen proved there are 7 parametrizations. Jul 28 '17 at 7:13
• @achillehui: I just re-checked the parameterization I found, and it turns out there is a common factor. Darn! In Edwards' work, he cites 7 parameterizations as well. Thanks, anyway. Jul 28 '17 at 7:35
There is a beautiful connection between $a^5+b^3=c^2$ and the icosahedron. Consider the unscaled icosahedral equation,
$$\color{blue}{12^3u v(u^2 + 11 u v - v^2)^5}+(u^4 - 228 u^3 v + 494 u^2 v^2 + 228 u v^3 + v^4)^3 = (u^6 + 522 u^5 v - 10005 u^4 v^2 - 10005 u^2 v^4 - 522 u v^5 + v^6)^2\tag1$$
By scaling $u=12x^5$ and $v=12y^5$ (or various combinations thereof like $u=12^2x^5$, etc), we then get a relation of form,
$$12^5a^5+b^3=c^2$$
• +1 interesting connection. it is unfortunate the mythical $j(\tau)$ is stuff way beyond me ;-p. Jul 27 '17 at 13:14 | 2021-09-22T08:33:02 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2373028/x5-y2-z3",
"openwebmath_score": 0.9063588380813599,
"openwebmath_perplexity": 485.6667700955476,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9766692366242304,
"lm_q2_score": 0.8807970795424088,
"lm_q1q2_score": 0.860247411297536
} |
https://math.stackexchange.com/questions/2371406/checking-for-linear-independence | # Checking for linear independence
Is the following proof correct?
Theorem. If $v_1,v_2,v_3,v_4$ is a linearly independent list then $$v_1-v_2,v_2-v_3,v_3-v_4,v_4$$ is also a linearly independent list.
Proof. Assume that $v_1,v_@,v_3,v_4$ is a linearly independent list, Consider now the following equation. $$0=0(v_1-v_2)+0(v_2-v_3)+0(v_3-v_4)+0v_4\tag{1}$$ Let $a_1,a_2,a_3$ and $a_4$ be arbitrary scalars in $\mathbf{F}$ and assume that the following equation holds $$0=a_1(v_1-v_2)+a_2(v_2-v_3)+a_3(v_3-v_4)+a_4v_4\tag{2}$$ After some algebraic manipulation we arrive at the following equation. $$0=a_1v_1+(a_2-a_1)v_2+(a_3-a_2)v_3+(a_4-a_3)v_4\tag{3}$$ Since the list $v_1,v_2,v_3,v_4$ is linearly independent it follows that given any vector in $span(v_1,v_2,v_3,v_4)$ the choice of scalars is unique and since $$0=0v_1+0v_2+0v_3+0v_4\tag{4}$$ It follows that all the scalars in $(3)$ must be $0$, consequently the only way to produce the $0$ vector as a linear combination of the vectors in the list $v_1-v_2,v_2-v_3,v_3-v_4,v_4$ is that indicated in $(1)$.
$\blacksquare$
Here $\mathbf{F}$ is either $\mathbb{C}$ or $\mathbb{R}$.
• Sounds ok - would just like to point out 'Consider now the following equation.' sounds a bit strange for $(1)$. Perhaps remove that line and $(1)$ and say something along the lines 'by inspection, $(2)$ holds when all the scalars are equal to $0$' at the end of your proof – Shuri2060 Jul 25 '17 at 17:31
• Nitpicking, but you made a small typo just before your first labeled equation, where you write "Assume $v_1,v_{@},v_3,v_4$ is a linearly independent list." Did you mean $v_2$ not $v_{@}$ ? – Vivek Kaushik Jul 25 '17 at 17:37
Equation (1) should be omitted: it's an obvious fact that has no consequence on the rest.
The final argument is too fast: from linear independence of $v_1,v_2,v_3,v_4$ you deduce \begin{cases} a_1=0 \\ a_2-a_1=0 \\ a_3-a_2=0 \\ a_4-a_3=0 \end{cases} and, from this, $a_1=a_2=a_3=a_4=0$. This should be mentioned, although easy.
A different approach is to consider the coordinates of the new vectors with respect to the original ones and so the matrix \begin{bmatrix} 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 1 \end{bmatrix} A standard Gaussian elimination leads to the reduced row echelon form \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} which proves that the coordinate vectors are linearly independent and so also the vectors are.
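The same conclusion can be checked mechanically (an editorial sketch, not part of the original answer) by asking a computer algebra system for the reduced row echelon form and the rank:

```python
import sympy as sp

# Coordinate matrix of v1-v2, v2-v3, v3-v4, v4 with respect to v1, v2, v3, v4.
M = sp.Matrix([
    [ 1,  0,  0, 0],
    [-1,  1,  0, 0],
    [ 0, -1,  1, 0],
    [ 0,  0, -1, 1],
])
print(M.rref()[0])   # the 4x4 identity matrix
print(M.rank())      # 4, so the coordinate vectors are linearly independent
```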
• So your point is that i should show how each of the scalars $a_1,a_2,a_3$ and $a_4$ is 0. – Atif Farooq Jul 25 '17 at 17:59
• The original matrix is lower triangular with $1$'s in the diagonal, hence invertible. – lhf Jul 25 '17 at 18:49
• @AtifFarooq Yes, that's the point; notwithstanding it's easy, it should appear in the proof. – egreg Jul 25 '17 at 19:27
• @lhf That's true, but Gaussian elimination would give more information in case the vectors aren't linearly independent. – egreg Jul 25 '17 at 19:27
It's valid. A note:
Before $(2)$ when you say "...and assume that the following equation holds..." it would be better to perhaps phrase this as "We wish to solve ... for $a_j$." Saying the former means that you're assuming solutions exist - but you're not sure! This is a tiny point, and is perhaps reflected in my writing style.
We may also streamline a bit by saying the following after $(3)$.
As $\{v_1, v_2, v_3, v_4\}$ is a linearly independent set, we must have that $a_1 = 0, \ a_2-a_1=0, \ a_3 - a_2 = 0,$ and $a_4-a_3=0.$ Back substitution yields that each $a_j = 0$, and so the original set is also a linearly independent set. $\square$
Again, these are just small suggestions, but nonetheless your proof is correct. | 2019-06-16T15:10:42 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2371406/checking-for-linear-independence",
"openwebmath_score": 0.9979005455970764,
"openwebmath_perplexity": 154.00636836755788,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9766692271169856,
"lm_q2_score": 0.8807970842359877,
"lm_q1q2_score": 0.8602474075076565
} |
https://math.stackexchange.com/questions/546007/int-02-pi-sqrt1-cosx-dx-4-sqrt2-why | $\int_0^{2\pi} \sqrt{1-\cos(x)}\,dx = 4\sqrt{2}$. Why?
According to the textbook, and Wolfram Alpha the above is correct.
Here is the step by step procedure from Wolfram Alpha for evaluating the indefinite integral:
Take the integral: $$\int\sqrt{1-\cos(x)}\,dx$$ For the integrand $\sqrt{1-\cos(x)}$, substitute $u=1-\cos(x)$ and $du=\sin(x)\,dx$: $$=\int-\frac{1}{\sqrt{2-u}}\,du$$ Factor out constants: $$=-\int\frac{1}{\sqrt{2-u}}\,du$$ For the integrand $1/\sqrt{2-u}$, substitute $s=2-u$ and $ds=-du$: $$=\int\frac{1}{\sqrt{s}}\,ds$$ The integral of $1/\sqrt{s}$ is $2\sqrt{s}$: $$=2\sqrt{s}+\text{constant}$$ Substitute back for $s=2-u$: $$=2\sqrt{2-u}+\text{constant}$$ Substitute back for $u=1-\cos(x)$: $$=2\sqrt{\cos(x)+1}+\text{constant}$$ Which is equivalent for restricted $x$ values to: $$\boxed{=-2\sqrt{1-\cos(x)}\cot\big(\frac{x}{2}\big)+\text{constant}}$$
I understand up to the below (which is a valid solution to the integral): $$2\sqrt{\cos(x)+1}+\text{constant}$$
However, if you evaluate this at $2\pi$ and $0$, you get the same thing, so the definite integral evaluates to zero.
After, you transform the above to: $$-2\sqrt{1-\cos(x)}\cot\big(\frac{x}{2}\big)+\text{constant}$$
The expression is indeterminate at $2\pi$ and $0$ of the form $0 \times \infty$. So I guess you would set up a limit and then use L'Hospital's rule to evaluate the expression at $2\pi$ and $0$ and get the answer to the definite integral?
In any case, all this seems strange. Why should the definite integral evaluated one way give $0$, and in another way give something else?
• You may want to find the antiderivative by multiplying top and bottom by the square root of the conjugate. You will end up with the absolute value of the sine term on top and the square root of $1+\cos x$ in the bottom. This integrates easily. Mind the absolute value though as you apply your upper and lower limits. Oct 30 '13 at 19:57
• Ahh ok that makes sense. So to get rid of the absolute value sign, you can split up the integral from $0$ to $\pi$ and $\pi$ to $2\pi$ Oct 30 '13 at 20:02
• Yes, you ought to split that integral. Oct 30 '13 at 20:05
$$\int_0^{2\pi}\sqrt{1-\cos x}\,dx=\int_0^{2\pi}\sqrt{2\sin^2\frac{x}{2}}\,dx =\sqrt{2}\int_0^{2\pi}\sin\frac{x}{2}\,dx=-2\sqrt{2}\cos\frac{x}{2}\Big|_0^{2\pi}=4\sqrt{2}.$$
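A numerical cross-check of this value (an editorial addition, not part of the original answer): quadrature over $[0,2\pi]$ reproduces $4\sqrt 2 \approx 5.6569$.

```python
import numpy as np
from scipy.integrate import quad

# Integrate sqrt(1 - cos x) over [0, 2*pi] and compare with 4*sqrt(2).
value, err = quad(lambda x: np.sqrt(1 - np.cos(x)), 0, 2 * np.pi)
print(value, 4 * np.sqrt(2))   # both are about 5.656854...
```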
• @RonGordon $x/2 \in [0,\pi] \Rightarrow \sin(x/2) \ge 0$. Oct 30 '13 at 20:49
• Good old $\cos(2x) = \cos^2(x)-\sin^2(x)= 1-2\sin^2(x) = 2\cos^2(x)-1$. Oct 31 '13 at 1:25
Your $u$-substitutions should be injective on their interval of evaluation. Otherwise, you risk running into exactly this sort of issue.
Note that \begin{align}|\sin x| &= \sqrt{\sin^2 x}\\ &= \sqrt{1-\cos^2 x}\\ &= \sqrt{1-\cos x}\sqrt{1+\cos x}\\ &=\sqrt{1-\cos x}\sqrt{2-(1-\cos x)},\end{align} so if you want to use $u=1-\cos x$, then $$\frac{du}{dx}=\sin x=\begin{cases}|\sin x|=\sqrt{1-\cos x}\sqrt{2-(1-\cos x)} & 0\le x\le \pi\\-|\sin x|=-\sqrt{1-\cos x}\sqrt{2-(1-\cos x)} & \pi\le x\le2\pi,\end{cases}$$ so \begin{align}\int_0^{2\pi}\sqrt{1-\cos x}\,dx &= \int_0^\pi\sqrt{1-\cos x}\,dx+\int_\pi^{2\pi}\sqrt{1-\cos x}\,dx\\ &= \int_0^\pi\frac{|\sin x|}{\sqrt{2-(1-\cos x)}}\,dx+\int_\pi^{2\pi}\frac{|\sin x|}{\sqrt{2-(1-\cos x)}}\,dx\\ &= \int_0^\pi\frac{\sin x\,dx}{\sqrt{2-(1-\cos x)}}-\int_\pi^{2\pi}\frac{\sin x\,dx}{\sqrt{2-(1-\cos x)}}\\ &= \int_0^2\frac{du}{\sqrt{2-u}}-\int_2^0\frac{du}{\sqrt{2-u}}\\ &= 2\int_0^2\frac{du}{\sqrt{2-u}}.\end{align} At that point, we can use that antiderivative, with no need to resubstitute.
Alternately, you could note that $\cos(2\pi-x)=\cos x$, so \begin{align}\int_0^{2\pi}\sqrt{1-\cos x}\,dx &= \int_0^\pi\sqrt{1-\cos x}\,dx+\int_\pi^{2\pi}\sqrt{1-\cos x}\,dx\\ &= \int_0^\pi\sqrt{1-\cos x}\,dx+\int_\pi^{2\pi}\sqrt{1-\cos(2\pi-x)}\,dx\\ &= \int_0^\pi\sqrt{1-\cos x}\,dx-\int_{2\pi}^\pi\sqrt{1-\cos(2\pi-x)}\,dx\\ &= \int_0^\pi\sqrt{1-\cos x}\,dx-\int_0^\pi\sqrt{1-\cos x}\frac{d(2\pi-x)}{dx}\,dx\\ &= 2\int_0^\pi\sqrt{1-\cos x}\,dx,\end{align} at which point you can use your $u$-substitution without fear, since the cosine function is injective on $[0,\pi]$. | 2021-09-27T17:18:07 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/546007/int-02-pi-sqrt1-cosx-dx-4-sqrt2-why",
"openwebmath_score": 0.999889612197876,
"openwebmath_perplexity": 403.9850245364357,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.976669234586964,
"lm_q2_score": 0.8807970732843033,
"lm_q1q2_score": 0.8602474033910186
} |
https://math.stackexchange.com/questions/1619416/square-divided-by-absolute-value | # Square divided by absolute value
First time posting on Math SE, with kind of a basic algebra question.
Question
Does the relation:
$$\dfrac{(ab)^2}{|ab|} = \left|ab\right|$$
with $a,b \in \mathbb{R_{\ne 0}}$ always hold?
It seems trivial to me, but Wolfram Alpha gives me a strange answer because it specifies that this is True assuming $a,b$ are positive.
Reasoning
No matter what sign $a,b$ have, we have that $(ab)^2 > 0$ and $\left|ab\right| > 0$. Thus their ratio is greater than zero, and since $(ab)^2 = \left|ab\right|^2$, the ratio is exactly $\left|ab\right|$.
Is what I said correct? If so, is this question a completely useless one? Sorry for the occasionally bad English!
Edit: formatted equations as suggested by Frentos
• Hmmn, wlog for $a, b \in \mathbb{C}$ one could argue that $a^{2}b^{2} < 0$ – Kevin Jan 20 '16 at 11:28
• Welcome to math.SE! See this guide for how to write equations on this site. \dfrac makes larger, easier to read fractions and \left| \right| gives nicer absolute values. – Frentos Jan 20 '16 at 11:32
• @Bacon No, try $a=b=i$ (or any case when $a^2b^2$ is not even real). – Did Jan 21 '16 at 1:45
• @Did - fair point, my comment meant to reflect that in some cases this could be true – Kevin Jan 21 '16 at 9:20
Your statement about Wolfram is not quite correct. It gives various alternate forms for this expression, two of which are:
1. $ab$ assuming $a$ and $b$ are positive
2. $ab\,sgn(a)\,sgn(b)$
(2) is equivalent to $|ab|$
See here
• Indeed, I overlooked the $ab \; sgn(a) \; sgn(b)$ answer! I'll accept this one because it points out that Wolfram was right. – UJIN Jan 20 '16 at 14:30
For real numbers this is always true because the square of a real number equals the square of its absolute value, and in particular $(ab)^2=|ab|^2.$ Perhaps Wolfram has reservations because it considers the possibility of complex numbers?
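A tiny brute-force check of the identity over both sign choices (an editorial aside; the specific numbers are arbitrary):

```python
from itertools import product

# For nonzero real a, b the quantity (ab)^2 / |ab| equals |ab|
# regardless of the signs of a and b.
for sa, sb in product((1, -1), repeat=2):
    a, b = sa * 2.5, sb * 1.5
    assert abs((a * b) ** 2 / abs(a * b) - abs(a * b)) < 1e-12
print("holds for all four sign combinations")
```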
Notice:
• If $a,b\in\mathbb{R^+}$, so $a,b>0$, then:
$$|ab|=|a||b|=ab$$
• If $a,b\in\mathbb{R^+}$, so $a,b>0$, then also:
$$(ab)^2=a^2b^2$$
• If $a,b\in\mathbb{R^-}$, then $a,b<0$, so:
$$((-a)(-b))^2=(ab)^2=a^2b^2$$ | 2020-12-01T06:38:17 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1619416/square-divided-by-absolute-value",
"openwebmath_score": 0.9206000566482544,
"openwebmath_perplexity": 646.0958758089511,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9766692284751636,
"lm_q2_score": 0.8807970732843033,
"lm_q1q2_score": 0.8602473980077626
} |
https://gmatclub.com/forum/a-wire-that-weighs-20-pounds-is-cut-into-two-pieces-so-that-one-of-the-243714.html |
# A wire that weighs 20 pounds is cut into two pieces so that one of the
Math Expert
Joined: 02 Sep 2009
Posts: 54376
A wire that weighs 20 pounds is cut into two pieces so that one of the [#permalink]
30 Jun 2017, 03:28
A wire that weighs 20 pounds is cut into two pieces so that one of the pieces weighs 16 pounds and is 36 feet long. If the weight of each piece is directly proportional to the square of its length, how many feet long is the other piece of wire?
(A) 9
(B) 12
(C) 18
(D) 24
(E) 27
Senior Manager
Joined: 28 May 2017
Posts: 283
Concentration: Finance, General Management
Re: A wire that weighs 20 pounds is cut into two pieces so that one of the [#permalink]
30 Jun 2017, 03:57
Bunuel wrote:
A wire that weighs 20 pounds is cut into two pieces so that one of the pieces weighs 16 pounds and is 36 feet long. If the weight of each piece is directly proportional to the square of its length, how many feet long is the other piece of wire?
(A) 9
(B) 12
(C) 18
(D) 24
(E) 27
Weight of 2nd piece = 4 pound
Since the weight is directly proportional to the square of its length., we may write
$$\frac{16}{36^2}$$ = $$\frac{4}{x^2}$$
Solving above, we get x = 18
Director
Joined: 04 Dec 2015
Posts: 750
Location: India
Concentration: Technology, Strategy
WE: Information Technology (Consulting)
Re: A wire that weighs 20 pounds is cut into two pieces so that one of the [#permalink]
30 Jun 2017, 04:02
Bunuel wrote:
A wire that weighs 20 pounds is cut into two pieces so that one of the pieces weighs 16 pounds and is 36 feet long. If the weight of each piece is directly proportional to the square of its length, how many feet long is the other piece of wire?
(A) 9
(B) 12
(C) 18
(D) 24
(E) 27
$$20$$ pounds wire is cut into two pieces = $$16$$ pounds $$+$$ $$4$$ pounds
The piece weighs $$16$$ pounds is $$36$$ feet long.
Ratio of weight to the square of length for the $$16$$ pound piece $$= \frac{16}{36^2} = \frac{4 * 4}{36 * 36} = \frac{1}{81}$$
Therefore required ratio of other part would also be $$\frac{1}{81}$$
Ratio $$= \frac{4}{x^2} =$$ $$\frac{1}{81}$$
$$x^2 = 81*4$$ $$=> x = \sqrt{81*4}$$
$$x = 9 * 2 = 18$$
Hence length of $$4$$ pounds wire $$= 18$$
Intern
Joined: 30 May 2013
Posts: 29
GMAT 1: 600 Q50 V21
GMAT 2: 640 Q49 V29
Re: A wire that weighs 20 pounds is cut into two pieces so that one of the [#permalink]
30 Jun 2017, 04:07
IMO (c)
Given, weight proportional to square (length)
=> w1 / w2 = square (l1/l2)
=> 16/4 = square (36/l2)
=> l2 = 18
Sent from my GT-N7100 using GMAT Club Forum mobile app
Target Test Prep Representative
Status: Founder & CEO
Affiliations: Target Test Prep
Joined: 14 Oct 2015
Posts: 5807
Location: United States (CA)
Re: A wire that weighs 20 pounds is cut into two pieces so that one of the [#permalink]
04 Jul 2017, 07:52
Bunuel wrote:
A wire that weighs 20 pounds is cut into two pieces so that one of the pieces weighs 16 pounds and is 36 feet long. If the weight of each piece is directly proportional to the square of its length, how many feet long is the other piece of wire?
(A) 9
(B) 12
(C) 18
(D) 24
(E) 27
Since the 20-lb wire is cut into two pieces and one of the pieces weighs 16 lbs, the other piece must weigh 4 lbs. Since the 16-lb piece has length of 36 ft and the weight of each piece is directly proportional to the square of its length, we can let x = the length in feet of the 4-lb piece and create the following proportion:
16/36^2 = 4/x^2
16x^2 = 4 * 36^2
4x^2 = 36^2
x^2 = 36^2/2^2
x^2 = 18^2
x = 18
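A one-line numerical restatement of this proportion (an editorial sketch, not part of the original post): since weight scales with the square of length, length scales with the square root of weight.

```python
# weight is proportional to length^2, so length is proportional to sqrt(weight):
# scale the 36-ft piece by sqrt(4/16).
x = 36 * (4 / 16) ** 0.5
print(x)   # 18.0 feet
```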
| 2019-04-21T08:23:57 | {
"domain": "gmatclub.com",
"url": "https://gmatclub.com/forum/a-wire-that-weighs-20-pounds-is-cut-into-two-pieces-so-that-one-of-the-243714.html",
"openwebmath_score": 0.5843057632446289,
"openwebmath_perplexity": 2340.8452587349466,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_q1_score": 0.9766692271169855,
"lm_q2_score": 0.8807970701552505,
"lm_q1q2_score": 0.8602473937554338
} |
https://matheducators.stackexchange.com/questions/4458/explanation-of-counter-example/4460 | Explanation of Counter-example?
I have a question where we are given a statement and asked to prove whether or not the statement is true. If the statement is true, then we must prove it; otherwise we must provide a counterexample to prove the statement incorrect.
If the statement is false, and the student provides a counter-example; should the student also provide an explanation of why the counter-example disproves the statement?
I believe this depends on how clearly the counterexample is stated.
Consider this claim: $f:\mathbb{R}\to\mathbb{R}$ is continuous $\implies f$ is differentiable.
Imagine a student, let's call him Karl, who says
Take any function of the form $\sum_{n=0} ^\infty a^n \cos(b^n \pi x)$, where $0<a<1$, $b$ is a positive odd integer, and $ab > 1+\frac{3}{2} \pi$.
Would you give him credit for this counterexample? Even presuming that the difficulty level of your course and the talents of your students are such that you did, indeed, expect them to come up with this result, I don't feel like this answer is sufficient. Obviously, Karl must have understood something about the exercise to be able to come up with this answer (never mind the great ingenuity required) but his written argument fails to demonstrate that knowledge.
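As an aside (my own illustration, not part of the original answer): one can at least probe Karl's example numerically. Taking, say, $a=0.5$ and $b=13$ (so that $ab = 6.5 > 1+\frac{3}{2}\pi$), the symmetric difference quotients of the truncated series refuse to settle down as the step shrinks, which is the behaviour one expects from a continuous, nowhere-differentiable limit. This is only a heuristic check of partial sums in floating point, of course, not a proof.

```python
import math

def W(x, a=0.5, b=13, terms=30):
    """Partial sum of the Weierstrass-type series sum_n a^n * cos(b^n * pi * x)."""
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(terms))

x0 = 0.1
for h in (1e-2, 1e-4, 1e-6, 1e-8):
    # Symmetric difference quotient of the partial sum at x0.
    dq = (W(x0 + h) - W(x0 - h)) / (2 * h)
    print(f"h = {h:.0e}   difference quotient = {dq:.3f}")

# The quotients grow in magnitude instead of converging to a single slope,
# which is the behaviour expected of a continuous, nowhere-differentiable limit.
```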
Contrast that with a disproof of this claim: For any function $f:A\to B$ and any sets $S,T\subseteq A$, $f[S]\subseteq f[T]\implies S\subseteq T$.
This is the kind of statement you might consider in an introductory analysis course. I'd expect a student to find an easy counterexample, say by defining $A=\{\heartsuit,\spadesuit\}$ and $B=\{\star\}$ and setting $f(\heartsuit)=f(\spadesuit)=\star$ and $S=\{\heartsuit\}$ and $T=\{\spadesuit\}$.
Would I expect the student to have to show all the details? Do they really need to explicitly say, "Observe that $S,T\subseteq A$ and $f[S]=f[T]$ and yet $S\not\subseteq T$"? Well, that depends on your standards. If you're really training them to write clearly and explicitly (concision be darned), then you might require them to show these details. If you're just checking to see that they can generate such examples, then merely stating the example would suffice.
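If one did want the verification to be completely mechanical, the finite counterexample can even be checked by brute force; here is a throwaway sketch of my own, using strings in place of the card-suit symbols:

```python
A = {"heart", "spade"}
B = {"star"}
f = {"heart": "star", "spade": "star"}   # both elements of A map to the single element of B

S = {"heart"}
T = {"spade"}

image_S = {f[x] for x in S}   # f[S] = {"star"}
image_T = {f[x] for x in T}   # f[T] = {"star"}

print(image_S <= image_T)     # True:  f[S] is a subset of f[T]
print(S <= T)                 # False: yet S is not a subset of T
```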
Main answer: It depends on your standards for such a written argument, which in turn should depend on what you expect the student to be able to understand and demonstrate.
• Sure, let's call him Karl. Sep 25 '14 at 1:26
• Maybe give extra credit to someone who described how to create a counterexample, perhaps in the above case by using $S=\{\heartsuit\}$ and $T=\{\text{something besides a heart}\}$. – rbp Sep 29 '14 at 15:13
Certainly the student should be aware of the expectations beforehand. To this end, I think there are at least two approaches. One is to specify on individual questions whether counterexamples should be explained, and the other is to establish (preferably from the outset, i.e., setting "norms" at the beginning of a course) the expectation that reasoning/justification for all answers will be provided.
More generally, I think explaining counterexamples is important. Let me give just two reasons.
1) Consider the following problem:
Is every whole number divisible by 6 and 8 also divisible by 6 $\times$ 8 = 48? If so, prove your answer; if not, please provide a counterexample. Suppose a student answers simply: 24.
But what does such a student understand? Perhaps she just remembered 24 is divisible by both and realized it is not divisible by 48. Or perhaps she realizes that the issue is that the two divisors in this proposed rule are not relatively prime, and could correct it by using, e.g., 3 and 16 instead. Or, even more seriously, perhaps she has the deeper understanding, but accidentally writes 32. In this last case, the understanding is the deepest, but it would be tough to justify giving the response any credit.
Asking that the counterexample be justified ameliorates this potential problem. Personally, I would expect more than just an explanation of the particular counterexample, e.g., more than:
No, and 24 is a counterexample: It is divisible by 6 because 24 = 4 $\times$ 6, and it is also divisible by 8 because 24 = 3 $\times$ 8. But it is not divisible by 48, because 24/48 is not a whole number.
I would prefer to see an answer that mentions being "relatively prime" or something equivalent, and perhaps even one that gives an example of two numbers that would yield a divisibility rule for 48.
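(Editorial aside, not from the original answer: the "relatively prime" point is easy to probe with a short search, since divisibility by both $a$ and $b$ only guarantees divisibility by $\operatorname{lcm}(a,b)$, and $\operatorname{lcm}(a,b) = ab$ exactly when $\gcd(a,b) = 1$.)

```python
from math import gcd

def counterexamples(a, b, limit=200):
    """Whole numbers up to `limit` divisible by both a and b but not by a * b."""
    return [n for n in range(1, limit + 1)
            if n % a == 0 and n % b == 0 and n % (a * b) != 0]

print(counterexamples(6, 8))    # [24, 72, 120, 168] -- gcd(6, 8) = 2, so the "rule" fails
print(counterexamples(3, 16))   # []                 -- gcd(3, 16) = 1, so no counterexample exists
print(gcd(6, 8), gcd(3, 16))    # 2 1
```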
2) More subtle and perhaps not related to the specific question you have in mind (if there is one): Note that (speaking slightly messily) counterexamples can settle *for all* questions, but not *there exists* questions. The previous example is the assertion that *all* whole numbers (etc.). If this turns out to be false, then it can be settled with a counterexample. If it turns out to be true, then it will need a further proof or justification. On the other hand, a *there exists* statement that is true might be settled with a single example. If it turns out to be false, then it will need a further proof or justification.
This latter point about proving/disproving by (counter)example in the context of *for all* vs. *there exists* assertions may seem a bit pedantic, but it is a place in which many students who are just beginning to understand ideas around "proof" can struggle quite a bit.
• I don't think there exists statements are really that different because you can rewrite them as for all statements, i.e. $\forall x\ P(x) \iff \neg\exists x\ \neg P(x)$. (Of course maybe students don't know this, in which case to their minds there is a difference...) Sep 25 '14 at 3:53
• @DavidZ Yes, this is the underlying "reason" for the phenomenon mentioned in 2 above (i.e., that an example can disprove a for all statement or prove a there exists statement). And yes, one of the primary difficulties is precisely the one to which you allude parenthetically: For many students who are just getting into proof-based mathematics (or other mathematics with more formal reasoning) the equivalence you mention is neither known nor obvious. Sep 25 '14 at 18:08 | 2021-12-07T05:05:32 | {
"domain": "stackexchange.com",
"url": "https://matheducators.stackexchange.com/questions/4458/explanation-of-counter-example/4460",
"openwebmath_score": 0.7324725985527039,
"openwebmath_perplexity": 319.8075443544471,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.967899295134923,
"lm_q2_score": 0.8887587905460026,
"lm_q1q2_score": 0.8602290069144426
} |