https://www.physicsforums.com/threads/sterlings-approximation.781212/
# Stirling's Approximation 1. Nov 10, 2014 ### kq6up 1. The problem statement, all variables and given/known data Find the limit of: $\frac { \Gamma (n+\frac { 3 }{ 2 } ) }{ \sqrt { n } \Gamma (n+1) }$ as $n\rightarrow \infty$. 2. Relevant equations $\Gamma (p+1)=p^{ p }e^{ -p }\sqrt { 2\pi p }$ 3. The attempt at a solution Mathematica and Wolfram Alpha gave the limit as 1. My solution was $\frac{1}{\sqrt{e}}$. My work is here https://plus.google.com/u/0/1096789...6080238892798007794&oid=109678926107781868876 Sorry about the photo, but my school is about to set the alarm, and I need to get out of here. Hopefully it is visible. Thanks, Chris 2. Nov 10, 2014 ### Ray Vickson You wrote $$\Gamma (p+1)=p^{ p }e^{ -p }\sqrt { 2\pi p }\; \Longleftarrow \; \text{FALSE!}$$ Perhaps you mean $\Gamma(p+1) \sim p^p e^{-p} \sqrt{2 \pi p}$, where $\sim$ means "is asymptotic to". That is a very different type of statement. Anyway, I (or, rather, Maple) get a different final answer (=1), using the asymptotic result above. Somewhere you must have made an algebraic error. Last edited: Nov 10, 2014 3. Nov 11, 2014 ### kq6up Sorry, yes, I meant asymptotic to. Were you able to see my solution via the Google Plus link? Mary Boas' manual also gives a limit of one. Our solutions start out the same, but I make some approximations that do not seem to be justified by her steps. However, I am unable to follow her steps. I can post that too if you are interested in taking a look at her solution. Thanks, Chris 4. Nov 11, 2014 ### Ray Vickson I don't have her book, and cannot really follow your screenshot.
Anyway, if $r(n)$ is your ratio, and using $\Gamma(p+1) \sim c p^{p+1/2} e^{-p}$, the numerator $N(n)$ has asymptotic form $$N(n) = \Gamma(n+3/2) \sim c (n+1/2)^{n+1/2 + 1/2} e^{-(n+1/2)} = c (n+1/2)^{n+1} e^{-n} e^{-1/2}$$ The denominator $D(n)$ has the asymptotic form $$D(n) = \sqrt{n} \: \Gamma(n+1) \sim n^{1/2} c n^{n+1/2} e^{-n} = c n^{n+1} e^{-n}$$ Therefore, $$r(n) \sim \frac{(n+1/2)^{n+1} e^{-n} e^{-1/2}}{n^{n+1} e^{-n}} = \left( 1 + \frac{1}{2} \frac{1}{n} \right)^{n+1} e^{-1/2}$$ Since $\lim \,(1 + a/n)^{n+1} = \lim \,(1+a/n)^n = e^a$, we are done: $\lim r(n) = 1$. 5. Nov 11, 2014 ### kq6up That is pretty much her solution. However, I can actually follow yours better. I made an approximation that was not justified. Thanks, Chris
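The limit derived above can be checked numerically. This is a quick sketch (not part of the original thread) that evaluates the ratio through log-gamma functions so that large $n$ does not overflow:

```python
import math

def r(n):
    # r(n) = Gamma(n + 3/2) / (sqrt(n) * Gamma(n + 1)), computed via
    # log-gamma to avoid overflowing Gamma itself for large n.
    return math.exp(math.lgamma(n + 1.5) - 0.5 * math.log(n) - math.lgamma(n + 1))

for n in (10, 1000, 100000):
    print(n, r(n))  # the values approach 1 as n grows
```

The ratio approaches 1 roughly like $1 + \frac{3}{8n}$, consistent with the asymptotic argument above.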
2017-08-18T08:29:42
{ "domain": "physicsforums.com", "url": "https://www.physicsforums.com/threads/sterlings-approximation.781212/", "openwebmath_score": 0.8748787641525269, "openwebmath_perplexity": 1195.9527292392374, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9693241991754918, "lm_q2_score": 0.8688267864276108, "lm_q1q2_score": 0.8421748289761599 }
https://math.stackexchange.com/questions/1077774/lottery-odds-calculated-in-your-head-or-pen-and-paper/1077784
So I am working out the odds for a lottery, picking 4 numbers between 1-35. The equation is: $$\mbox{odds}=\frac{35\cdot 34\cdot 33\cdot 32}{1\cdot 2\cdot 3\cdot 4}=52360$$ Yes, I can work this out on a calculator with ease. However, how can I work this out on pen and paper, or in my head with ease? Are there any type of methods or cheats I could use to calculate this quickly? • Do you want to find the exact answer or is approximate ok? Dec 22, 2014 at 16:43 • Either or @NickH Dec 22, 2014 at 16:46 Update: I got a very good Idea from comments so I updated the answer: $$T=\large\require{cancel}\frac{35\cdot34\cdot\color{purple}{\cancel{33}^{11}}\cdot\color{blue}{\cancel{32}^{\color{red}{\cancel{8}^{4}}}}}{1\cdot\color{red}{\cancel{2}}\cdot\color{purple}{\cancel{3}}\cdot\color{blue}{\cancel{4}}}=35\cdot34\cdot44$$ Now a good fact is square of a two digit number can be done by multiplying the tens digit by its successor and just appending 25 at the end like $35^2=\widehat{3\cdot4}\;25=1225$ $$\large35\cdot34\cdot44=(35^2-35)\cdot44=(1225-35)\cdot44=1190\cdot44$$ So: $$\large T=1190\cdot44\\\large=1190\cdot4\cdot11\\\large=(4000+400+360)\cdot11\\\large=4760\cdot11\\\large=(47600+4760)=52360$$ • Pretty typesetting :) Dec 22, 2014 at 16:51 • Since it's $34$ and $35$ I think it would be faster to multiply $35\times35$ and subtract 35 from the result. for all numbers of the form $n5^2$ where $n$ is the tens digit, the answer is $n\times(n+1)$ for the first digits and 25 for the last: $35^2 \Rightarrow 3\times 4 = 12$, put $25$ behind and you get $1225$. $1225-35 = 1190$. Dec 22, 2014 at 16:57 • Very nice use of colour. +1 Dec 22, 2014 at 17:22 • @fvel: Alternatively: 35 × 34 = 70 × 17 = 700 + 490 = 1190. Dec 23, 2014 at 5:11 • @fvel just added the good Idea Dec 24, 2014 at 5:42 If you're happy with something very approximate, that you can do in your head, $35\times 34\times 33\times 32$ is about $33^4$. $33^2$ is about $1,100$, so the numerator is about $1.2$ million. 
The denominator, $1\times 2\times 3\times 4 = 24$, is about $25$, which is $100/4$, so the answer is about $1.2$ million divided by a hundred, times four, which is $48,000$. $\frac{35\cdot34\cdot33\cdot32}{2\cdot3\cdot4}=35\cdot17\cdot11\cdot8=595\cdot11\cdot8=6545\cdot8=52360$ On pen and paper, this is just a few multiplies. It is good to cancel factors first, getting $35\cdot34\cdot11\cdot4$. Then look for easy ones. I would now go to $140\cdot34\cdot11$. Since multiplying by $11$ is just an addition, I would next go to $140\cdot34$. In your head, you probably won't get an exact answer, so look for approximations. I would say $35/4! \approx 1.5$, and multiplying two of the other factors gives about $1000$, so we have $1.5\cdot34\cdot1000 \approx 51000$. Knowing lots of arithmetic facts helps a lot. I tend to use "completing the squares" a lot for exact mental calculations. $35\cdot34\cdot44 = 35\cdot(39-5)\cdot(39+5)=35\cdot(39\cdot39-5\cdot5)$ $= 35\cdot(40\cdot40-40-39-25)=35\cdot(1521-25)=35\cdot(1500-4)$ $=7\cdot5\cdot(1500-4)=7\cdot(7500-20)$ $=49000+3500-140=52500-140=52360$ Yes, superficially this looks quite long. However: • Each step is very tiny • Very little working memory is used $${35\cdot34\cdot33\cdot32\over1\cdot2\cdot3\cdot4}={5\cdot7\cdot2\cdot17\cdot3\cdot11\cdot8\cdot4\over3\cdot8}=10\cdot7\cdot17\cdot11\cdot4$$ The factor of $10$ will just give an extra $0$ at the end. The "difficult" multiplications are $$7\cdot17=70+49=119$$ $$119\cdot11=1190+119=1309$$ and $$1309\cdot4=5236$$ so the final answer is $52360$. The trickiest part (for me) was getting the carry correct when adding $1190+119$. Alternatively, you can do the first two multiplications as $$17\cdot11=170+17=187$$ and $$7\cdot187=7(200-13)=1400-91=1309$$
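The exact count and the cancellations used in the answers above are easy to cross-check programmatically (a sketch):

```python
import math

# C(35, 4): choosing 4 numbers out of 35, order irrelevant.
exact = (35 * 34 * 33 * 32) // (1 * 2 * 3 * 4)
assert exact == math.comb(35, 4) == 52360

# The cancellations from the answers: 33/3 = 11 and 32/(2*4) = 4,
# leaving 35*34*11*4, which regroups as 1190*44.
assert exact == 35 * 34 * 11 * 4 == 1190 * 44
print(exact)  # 52360
```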
2022-05-24T21:01:24
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1077774/lottery-odds-calculated-in-your-head-or-pen-and-paper/1077784", "openwebmath_score": 0.7168354392051697, "openwebmath_perplexity": 359.23136412460775, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9693241947446617, "lm_q2_score": 0.8688267830311354, "lm_q1q2_score": 0.8421748218342502 }
https://chegg.net/page/lens-maker-equation-3b244a
# lens maker equation The lens maker equation for a thin lens is given by $$\frac{1}{f} = (\mu - 1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right)$$ In many cases aberrations can be compensated for to a great extent by using a combination of simple lenses with complementary aberrations. Contributed by: S. M. Blinder (March 2011) Published: March 7 2011. The focal length is positive for a converging lens but negative for a diverging lens, giving a virtual focus, indicated by a cone of gray rays. The width represents the distance between the faces of the lens along the optical axis. The lens maker formula is used to construct a lens with the specified focal length. The value of the width is restrained by the slider so that the lens faces never intersect anywhere. Thus, $$\frac{1}{f} = (n - 1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right), \qquad (9)$$ which is the lensmaker's formula. Thus for a doubly convex lens, $R_1$ is positive while $R_2$ is negative. The equation can also be written as $$\frac 1f = \left[ \frac {n_1}{n_2}~-~1 \right] ~\times~\left[ \frac {R_2~-~R_1}{R_1~\times~R_2} \right]$$ This is the lens maker formula derivation. [Since it is a biconvex lens, $f$ is positive, $R_1$ is positive and $R_2$ is negative.]
You may substitute the same values for the focal length in air and the radii of curvature of the faces in the lens maker's equation and satisfy yourself that the refractive index of the lens is 1.5. The lensmaker equation is used to determine whether a lens will behave as a converging or diverging lens, based on the curvature of its faces and the relative indices of the lens material and the surrounding medium. The lensmaker's equation relates the focal length of a simple lens to the spherical curvature of its two faces, where $R_1$ and $R_2$ represent the radii of curvature of the lens surfaces closest to the light source (on the left) and the object (on the right). Writing the lens equation in terms of the object and image distances, $$\frac{1}{o} + \frac{1}{i} = \frac{1}{f}. \qquad (8)$$ But $o_1$ and $i_2$ are the object and image distances of the whole lens, so $o_1 = o$ and $i_2 = i$. Let us consider a thin lens made up of a medium of refractive index $\mu$. Also, put the numerical values of $R_1$ and $R_2$ equal to $f$.
http://demonstrations.wolfram.com/LensmakersEquation/ Lensmaker's Equation formula: $$\frac{1}{f} = \left(\frac{n_l}{n_m} - 1\right)\left(\frac{1}{r_1} - \frac{1}{r_2}\right)$$ where: $f$: focal length, in meters; $n_l$: refractive index of the lens material (dimensionless); $n_m$: refractive index of the ambient medium (dimensionless); $r_1$: curvature radius of the first surface, in meters; $r_2$: curvature radius of the second surface, in meters. This equation holds for all types of thin lenses.
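As a quick sketch, the lensmaker's equation quoted above can be evaluated directly; the radii below are hypothetical values for a biconvex glass lens in air, not taken from the page:

```python
def focal_length(n_lens, n_medium, r1, r2):
    # Thin-lens lensmaker's equation: 1/f = (n_l/n_m - 1) * (1/r1 - 1/r2).
    # Sign convention: r1 > 0 and r2 < 0 for a biconvex lens.
    inv_f = (n_lens / n_medium - 1.0) * (1.0 / r1 - 1.0 / r2)
    return 1.0 / inv_f

# Glass (n = 1.5) in air (n = 1.0), both radii 10 cm in magnitude:
f = focal_length(1.5, 1.0, 0.10, -0.10)
print(f)  # positive focal length => converging lens
```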
2021-05-08T22:38:15
{ "domain": "chegg.net", "url": "https://chegg.net/page/lens-maker-equation-3b244a", "openwebmath_score": 0.4319384694099426, "openwebmath_perplexity": 1802.7147311770025, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9693241991754918, "lm_q2_score": 0.868826769445233, "lm_q1q2_score": 0.8421748125147301 }
https://usa.cheenta.com/tag/algebra/
Categories ## Least Possible Value Problem | AMC-10A, 2019 | Question 19 Try this beautiful problem from Algebra based on Least Possible Value. ## Least Possible Value – AMC-10A, 2019 – Problem 19 What is the least possible value of $(x+1)(x+2)(x+3)(x+4)+2019$ where $x$ is a real number? • $2024$ • $2018$ • $2020$ ### Key Concepts Algebra Least value Answer: $2018$ AMC-10A (2019) Problem 19 Pre College Mathematics ## Try with Hints To find the least possible value of $(x+1)(x+2)(x+3)(x+4)+2019$, first regroup and expand the expression: $(x+1)(x+4)(x+2)(x+3)+2019=(x^2+5x+4)(x^2+5x+6)+2019$ Let us take $x^2+5x+5=m$; then the above expression becomes $(m-1)(m+1)+2019$ $\Rightarrow m^2-1+2019$ $\Rightarrow m^2+2018$ Can you now finish the problem? Clearly in $m^2+2018$, the term $m^2$ is non-negative (the square of any real number is non-negative) and its least value is $0$, attained for a real $x$ since $x^2+5x+5=0$ has real roots. Can you finish the problem? Therefore the minimum value of $m^2+2018$ is $2018$, since $m^2 \geq 0$ for all real $m$. Categories ## Numbers of positive integers | AIME I, 2012 | Question 1 Try this beautiful problem from the American Invitational Mathematics Examination, AIME 2012, based on Numbers of positive integers. ## Numbers of positive integers – AIME 2012 Find the number of positive integers with three not necessarily distinct digits, $abc$, with $a \neq 0$ and $c \neq 0$ such that both $abc$ and $cba$ are multiples of $4$. • is 107 • is 40 • is 840 • cannot be determined from the given information ### Key Concepts Integers Number Theory Algebra AIME, 2012, Question 1. Elementary Number Theory by David Burton.
## Try with Hints A number is divisible by $4$ exactly when the number formed by its last two digits is divisible by $4$, so $abc$ and $cba$ are both multiples of $4$ exactly when $4 \mid 10b+c$ and $4 \mid 10b+a$. Since $10b \equiv 2b \pmod 4$, there are two cases: if $b$ is even, then $a$ and $c$ must each be $\equiv 0 \pmod 4$, giving the two nonzero choices $4$ and $8$; if $b$ is odd, then $a$ and $c$ must each be $\equiv 2 \pmod 4$, giving the two choices $2$ and $6$. Combining both cases, every $b$ gives a pair of choices for $a$ and a pair of choices for $c$. So for $10$ values of $b$, with $2$ choices of $a$ and $2$ choices of $c$ for every $b$, the number of ways is $10 \times 2 \times 2 = 40$. Categories ## Arithmetic Sequence Problem | AIME I, 2012 | Question 2 Try this beautiful problem from the American Invitational Mathematics Examination, AIME 2012, based on Arithmetic Sequence. ## Arithmetic Sequence Problem – AIME 2012 The terms of an arithmetic sequence add to $715$. The first term of the sequence is increased by $1$, the second term is increased by $3$, the third term is increased by $5$, and in general, the $k$th term is increased by the $k$th odd positive integer. The terms of the new sequence add to $836$. Find the sum of the first, last, and middle terms of the original sequence. • is 107 • is 195 • is 840 • cannot be determined from the given information ### Key Concepts Series Number Theory Algebra AIME, 2012, Question 2. Elementary Number Theory by David Burton. ## Try with Hints After adding the odd numbers, the total of the sequence increases by $836 - 715 = 121 = 11^2$. Since the sum of the first $n$ positive odd numbers is $n^2$, there must be $11$ terms in the sequence, so the mean of the sequence is $\frac{715}{11} = 65$. Since the first, last, and middle terms are centered around the mean, their sum is $65 \times 3 = 195$. Hence option B is correct. Categories ## Length and Triangle | AIME I, 1987 | Question 9 Try this beautiful problem from the American Invitational Mathematics Examination I, AIME I, 1987 based on Length and Triangle.
## Length and Triangle – AIME I, 1987 Triangle ABC has a right angle at B, and contains a point P for which PA=10, PB=6, and $\angle$APB=$\angle$BPC=$\angle$CPA. Find PC. • is 107 • is 33 • is 840 • cannot be determined from the given information ### Key Concepts Angles Algebra Triangles AIME I, 1987, Question 9 Geometry Vol I to Vol IV by Hall and Stevens ## Try with Hints Let PC be $x$, with $\angle$APB=$\angle$BPC=$\angle$CPA=$120^\circ$. Applying the law of cosines in $\Delta$APB, $\Delta$BPC, $\Delta$CPA with $\cos 120^\circ=-\frac{1}{2}$ gives $AB^{2}$=36+100+60=196, $BC^{2}$=36+$x^{2}$+6x, $CA^{2}$=100+$x^{2}$+10x By the Pythagorean theorem, $AB^{2}+BC^{2}=CA^{2}$ or, $x^{2}$+10x+100=$x^{2}$+6x+36+196 or, 4x=132 or, x=33. Categories ## Algebra and Positive Integer | AIME I, 1987 | Question 8 Try this beautiful problem from the American Invitational Mathematics Examination I, AIME I, 1987 based on Algebra and Positive Integer. ## Algebra and Positive Integer – AIME I, 1987 What is the largest positive integer n for which there is a unique integer k such that $\frac{8}{15} <\frac{n}{n+k}<\frac{7}{13}$? • is 107 • is 112 • is 840 • cannot be determined from the given information ### Key Concepts Digits Algebra Numbers AIME I, 1987, Question 8 Elementary Number Theory by David Burton ## Try with Hints Simplifying the inequality gives 104(n+k)<195n<105(n+k) or, 0<91n-104k<n+k From $91n-104k<n+k$ we get $k>\frac{6n}{7}$, and $0<91n-104k$ gives $k<\frac{7n}{8}$ so, 48n<56k<49n For $n=112$ this gives $96<k<98$, so $k=97$ is the unique integer; thus the largest value of n is 112. Categories ## Positive Integer | PRMO-2017 | Question 1 Try this beautiful Positive Integer Problem from Algebra, from PRMO 2017, Question 1.
## Positive Integer – PRMO 2017, Question 1 How many positive integers less than 1000 have the property that the sum of the digits of each such number is divisible by 7 and the number itself is divisible by $3?$ • $9$ • $7$ • $28$ ### Key Concepts Algebra Equation Multiplication Answer: $28$ PRMO-2017, Problem 1 Pre College Mathematics ## Try with Hints Let $n$ be the positive integer less than 1000 and $s$ be the sum of its digits; then $3 \mid n$ and $7 \mid s$. Since $3\mid n \Rightarrow 3\mid s$, therefore $21 \mid s$. Can you now finish the problem? Also $n<1000 \Rightarrow s \leq 27$, therefore $s=21$. Clearly, n must be a 3-digit number. Let $x_{1}, x_{2}, x_{3}$ be the digits; then $x_{1}+x_{2}+x_{3}=21 \quad (1)$ where $1 \leq x_{1} \leq 9$ and $0 \leq x_{2}, x_{3} \leq 9$ $\Rightarrow x_{2}+x_{3}=21-x_{1} \leq 18$ $\Rightarrow x_{1} \geq 3$ Can you finish the problem? For $x_{1}=3,4, \ldots, 9,$ equation (1) has $1,2,3, \ldots, 7$ solutions, therefore the total number of solutions of equation (1) is $1+2+\ldots+7=\frac{7 \times 8}{2}=28$ Categories ## Distance and Spheres | AIME I, 1987 | Question 2 Try this beautiful problem from the American Invitational Mathematics Examination I, AIME I, 1987 based on Distance and Spheres. ## Distance and Sphere – AIME I, 1987 What is the largest possible distance between two points, one on the sphere of radius 19 with center (-2,-10,5) and the other on the sphere of radius 87 with center (12,8,-16)? • is 107 • is 137 • is 840 • cannot be determined from the given information ### Key Concepts Angles Algebra Spheres AIME I, 1987, Question 2 Geometry Vol I to Vol IV by Hall and Stevens ## Try with Hints The distance between the centers of the spheres is $\sqrt{(12-(-2))^{2}+(8-(-10))^{2}+(-16-5)^{2}}$ =$\sqrt{14^{2}+18^{2}+21^{2}}$=31 The largest possible distance = sum of the two radii + distance between the centers = 19+87+31=137.
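The distance computation in the spheres problem can be verified in a few lines (a sketch):

```python
import math

center1, radius1 = (-2, -10, 5), 19
center2, radius2 = (12, 8, -16), 87

# The farthest pair of points lies on the line through the two centers,
# so the maximum distance is r1 + r2 + distance between centers.
d = math.dist(center1, center2)
print(d)                      # 31.0
print(radius1 + radius2 + d)  # 137.0
```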
Categories ## Arithmetic Mean | AIME I, 2015 | Question 12 Try this beautiful problem from the American Invitational Mathematics Examination, AIME, 2015 based on Arithmetic Mean. ## Arithmetic Mean of Number Theory – AIME 2015 Consider all 1000-element subsets of the set {1, 2, 3, … , 2015}. From each such subset choose the least element. The arithmetic mean of all of these least elements is $\frac{p}{q}$, where $p$ and $q$ are relatively prime positive integers. Find $p + q$. • is 107 • is 431 • is 840 • cannot be determined from the given information ### Key Concepts Inequalities Algebra Number Theory AIME, 2015, Question 12 Elementary Number Theory by David Burton ## Try with Hints Each 1000-element subset $\{a_1, a_2, a_3, \ldots, a_{1000}\}$ of $\{1,2,3,\ldots,2015\}$ with $a_1<a_2<a_3<\ldots<a_{1000}$ contributes $a_1$ to the sum of least elements. Now consider the shifted set $\{a_1+1, a_2+1, a_3+1, \ldots, a_{1000}+1\}$: there are exactly $a_1$ ways to choose a positive integer $k$ such that $k<a_1+1<a_2+1<a_3+1<\ldots<a_{1000}+1$ ($k$ can be anything from $1$ to $a_1$ inclusive). Thus, the number of ways to choose a set of the form $\{k, a_1+1, a_2+1, a_3+1, \ldots, a_{1000}+1\}$ is equal to the sum of the least elements. But choosing such a set is the same as choosing a 1001-element subset from $\{1,2,3,\ldots,2016\}$! Hence the average is $\frac{\binom{2016}{1001}}{\binom{2015}{1000}}=\frac{2016}{1001}=\frac{288}{143}$. Then $p+q=288+143=431$ Categories ## Algebra and Combination | AIME I, 2000 Question 3 Try this beautiful problem from the American Invitational Mathematics Examination I, AIME I, 2000 based on Algebra and Combination.
## Algebra and combination – AIME 2000 In the expansion of $(ax+b)^{2000}$, where a and b are relatively prime positive integers, the coefficients of $x^{2}$ and $x^{3}$ are equal; find a+b • is 107 • is 667 • is 840 • cannot be determined from the given information ### Key Concepts Algebra Equations Combination AIME, 2000, Question 3 Elementary Algebra by Hall and Knight ## Try with Hints Here the coefficient of $x^{2}$ equals the coefficient of $x^{3}$ in the same expression, so ${2000 \choose 1998}a^{2}b^{1998}$=${2000 \choose 1997}a^{3}b^{1997}$ then $b=\frac{1998}{3}a=666a$ where a and b are relatively prime, that is a=1, b=666, then a+b=666+1=667. Categories ## Algebraic Equation | AIME I, 2000 Question 7 Try this beautiful problem from the American Invitational Mathematics Examination I, AIME I, 2000 based on Algebraic Equation. ## Algebraic Equation – AIME 2000 Suppose that x,y and z are three positive numbers that satisfy the equations xyz=1, $x+\frac{1}{z}=5$ and $y+\frac{1}{x}=29$; then $z+\frac{1}{y}$=$\frac{m}{n}$ where m and n are relatively prime, find m+n • is 107 • is 5 • is 840 • cannot be determined from the given information ### Key Concepts Algebra Equations Integers AIME, 2000, Question 7 Elementary Algebra by Hall and Knight ## Try with Hints Here $x+\frac{1}{z}=5$, so 1=z(5-x); putting xyz=1 gives 5-x=xy. Together with $y=29-\frac{1}{x}$ this gives 5-x=x$(29-\frac{1}{x})$=29x-1, so x=$\frac{1}{5}$. Then y=29-5=24 and z=$\frac{1}{5-x}$=$\frac{5}{24}$, so $z+\frac{1}{y}$=$\frac{5}{24}+\frac{1}{24}$=$\frac{1}{4}$, giving m+n=1+4=5.
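The solution to the last system can be double-checked with exact rational arithmetic (a sketch):

```python
from fractions import Fraction as F

x = F(1, 5)
y = 29 - 1 / x        # from y + 1/x = 29
z = 1 / (x * y)       # from xyz = 1

# Verify the two given equations, then evaluate z + 1/y.
assert x + 1 / z == 5
assert y + 1 / x == 29
print(z + 1 / y)  # 1/4
```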
2021-06-17T17:31:01
{ "domain": "cheenta.com", "url": "https://usa.cheenta.com/tag/algebra/", "openwebmath_score": 0.6250997185707092, "openwebmath_perplexity": 1717.9663031514228, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. Yes\n2. Yes", "lm_q1_score": 0.9597620596782467, "lm_q2_score": 0.8774767954920548, "lm_q1q2_score": 0.8421689365613222 }
https://codereview.stackexchange.com/questions/222869/find-missing-element-in-an-array-of-unique-elements-from-0-to-n
# Find missing element in an array of unique elements from 0 to n The task is taken from LeetCode Given an array containing n distinct numbers taken from 0, 1, 2, ..., n, find the one that is missing from the array. Example 1: Input: [3,0,1] Output: 2 Example 2: Input: [9,6,4,2,3,5,7,0,1] Output: 8 Note: Your algorithm should run in linear runtime complexity. Could you implement it using only constant extra space complexity? My approach is to take the difference between the sum from 0 to n and the sum of the elements in the array. For the sum of the numbers from 0 to n I use the Gauss formula: (n * (n + 1)) / 2. For the sum in the array I have to iterate through the whole array and sum up the elements. My solution has time complexity of $O(n)$ and space complexity of $O(1)$. /** * @param {number[]} nums * @return {number} */ var missingNumber = function(nums) { if (nums.length === 0) return -1; const sumOfNums = nums.reduce((ac, x) => ac + x); const sumTillN = (nums.length * (nums.length + 1)) / 2; return sumTillN - sumOfNums; }; • This question is being discussed here: meta.stackexchange.com/q/329996 – Peilonrayz Jun 24 '19 at 16:08 • The discussion, mentioned above, was removed because it was deemed off-topic. It's a legal issue, not suited for discussion on Meta (or anywhere?). I made the point that LeetCode's terms say: "You agree not to copy or publish our content.", which makes sense. In general, material on the web is copyrighted, unless explicitly stated otherwise. However, one could argue that this is a case of "fair use" because there's only the intent to discuss the problem and not to pirate it. – KIKO Software Jun 24 '19 at 18:48 I don't think there is a better than $O(n)$ time and $O(1)$ space solution to this problem. However you can simplify the code a little (trivially) by subtracting from the expected total.
const findVal = nums => nums.reduce((t, v) => t - v, nums.length * (nums.length + 1) / 2); Or function findVal(nums) { var total = nums.length * (nums.length + 1) / 2; for (const v of nums) { total -= v } return total; } Note that (x * (x + 1)) / 2 === x * (x + 1) / 2, since * and / have equal precedence and associate left to right, so the extra parentheses are redundant.
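For comparison, the same subtract-from-the-expected-total idea, sketched in Python:

```python
def missing_number(nums):
    # The sum of 0..n is n*(n+1)/2; subtracting the actual elements
    # leaves exactly the one missing value.
    n = len(nums)
    return n * (n + 1) // 2 - sum(nums)

print(missing_number([3, 0, 1]))                    # 2
print(missing_number([9, 6, 4, 2, 3, 5, 7, 0, 1]))  # 8
```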
2020-01-23T04:01:55
{ "domain": "stackexchange.com", "url": "https://codereview.stackexchange.com/questions/222869/find-missing-element-in-an-array-of-unique-elements-from-0-to-n", "openwebmath_score": 0.45640912652015686, "openwebmath_perplexity": 1611.9697492137068, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9597620562254525, "lm_q2_score": 0.8774767890838836, "lm_q1q2_score": 0.8421689273812558 }
http://mathschallenge.net/full/rational_roots_quadratic
#### Problem In the quadratic equation $ax^2 + bx + c = 0$, the coefficients $a$, $b$, $c$ are non-zero integers. Let $b = -5$. By making $a = 2$ and $c = 3$, the equation $2x^2 - 5x + 3 = 0$ has rational roots. But what is most remarkable is that it is possible to interchange these coefficients in any order and the quadratic will still have rational roots. Suppose that $b$ is chosen at random. Prove that there always exist coefficients $a$ and $c$ that will produce rational roots. Moreover, once determined, no matter how these three coefficients are shuffled, the quadratic equation will still yield rational roots. #### Solution We shall prove this in two different ways. Proof 1: Although the first proof is elegant and provides a method for determining one set of values of $a$ and $c$, given $b$, it tells us nothing about the true nature of the problem, nor does it reveal that there are infinitely many different sets of values of $a$ and $c$ that can be determined for any given $b$. It can be verified that the quadratic equation $2x^2 + x - 3 = 0$ has rational roots, and every arrangement of coefficients will yield rational roots. But the important observation is to note that any integral multiple of this "base" equation, with $b = 1$, will lead to another quadratic with rational roots for every arrangement. For example, if we multiply by 7, we get $14x^2 + 7x - 21 = 0$. This is equivalent to making $b = 7$ and determining that $a = 14$ and $c = -21$ produce a quadratic with rational roots for every arrangement of the coefficients. Proof 2: This proof is perhaps the most revealing and makes use of the fact that the discriminant, $b^2 - 4ac$, must be the square of a rational if the roots of the equation are to be rational. In fact, because all the coefficients are integers, we can go further by saying that the discriminant must be a perfect square. We also note that interchanging the positions of $a$ and $c$ has no effect on the rationality of the discriminant.
Hence we only need consider the three cases of $a$, $b$, and $c$ being the coefficient of the $x$ term in the general quadratic. We shall initially consider the cases where $b$ and $c$ are the coefficient of $x$.

Let $b^2 - 4ac = r^2$ and $c^2 - 4ab = s^2$. We need to show that $r$ and $s$ are both integers. \begin{align}\therefore r^2 - s^2 &= b^2 - c^2 + 4ab - 4ac\\(r + s)(r - s) &= (b - c)(b + c + 4a)\end{align} Let $r + s = b + c + 4a$ and $r - s = b - c$. Adding both equations we get $2r = 2b + 4a \implies r = b + 2a$, and subtracting gives $2s = 2c + 4a \implies s = c + 2a$. In other words, we have shown that $r$ and $s$ are both integers.

We must now show that the third discriminant, $a^2 - 4bc$, is also a perfect square.

Now $r^2 = (b + 2a)^2 = b^2 + 4ab + 4a^2 = b^2 - 4ac \implies 4a^2 + 4ab + 4ac = 0$

Similarly, $s^2 = (c + 2a)^2 = c^2 + 4ac + 4a^2 = c^2 - 4ab \implies 4a^2 + 4ab + 4ac = 0$

As $a \ne 0$, from $4a(a + b + c) = 0$ we deduce that $a + b + c = 0$.

As $a = -(b + c)$, squaring both sides gives $a^2 = b^2 + c^2 + 2bc$. Subtracting $4bc$ from both sides: $a^2 - 4bc = b^2 + c^2 - 2bc = (b - c)^2$   QED

Is it possible that a quadratic equation exists for which any combination of the coefficients will yield rational roots and $a + b + c \ne 0$?

Problem ID: 274 (21 Apr 2006)     Difficulty: 3 Star
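Proof 1's construction lends itself to a quick brute-force check. The following Python sketch (the helper names are my own) verifies that every permutation of the coefficients of integer multiples of the base equation $2x^2 + x - 3 = 0$ has a perfect-square discriminant, and that the coefficients in the worked example $14x^2 + 7x - 21 = 0$ sum to zero, as Proof 2 predicts:

```python
from itertools import permutations
import math

def is_perfect_square(n):
    """True if the integer n is a non-negative perfect square."""
    if n < 0:
        return False
    r = math.isqrt(n)
    return r * r == n

def all_arrangements_rational(a, b, c):
    """Check that every arrangement of (a, b, c) as coefficients of
    p*x^2 + q*x + r gives a perfect-square discriminant q^2 - 4*p*r."""
    return all(is_perfect_square(q * q - 4 * p * r)
               for p, q, r in permutations((a, b, c)))

# Proof 1's construction: scale the base equation 2x^2 + x - 3 = 0 by b.
for b in range(1, 20):
    assert all_arrangements_rational(2 * b, b, -3 * b)

# Proof 2's conclusion: a + b + c = 0 in the worked example.
assert all_arrangements_rational(14, 7, -21) and 14 + 7 - 21 == 0
```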
2016-02-07T17:12:19
{ "domain": "mathschallenge.net", "url": "http://mathschallenge.net/full/rational_roots_quadratic", "openwebmath_score": 0.9504358768463135, "openwebmath_perplexity": 146.81537595959952, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9884918533088548, "lm_q2_score": 0.8519528057272543, "lm_q1q2_score": 0.8421484078650123 }
https://mathhelpboards.com/threads/does-the-sequence-converge.8936/
# Does the sequence converge?

#### evinda ##### Well-known member MHB Site Helper

Hey!!! I want to check if the sequence $a_{n}=\frac{1}{\sqrt{n^2+1}}+\frac{1}{\sqrt{n^2+2}}+...+\frac{1}{\sqrt{n^2+n}}$ converges. I thought that I could find the difference $a_{n+1}-a_{n}$ to check if $a_{n}$ is increasing or decreasing. I found: $a_{n+1}-a_{n}=\sum_{i=1}^{n}\left(\frac{1}{\sqrt{(n+1)^{2}+i}}-\frac{1}{\sqrt{n^2+i}}\right)+\frac{1}{\sqrt{(n+1)^{2}+n+1}}$. But from that we cannot conclude if the difference is negative or positive, right?? So, what else could I do??

#### Plato ##### Well-known member MHB Math Helper

Is it the case that $$\frac{n}{{\sqrt {{n^2} + n} }} \le {a_n} \le 1$$? Is $${a_n}$$ an increasing sequence?

#### evinda ##### Well-known member MHB Site Helper

I found this: $\frac{n}{\sqrt{n^2+n}}\leq a_{n}\leq \frac{n}{\sqrt{n^2+1}}$. So, could I just say that from the squeeze theorem the limit is $1$, without finding the monotonicity??

#### evinda ##### Well-known member MHB Site Helper

Or can't I do it like that, because it is not given that the sequence converges??

#### Klaas van Aarsen ##### MHB Seeker Staff member

Yep. That works. Monotonicity not required. The squeeze theorem does not need convergence in advance: the two bounds converging to the same limit is exactly what forces $a_n$ to converge.

#### evinda ##### Well-known member MHB Site Helper

Great!!!! Thank you very much!!!!
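The squeeze bounds used in the thread are easy to check numerically. A small Python sketch (function names are my own):

```python
import math

def a(n):
    """a_n = sum_{i=1}^{n} 1/sqrt(n^2 + i)."""
    return sum(1.0 / math.sqrt(n * n + i) for i in range(1, n + 1))

def lower(n):
    """Each of the n terms is at least 1/sqrt(n^2 + n)."""
    return n / math.sqrt(n * n + n)

def upper(n):
    """Each of the n terms is at most 1/sqrt(n^2 + 1)."""
    return n / math.sqrt(n * n + 1)

# The bounds pinch a_n, and both bounds tend to 1 as n grows.
for n in (10, 100, 1000):
    assert lower(n) <= a(n) <= upper(n)
```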
2021-10-19T13:06:22
{ "domain": "mathhelpboards.com", "url": "https://mathhelpboards.com/threads/does-the-sequence-converge.8936/", "openwebmath_score": 0.4475305378437042, "openwebmath_perplexity": 3344.334939949666, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9884918492405833, "lm_q2_score": 0.8519528076067262, "lm_q1q2_score": 0.8421484062568797 }
http://math.stackexchange.com/questions/86937/questions-about-power-sets-and-their-ordering
# Questions about power sets and their ordering

Okay, so I'm stuck on a question and I'm not sure how to solve it, so here it is:

In the following questions, $B_n = \mathcal{P}(\{1, ... , n\})$ is ordered by containment, the set $\{0,1\}$ is ordered by the relation $0 \leq 1$, and $\{0,1\}^n$ is ordered using the product order.

a) Draw the Hasse diagram of $\{0,1\}^3$

b) Give an order isomorphism from $B_3$ to $\{0,1\}^3$

c) Prove that $B_n$ is isomorphic to its dual. Your proof must include a clear explicit definition of the order isomorphism you use to prove this.

d) Prove that $\{0,1\}^n$ is isomorphic to $B_n$. Your proof must include a clear and explicit definition of the order isomorphism you use to prove this.

Okay so I got a) and b), and I started on c), but what's throwing me off is that $B_n$ is infinite (meaning I have to show that it's isomorphic to its dual for all sets (or elements?) of numbers). I started c) by "Let $R$ be an order relation on $S$. The dual order is $R^{-1}$. In other words, if $a R b$ then $b R^{-1} a$."

- I've added LaTeX formatting to your question, and also tried to give a more descriptive title; if you think of a better one, or any other edits, please feel free to make them. – Zev Chonoles Nov 30 '11 at 3:03

Also, I would like to express my appreciation for explaining what is confusing you; all too often, users post questions without even demonstrating they have thought about it. People are, of course, much more willing to help those who have put effort into it themselves, so I'm sure you'll get some good answers soon. – Zev Chonoles Nov 30 '11 at 3:04

Thanks so much, I'm new to this; also the last bit there is R^(-1), sorry, you put R - 1. Thanks so much again! – Paul Nov 30 '11 at 3:33

In c) and d) you don't work with infinite sets $B_n$. These sets are finite, for each $B_n$ has exactly $2^n$ elements (one for each subset of $\{1,\ldots,n\}$), hence finite! But you're right that you have an infinite family of sets: $B_1,B_2,B_3,\ldots$.
These facts c) and d) are stated in a general way, that is, they mean that whatever value of $n$ you take, the isomorphisms will hold for this certain $n$. To prove these facts you abstractly fix one concrete value of $n$ and provide a proof for this $n$. Below you have some loose hints on how the proofs can be done.

To prove c), consider the function $f:(B_n,\subseteq)\to(B_n,\subseteq^{-1})$ given by the formula $f(A)=\{1,\ldots,n\}\setminus A$ (i.e. take simply the complement of $A$). This function preserves the order because for any abstract sets $X,Y\subseteq Z$ we have: $X\subseteq Y \Leftrightarrow Z\setminus Y\subseteq Z\setminus X$. Bijectivity of $f$ is trivial.

Proving d) is also very easy. Consider the function $g:B_n\to\{0,1\}^n$ defined as follows: $g(A)=(\chi_A(1),\chi_A(2),\ldots,\chi_A(n))$, where $\chi_A$ is the characteristic function of $A$, i.e. $\chi_A(x)=0$ if $x\not\in A$ and $\chi_A(x)=1$ if $x\in A$. I leave you to show that $A\subseteq B$ implies $g(A)\le g(B)$ in the product order sense, as well as that $g$ is a bijection. If you have any questions, feel free to ask.

- wait so then for d) do I have to show that x is also an element of B? – Paul Nov 30 '11 at 4:00

okay so x is an element of both A and B since it's a bijection, but I have to show that it's a bijection right? – Paul Nov 30 '11 at 4:16

okay since I have to show that it's isomorphic, I have to show that both sets have the same number of elements and they each correspond to each other. That part is clear, but like in c) do I have to show that g(A)\A and g(B)\B? – Paul Nov 30 '11 at 4:22

Paul: I have a hard time understanding your comments. Let's talk about d). You have to show that two orders are iso. The first order consists of subsets of $\{1,\ldots,n\}$, the second one -- of $n$-tuples $(a_1,\ldots,a_n)$, where $a_i\in\{0,1\}$. So you have to translate somehow subsets of $\{1,\ldots,n\}$ to these $n$-tuples.
This is done by function $g$, which looks at subsets (by looking at their elements) and assigns to these subsets $n$-tuples. It has to be an order isomorphism, so if $A\subseteq B$ (ordering in $B_n$), then $g(A)\le g(B)$ (ordering in $\{0,1\}^n$). So to check this... – Damian Sobota Nov 30 '11 at 14:06

... you take two elements $A,B$ of $B_n$, which are subsets of $\{1,\ldots,n\}$, assume that $A\subseteq B$ (ordering in $B_n$) and have to show that $g(A)\le g(B)$ in the sense of the product order on $\{0,1\}^n$. And if $g$ is to be an isomorphism, then it must be a bijection. So you have to show that $g$ is injective (there is no element in $\{0,1\}^n$ corresponding via $g$ to two different elements of $B_n$) and surjective (to each element of $\{0,1\}^n$ there is an element of $B_n$ corresponding to it via $g$). – Damian Sobota Nov 30 '11 at 14:12
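For small $n$ the two isomorphisms can be verified exhaustively. A Python sketch (my own naming) checking both the complementation map $f$ from c) and the characteristic-function map $g$ from d) when $n = 3$:

```python
from itertools import product

n = 3
universe = list(range(1, n + 1))

def subsets(u):
    """All subsets of u as frozensets (the elements of B_n)."""
    out = [frozenset()]
    for x in u:
        out += [s | {x} for s in out]
    return out

def g(A):
    """B_n -> {0,1}^n via characteristic functions (the map in d)."""
    return tuple(1 if x in A else 0 for x in universe)

def f(A):
    """B_n -> B_n^op via complementation (the map in c)."""
    return frozenset(universe) - A

B = subsets(universe)

# g is a bijection onto {0,1}^n ...
assert sorted(map(g, B)) == sorted(product((0, 1), repeat=n))
for A in B:
    for C in B:
        # ... and A <= C in B_n iff g(A) <= g(C) pointwise ...
        assert (A <= C) == all(x <= y for x, y in zip(g(A), g(C)))
        # ... while f reverses containment, so B_n is iso to its dual.
        assert (A <= C) == (f(C) <= f(A))
```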
2016-04-30T11:28:02
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/86937/questions-about-power-sets-and-their-ordering", "openwebmath_score": 0.9549375772476196, "openwebmath_perplexity": 186.7310920762966, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9884918533088547, "lm_q2_score": 0.8519528000888386, "lm_q1q2_score": 0.8421484022914842 }
http://mathhelpforum.com/statistics/213379-basic-probability-question-deck-cards-print.html
# Basic Probability Question (Deck of Cards) Printable View

• February 18th 2013, 11:32 PM Pourdo

Basic Probability Question (Deck of Cards)

Hi everyone, hoping you can all help me with this question. A card is drawn from a shuffled deck of 52 cards, and not replaced. Then a second card is drawn. What is the probability that the second card is a king? If possible could you please explain the process of arriving at the answer? The answer key gives me "1/13", hopefully that is correct.

• February 19th 2013, 05:04 AM Plato

Re: Basic Probability Question (Deck of Cards)

Quote: Originally Posted by Pourdo — A card is drawn from a shuffled deck of 52 cards, and not replaced. Then a second card is drawn. What is the probability that the second card is a king?

Notation: $K_1$ is the event that the first card is a king; $\overline{K_1}$ is the event that the first card is not a king. We want $\mathcal{P}(K_2)=\mathcal{P}(K_1\cap K_2)+\mathcal{P}(\overline{K_1}\cap K_2)$. Well, what is $\frac{4}{52}\frac{3}{51}+\frac{48}{52}\frac{4}{51} =~?$

• February 19th 2013, 05:56 AM Kmath

Re: Basic Probability Question (Deck of Cards)

You may also think like that: P(2nd card is a K) = P(2nd card is a Q) = P(2nd card is a J) = P(2nd card is a 10) = ... = P(2nd card is an A), and surely P(2nd card is a K) + P(2nd card is a Q) + P(2nd card is a J) + P(2nd card is a 10) + ... + P(2nd card is an A) = 1, that is $13 P(\text{2nd card is a K})=1$

• February 19th 2013, 04:41 PM Soroban

Re: Basic Probability Question (Deck of Cards)

Hello, Pourdo! This is a classic "trick question" designed to complicate your thinking.

Quote: A card is drawn from a shuffled deck of 52 cards, and not replaced. Then a second card is drawn. What is the probability that the second card is a King?

Let us alter the problem. Cards are drawn one at a time from a shuffled deck without replacement. What is the probability that the 4th card is a King?
$\begin{array}{cccccccccc}\square & \square & \square & \boxed{K} & \square & \square & \cdots \end{array}$

You are already thinking of the dozens of possibilities, aren't you? What if the first card is a King? What if it isn't? What if the second card is a King? What if it isn't? . . . and so on. We can disregard all of that! Our only concern is: "Is the 4th card a King?" Answer: . $P(\text{4th is King}) \:=\:\frac{4}{52} \:=\:\frac{1}{13}$

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

Further proof: There are $52!$ possible arrangements of the 52 cards. How many of them have a King in the 4th position? There are 4 choices of Kings for the 4th position. The other 51 cards can be arranged in $51!$ ways. Hence, there are $4\cdot51!$ such arrangements. Therefore: . $P(\text{4th is King}) \:=\:\frac{4\cdot51!}{52!} \:=\:\frac{4}{52} \:=\:\frac{1}{13}$

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

In general, the probability that the $n^{th}$ card is a King is $\frac{1}{13}.$

• February 19th 2013, 05:38 PM HallsofIvy

Re: Basic Probability Question (Deck of Cards)

This is how I would think about it: I presume you know that if an event can happen in N "equally likely ways" and M of them give event "X", then the probability of event "X" is $\frac{M}{N}$. For example, there are four kings in a standard deck of 52 cards, so the probability of getting a king on the first draw is 4/52 = 1/13. But you want a king on the second draw. Since this is "without replacement", there will be 51 cards in the deck on the second draw. But how many kings? That depends on what happened on the first draw. If the first draw was a king, there will now be 3 kings left, so the probability of a king on the second draw is 3/51 = 1/17. If the first draw was not a king, there will now be 4 kings left, so the probability of a king on the second draw is 4/51.
We have to "weight" those two probabilities by the probability they will happen: the probability of a king on the first draw is, as before, 4/52 = 1/13. The probability of anything other than a king on the first draw is 1 - 1/13 = 12/13. The "weighted average" will be (1/13)(1/17) + (12/13)(4/51) = 1/221 + 48/663 = 3/663 + 48/663 = 51/663 = 1/13.
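All three arguments in the thread agree, and they are easy to check by machine. A Python sketch (the simulation parameters are my own) computing Plato's conditioning formula exactly and estimating the same probability by simulation:

```python
from fractions import Fraction
import random

# Plato's conditioning formula, computed exactly:
p = Fraction(4, 52) * Fraction(3, 51) + Fraction(48, 52) * Fraction(4, 51)
assert p == Fraction(1, 13)

# Soroban's symmetry argument says the n-th card is a King with
# probability 1/13 for every position n; a quick Monte Carlo check
# for the second card:
random.seed(0)
deck = ["K"] * 4 + ["x"] * 48
trials = 100_000
hits = 0
for _ in range(trials):
    random.shuffle(deck)
    hits += deck[1] == "K"   # second card drawn

print(hits / trials)  # close to 1/13 ≈ 0.0769
```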
2015-02-28T23:55:59
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/statistics/213379-basic-probability-question-deck-cards-print.html", "openwebmath_score": 0.6844570636749268, "openwebmath_perplexity": 404.0017129937035, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9884918526308096, "lm_q2_score": 0.8519528000888386, "lm_q1q2_score": 0.8421484017138218 }
https://math.stackexchange.com/questions/2278382/given-the-end-points-of-the-semi-major-axis-and-2-other-points-is-it-possible
# Given the end points of the semi major axis, and 2 other points, is it possible to find the equation of an ellipse? I've projected a point outwards onto a 2D plane, which forms an ellipse (or very close to one!). The 4 points I now have are the end points of the semi major axis, and two other points: I know the equation of a general ellipse (tilted, and non-centred), but how can one find it from the given information above. $$\dfrac {((x-h)\cos(A)+(y-k)\sin(A))^2}{(a^2)}+\dfrac{((x-h) \sin(A)-(y-k) \cos(A))^2}{(b^2)}=1$$ Let's say these 4 points are "approximate", is there a way to determine an approximate ellipse? If you have the endpoints of one axis of the ellipse plus one other point, it is possible to uniquely identify the ellipse through those three points. Suppose the two endpoints of one axis of the ellipse are $P_1 = (x_1,y_1)$ and $P_2 = (x_2,y_2),$ and let $P_3 = (x_3,y_3)$ be any other point on the ellipse. The center of the ellipse, $C = (h,k),$ is the midpoint of the segment $P_1P_2,$ which is $$\left(\frac{x_1+x_2}{2}, \frac{y_1+y_2}{2}\right).$$ Therefore $h = \frac12(x_1+x_2)$ and $k = \frac12(y_1+y_2).$ The slope of the axis of the ellipse is the slope of the line through $P_1$ and $P_2,$ which is $\frac{y_2-y_1}{x_2-x_1}$ (rise divided by run). The angle $A$ that the axis of the ellipse makes with the $x$-axis is therefore $$A = \arctan\left(\frac{y_2-y_1}{x_2-x_1} \right).$$ The distance between $P_1$ and $P_2$ is the length of the axis of the ellipse, which is twice the semi-axis length $a.$ That is, $$a = \frac12 \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}.$$ Since you only really need $a^2$ to write your formula, however, a simpler calculation is $$a^2 = \frac14 \left((x_2-x_1)^2 + (y_2-y_1)^2\right).$$ So far we have four of the five constants that you need in order to write your formula for the ellipse. 
We just have to find the value of $b.$ One way to do this is to write the equation that $(x_3,y_3)$ must satisfy, $$\frac{((x_3-h)\sin(A)-(y_3-k)\cos(A))^2}{b^2} + \frac{((x_3-h)\cos(A)+(y_3-k)\sin(A))^2}{a^2} = 1,$$ and then rearrange the equation as follows: $$\frac{((x_3-h)\sin(A)-(y_3-k)\cos(A))^2}{b^2} = 1 - \frac{((x_3-h)\cos(A)+(y_3-k)\sin(A))^2}{a^2},$$ $$\frac{1}{b^2} = \frac{1 - \frac{1}{a^2}((x_3-h)\cos(A)+(y_3-k)\sin(A))^2} {((x_3-h)\sin(A)-(y_3-k)\cos(A))^2},$$ $$b^2 = \frac{((x_3-h)\sin(A)-(y_3-k)\cos(A))^2} {1 - \frac{1}{a^2}((x_3-h)\cos(A)+(y_3-k)\sin(A))^2}.$$ You can evaluate $b^2$ by plugging in the given values $x_3$ and $y_3$ and the already-calculated values $h,$ $k,$ $A,$ and $a.$ You could then assume $b$ is positive, that is, $b = \sqrt{b^2},$ but just having gotten a value of $b^2$ you have enough to write the equation of the ellipse. With four points, you have the problem of which of the two "extra" points to use in order to determine your ellipse. Either choice will give the same values of $h,$ $k,$ $A,$ and $a,$ since those depend only on the two endpoints of the ellipse's axis. But if the positions of the other two points are not exactly on the same ellipse, you will get two different values of $b^2$ depending on which point you choose as the "third" point. One resolution to the problem might be to do the calculation of $b^2$ twice, once for each of the "extra" points, and somehow average the two values that you obtain. • Thanks @David K! Your solution helped to generate an "approximate" ellipse! The last point about averaging the 2 semi-minor axis solutions is a decent idea. Once the averaged b is inserted, I get an ellipse which fits the data fairly well. May 13, 2017 at 14:26
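The recipe in this answer translates directly into code. A minimal Python sketch (the function name is my own; `atan2` replaces `arctan` so a vertical axis does not divide by zero, and `p3` must not be an endpoint of the axis):

```python
import math

def ellipse_from_axis_and_point(p1, p2, p3):
    """Recover (h, k, A, a^2, b^2) for the tilted-ellipse equation in
    the question, given the axis endpoints p1, p2 and one more point p3,
    following the formulas in the answer."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    h, k = (x1 + x2) / 2, (y1 + y2) / 2              # center = midpoint
    A = math.atan2(y2 - y1, x2 - x1)                 # axis tilt
    a2 = ((x2 - x1) ** 2 + (y2 - y1) ** 2) / 4       # (half axis length)^2
    u = (x3 - h) * math.cos(A) + (y3 - k) * math.sin(A)   # along the axis
    v = (x3 - h) * math.sin(A) - (y3 - k) * math.cos(A)   # across the axis
    b2 = v * v / (1 - u * u / a2)                    # solve the ellipse eq.
    return h, k, A, a2, b2

# Sanity check on the axis-aligned ellipse x^2/4 + y^2 = 1:
h, k, A, a2, b2 = ellipse_from_axis_and_point((-2, 0), (2, 0), (0, 1))
assert abs(a2 - 4) < 1e-9 and abs(b2 - 1) < 1e-9
```

With four given points, this can be run once per "extra" point and the two values of $b^2$ averaged, as suggested at the end of the answer.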
2022-06-29T03:55:40
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2278382/given-the-end-points-of-the-semi-major-axis-and-2-other-points-is-it-possible", "openwebmath_score": 0.8620388507843018, "openwebmath_perplexity": 120.46776564703842, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9884918526308095, "lm_q2_score": 0.8519527982093666, "lm_q1q2_score": 0.842148399855979 }
http://web03.innovasjon.as/7k6y8/viewtopic.php?tag=bc2f08-area-of-parallelogram-calculator-vectors
About the Area of a Parallelogram Calculator

The Area of a Parallelogram Calculator is used to help you find the area of a parallelogram based on its base and height. This free online calculator also finds the area of a parallelogram formed by vectors, with a detailed step-by-step solution; you can navigate between the input fields by pressing the keys "left" and "right" on the keyboard, and you can input only integer numbers, decimals or fractions (e.g. -2.4, 5/7). Calculations include side lengths, corner angles, diagonals, height, perimeter and area of parallelograms. Welcome to OnlineMSchool: this web site's owner is mathematician Dovzhyk Mykhailo, who designed the web site and wrote all the mathematical theory, online exercises, formulas and calculators.

In Euclidean geometry, a parallelogram is a simple quadrilateral with two pairs of parallel sides; the opposite sides are of equal length. The area of a parallelogram is the region covered by the parallelogram in a two-dimensional plane, namely the base times the perpendicular height, $b \times h$. The formula is the same as that for a rectangle, since rearranging the parallelogram makes a rectangle whose sides are the parallelogram's base and height; here the height is the distance vertically from the bottom edge to the top edge moving in a straight line. Given two sides $a$, $b$ and the included angle $\theta$, the area is $S = ab\sin\theta$ $(\normalsize Parallelogram\ (a,b,\theta\rightarrow S))$.

For a parallelogram built on two vectors, the area equals the length (norm) of their cross product. The cross product of two vectors $a$ and $b$ is a vector $c$ whose length (magnitude) numerically equals the area of the parallelogram based on vectors $a$ and $b$ as sides:

$$a \times b = (a_y b_z - a_z b_y,\; a_z b_x - a_x b_z,\; a_x b_y - a_y b_x), \qquad A = |a \times b|.$$

Since the magnitude of the vector product of two vectors equals twice the area of the triangle built on the corresponding vectors, a diagonal divides the parallelogram into two equal parts, and the triangle built on vectors $\vec{AB}$ and $\vec{AC}$ has area $\frac{1}{2}|\vec{AB} \times \vec{AC}| = \frac{1}{2}|\vec{AB}|\,|\vec{AC}|\sin\theta$.

Worked example: find the area of the parallelogram whose two adjacent sides are determined by the vectors $a = i + 2j + 3k$ and $b = 3i - 2j + k$.

Related online calculators: scalar-vector multiplication; length (magnitude) of a vector in space; dot product of two vectors; volume of a pyramid formed by vectors; area of a triangle formed by vectors; decomposition of a vector in a basis; component form of a vector with initial point and terminal point in space; addition and subtraction of two vectors in space; direction cosines of a vector.
In two dimensional space there is a simple formula for the area of a parallelogram bounded by vectors $v$ and $w$ with $v = (a, b)$ and $w = (c, d)$: namely $|ad - bc|$, the absolute value of the determinant of the matrix whose column vectors construct that parallelogram. Equivalently, the square of the parallelogram's area equals the square of this determinant, and a $2 \times 2$ determinant is quick and easy to evaluate.
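A minimal Python sketch of the cross-product method (the names are mine), applied to the worked example $a = i + 2j + 3k$, $b = 3i - 2j + k$ from the text:

```python
def cross(a, b):
    """Cross product of two 3-vectors, componentwise."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(v):
    """Euclidean length of a vector."""
    return sum(x * x for x in v) ** 0.5

def area2d(v, w):
    """2D case: |ad - bc| for v = (a, b), w = (c, d)."""
    return abs(v[0] * w[1] - v[1] * w[0])

a = (1, 2, 3)
b = (3, -2, 1)
parallelogram = norm(cross(a, b))   # |a x b| = 8*sqrt(3) ≈ 13.856
triangle = parallelogram / 2        # triangle on the same two vectors
```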
2021-06-13T11:01:28
{ "domain": "innovasjon.as", "url": "http://web03.innovasjon.as/7k6y8/viewtopic.php?tag=bc2f08-area-of-parallelogram-calculator-vectors", "openwebmath_score": 0.7478793859481812, "openwebmath_perplexity": 570.259746055964, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9884918533088547, "lm_q2_score": 0.8519527963298946, "lm_q1q2_score": 0.8421483985757987 }
https://mathoverflow.net/questions/357359/equivalence-of-%CF%83-convex-hull-and-closed-convex-hull/357567
# Equivalence of σ-convex hull and closed convex hull Let $$X$$ be a locally convex topological space, and let $$K \subset X$$ be a compact set. Recalling that the standard convex hull is defined as $$\text{co}(K) = \Big\{ \sum_{i=1}^n a_i x_i : a_i \geq 0,\, \sum_{i=1}^n a_i = 1,\, x_i \in K \Big\},$$ define the $$\sigma$$-convex hull as $$\sigma\text{-}\mathrm{co}(K) = \Big\{ \sum_{i=1}^\infty a_i x_i : a_i \geq 0,\, \sum_{i=1}^\infty a_i = 1,\, x_i \in K \Big\},$$ where the summation is to be understood as convergence of the sequence in the topology of $$X$$. I would like to understand conditions under which $$\sigma\text{-}\mathrm{co}(K)$$ is exactly the closure of $$\mathrm{co}(K)$$. In particular, does this property hold for any separable normed space $$X$$, or are further constraints on $$X$$ (and $$K$$?) required? The motivation for this question is Choquet's theorem, which allows one to write $$\overline{\mathrm{co}}(K) = \Big\{ \int x d\mu(x) : \mu \in M(K) \Big\}$$ with $$M(K)$$ standing for probability measures on $$K$$ for any compact subset $$K$$ in a normed space. I would like to understand the "countable" version of this theorem as presented above, but I could not find any references nor do I have an idea about how one could prove it. • Related: this question Apr 13, 2020 at 14:34 • Consider $\ J:=(0;1)\subseteq\Bbb R.\$ Then the sigma closure is $\ J;\$ it is not the closure, i.e. $\ [0;1].$ Apr 13, 2020 at 22:24 • @WlodAA: It seems, though, that the OP considers only compacts sets $K$. Apr 14, 2020 at 4:14 • @JochenGlueck, thank you. 
Apr 14, 2020 at 19:30 • there are exercises 1.66 and 1.67 in Fabian, Habala Hajek, Montesinos, Zizler - Banach space theory, dedicated to these notions, although they do not adress your specific question – erz Apr 15, 2020 at 13:42 Wlod AA gave a good counterexample for the case when $$K$$ is not required to be compact, here I give a counterexample $$K$$ compact, first in a locally convex space, and then for a(n infinite-dimensional) separable normed space, and (after an edit) for all infinite-dimensional Banach spaces. There is a standard counterexample if $$X$$ is only required to be locally convex, which is to take $$X = C([0,1])^*$$ with the weak-* topology, and to take $$K$$ to be the set of unital ring homomorphisms $$C([0,1]) \rightarrow \mathbb{R}$$. Making free use of the Riesz representation theorem to consider elements of $$C([0,1])^*$$ as measures on $$[0,1]$$, the elements of $$K$$ are the Dirac $$\delta$$-measures. Now, for each element $$\mu$$ of $$\sigma\mbox{-}\mathrm{co}(K)$$, there exists a countable set $$S \subseteq [0,1]$$ such that $$\mu([0,1]\setminus S) = 0$$. However, $$\overline{\mathrm{co}}(K)$$ consists of $$P([0,1])$$, the set of all positive unital linear functionals on $$C([0,1])$$, i.e. all probability measures on $$[0,1]$$, and so Lebesgue measure is an element of $$\overline{\mathrm{co}}(K) \setminus \sigma\mbox{-}\mathrm{co}(K)$$. To get this to happen in a normed space, we will use $$\ell^2$$, and embed $$P([0,1])$$ affinely and continuously into it. First, observe that we can affinely embed $$P([0,1])$$ into $$[0,1]^{\mathbb{N}}$$, getting each coordinate by evaluating at $$x^n$$ (including $$n = 0$$). This is injective because polynomials are norm dense in $$C([0,1])$$, and continuous by the definition of the weak-* topology. 
We can then embed $$[0,1]^{\mathbb{N}}$$ into $$\ell^2$$ by the mapping $$f(a)_n = \frac{1}{n+1}a_n.$$ This is affine and continuous from the product topology on $$[0,1]^\mathbb{N}$$ to the norm topology on $$\ell^2$$ (in fact, it defines a continuous linear map from the bounded weak-* topology on $$\ell^\infty$$ to the norm topology on $$\ell^2$$). We use $$e$$ for the composition of these two embeddings, and it is affine and continuous on $$P([0,1])$$. A continuous injective map from a compact Hausdorff space to a Hausdorff space is a homeomorphism onto its image, and as we also preserved convex combinations by making the embedding affine, we have that $$\overline{\mathrm{co}}(e(K)) = e(\overline{\mathrm{co}}(K)) = e(P([0,1]))$$, while, taking $$\lambda$$ to be the element of $$P([0,1])$$ defined by Lebesgue measure, $$e(\lambda) \in e(P([0,1]))$$, but $$e(\lambda) \not\in e(\sigma\mbox{-}\mathrm{co}(K)) = \sigma\mbox{-}\mathrm{co}(e(K))$$. As Bill Johnson points out, there is an injective bounded map from $$\ell^2$$ into any infinite-dimensional Banach space $$E$$. By the same argument used to transfer the example to $$\ell^2$$, this allows us to transfer the example to $$E$$. In the other direction, the convex hull of a compact subset $$K$$ of a finite-dimensional space is compact (using Carathéodory's theorem we can express the convex hull of $$K$$ as the continuous image of the compact set $$K^{d+1} \times P(d+1)$$, where $$d$$ is the dimension). Therefore the $$\sigma$$-convex hull and closed convex hull of $$K$$ coincide. Altogether, this means: If $$E$$ is a Banach space, the statement "for all compact sets $$K \subseteq E$$, the closed convex hull equals the $$\sigma$$-convex hull" is equivalent to "$$E$$ is finite-dimensional".
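The embedding above can also be sanity-checked numerically. Under $$e$$, Lebesgue measure has coordinates $$e(\lambda)_n = \frac{1}{n+1}\int_0^1 x^n\,dx = \frac{1}{(n+1)^2}$$, and images of finite convex combinations of Dirac measures (which lie in $$\mathrm{co}(K)$$) approach it in $$\ell^2$$ norm, illustrating that $$e(\lambda)$$ lies in the closed convex hull. The sketch below is mine, not part of the answer; it truncates to finitely many coordinates and uses uniform averages of point masses:

```python
import math

N = 50  # number of ell^2 coordinates kept (truncation, for illustration only)

def embed(moments):
    # the mapping f above: scale the n-th moment by 1/(n+1)
    return [a / (n + 1) for n, a in enumerate(moments)]

# moments of Lebesgue measure on [0,1]: the integral of x^n is 1/(n+1)
lam = embed([1.0 / (n + 1) for n in range(N)])

def discrete_moments(m):
    # moments of the uniform average of m Dirac measures at (k + 1/2)/m,
    # i.e. a finite convex combination of points of K
    pts = [(k + 0.5) / m for k in range(m)]
    return [sum(p ** n for p in pts) / m for n in range(N)]

def l2_dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

d10 = l2_dist(embed(discrete_moments(10)), lam)
d100 = l2_dist(embed(discrete_moments(100)), lam)
print(d100 < d10)  # finer convex combinations get closer to e(lambda): True
```

The distances shrink as the number of point masses grows, which is exactly the statement that $$e(\lambda)$$ is a norm limit of images of finite convex combinations, even though no countable convex combination of Diracs equals $$\lambda$$ itself.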
There are, however, complete locally convex spaces in which every bounded set, and therefore every compact set, is contained in a finite-dimensional subspace, and for which, therefore, the $$\sigma$$-convex and closed convex hulls of compact sets coincide. One example is the space $$\phi$$ of finitely supported functions $$\mathbb{N} \rightarrow \mathbb{R}$$, topologized as an $$\mathbb{N}$$-fold locally convex coproduct of $$\mathbb{R}$$ with itself, or equivalently as the strong dual space of $$\mathbb{R}^{\mathbb{N}}$$. • Thank you, I'll need to go through this carefully. Do you have any ideas about the least set of assumptions about the space $X$ to make the property hold true for any compact $K$? Apr 15, 2020 at 17:00 • @GregoryD. A sufficient condition is that $X$ is finite-dimensional. Then the convex hull of a compact set is compact, so is equal to both the $\sigma$-convex and closed convex hulls (it is easy to prove this using Carathéodory's theorem). It follows that the property does hold in spaces in which every compact set is finite-dimensional, such as an infinite locally convex coproduct of $\mathbb{R}$ with itself. Apr 15, 2020 at 18:36 • @GregoryD. It may well be that this property does not hold for any infinite-dimensional Banach space, but my knowledge of this kind of functional analysis has run out, so I hope one of the experts can help on that subject. Apr 15, 2020 at 18:48 • @RobertFurber: Beautiful counterexamples! Interestingly, there are also infinite-dimensional compact sets $K$ for which the $\sigma$-convex hull coincides with the closure of the convex hull - for instance $K = \{e_n/n: \, n \in \mathbb{N}\} \subseteq \ell^2$, where $e_n$ denotes the $n$-th canonical unit vector. It really seems interesting how to characterize such sets $K$; but I wasn't able to come up with any ideas, yet.
Apr 15, 2020 at 20:56 • Once you have an example in $\ell_2$ you can transfer it to any infinite-dimensional Banach space $X$ by taking an injective continuous linear mapping from $\ell_2$ into $X$. It is an exercise for students that such a map exists. Apr 17, 2020 at 19:57
https://mathoverflow.net/questions/357359/equivalence-of-%CF%83-convex-hull-and-closed-convex-hull/357567
https://math.stackexchange.com/questions/1837032/how-many-arrangements-of-the-letters-in-the-word-california-have-no-consecutive
# How many arrangements of the letters in the word CALIFORNIA have no consecutive letter the same? First off, the correct answer is $$584,640 = {10!\over 2!2!}- \left[{9! \over 2!}+{9! \over 2!}\right] + 8!$$ which can be found using the inclusion-exclusion principle. My own approach is different from the above: In the word CALIFORNIA, we have 2 repeating A's and 2 I's, and 6 remaining unrepeated letters. We first place the 6 unrepeated letters, a total of 6! arrangements. Then, to avoid the A's and I's landing in consecutive positions, we place the 2 A's and 2 I's between the 6 letters, including the beginning and ending positions, which gives us 7 possible positions. Treating the 7 positions as slots for 2 A's, 2 I's, and 3 blanks, the number of possible arrangements is $${7!\over 2!2!3!}$$ So in total, we have $${6!7!\over 2!2!3!} = 151,200$$ which is obviously different from the correct answer. Why is this wrong, and if possible, how can I fix this using the same approach? • Just so you know, I verified the correct answer with a program. – Noble Mushtak Jun 23 '16 at 14:43 • Your method excludes the possibility that A and I are consecutive – David Quinn Jun 23 '16 at 14:45 • Innovative approach – user230452 Jun 23 '16 at 15:16 We modify your approach by first placing the A's, then placing the I's. We can arrange the six distinct letters C,L,F,O,R,N in $6!$ ways. This creates seven spaces, five between successive letters and two at the ends of the row. Case 1: We choose two of these seven spaces in which to place the two A's, thereby separating them. We now have eight letters. This creates nine spaces, seven between successive letters and two at the ends of the row. We choose two of these nine spaces for the I's. The number of such arrangements is $$6!\binom{7}{2}\binom{9}{2} = 544,320$$ Case 2: We place both A's in the same space. We again have eight letters. This again creates nine spaces.
The space between the two A's must be filled with an I. Therefore, there are eight ways to choose the position of the other I. The number of such arrangements is $$6!\binom{7}{1}\binom{8}{1} = 40,320$$ Total: These two cases are mutually exclusive. Hence, the total number of arrangements of the letters of the word CALIFORNIA in which no two consecutive letters are the same is $$6!\left[\binom{7}{2}\binom{9}{2} + \binom{7}{1}\binom{8}{1}\right] = 584,640$$ which agrees with the result obtained by using the Inclusion-Exclusion Principle. • I was trying to redo it again, including the case I missed, using my method, and I only got Case 1 in your answer. Thanks a lot! – Tony Tarng Jun 23 '16 at 15:23 $\underline{A\; shorter\; method}$ Suppose $2$ objects are to be kept apart in a permutation; there are $2$ well-known methods • the "gap method", where these $2$ objects are placed in the gaps (incl. ends) • the "subtraction method" [ total permutations - those with the $2$ objects together ] Find the number of ways both $A's$ and $I's$ are apart by successively applying them • $A's$ apart or together, $I's$ apart $= (8!/2!)\binom92= 725,760$ • $A's$ together, $I's$ apart $= 7!\binom82 = 141,120$ • thus both $A's$ and $I's$ apart = $725,760 - 141,120 = 584,640$ There are just $3$ patterns for first placing the $A's \; and\; I's$, some of which need separators glued to the first occurrence of the "double(s)", shown by large letter(s).
Both "doubles" together: ${\Large A}A{\Large I}I\quad or \quad{\Large I}I{\Large A}A$ which use up two of the other letters in $6\cdot5$ ways, and the remaining $4$ can be successively inserted in $5\cdot 6\cdot 7\cdot 8$ ways One double together: $A{\Large I}IA \quad or \quad I{\Large A}AI$, which uses up one letter in $6$ ways and the remaining $5$ can be successively inserted in $5\cdot6\cdot7\cdot8\cdot9$ ways No doubles together: $AIAI\quad or \quad IAIA$ and the remaining $6$ can be inserted in $5\cdot 6\cdot 7\cdot 8\cdot 9\cdot 10$ ways Adding up, we get $2\cdot 5\cdot 6\cdot 7 \cdot 8(6\cdot 5 +6\cdot 9+ 9\cdot 10) = 584,640$ I subsequently found a shorter method, which I am posting separately, but leaving this one, too. The problem here is that you're not counting the possibility that an $A$ and an $I$ could be in the same "space," like in this string: $$CALIFORNIA$$ Here, we don't have consecutive same letters, but "I" and "A" are both at the end, in the seventh space, which you're not accounting for. Therefore, we have to consider the $A$s and $I$s separately. We have $6!$ distinct arrangements of the distinct letters and can then put the $A$s in $7 \choose 2$ spots and can then put the $I$s in $7 \choose 2$ spots, so we have: $$6\cdot {7 \choose 2}\cdot {7 \choose 2}=317520$$ This is wrong, however, because we're not accounting for the order of $A$ and $I$ if they are in the same space. If you can figure out how to account for that, you can salvage your method, but I think the inclusion-exclusion is quite easier at this point. One way to look at this is to enumerate several sets. Firstly, how many permutations of california contain aa and ii? Let's call that set $C$, and its size is $|C|$. Secondly, how many contain aa? And how many ii? Let's call these sets $A$ and $I$. Let's call the total number of permutations $U$. The set of permutations which contain no consecutive letters is therefore $U - A - I + C$. 
And so the number we are looking for is $|U| - |A| - |I| + |C|$. We must add back $C$ because it's the overlapping area in the Venn diagram. That is to say, both set $A$ and set $I$ contain $C$, and so if we subtract them from $U$ via set difference, we end up subtracting $C$ twice. $A$ contains $C$ because the set of all permutations of california which contain aa includes all permutations that contain ii also, and that subset of $A$ corresponds to $C$. Let's deal with set C: This can be answered as: given an array of ten places, how many ways can we place the digraph aa and ii into that array? Each of these ways leaves six spaces, which we then fill with permutations of cflnor: the remaining letters in california. We multiply together these possibilities: that is to say $6!$ times the number of ways of filling those digraphs. Now, there are 9 ways to place aa into the array, obviously. Out of these 9 ways, the edge placements aa........ and ........aa, support 7 ways of adding a ii. All 7 interior placements support 6 ways of introducing ii. So the possibilities are $2\times 7 + 7\times 6 = 56$. Hence $|C| = 56\times 6! = 56\times 720 = 40320$. Some diagrams: Edge aa-fill: aaii...... 1 aa.ii..... 2 aa..ii.... 3 aa...ii... 4 aa....ii.. 5 aa.....ii. 6 aa......ii 7 Interior aa-fill: .aaii..... 1 .aa.ii.... 2 .aa..ii... 3 .aa...ii.. 4 .aa....ii. 5 .aa.....ii 6 iiaa...... 1 ..aaii.... 2 ..aa.ii... 3 ..aa..ii.. 4 ..aa...ii. 5 ..aa....ii 6 Now let's deal with A and I. They are easy mirror cases. Basically, we have nine ways to place aa into ten spaces, leaving eight spaces, which we then fill with permutations of the remaining letters. Thus $|A| = 9\times 8! = 9! = 362880$ and also $|I| = 362880$. Of course $|U| = 10! = 3628800$. We can now calculate $|U| - |A| - |I| + |C| = 3628800 - 2\times 362880 + 40320 = 2943360$. Oops: does not validate by machine: $txr -i 1> (count-if (notf (op search-regex @1 #/aa|ii/)) (perm "california")) 2338560 The correct answer is 2338560. 
To be continued ... (Math has cliffhangers too!) And here is where I screwed up! In calculating the cardinalities $|C|$ and $|A| = |I|$, I assumed that there is only one distinct aa and ii: that both a-s are the same object. But in fact they are distinct: there are two aa digraphs and two ii digraphs (I'm dealing with permutations which need not be distinct). Therefore, set $|C|$ is actually four times larger, due to the four combinations of digraphs, hence $4\times 56\times 720 = 161280$. Similarly, $|A|$ and $|I|$ are twice as large: $2\times 9!$. With these corrections, $|U| - |A| - |I| + |C| = 2338560$. Now suppose we want to consider only distinct permutations, so that there is exactly one digraph aa and one digraph ii. The cardinality of the new $|U|$ is reduced by a factor of four, since strings that differ only in exchanges of one a with the other, and one i with the other, are equivalent. It is $|U| = \frac{10!}{2!\,2!} = 907200$. The original $|C| = 40320$ value is correct, since that calculation considered both i-s and a-s as indistinct. The correct $|A|$ calculation is $|A| = |I| = 9!\div 2 = 181440$: we must divide by two because the remaining letters after the digraph is positioned include two which are indistinct. And so $|U| - |A| - |I| + |C| = 907200 - 2\times 181440 + 40320 = 584640$. Check by machine: 3> (count-if (notf (op search-regex @1 #/aa|ii/)) (uniq (perm "california"))) 584640 So those are the answers: if the i-s and a-s are considered distinct, then $2338560$ permutations don't contain a digraph. If they are considered indistinct, then $584640$ permutations don't contain a digraph.
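For readers without txr installed, the same cross-check can be done with a few lines of Python arithmetic (the variable names are mine); it confirms that the inclusion-exclusion count and the gap-method count from the first answer agree:

```python
from math import comb, factorial

# distinct arrangements of CALIFORNIA (two A's, two I's, six other letters)
total = factorial(10) // (factorial(2) * factorial(2))   # |U| = 907200

# inclusion-exclusion: subtract arrangements with AA (or II) glued together,
# then add back those with both glued
with_aa = factorial(9) // factorial(2)                   # |A| = |I| = 181440
with_both = factorial(8)                                 # |C| = 40320
incl_excl = total - 2 * with_aa + with_both

# gap method: arrange C,L,F,O,R,N (6! ways), then place the A's and I's
# into the gaps -- the two cases worked out in the first answer above
gap = factorial(6) * (comb(7, 2) * comb(9, 2) + comb(7, 1) * comb(8, 1))

print(incl_excl, gap)  # both methods give 584640
```

Both expressions evaluate to the same number, matching the machine check above.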
https://math.stackexchange.com/questions/1185603/normal-subgroup-of-normal-subgroup
# Normal subgroup of Normal subgroup If $H$ is a normal subgroup of $K$ and $K$ is a normal subgroup of $G$, can we say that $H$ is a normal subgroup of $G$? I could not prove it, and I cannot find a suitable counterexample. Will the result hold for $G$ abelian? If not, what would be a counterexample? • Concerning the second question: in abelian groups all subgroups are normal, so this will hold. – lisyarus Mar 11 '15 at 17:42 • Okay, could $S_{3}$ be a counterexample for the first part...? – Madhu Mar 11 '15 at 17:45 • You have three groups in the question - $H \triangleleft K \triangleleft G$. To provide a counterexample, you should specify three groups. – lisyarus Mar 11 '15 at 17:46 It is not true that $H \lhd K \lhd G$ implies $H \lhd G$ (although the stronger condition that $H \text{ char } K\ \lhd G$ will force $H \lhd G$). A good tactic would be to choose a group $G$ with an abelian normal subgroup $K$. Any subgroup of $K$ must be abelian, hence normal in $K$. Your job is to find a subgroup of $K$ that's not normal in $G$. The alternating group on $4$ letters, $A_4$, is a good choice for $G$. There, you can find such a chain $H \lhd K \lhd G$ with $H \not\lhd G$. To find a counterexample, we need a non-abelian group (every subgroup of an abelian group is normal). Unfortunately, $S_3$ (the smallest non-abelian group) doesn't work, since it has only one non-trivial proper normal subgroup, $A_3$, and $A_3$ is simple. The next smallest non-abelian group is $D_4$, the dihedral group of order $8$. Here, we have two promising subgroups, $V = \{e,r^2,s,r^2s\}$ (where $r$ is a rotation of order $4$ and $s$ is some reflection), and $\langle r\rangle = \{e,r,r^2,r^3\}$; both of these are of index $2$, so are automatically normal. So these look good for $K$. Clearly, we want a subgroup of order $2$ for $H$. Now $\{e,r^2\}$ is a subgroup of both possible $K$'s, but this isn't so good, since $r^2$ is central in $D_4$ (it commutes with everything).
Since that's the ONLY subgroup of $\langle r\rangle$ of order $2$, we focus instead on $K = V$. It has another subgroup of order $2$, $\{e,s\}$ (we could also use $\{e,r^2s\}$; try it yourself and see!). Now: $rsr^{-1} = rsr^3 = r(sr)r^2 = r(r^3s)r^2 = sr^2 = (sr)r = (r^3s)r = r^3(sr) = r^3(r^3s) = r^2s \not \in \{e,s\}$, so that $rHr^{-1} \neq H$ for $H =\{e,s\}$, that is, $H$ is not normal in $D_4$, even though $H$ is normal in $V$, and $V$ is normal in $D_4$. Unless I am mistaken, this is the minimal counterexample (in other words, look at small groups first, before trying large and difficult ones). If $G$ is abelian then every one of its subgroups is normal: $gK = Kg$ for every subgroup $K$ and every $g \in G$. Because $H$ is also a subgroup of $G$, the answer is yes, and the proof is straightforward. • Try $G=S_4$, $K=\{(12)(34),(13)(42),(23)(41),e \}$ and $H = \langle (12)(34) \rangle$. You can show that $K$ is normal in $G$ (it might be seen more easily by realizing that $K$ is isomorphic to $V_4$) and you can see that $H$ is normal in $K$. But is $H$ normal in $G$? – graydad Mar 11 '15 at 18:47
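The $D_4$ counterexample above can be verified directly by brute force, representing group elements as permutations of the square's vertices (a quick sketch; the particular encoding of $r$ and $s$ as vertex permutations is my own choice, not from the answer):

```python
from itertools import product

def compose(p, q):
    # permutation composition: (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

e = (0, 1, 2, 3)   # identity on the square's vertices 0..3
r = (1, 2, 3, 0)   # rotation by a quarter turn
s = (0, 3, 2, 1)   # reflection across the 0-2 diagonal

# generate D4 as the closure of {e, r, s} under composition
G = {e, r, s}
while True:
    new = G | {compose(a, b) for a, b in product(G, G)}
    if new == G:
        break
    G = new

r2 = compose(r, r)
V = {e, r2, s, compose(r2, s)}   # the Klein four-subgroup {e, r^2, s, r^2 s}
H = {e, s}

def is_normal(H, G):
    # H is normal in G iff g h g^-1 stays in H for all g in G, h in H
    return all(compose(compose(g, h), inverse(g)) in H
               for g in G for h in H)

print(len(G))             # 8: the whole dihedral group D4
print(is_normal(H, V))    # True:  H is normal in V
print(is_normal(V, G))    # True:  V is normal in D4 (index 2)
print(is_normal(H, G))    # False: H is NOT normal in D4
```

Conjugating $s$ by $r$ lands on $r^2s$, exactly as computed by hand in the answer, which is what makes the last check fail.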
http://beerlak.hu/homemade-ice-abazy/what-is-pascal%27s-triangle-644526
The first thing we need to do on our quest to discover Pascal’s triangle is figure out how many possible outcomes there are when tossing 1 and 2 coins at the same time. The numbers in Pascal's Triangle are the … Welcome; Videos and Worksheets; Primary; 5-a-day. Pascal’s triangle is a never-ending equilateral triangle of numbers that follow a rule of adding the two numbers above to get the number below. © 2021 Scientific American, a Division of Springer Nature America, Inc. Support our award-winning coverage of advances in science & technology. Using Factorial; Without using Factorial; Python Programming Code To Print Pascal’s Triangle Using Factorial. 2=1+1, 4=3+1, 21=6+15, etc. Pascal's triangle contains the values of the binomial coefficient . answer choices . Plus, I only just noticed the link to further explanations so it’s even more exciting.Great post. 0. There is a nice calculator on this page that you can play with in order to see the Pascal's triangle for up to 99 rows. We can display the pascal triangle at the center of the screen. The triangle was studied by B. Pascal, although it had been described centuries earlier by Chinese mathematician Yanghui (about 500 years earlier, in fact) and the Persian astronomer-poet Omar Khayyám. 1. Corbettmaths Videos, worksheets, 5-a-day and much more. Stay tuned because that’s exactly what we’re talking about today. Step 1: Draw a short, vertical line and write number one next to it. The number of possible configurations is represented and calculated as follows: 1. 0. It goes like this- Instead of choosing the numbers directly from the triangle we think each number as a part of a decimal expansion i.e. Yes, it is. When you look at Pascal's Triangle, find the prime numbers that are the first number in the row. I.e., I need a way to efficiently compute the following sequences: – 1 – 1 1 – 1 2 – 1 3 1 – 1 4 3 – 1 5 6 1 – 1 6 10 4 – 1 7 15 10 1 – …. Now let's take a look at powers of 2. Well, 1 of them. Code Breakdown . 
1 1 1 1 2 1 1 3 3 1 1 4 6 4 1 1 5 10 10 5 1 This is a node in the map and I think what are the different ways that I can get to this node on the map. We have already discussed different ways to find the factorial of a number. One of the famous one is its use with binomial equations. What number is at the top of Pascal's Triangle? for(int i = 0; i < rows; i++) { The next for loop is responsible for printing the spaces at the beginning of each line. What is remarkable is to find how each number fits in perfect order inside the triangular matrix to produce all those amazing mathematical relationships. 204 and 242).Here's how it works: Start with a row with just one entry, a 1. Uh, yes it is Harvey. Carwow, best-looking beautiful cars and the golden ratio. Pascal's Triangle is an arithmetical triangle you can use for some neat things in mathematics. a^7+a^6*b+a^5*b^2+a^4*b^3+a^3*b^4+a^2*b^5+a*b^6+b^7. Every number below in the triangle is the sum of the two numbers diagonally above it to the left and the right, with positions outside the triangle counting as zero. Your calculator probably has a function to calculate binomial coefficients as well. In mathematics, the Pascal's Triangle is a triangle made up of numbers that never ends. 264. Some Important things to notice The first row starts with 1. Here's how you construct it: 1 1 1 1 2 1 1 3 3 1 1 4 6 4 1 1 5 10 10 5 1 1 6 15 20 15 6 1 1 7 21 35 35 21 7 1 . answer choices . One of the famous one is its use with binomial equations. Thanks this helped SOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO MUCH. Using pascals triangle to calculate combinations - Duration: 6:12. What does it mean when it says “the numbers on the diagonals add to the Fibonacci series”. It will run ‘row’ number of times. n C r has a mathematical formula: n C r = n! Of course, when we toss a single coin there are exactly 2 possible outcomes—heads or tails—which we’ll abbreviate as “H” or “T.” How many of these outcomes give 0 heads? 
The Parthenon and the Golden Ratio: Myth or Misinformation? Correction made to the text above. What number can always be found on the right of Pascal's Triangle. Generally, on a computer screen, we can display a maximum of 80 characters horizontally. Pascal's Triangle thus can serve as a "look-up table" for binomial expansion values. Joel Speranza Math 13,367 views. He had used Pascal's Triangle in the study of probability theory. The Corbettmaths Practice Questions on Pascal's Triangle for Level 2 Further Maths. 1. The Corbettmaths Practice Questions on Pascal's Triangle for Level 2 Further Maths You just carry the tens digit into the previous column, ****11^5=161051 is different than 15101051*** 1,5,10,10,5,1 1(5+1)(0+1)051 1(6)(1)051. Thanks for the visual! And not only is it useful, if you look closely enough, you’ll also discover that Pascal’s triangle contains a bunch of amazing patterns—including, kind of strangely, the famous Fibonacci sequence. Struggling Ravens player: 'My family is off limits' McConaughey responds to Hudson's kissing insult To construct the Pascal’s triangle, use the following procedure. Pascals Triangle. Also, many of the characteristics of Pascal's Triangle are derived from combinatorial identities; for example, because , the sum of the values on row of Pascal's Triangle is . World finally discovers one thing 'the Rock' can't do. After using nCr formula, the pictorial representation becomes: Pascal’s triangle arises naturally through the study of combinatorics. All values outside the triangle are considered zero (0). expand (x-2y)^5 ^5 means to the 5th power. As a square rows and columns represent negative powers of 9 (10-1). Wonderful video. Subscribers get more award-winning coverage of advances in science & technology. Tags: Question 7 . Let's add together the numbers on each line: 1st line: 1; 2nd line: 1; 3rd line: 1 + 1 = 2; 4th line: 1 … Step 3: Connect each of them to the line above using broken lines. Almost correct, Joe. 
Step 2: Draw two vertical lines underneath it symmetrically. Tags: Question 8 . Pascal Triangle in Java at the Center of the Screen. A bit of modification in the horizontal representation resulting in powers of 11 can turn it into a general formula for any power . There are documents showing it was already known by the Chinese and Indian People a long time before the birth of Pascal. Finding your presentation and explanation of Pascal’s Triangle was very interesting and its analysis amusing. It’s probably partly due to cultural biases, and partly because his investigations were the most extensive and well organized. Ohhhhh. In order to solve the problem, I need a way to compute the diagonals shown above in a computationally efficient way. Return the total number of ways you can paint the fence. In mathematics, Pascal's triangle is a triangular array of the binomial coefficients.In much of the Western world, it is named after the French mathematician Blaise Pascal, although other mathematicians studied it centuries before him in India, Persia (Iran), China, Germany, and Italy.. Pascals Triangle Although this is a pattern that has been studied throughout ancient history in places such as India, Persia and China, it gets its name from the French mathematician Blaise Pascal . Take a look at the diagram of Pascal's Triangle below. That prime number is a divisor of every number in that row. answer choices . Powers of 2. Eddie Woo Recommended for … some secrets are yet unknown and are about to find. The numbers on each row are binomial coefficients. Pascals Triangle × Sorry!, This page is not available for now to bookmark. So why is it named after him? It turns out that people around the world had been looking into this pattern for centuries. Pascal's Triangle. Pascal's Triangle is a mathematical triangular array.It is named after French mathematician Blaise Pascal, but it was used in China 3 centuries before his time.. Pascal's triangle can be made as follows. 
Tags: Question 7 . Pascal's Triangle. Naturally, a similar identity holds after swapping the "rows" and "columns" in Pascal's arrangement: In every arithmetical triangle each cell is equal to the sum of all the cells of the preceding column from its row to the first, inclusive (Corollary 3). Rows & columns represent the decimal expension of powers of 1/9 (= o.111111 ; 1/81 = 0,0123456 ; 1/729 = 0.00136.). Pascal’s triangle is a pattern of the triangle which is based on nCr, below is the pictorial representation of Pascal’s triangle. What number is at the top of Pascal's Triangle? The Pascal's triangle, named after Blaise Pascal, a famous french mathematician and philosopher, is shown below with 5 rows. 6:12. Pascal's Triangle is a mathematical triangular array.It is named after French mathematician Blaise Pascal, but it was used in China 3 centuries before his time.. Pascal's triangle can be made as follows. Pascal's triangle is a number triangle with numbers arranged in staggered rows such that (1) where is a binomial coefficient. But for small values the easiest way to determine the value of several consecutive binomial coefficients is with Pascal's Triangle: The rows of Pascal's triangle are conventionally enumerated starting with row n = 0 at the top (the 0th row). The horizontal rows represent powers of 11 (1, 11, 121, 1331, 14641) for the first 5 rows, in which the numbers have only a single digit. Pascal's triangle synonyms, Pascal's triangle pronunciation, Pascal's triangle translation, English dictionary definition of Pascal's triangle. / ((n - r)!r! Pascal's triangle is a triangular array constructed by summing adjacent elements in preceding rows. Each number is … I could have a y squared, and then multiplied by an x. The outer most for loop is responsible for printing each row. Perhaps you can find what you seek at Pascal’s Triangle at Wikipedia. answer choices . 
In the twelfth century, both Persian and Chinese mathematicians were working on a so-called arithmetic triangle that is relatively easily constructed and that gives the coefficients of the expansion of the algebraic expression (a + b) n for different integer values of n (Boyer, 1991, pp. I love approaching art and degisn from a maths and scientific angle and this illustrates that way of working perfectly. However, this triangle became famous after the studies made by this French philosopher and mathematician in 1647. (using 1/99…. The triangle was actually invented by the Indians and Chinese 350 years before Pascal's time. Joel Speranza Math 13,367 views. That’s where Pascal’s triangle comes in… so (a+b)^7 = 1*a^7 + 7*a^6*b + 21*a^5*b^2 + 35*a^4*b^3 + 35*a^3*b^4 + 21*a^2*b^5 + 7*a*b^6 + 1*b^7. 30 seconds . Notify me of follow-up comments by email. Magic 11's. Similiarly, in … Table of Contents . Register free for online tutoring session to clear your doubts This arrangement is done in such a way that the number in the triangle is the sum of the two numbers directly above it. 1. > Continue reading on QuickAndDirtyTips.com. The triangle follows a very simple rule. For this, just add the spaces before displaying every row. n C r has a mathematical formula: n C r = n! See the illustration. On the first row, write only the number 1. Pascal Triangle. if you see each horizontal row as one number (1,11,121,1331 etc.) Pascal’s triangle, in algebra, a triangular arrangement of numbers that gives the coefficients in the expansion of any binomial expression, such as (x + y) n. It is named for the 17th-century French mathematician Blaise Pascal, but it is far older. Half of … As Heather points out, in binomial expansion. 5-a-day GCSE 9-1; 5-a-day Primary; 5-a-day Further Maths; 5-a-day GCSE A*-G; 5-a-day Core 1; More. We keep calling this pattern “Pascal’s triangle,” but who is that? 
The first row is 0 1 0 whereas only 1 acquire a space in pascal's triangle… Thank you so much..!!! We will discuss two ways to code it. Why is that an interesting thing to do? 5. The horizontal rows represent powers of 11 (1, 11, 121, 1331, 14641) for the first 5 rows, in which the numbers have only a single digit. 255. ), see Theorem 6.4.1. If the top row of Pascal's Triangle is row 0, then what is the sum of the numbers in the eighth row? Let’s go over the code and understand. Generally, on a computer screen, we can display a maximum of 80 characters horizontally. Pascal’s triangle is a triangular array of the binomial coefficients. On the first row, write only the number 1. it will show the powers of 11 just carry on the triangle and you should be able to find whatever power of 11 your looking for, Carry over the tens, hundreds etc so 1 5 10 10 5 1 becomes 161051 and 1 6 15 20 15 6 1 becomes 1771561. Where is it? - Duration: 14:22. 260. The first row is 0 1 0 whereas only 1 acquire a space in pascal's triangle, 0s are invisible. n. A triangle of numbers in which a row represents the coefficients of the binomial series. Pascal's triangle synonyms, Pascal's triangle pronunciation, Pascal's triangle translation, English dictionary definition of Pascal's triangle. Half of 80 is 40, so 40th place is the center of the line. Example: Input: N = 5 Output: 1 1 1 1 2 1 1 3 3 1 1 4 6 4 1 . Jason Marshall, PhD, is a research scientist, author of The Math Dude's Quick and Dirty Guide to Algebra, and host of the Math Dude podcast on Quick and Dirty Tips. I used to get ideas from here. n!/(n-r)!r! One of the best known features of Pascal's Triangle is derived from the combinatorics identity . This website is so useful!!! Thanks. After that it has been studied by many scholars throughout the world. 204 and 242).Here's how it works: Start with a row with just one entry, a 1. n!/(n-r)!r! 
Before looking for patterns in Pascal’s triangle, let’s take a minute to talk about what it is and how it came to be. Method 1: Using nCr formula i.e. Example: Input: N = 5 Output: 1 1 1 1 2 1 1 3 3 1 1 4 6 4 1 Method 1: Using nCr formula i.e. . 3. Similarly, the forth line is formed by sum of 1 and 2 in an alternate pattern and so on. For example, imagine selecting three colors from a five-color pack of markers. The numbers on diagonals of the triangle add to the Fibonacci series, as shown below. As you’ll recall, this triangle of numbers has a 1 in the top row and 1s along both edges, and each subsequent row is built by adding pairs of numbers from the previous. Now I get it! This is a node in the map and I think what are the different ways that I can get to this node on the map. Before looking for patterns in Pascal’s triangle, let’s take a minute to talk about what it is and how it came to be. How Does Geometry Explain the Phases of the Moon. there are alot of information available to this topic. = 11^2 . 3. 264. Ideally, to compute the nth sequence would require time proportional to n. One way that this could be achieved is by using the (n-1)th sequence to compute the nth sequence. Required fields are marked *. Pascal's triangle, I always visualize it as a map. Well, Pascal was a French mathematician who lived in the 17th century. See below for one idea: One use of Pascal's Triangle is in its use with combinatoric questions, and in particular combinations. Pascal’s triangle is a triangular array of the binomial coefficients. Struggling Ravens player: 'My family is off limits' McConaughey responds to Hudson's kissing insult All values outside the triangle are considered zero (0). Pascal Triangle in Java at the Center of the Screen. Following are the first 6 rows of Pascal’s Triangle. Discover world-changing science. Hi, Can you explain how Pascal’s triangle works for getting the 9th & 10th power of 11 and beyond? 
The second row is acquired by adding (0+1) and (1+0). Although the triangle is named after Blaise Pascal, the famous 17th-century French mathematician and philosopher, he was not its original author: it had been studied by many scholars throughout the world long before him. Pascal used the triangle in the study of probability theory, and it turns out that Pascal’s triangle is not a one-trick pony — it’s useful for a surprising number of things.

Each row gives the coefficients of a binomial expansion. For instance, (X+Y)^4 = 1 XXXX + 4 XXXY + 6 XXYY + 4 XYYY + 1 YYYY, where the coefficients (1, 4, 6, 4, 1) are the fourth row of Pascal’s triangle. Reading a row as decimal digits explains the powers of 11: 1 2 1 = (1 x 100) + (2 x 10) + (1 x 1) = 11^2. More generally, the triangle is used in various algebraic processes, such as finding tetrahedral and triangular numbers, powers of two, exponents of 11, squares, Fibonacci sequences, combinations, and polynomial coefficients. A similar identity holds after swapping the "rows" and "columns" in Pascal's arrangement: in every arithmetical triangle, each cell is equal to the sum of all the cells of the preceding column from its row to the first, inclusive (Corollary 3).
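The powers-of-11 pattern — a row read as base-10 digits, carrying the tens and hundreds when an entry has more than one digit — follows from the binomial theorem with x = 10, so it can be verified mechanically. A small sketch of mine (not code from the article); treating each entry as a coefficient of a power of 10 lets ordinary integer addition do the carrying:

```python
def pascal_row(n):
    # Build row n multiplicatively: C(n, k+1) = C(n, k) * (n - k) / (k + 1).
    row = [1]
    for k in range(n):
        row.append(row[-1] * (n - k) // (k + 1))
    return row

def row_with_carrying(n):
    # Entry at distance i from the right is the coefficient of 10^i;
    # summing handles all the carrying automatically.
    return sum(entry * 10 ** i for i, entry in enumerate(reversed(pascal_row(n))))
```

For example, `row_with_carrying(5)` gives 161051 and `row_with_carrying(6)` gives 1771561, matching the carried values quoted above.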
Pascal's triangle — also known as Khayyam's triangle, Yang Hui's triangle, or Tartaglia's triangle, after earlier scholars who studied it — never ends: the two sides of the triangle run down with all 1's, and there is no bottom side. Every number is the sum of the two numbers diagonally above it to the left and the right, with positions outside the triangle counting as zero, and each row builds on the previous one. Although it is known as Pascal's triangle, apparently Pascal himself wrote it as a square: the usual triangle, but with parallel, oblique lines added to it, each cutting through several numbers.

The diagonals hide the Fibonacci series. Left-justify the triangle's diagonals as rows, shifting each successive diagonal two places to the right, and the columns sum to the Fibonacci numbers:

1 1 1 1 1 1 1 …
0 0 1 2 3 4 5 …
0 0 0 0 1 3 6 …
0 0 0 0 0 0 1 …
------------------ +
1 1 2 3 5 8 13 …

There is a decimal echo of this pattern: 1/9 = 0.111111…, 1/81 = 0.0123456…, 1/729 = 0.00137…, and adding up the series 1/9 + 1/81 + 1/729 + … approaches 1/8 = 0.125 (0.1249999…). Similarly, the infinite sum of the negative powers of 90 gives 1/89, whose decimal expansion encodes the diagonal sums of Pascal's triangle — that is, the Fibonacci numbers. Another application expresses a Fibonacci number through earlier ones using Pascal's coefficients: 21 = 1(8) + 1(13) = 1(3) + 2(5) + 1(8) = 1(1) + 3(2) + 3(3) + 1(5) = 1(0) + 4(1) + 6(1) + 4(2) + 1(3); and, with alternating-sign Fibonacci numbers, 0 = 1(1) + 1(-1) = 1(-1) + 2(2) + 1(-3) = 1(2) + 3(-3) + 3(5) + 1(-8) = 1(-3) + 4(5) + 6(-8) + 4(13) + 1(-21).

The digit tricks work for powers greater than 5 as well — 1 6 15 20 15 6 1 = 11^6 after carrying — and the numbers in row 4, which are 1, 4, 6, 4, and 1, give 11^4 = 14,641. You can also find Sierpinski's triangle by marking all the odd numbers. Readers sometimes ask for a general expression Tn for the Fibonacci numbers inside Pascal's triangle; the diagonal-sum identity F(n+1) = sum over k of C(n-k, k) is exactly that. The triangle is one of the classic examples taught to engineering students, and while it would be fantastic if the nth row could be computed in constant time, in practice linear time per row is the standard.
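The claim that the shallow diagonals sum to the Fibonacci numbers is easy to verify directly: the nth diagonal sum is the sum of C(n-k, k) over k. A quick check of mine, not code from the article:

```python
import math

def diagonal_sum(n):
    # Sum the entries C(n - k, k) along the nth shallow diagonal.
    return sum(math.comb(n - k, k) for k in range(n // 2 + 1))
```

The first few values come out as 1, 1, 2, 3, 5, 8, 13, 21, matching the Fibonacci series.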
Adding any two successive numbers in the diagonal 1-3-6-10-15-21-28… results in a perfect square (4, 9, 16, 25, etc.). In chemistry, Pascal’s triangle is a graphical device used to predict the ratio of heights of lines in a split NMR peak. The entries also separate combinations from permutations: the order the colors are selected doesn’t matter for choosing which to use on a poster, but it does for choosing one color each for Alice, Bob, and Carol. A related counting puzzle that often appears alongside the triangle: you have to paint a fence of n posts with k colors such that no more than two adjacent fence posts have the same color. For input n = 3, k = 2 the output is 6 — with c1 and c2 as the colors, the valid paintings are c1c1c2, c1c2c1, c1c2c2, c2c1c1, c2c1c2, and c2c2c1.

This triangle became famous after the studies made by the French philosopher and mathematician in 1647, and soon after he published his 1653 book on the subject, “Pascal’s triangle” was born. Some of its secrets are yet unknown and still being found. Your calculator probably has a function to calculate binomial coefficients as well, and a standard programming exercise remains: write a function that takes an integer value n as input and prints the first n lines of Pascal’s triangle.
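The fence-painting puzzle quoted above (n posts, k colors, no more than two adjacent posts the same color) has a standard dynamic-programming solution. The recurrence below is my own framing of that classic problem, not something taken from the article: track separately the colorings whose last two posts match and those whose last two differ.

```python
def paint_fence(n, k):
    # same: colorings where the last two posts share a color
    # diff: colorings where the last two posts differ
    if n == 0:
        return 0
    if n == 1:
        return k
    same, diff = k, k * (k - 1)
    for _ in range(3, n + 1):
        # A matching pair can only follow a differing pair.
        same, diff = diff, (same + diff) * (k - 1)
    return same + diff
```

With n = 3 and k = 2 this returns 6, matching the enumeration above.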
https://math.stackexchange.com/questions/1226089/is-expx-the-same-as-ex/1226100
Is $\exp(x)$ the same as $e^x$? For homework I have to find the derivative of $\text {exp}(6x^5+4x^3)$ but I am not sure if this is equivalent to $e^{6x^5+4x^3}$ If there is a difference, what do I do to calculate the derivative of it? • There is no difference; they are alternative notations for the same thing. Apr 8, 2015 at 21:33 • It is just a matter of notation. For instance, if the power is a long polynomial we use $\exp$ so we will not get confused. This notation helps avoiding mistake. Apr 8, 2015 at 22:07 • Why didn't you ask the person who set the homework? Apr 9, 2015 at 16:41 • why so many upvotes? Apr 9, 2015 at 17:20 • @zed111 At first I agreed with you - too many upvotes. But scanning the answers shows that it is in fact an interesting question, though perhaps more interesting than the OP knew. Apr 9, 2015 at 18:54 Yes. They are the same thing. When exponents get really really complicated, mathematicians tend to start using $\exp(\mathrm{stuff})$ instead of $e^{\mathrm{stuff}}$. For example: $e^{x^5+2^x-7}$ is kind of hard to read. So instead one might write: $\exp(x^5+2^x-7)$. Note: For those who use Maple or other computer algebra systems, e^x is not usually the same as exp(x). In Maple, e^x is the variable $e$ raised to the variable $x$ power whereas exp(x) is Euler's number $e$ raised to the $x$ power. • Agreed, they are two different ways of looking at the same thing... one way $e^x$ kind of the "mathy" way to view it, and $\text{exp}(x)$ is more the "programatical" way to view it. Apr 8, 2015 at 21:39 • @TravisJ It has nothing to do with programming and everything to do with the clarity of written mathematics, as the answer you're commenting on makes clear. Apr 9, 2015 at 10:38 • @DavidRicherby, by "programatical" I meant that $\text{exp}(x)$ emphasizes that you have a function, named exp() and the input to that function is $x$ (or whatever else...). 
It is a convenient side effect that functional notation is sometimes easier to read than other notation. Apr 9, 2015 at 12:45 • @TravisJ: Functions are about the "mathiest" thing there is; it's bizarre to log onto a math site and tell mathematicians that functions are "programatical" instead of "mathy". Apr 9, 2015 at 14:43 • @TravisJ: No one is objecting to your statement that "they are two different ways of looking at the same thing"; you don't need to defend it. The problem is with your contrast between "mathy" and "programatical"; they are both mathy. It's true that only one is of particular CS interest, but that's not relevant to mathematicians. (Your claim is like saying that 'good' and 'OK' are different perspectives, 'good' being kind of "Englishy" and 'OK' being more "Portuguesical", because Portuguese has borrowed only the latter.) Apr 9, 2015 at 15:33 Yes. The purpose for the notation $\exp$ is twofold: • It allows one to talk about the exponentiation function itself, without specifying a particular input. For example, one can write that $\exp$ is a homomorphism from the additive group on $\mathbb{R}$ to the multiplicative group on $\mathbb{R}$. One may also say that $\exp$ and $\log$ are inverses. • It allows you to write exponentiation without pushing the body of exponentiation into a superscript. For example, one may write the following, which is unwieldy to write without $\exp$ notation: $$\prod_i e^{x_i} = \exp \sum_i x_i$$ • Allow me to add to your last line that most physicists do not come across situations where $e$ might be ambiguous and therefore they use $e$ for exponentials (except when the argument gets too long). Apr 9, 2015 at 11:25 • Yes, I'm with @Hrodelbert: the statement that "exp" is preferred in physics seems unjustified. Apr 9, 2015 at 11:50 • Watch out, log(x) might resolve to log10(x) [or even log2(x)], not ln(x). Apr 9, 2015 at 16:59 • @Hrodelbert That last line was edited in. 
Removed it Apr 9, 2015 at 17:02 • Exp is a homomorphism from the additive to the multiplicative. You've written it backwards. Apr 9, 2015 at 19:36 As other answers say, in your homework (and, indeed, in most places in mathematics) there is no difference. I have seen a beginning textbook first defining a certain function $\exp(x)$, then proving certain properties of it, and finally using those properties to motivate calling it $e^x$. • That's how I "learned" about the exponential function in our maths lectures. We first introduced exp(x) as the power series, then proved that this equals some constant to the power of x, and then defined that constant as e. Apr 9, 2015 at 6:27 I agree with these two answers, but I want to add one thing: well-definedness. $e$ is some (positive) number, so (without knowing the function $\exp$) you can compute $e^n$ for $n \in \mathbb{N}$ – just multiply $e$ by itself $n$ times. You can also compute $e^{-n} = \frac{1}{e^n}$ and even $e^{\frac{p}{q}} = \sqrt[q]{e^p}$ (for $n, q \in \mathbb{N}, p \in \mathbb{Z}$). One can prove that the $\exp$ function yields the same numbers with these arguments. This justifies the notation. Although I agree with the answers already provided that in this situation (and indeed in most other ones in mathematics) there is no difference between the two notations, I would like to add the following for completeness: In manifold theory (most particularly Lie group theory or Riemannian geometry), the exponential map $\exp$ is a map from a tangent space to the manifold itself. For Lie groups, it expresses the local group structure and allows one to lift many problems from the group to the tangent space (the Lie algebra). It also defines integral curves on the manifold and is therefore related to geodesics (which is more obvious from the viewpoint of Riemannian geometry). This exponential $\exp$ coincides with the usual exponential for the case of the Lie group $\mathbb{R}$.
It also coincides with the definition of the matrix exponential $$e^A = \sum_{n=0}^\infty\frac{A^n}{n!}.$$ However, I believe this cannot be done in general, although I do not have an example available. • I am a bit surprised by the final sentence. Can you give an example of a situation where $e^t$ is defined but different from $\exp(t)$? Apr 9, 2015 at 13:45 • @MarcvanLeeuwen No I cannot. I guess what I remembered is that for disconnected Lie groups, $\exp$ does not cover the entire group, implying that not all group elements have an exponential representation. This, as you rightly question, has nothing to do with the mapping $t\mapsto e^t$. Nevertheless, I presume there do exist Lie groups for which the exponential mapping cannot be interpreted as a version of the map $t\mapsto e^t$, are there not? Apr 9, 2015 at 15:39 • Yes, there are non-linear Lie groups, groups that cannot be embedded in any $GL(n,\Bbb R)$, and they still have an $\exp$ map, but writing $e^t$ is problematic as it cannot be defined as a matrix exponential. By the way I think the notion of $\exp$ is just more general, and one could do away with the notation $e^x$ (and the cumbersome constant $e$) altogether by always writing $\exp(x)$ instead; for instance in case of a matrix exponential $\exp(A)$ seems much less confusing to me than $e^A$. Apr 9, 2015 at 15:44 • I agree with your statement that $\exp$ is simply more general. I guess that was the main point I was trying to convey here. Thank you for your comments, they were enlightening. Apr 9, 2015 at 19:31 While both expressions are generally the same, $\exp(x)$ is well-defined for a really large slurry of argument domains via its series: $x$ can be complex, imaginary, or even square matrices. The basic operation of exponentiation implicated by writing $e^x$ tends to have ickier definitions, like having to think about branches when writing $e^{1\over2}$ or at least generally $a^b$.
Exponentiation can be replaced by using $\exp$ and $\ln$ together via $a^b=\exp(b\ln a)$, and the ambiguities arise from the $\ln$ part of the replacement. So it can be expedient to work with just $\exp$ when the task does not require anything else. Informally, $e^x$ is used equivalently to $\exp(x)$ anyway but the latter is really more fundamental and well-defined. • Are you saying $e^x$ is defined over a smaller range than $\exp(x)$, for instance that the former is not defined when $x$ is complex? If so, have you evidence for that? I thought $e^{i\pi} = -1$ was a reasonably well known identity. Apr 9, 2015 at 8:01 • @abligh Carl Mummert points out that if you interpret $e^x$ as $\exp(x\log e)$ and are not careful with the branch of log, you can get undesirable results such as $e^{i\pi}=-\exp(2n\pi^2)$ for any $n\in\Bbb Z$. Although "multivalued functions" can sometimes be useful, we don't want that sort of thing to happen in the definition of $\exp$ since it is already analytic. Apr 14, 2015 at 21:02 Another reason we use $\exp(x)$ is when defining it in terms of its power series. At that point, we don't know that $\exp(x)=e^x$ when $x$ is real. Also, there are general problems using the notation $a^z$ when $z$ is complex. $a^z$ is actually a multi-valued function. $e^z$ is thus sometimes ambiguous, so we will, in those cases, prefer $\exp(z)$ to clarify that we are talking about the single-valued function. • Why would $a^z$ be multivalued? If $a>0$, we define $a^z = e^{z \cdot \log a}$. Apr 9, 2015 at 20:28 • What if $a$ isn't real? @goblin To write $x^y$ in general, you require multivalued functions or some horrible branch cut. Apr 9, 2015 at 20:29 • Being real isn't strong enough; we want $a \in \mathbb{R}_{>0}$. Otherwise, you don't write $a^z$ without explaining to your reader explicitly what your notation means. 
If $a$ isn't in $\mathbb{R}_{>0},$ I would advocate writing $a^z_\mathbf{C}$ for whatever version of the exponential you need, where $a^z_\mathbf{C}$ is interpreted as $\mathbf{C}(a,z)$ and $\mathbf{C}$ is understood to be a 2-place function. Apr 9, 2015 at 20:43 • But that's my point, $a^z$ is ambiguous, in general, depending on which branch of $\log a$ you take. As I said, it is a multi-valued function. Apr 9, 2015 at 20:44 • The optimal convention surely is that $a^z$ is unambiguous, we only write $a^z$ when $a>0$, and we decorate the notation in some way whenever we want to extend so that $a$ can be a negative or complex number. Apr 9, 2015 at 20:47 We use exp(Θ) for discrete values. Matlab can only use exp() because everything is discrete in it. In reality, you can use e^Θ in your algebra because it is continuous. You can still consider (exp(jΘ) - exp(-jΘ)) / 2j to be sin Θ if you want discrete values.
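One answer above defines the matrix exponential by its power series, $e^A = \sum_{n\ge 0} A^n/n!$. As an illustration outside the original thread, the truncated series is simple to compute by hand; for the nilpotent matrix [[0, 1], [0, 0]] the series terminates after two terms, giving exactly [[1, 1], [0, 1]]. The helper names below are my own:

```python
def mat_mul(a, b):
    # Plain nested-loop matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def mat_exp(a, terms=25):
    # exp(A) = sum over n >= 0 of A^n / n!, truncated after `terms` terms.
    size = len(a)
    identity = [[1.0 if i == j else 0.0 for j in range(size)] for i in range(size)]
    result = [row[:] for row in identity]
    power = [row[:] for row in identity]
    factorial = 1.0
    for n in range(1, terms):
        power = mat_mul(power, a)
        factorial *= n
        result = [[result[i][j] + power[i][j] / factorial for j in range(size)]
                  for i in range(size)]
    return result
```

On the identity matrix this returns e on the diagonal, as expected, since the series reduces to the scalar one in each diagonal entry.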
https://mathematica.stackexchange.com/questions/115647/calculating-overlapped-area-between-random-circles/115650
# Calculating overlapped area between random circles

Suppose I am plotting random circles like in the following example:

m=125; g10=Graphics[{Table[Circle[pt[i],r],{i,m}]},Axes->True,PlotRange->{{-5000,5000},{-5000,5000}}]

Some of the circles may overlap. How can I calculate the whole overlapped area (integrate it) and the ratio of this overlapped area to the integrated circles' area (here 125*Pi*r^2)? • What are the values of pt, r and m – happy fish May 21 '16 at 11:47 • r=469. The value of m may change. If it is about 150 (for example), all the given solutions seem to compute very slowly. – Danny May 21 '16 at 15:36 • Could you please clarify what you mean by overlapping area? Do you mean the area overlapped by all circles (the intersection of all circles)? Or do you mean the area where any two circles overlap? Or do you simply want the sum of all unique regions covered by any circle? (I asked because r=469 is sort of an intermediate region where most circles overlap with one or two others, but there will be no intersection of all circles.) I would be glad to try and accelerate my answer based on your goal. – Rashid May 22 '16 at 15:54 • Thanks for your comment. I mean the overlapped area of all circles. Having 25 circles is fine, but 125 will take tens of minutes of calculation. Regards, Danny – Danny May 24 '16 at 18:42

m = 5; r = 3000;
Table[pt[i] = RandomReal[{-2000,2000},2], {i, m}];
circles = Table[Circle[pt[i], r], {i, m}];
disks = circles /. Circle -> Disk;
reg1 = RegionIntersection[disks];
area1 = RegionMeasure[reg1];
reg2 = RegionUnion[disks];
area2 = RegionMeasure[reg2];
g10 = Show[Graphics[{RandomColor[], #}& /@ circles, Axes-> True, PlotRange->{{-5000, 5000}, {-5000, 5000}}], RegionPlot[reg1], PlotLabel->Row[{"ratio = ", area1 / area2}]]

• great answer +1. It was interesting to see how this question could be interpreted in different ways, depending on the r value. – Rashid May 21 '16 at 12:46 • @Rashid, thank you for the upvote.
I chose the lazier/easier way out in interpreting the question :) – kglr May 21 '16 at 12:51 @kglr beat me to posting and has a much nicer answer, but since I interpreted the question slightly differently, I thought it might be helpful to post my answer too. (Edit: Updated based on comment by @JackLaVigne to accelerate by getting rid of empty intersections, see module below for timing.) Since OP says that "some of the circles may overlap", I assumed that $r$ might be much smaller than the PlotRange and that the goal was to find regions with any overlap (between 2 or more circles). First, we can construct and plot the set of allCircles (using Disks as opposed to Circles so that we can use the Area function later):

m = 125; r = 100;
pt[i_] := RandomReal[{-4000, 4000}, 2];
allCircles = Table[Disk[pt[i], r], {i, m}];
g10 = Graphics[allCircles, Axes -> True, PlotRange -> {{-5000, 5000}, {-5000, 5000}}]

Then, we can calculate the total area (sum of all circles): totalArea = Total[Map[Area, allCircles]]; (*3.92699*10^6*) Next, we can consider the RegionIntersection for all two-circle subsets (using @JackLaVigne's suggestion to get rid of empty intersections) and calculate the overlap/total ratio:

circlePairs = Map[RegionIntersection, Subsets[allCircles, {2}]] /. EmptyRegion[2] -> Nothing;
overlappingArea = Total[Map[Area, circlePairs]] (*99293.3 for the plotted case above*)
areaRatio = overlappingArea/totalArea (*0.0252848 for the plotted case above*)

This calculation assumes there are no overlapping regions shared by more than two circles. The circlePairs area would double count such regions, but that could be corrected by considering the overlaps of, for example, circleTriplets. EDIT Here is a Module form that runs in 6-10 seconds on my laptop for m=125 and r=100: overlapAreaModule[shapes_] := Module[{totalArea, shapePairs, overlappingArea, areaRatio}, totalArea = Total[Map[Area, shapes]]; shapePairs = Map[RegionIntersection, Subsets[shapes, {2}]] /.
EmptyRegion[2] -> Nothing; overlappingArea = Total[Map[Area, shapePairs]]; areaRatio = overlappingArea/totalArea; Print["Overlap ratio is ", areaRatio, "( ", overlappingArea, " / ", totalArea, " )"]; Print[Graphics[shapes, Axes -> True]]; {areaRatio, overlappingArea, totalArea}] m = 125; r = 100; pt[i_] := RandomReal[{-5000, 5000}, 2]; allCircles = Table[Disk[pt[i], r], {i, m}]; AbsoluteTiming[overlapAreaModule[allCircles]] • Upvote for awareness and sportsmanship – user9660 May 21 '16 at 13:15 • Thanks @Louis. I appreciate it. – Rashid May 21 '16 at 13:19 • @Rashid Great answer! I found the last step could be sped up if we compute a new quantity nonEmpty = circlePairs /. EmptyRegion[2] -> Nothing; and then use that in the computation of overlappingArea. The substitution is very fast and nonEmpty is much smaller than circlePairs. – Jack LaVigne May 22 '16 at 14:10 • Thanks @JackLaVigne, I will update the post to include that. Getting rid of the empty intersections like that really speeds things up. – Rashid May 22 '16 at 15:55
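For readers who want the pairwise overlap without Mathematica's region machinery, the area of the intersection of two circles (a circular "lens") has a closed form: two circular-segment terms minus a triangle term. The following plain-Python sketch is mine, not from the thread:

```python
import math

def circle_overlap_area(c1, r1, c2, r2):
    """Area of the intersection of circles (center c1, radius r1) and (c2, r2)."""
    d = math.dist(c1, c2)
    if d >= r1 + r2:            # circles are disjoint
        return 0.0
    if d <= abs(r1 - r2):       # one circle lies entirely inside the other
        return math.pi * min(r1, r2) ** 2
    # Standard two-circle "lens" formula.
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                          * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri
```

Summing this over all pairs of disks reproduces the `circlePairs` total from the answer above — and, like that answer, it double counts regions shared by three or more circles.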
https://gateoverflow.in/26479/tifr-cse-2011-part-a-question-19
Three dice are rolled independently. What is the probability that the highest and the lowest value differ by $4$? 1. $\left(\dfrac{1}{3}\right)$ 2. $\left(\dfrac{1}{6}\right)$ 3. $\left(\dfrac{1}{9}\right)$ 4. $\left(\dfrac{5}{18}\right)$ 5. $\left(\dfrac{2}{9}\right)$ 2/9 is correct. It's a beautiful question and indeed a beautiful solution @aritra nayak When should we count the permutations, and when not count them? Case 1: largest is $5$, smallest $1$ and middle is $2$ or $3$ or $4 : 3\times 3!$ Case 2: largest is $5$, smallest $1$ and middle is $1$ or $5 : \dfrac{3!\times 2}{2!}$ Case 3: largest is $6$, smallest $2$ and middle is $3$ or $4$ or $5 : 3\times 3!$ Case 4: largest is $6$, smallest $2$ and middle is $6$ or $2: \dfrac{3!\times 2}{2!}$ So, probability the highest and the lowest value differ by $4,$ $\quad=\dfrac{\left( 3\times 3!+\dfrac{3!\times 2}{2!}+3\times 3!+\dfrac{3!\times 2}{2!}\right)}{6^{3}} =\dfrac{2}{9}.$ Correct Option: E How did you get 3 * 3! and (3! * 2) / 2!? Can you please explain ... @shreya would you please explain how you got the 2nd case and the 4th? For case 1: (1, 2/3/4, 5) — for the middle place we have 3 choices and all places can be arranged in 3! ways, so the total is 3*3! ways. For case 2: (1, 1/5, 5) — for the middle place we have 2 choices and all places can be arranged in 3!/2! ways, so the total is 2*(3!/2!)
ways.

| S.no | Lowest value | Highest value | Values on the dice | No. of ways |
|------|--------------|---------------|--------------------|-------------|
| 1 | 1 | 5 | {1,1,5} | 3 |
| 2 | 1 | 5 | {1,2,5} | 6 |
| 3 | 1 | 5 | {1,3,5} | 6 |
| 4 | 1 | 5 | {1,4,5} | 6 |
| 5 | 1 | 5 | {1,5,5} | 3 |
| 6 | 2 | 6 | {2,2,6} | 3 |
| 7 | 2 | 6 | {2,3,6} | 6 |
| 8 | 2 | 6 | {2,4,6} | 6 |
| 9 | 2 | 6 | {2,5,6} | 6 |
| 10 | 2 | 6 | {2,6,6} | 3 |

By the naive definition of probability, Prob = (# outcomes which satisfy our constraint) / (total # of outcomes) = 48 / 6^3 = 48/216 = 2/9.

## The correct answer is (E) 2/9

### 1 comment

Plain and simple approach, but I guess thinking through and executing this will consume at least 5 minutes in GATE.

Suppose three dice A, B, C are rolled independently. We get a difference of 4 between the max and the min when some pair of dice shows either (1,5) or (2,6).

1st case: we get (1,5). The 1 and 5 can be the results of any two of the dice in 3C2 = 3 ways — (A,B), (B,C) or (C,A) — and they can also be permuted between those two dice, so 3C2*2! placements are possible. For each placement, the 3rd result can be 1, 2, 3, 4 or 5 (6 is not possible, since min = 1 and max = 5). But we cannot simply take 3C2*2!*5, because then (1,1,5), (1,5,1), (5,1,1), (1,5,5), (5,1,5), (5,5,1) are each counted twice. So we count as follows: when the 3rd result is 1, the number of ways = 3!/2! = 3 (the permutations of 1, 1 and 5); when the 3rd result is 5, the number of ways = 3!/2! = 3 (the permutations of 1, 5 and 5); when the 3rd result is 2, 3 or 4, 3C2*2!*3 = 18 ways are possible. So in total 18 + 3 + 3 = 24.

2nd case: we get (2,6). By the same argument, the 3rd result can be 2, 3, 4, 5 or 6 (min = 2 and max = 6). When the 3rd result is 2 or 6, the number of ways = 2*(3!/2!) = 6; when it is 3, 4 or 5, 3C2*2!*3 = 18 ways are possible. So in total 18 + 6 = 24 ways.
Finally, the total is 24 + 24 = 48 ways, and the sample space has 6^3 = 216 outcomes, so the probability is 48/216 = 2/9.

Correct Answer: $E$

It becomes simple the moment we enumerate all favourable ways.

Case 1: The minimum value on the dice is 1 and the maximum value is 5.

| Numbers on the 3 dice | Number of ways these 3 numbers can appear |
|-----------------------|-------------------------------------------|
| 1,1,5 | $\frac{3!}{2!}=3$ |
| 1,2,5 | $3!=6$ |
| 1,3,5 | $3!=6$ |
| 1,4,5 | $3!=6$ |
| 1,5,5 | $\frac{3!}{2!}=3$ |

Total favourable ways in this case $= 3+6+6+6+3 = 24$.

Case 2: The minimum value on the dice is 2 and the maximum value is 6.

Once 2 and 6 have appeared, the third number can be any one of $\{2,3,4,5,6\}$ so that the difference between the minimum and the maximum remains 4. This case is symmetrical to case 1: there are two possibilities where the number on the third die already appears on one of the other two dice, and three possibilities where the numbers on the three dice are all distinct, so this case also has 24 favourable ways.

Total favourable ways $= 48$. Total possible outcomes $= 6^3$.

Probability $=\frac{48}{216}=\frac{2}{9}$

nice ayush

nice aritra nayak

Thanks Ayush sir and aritra sir

There will be 2 possibilities for the highest and lowest values to differ by 4, i.e. (1,5) and (2,6), and they can be arranged in 2! ways. Among the 3 dice, 2 dice can be selected in 3C2 ways.

Case 1: for (1,5), the 3rd die's value can be anything from 1 to 5, but it cannot be 6, because then the difference could not be 4; so the probability is 3C2*2!*5 / 6^3.

Case 2: for (2,6), the 3rd die's value can be anything from 2 to 6, but it cannot be 1, because then the difference could not be 4; so the probability is 3C2*2!*5 / 6^3.

So the probability that the highest and the lowest value differ by 4 is 3C2*2!*5/6^3 + 3C2*2!*5/6^3 = 5/18.

Ans is (D)
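The case analysis can be confirmed by brute force over all $6^3$ outcomes (a quick sketch in Python):

```python
from fractions import Fraction
from itertools import product

# Enumerate all 6^3 = 216 equally likely outcomes of three dice and
# count those where the highest and lowest values differ by exactly 4.
favourable = sum(1 for roll in product(range(1, 7), repeat=3)
                 if max(roll) - min(roll) == 4)
total = 6 ** 3

print(favourable, total)            # 48 216
print(Fraction(favourable, total))  # 2/9
```

This also settles the disagreement above: the answer 5/18 double-counts outcomes such as (1,1,5), while direct enumeration gives 2/9.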
2023-01-30T15:38:15
https://gateoverflow.in/26479/tifr-cse-2011-part-a-question-19
https://www.johndcook.com/blog/fourier-theorems/
Fourier theorems under various conventions

There are several slightly different ways to define a Fourier transform. This means that when you look up a theorem about the Fourier transform you have to ask yourself which convention the source is using. All the common conventions can be summarized in the following definition, where m is either 1 or 2π, σ is +1 or -1, and q is 2π or 1:

$$F_{\sigma q m}(f)(\omega) = \frac{1}{\sqrt{m}} \int_{-\infty}^{\infty} f(x)\, e^{\sigma i q \omega x}\, dx$$

This means there are eight potential definitions, one for each choice of m, σ, and q, though only six of these are widely used. Still, that's six definitions! The differences are small, but they are annoying when you just want to quickly look something up.

We will refer to each definition by its choice of σ, q, and m. To make the notation slightly simpler, we will use τ = 2π. The eight possible Fourier transforms are then F+ττ, F+τ1, F+1τ, F+11, F-ττ, F-τ1, F-1τ, and F-11.

F-τ1 may be the most common definition. It is the convention used by the classic text by Stein and Weiss, and by Wikipedia. Other definitions are widely used as well. For example, Mathematica uses F+1τ, and probabilists use F+11 for "characteristic functions", which is what they call a Fourier transform.

These notes are a quick reference for translating between conventions. They are divided into three parts:

1. Converting between definitions
2. Comparison of results, organized by theorem
3. Comparison of results, organized by convention

The first section shows, for example, how to convert between the ordinary frequency Fourier transform (q = 2π) and the angular frequency Fourier transform (q = 1). The second and third sections have the same information, organized differently. The second section will take one theorem at a time and discuss how it varies according to the various conventions. The third section will take one convention at a time and restate all the theorems.

Converting between definitions

For a function f(x), let F(f)(ω) be its Fourier transform. You can convert between the eight possible definitions by applying three equations.
Here a * stands for any particular choice of a parameter, as long as the same choice is applied on both sides of the equation:

$$F_{+**}(f)(\omega) = F_{-**}(f)(-\omega)$$
$$F_{*\tau *}(f)(\omega) = F_{*1*}(f)(\tau\omega)$$
$$F_{**\tau}(f)(\omega) = \frac{1}{\sqrt{\tau}}\, F_{**1}(f)(\omega)$$

Another way to convert between conventions is to compare each to a single convention. Based on the assumption that F-τ1 is most common, I'll show how each of the others relates to it.

Comparisons organized by theorem

Integration

The integral of a function is its Fourier transform evaluated at 0. This is true for all conventions with m = 1. When m = τ, the transform at 0 equals the integral divided by √(2π), so an extra factor is needed.

Shifting

Shifting the argument of a function rotates its Fourier transform. The amount of rotation does not depend on the choice of scaling factor m, but does depend on the sign convention σ and the frequency convention q:

$$F_{\sigma q m}(f(\cdot - h))(\omega) = e^{\sigma i q h \omega}\, F_{\sigma q m}(f)(\omega)$$

Inversion

The inversion formulas are simplest when the frequency convention q is 2π and the scaling factor is 1, or vice versa. F+τ1 and F-τ1 are inverses of each other, as are F+1τ and F-1τ. The other conventions involve extra factors of 2π.

Parseval

Define the inner product of two functions to be the integral of their product over the real line. (Take the complex conjugate of the latter function if the functions are complex-valued.) Then Parseval's theorem says that the inner product of two functions equals the inner product of their Fourier transforms. This is true for definitions F+τ1, F-τ1, F+1τ, and F-1τ. For definitions F+11 and F-11 the inner product of the Fourier transforms is larger by a factor of 2π. For definitions F+ττ and F-ττ the inner product of the Fourier transforms is smaller by a factor of 2π.

Plancherel

Plancherel's formula is Parseval's formula with g = f. It says that a function and its Fourier transform have the same L² norm for definitions F+τ1, F-τ1, F+1τ, and F-1τ. For definitions F+11 and F-11 the norm of the Fourier transform is larger by a factor of √(2π). For definitions F+ττ and F-ττ the norm of the Fourier transform is smaller by a factor of √(2π).
Differentiation

The Fourier transform of the derivative of a function is a multiple of the Fourier transform of the original function. The multiplier is -σqiω, where σ is the sign convention and q is the angle convention; the scale convention m does not matter:

$$F(f')(\omega) = -\sigma q i \omega\, F(f)(\omega)$$

Convolution

The convolution of two functions is defined by

$$(f * g)(x) = \int_{-\infty}^{\infty} f(y)\, g(x - y)\, dy$$

The Fourier transform turns convolutions into products:

$$F(f * g) = \sqrt{m}\, F(f)\, F(g)$$

So for conventions with m = 1, the Fourier transform of the convolution is the product of the Fourier transforms. For the conventions with m = 2π there is an extra factor of √(2π).

Theorems organized by convention

(+, τ, τ)
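The Parseval scaling is easy to sanity-check numerically. The sketch below (assuming NumPy; the grid limits and step are arbitrary) applies the convention with σ = −1, q = 1, m = 1 to a Gaussian and confirms that the inner product of the transforms comes out larger by a factor of 2π:

```python
import numpy as np

# F_{-,1,1}: F(w) = ∫ f(x) e^{-i w x} dx  (sign -1, angular frequency, m = 1)
dx = 0.01
x = np.arange(-10, 10, dx)
w = np.arange(-10, 10, dx)

f = np.exp(-x**2)                       # Gaussian test function
F = np.array([np.sum(f * np.exp(-1j * wi * x)) * dx for wi in w])

norm_f = np.sum(np.abs(f)**2) * dx      # ∫ |f|^2 dx
norm_F = np.sum(np.abs(F)**2) * dx      # ∫ |F|^2 dw

ratio = norm_F / norm_f
print(ratio)    # ≈ 6.2832 ≈ 2π, the factor the Parseval comparison predicts
```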
2018-05-25T01:23:30
https://math.stackexchange.com/questions/3245485/why-can-linear-independence-of-eax-by2-be-proven-without-considering-y
# Why can linear independence of $e^{ax +by^2}$ be proven without considering $y$?

Linear independence of $$e^{at}$$ has been answered multiple times. My favorite answer is by Marc van Leeuwen in this one: Proof of linear independence of $e^{at}$. The answer uses the property that the $$e^{at}$$ are eigenfunctions of the differentiation operator.

Now if we instead have an exponential function in two variables, say $$e^{ax + by^2}$$, it seems to me the linear independence of these functions can also be proved with the same technique, except now using the partial derivative operator: proof by induction as in the link, except here $$e^{a_1x + b_1y^2}$$, $$e^{a_2x + b_2y^2}$$, $$...$$,$$e^{a_{n-1}x + b_{n-1}y^2}$$ are assumed linearly independent. As in Marc's proof, we then assume that $$e^{a_1x + b_1y^2}$$, $$e^{a_2x + b_2y^2}$$, $$...$$,$$e^{a_{n}x + b_{n}y^2}$$ are in turn dependent and thus have: $$e^{a_{n}x + b_{n}y^2}= c_1e^{a_1x + b_1y^2} + c_2e^{a_2x + b_2y^2} + ... + c_{n-1}e^{a_{n-1}x + b_{n-1}y^2}$$. Applying the operator $$\frac{\partial}{\partial x} - a_nI$$ will give $$0= c_1(a_1-a_n)e^{a_1x + b_1y^2} + c_2(a_2-a_n)e^{a_2x + b_2y^2} + ... + c_{n-1}(a_{n-1}-a_n)e^{a_{n-1}x + b_{n-1}y^2}$$ which would thus require all $c$ to be zero, which essentially completes the induction proof (other argumentation as per link).

My question is: why does $$y^2$$ not seem to have any effect on the linear independence? Where does this stem from?

edit: assume all $$a_k$$ and $$b_k$$ are distinct.

• You have to take into consideration that the linear independence you present is towards the variable $x$, judging from the operator you are using. A 2-variable function $f(x,y)$ would need a 2-dimensional operator. So your assumption is that $e^{ax+bx^2}$ is linear towards $x$, which is true (it's like considering $bx^2$ a constant) – Pookaros May 30 at 16:12

• I suspected something like this (assume you meant $by^2$).
I guess $\frac{\partial }{\partial x \partial y}$ is then what you'd suggest – Lulu May 30 at 16:16

• Also, despite seeing your point on an intuitive level, I fail to see the logic fully, since I don't see which part of my argumentation fails in my question. After all, $a_1f_1 + a_2f_2 + ... + a_nf_n = 0$ implies linear independence if all $a_i$ are 0, no matter what the $f_i$ are (no matter how many variables, e.g. second answer here mathhelpforum.com/calculus/…) – Lulu May 30 at 16:22

• Note that your functions are products: $e^{a_jx} e^{b_jy^2}$. If a linear combination $\sum c_j e^{a_jx} e^{b_jy^2}$ is zero then by fixing $y$ it follows that a linear combination of the $e^{a_jx}$ is zero. – Martin R May 30 at 16:28

• Good point. Am I right saying that then it would suffice to prove separately for each variable, i.e. using partial differentiation w.r.t. x and y each in turn – Lulu May 30 at 16:42

Your proof essentially repeats the proof that the functions $$e^{a_j x}$$ are linearly independent. The “$$y$$-terms” have “no effect” because they occur as (non-zero) factors $$e^{b_j y^2}$$ which are constant with respect to $$x$$.

What you observed is this: If $$g_1, \ldots, g_n : A \to \Bbb R$$ and $$h_1, \ldots, h_n : B \to \Bbb R$$ are functions such that

• $$g_1, \ldots, g_n$$ are linearly independent, and
• there is a $$y_0 \in B$$ such that $$h_j(y_0) \ne 0$$ for all $$j$$,

then the functions $$f_j : A \times B \to \Bbb R$$, $$f_j(x, y) = g_j(x) h_j(y)$$, $$j=1, \ldots, n$$, are linearly independent.

The proof is straightforward: If $$c_1, \ldots, c_n \in \Bbb R$$ with $$\sum_{j=1}^n c_j g_j(x) h_j(y) = 0 \text{ for all } (x, y) \in A \times B$$ then in particular $$\sum_{j=1}^n \bigl( c_j h_j(y_0) \bigr) g_j(x) = 0 \text{ for all } x \in A$$ Since the $$g_j$$ are linearly independent it follows that $$c_j h_j(y_0) = 0\text{ for } j = 1, \ldots, n \implies c_j = 0 \text{ for } j = 1, \ldots, n \,.$$

In your case $$g_j(x) = e^{a_j x}$$ and $$h_j(y) = e^{b_j y^2}$$.
• Ah, amazing! So actually my proof was then "correct" (which I thought it wasn't, based on Pookaros' comments above). I suppose his/her comment would be valid if x and y had a more complex relationship? – Lulu May 30 at 19:40
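The independence claim can also be probed numerically: evaluate the functions at more sample points than there are functions and check that the value matrix has full column rank. A sketch assuming NumPy, with arbitrary sample exponents $(a_j, b_j)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Distinct exponent pairs (a_j, b_j); note two pairs share the same a.
params = [(1.0, 0.5), (2.0, 0.5), (1.0, 1.5), (3.0, 2.0)]

# Evaluate f_j(x, y) = exp(a_j x + b_j y^2) at more sample points than functions.
pts = rng.uniform(-1, 1, size=(20, 2))
M = np.array([[np.exp(a * x + b * y**2) for (a, b) in params]
              for (x, y) in pts])

# Full column rank <=> no nontrivial linear combination vanishes at all
# sample points, which is what linear independence requires.
print(np.linalg.matrix_rank(M))  # 4
```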
2019-10-15T17:04:55
http://math.stackexchange.com/questions/60726/reducing-fractions
# Reducing fractions?

I want to reduce the two following fractions: $$\frac{2x + 2y}{x + y}$$ $$\frac{3ab^2}{12ab}$$ I fully understand the concept of reducing fractions of this type: $$\frac{15}{20}$$ but I do not know what steps to take for reducing fractions like the two above. Can anyone explain the steps needed, or point me to a website explaining it?

- For the first: factor out the 2 so you can see what to cancel. – J. M. Aug 30 '11 at 15:09 For the second: you know what $3/12$ is in lowest terms, and that $a/a=1$ and that $b^2/b=b$... – J. M. Aug 30 '11 at 15:10

For the first fraction: \begin{align} \frac{2x + 2y}{x + y} &= \frac{2(x + y)}{x + y} \\ &= 2 \text{ assuming } (x+y) \neq 0 \text{ and dividing both numerator and denominator by (x + y)} \end{align} For the second fraction: \begin{align} \frac{3ab^2}{12ab} &= \frac{3ab \times b}{3ab \times 4}\\ &= \frac{b}{4} \quad\text{ assuming } 3ab \neq 0 \text{ and dividing both numerator and denominator by (3ab)} \end{align} -

HINT $\ \ \$You need the knowledge of how to multiply two algebraic expressions and the distributive property (also factoring). So note how $2x + 2y = 2(x + y)$. For the second, note that you can express the fraction like so:$${3ab^2\over 12ab}= {3ab \times b\over 3ab \times 4}$$ -
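Both simplifications can be reproduced mechanically (a sketch assuming SymPy is available):

```python
from sympy import symbols, cancel, Rational

# positive=True guarantees the cancelled factors are nonzero.
x, y, a, b = symbols('x y a b', positive=True)

# Factor out the common 2 and cancel (x + y).
print(cancel((2*x + 2*y) / (x + y)))    # 2

# Cancel the common factor 3*a*b.
print(cancel((3*a*b**2) / (12*a*b)))    # b/4

# The purely numeric case works the same way.
print(Rational(15, 20))                 # 3/4
```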
2016-02-10T10:57:16
https://math.stackexchange.com/questions/1363381/finding-equation-of-parabola-when-focus-and-equations-of-two-perpendicular-tange/1363610
# Finding equation of parabola when focus and equations of two perpendicular tangents from any two points on the parabola are given

If the focus of a parabola and the equations of two perpendicular tangents at any two points $P$ and $Q$ on the parabola are given, can we find the equation of the given parabola? If not, what information can we get from the parabola? (Like the length or equation of the latus rectum, etc.)

If the above can be done, is there somewhat of a generalisation when the two tangents are inclined at an angle $\theta$ to each other? Any hint or a solution would be much appreciated.

• Are you saying that we know the points $P$ and $Q$ on the parabola, as well as the tangent lines? Or are you saying that we know the tangent lines, but not the specific points of tangency? – Blue Jul 16 '15 at 15:24

• Yes. Only the tangent lines are known. – user232216 Jul 16 '15 at 15:48

The reflection of a parabola's focus in any tangent line gives a point on the directrix. (Why?) Therefore, if you have any two tangents (regardless of the angle they make with one another), then you get two reflected foci, which in turn determine the directrix. With a focus and a directrix, you have a unique parabola. $\square$

• The reason could be that the vertex is the midpoint of the focus and directrix, and since you are taking a reflection, the slope of the tangent really doesn't matter. So if you consider the particular case of a tangent with slope equal to $-1/(\text{slope of axis})$, that is, the tangent at the vertex, you obviously get a point on the directrix. So a point on the directrix should satisfy the above. Is this correct? (Thank you for your hints everyone) – user232216 Jul 16 '15 at 18:31

• @user232216: I think you might be onto something. However, it might be easier to consider the reflection property of parabolas: a light ray from the focus ($F$) to the point of tangency ($T$) bounces off of the tangent line to become parallel to the axis (say, through some point $P$).
So, the normal line at $T$ bisects $\angle FTP$, which implies that the tangent line bisects $\angle FTQ$ (where $Q$ is the point where $TP$ meets the directrix). Then, since $|FT| = |TQ|$ (by the focus-directrix definition of the parabola), we see that $Q$ is the reflection of $F$ in the tangent line. – Blue Jul 16 '15 at 18:46

The two perpendicular tangents must intersect on the directrix. So if you know the directrix and you know the focus, you can find the equation of the parabola. You would probably need to know the axis of symmetry.

The useful property is: the tangents at the ends of any focal chord of a parabola intersect perpendicularly on its directrix. From this, find out the nice relation between the three slopes $$t_1, t, t_2.$$
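A quick numerical check of the reflection property (a sketch using the concrete parabola $y = x^2$, whose focus is $(0, 1/4)$ and directrix is $y = -1/4$): reflecting the focus in the tangent at any point of the parabola should land on the directrix.

```python
def reflect(px, py, a, b, c):
    """Reflect the point (px, py) across the line a*x + b*y + c = 0."""
    t = (a * px + b * py + c) / (a * a + b * b)
    return px - 2 * a * t, py - 2 * b * t

# Parabola y = x^2: focus (0, 1/4), directrix y = -1/4.
# Tangent at (t, t^2) is y = 2*t*x - t^2, i.e. 2*t*x - y - t^2 = 0.
for t in (1.0, -0.5, 2.0):
    qx, qy = reflect(0.0, 0.25, 2 * t, -1.0, -t * t)
    print(qy)   # -0.25 each time: the reflected focus lies on the directrix
```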
2019-10-18T03:34:23
https://math.stackexchange.com/questions/675998/evaluate-int-7-tan5x-sec2-x-dx
# Evaluate $\int 7\tan^5x\sec^2 x\,dx$

How do you evaluate this trigonometric integral: $$\int 7\tan^5x\sec^2 x\,dx$$? Please help. Thank you in advance for your help.

• Put $u = \tan x$ – Sandeep Thilakan Feb 14 '14 at 5:39

Given $\int 7\tan ^5 x \sec ^2 x \,dx$, let $\tan x=t$. Then $\sec ^2 x \,dx=dt$. Hence we must have $I=7\int t^5 \,dt= 7\frac{t^6}{6}+c=\frac{7}{6}\tan ^6 x+c$ where $c$ is a constant.

• How did you know that $\tan(x)$ should equal $t$? Why not $\sec(x)$? – rororo Feb 14 '14 at 5:51

• Nice question. Well, note that the integrand is $\tan ^5 x\sec ^2 x \,dx$ (do not bother about the 7 — a constant is not affected by integration, so throw it out!). Your substitution should be something whose derivative is also "adjusted for". Here we can see two trigonometric functions, $\tan x$ and $\sec x$, with the powers 5 and 2 respectively. The derivative of $\tan x$ is $\sec ^2 x \,dx$, and THAT is already there. So if we choose $\tan x$ as $t$, the whole integral is converted to $\int t^5 \,dt$, which is a standard integration. By choosing $\sec x=t$ the result will be complicated – Anjan3 Feb 14 '14 at 5:59

• I understand. Thank you!
– rororo Feb 14 '14 at 6:02

Since the exponent of $\ \tan x \ $ is odd and that of $\ \sec x \ $ is even, this "trigonometric powers integral" works the other way, too: Choosing $\ u \ = \ \sec x \ ,$ we have $\ du \ = \ \tan x \ \sec x \ dx \ $ and so may write $$7 \int \ \tan^4 x \ \sec x \ \ (\tan x \ \sec x \ dx) \ = \ 7 \int \ (\tan^2 x )^2 \ \sec x \ \ (\tan x \ \sec x \ dx)$$ $$7 \int \ (\sec^2 x - 1)^2 \ \sec x \ \ (\tan x \ \sec x \ dx) \ \ \rightarrow \ \ 7 \int \ (u^2 -1 )^2 \ u \ \ du$$ $$7 \int \ u^5 \ - \ 2u^3 \ + \ u \ \ du \ = \ \frac{7}{6}u^6 \ - \ \frac{7}{2} u^4 \ + \ \frac{7}{2} u^2 \ + \ C$$ $$\rightarrow \ 7 \ \sec^2 x \ (\frac{1}{6} \sec^4 x \ - \ \frac{1}{2} \sec^2 x \ + \ \frac{1}{2}) \ + \ C$$ [At this point, we have a perfectly acceptable "polynomial in secants"; but we should show that this in fact is equivalent to a simpler expression.] $$= \ 7 \ (\tan^2 x + 1) \ [ \ \frac{1}{6} (\tan^2 x + 1)^2 \ - \ \frac{1}{2} (\tan^2 x + 1) \ + \ \frac{1}{2} \ ] \ + \ C$$ $$\ = \ 7 \ (\tan^2 x + 1) \ [ \ \frac{1}{6} \tan^4 x \ + \ \frac{1}{3} \tan^2 x \ + \ \frac{1}{6} \ - \ \frac{1}{2} \tan^2 x \ - \ \frac{1}{2} \ + \ \frac{1}{2} \ ] \ + \ C$$ $$\ = \ \frac{7}{6} \ (\tan^2 x + 1) \ [ \ \tan^4 x \ - \ \tan^2 x \ \ + \ 1 \ ] \ + \ C$$ $$\ = \ \frac{7}{6} \ ( \ \tan^6 x \ - \ \tan^4 x \ \ + \ \tan^2 x \ + \ \tan^4 x \ - \ \tan^2 x \ \ + \ 1 \ ) \ + \ C$$ $$\ = \ \frac{7}{6} \tan^6 x \ + \ \frac{7}{6} \ + \ C \ = \ \frac{7}{6} \tan^6 x \ + \ C \ ,$$ with the arbitrary constant "absorbing" the numerical one.

Now I grant that pretty much everybody would make the tangent-substitution instead. But it is worth mentioning that $\ \int \ \tan^m x \ \sec^n x \ \ dx \ $ with $\ m \ $ odd and $\ n \ $ even can be computed with the secant-substitution as well. (One would probably only consider it, though, for small odd integer $\ m \ $.)

Hint: Put $\tan(x)=t$ and use integration by substitution.

$u$-substitute $u= \tan x$. Then use the power rule.
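As a quick numerical check of the antiderivative (plain Python, central finite differences): differentiating $\frac{7}{6}\tan^6 x$ should recover the integrand $7\tan^5 x\sec^2 x$.

```python
import math

def antideriv(x):
    return 7.0 / 6.0 * math.tan(x) ** 6

def integrand(x):
    # sec^2 x = 1 / cos^2 x
    return 7.0 * math.tan(x) ** 5 / math.cos(x) ** 2

# Central-difference approximation of d/dx antideriv at a few test points.
h = 1e-5
for x0 in (0.3, 0.7, 1.0):
    fd = (antideriv(x0 + h) - antideriv(x0 - h)) / (2 * h)
    print(abs(fd - integrand(x0)))   # small: the two agree to many digits
```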
2020-01-27T15:58:34
https://math.stackexchange.com/questions/3831485/interesting-pattern-within-mn1-equiv0-pmod-n
# Interesting pattern within $m^n+1\equiv0\pmod n$

In recent days, I have been studying the properties of $$m^n+h\equiv0\pmod n$$ where $$m,n\in\mathbb{N}$$ and $$h\in\mathbb{Z}$$, and I have noticed that for the equation $$m^n+1\equiv0\pmod n$$, some even numbers $n$ have solutions and some don't. (If $$n$$ is odd then $$m=n-1$$ is a solution.) After using a program to find the even numbers that have at least $$1$$ solution, I found that the list of required numbers starts with $$2,10,26,34,50,58,74,82,106,122,130,146,170,178,194...$$, and I noticed that the numbers in the list under $$1000$$ can all be written as a sum of exactly $$2$$ coprime square numbers. How can I prove, for the general case, that for an even number $$n$$, $$n$$ can be written as a sum of exactly $$2$$ coprime square numbers if and only if $$m^n+1\equiv0\pmod n$$ has at least $$1$$ solution?

• Fermat's Little Theorem(s) will be of huge help to you: $$a^p\equiv a \pmod p$$ and $$a^{p-1}\equiv 1\pmod p$$ where $p$ is prime. – Rhys Hughes Sep 18 '20 at 17:39

• For $m\leq10$ and $n\leq1000$ the solutions are: $(m,n)$=(2,3), (2,9), (2,27), (2,81), (2,171), (2,243), (2,513), (2,729), (3,10), (3,50), (3,250), (4,5), (4,25), (4,125), (4,205), (4,625), (5,3), (5,9), (5,21), (5,26), (5,27), (5,63), (5,81), (5,147), (5,189), (5,243), (5,338), (5,441), (5,567), (5,609), (5,729), (5,903), (6,7), (6,49), (6,203), (6,343), (7,10), (7,50), (7,250), (8,3), (8,9), (8,27), (8,57), (8,81), (8,171), (8,243), (8,513), (8,729), (9,5), (9,25), (9,82), (9,125), (9,625), (10,11), (10,121), (10,253). – Dmitry Ezhov Sep 18 '20 at 18:42

• Write $n = 2r$. If there is an $m$ with $m^n + 1 \equiv 0 \pmod{n}$, then the congruence $x^2 \equiv -1 \pmod{n}$ has a solution ($x = m^r$ is one). When is $-1$ a square modulo $n$, and when can $n$ be written as the sum of two coprime squares? – Daniel Fischer Sep 18 '20 at 19:19

Nice observation!
Something else you might notice, which turns out to imply your observation, is that all of the odd prime factors of your numbers are $$1 \bmod 4$$: $$\{ 5, 13, 17, 29, \dots \}$$. And a final thing you might notice is that all of your numbers are themselves congruent to $$2 \bmod 4$$, or equivalently are even but not divisible by $$4$$. This turns out to be an exact characterization:

Proposition: If $$n$$ is an even positive integer, the following are equivalent:

1. There exists an integer $$m$$ such that $$m^n \equiv -1 \bmod n$$.
2. There exists an integer $$x$$ such that $$x^2 \equiv -1 \bmod n$$.
3. $$n$$ is twice a product of primes congruent to $$1 \bmod 4$$.
4. There exist integers $$x, y$$ such that $$\gcd(x, y) = 1$$ and $$n = x^2 + y^2$$.

Proof. $$1 \Rightarrow 2$$: if $$n$$ is even then $$m^n = (m^{n/2})^2$$.

$$2 \Rightarrow 3$$: if $$x^2 \equiv -1 \bmod n$$ then $$x$$ is either even, in which case $$n$$ is odd, or odd, in which case $$x^2 + 1 \equiv 2 \bmod 4$$; so if $$n$$ is even then $$n \equiv 2 \bmod 4$$, meaning $$2$$ divides $$n$$ but $$4$$ doesn't. Now let $$p$$ be an odd prime divisor of $$n$$. It's a classic result that there exists a solution to $$x^2 \equiv -1 \bmod p$$ iff $$p \equiv 1 \bmod 4$$, and there are several ways to prove it; one is to use the fact that the group of units $$\bmod p$$ is cyclic of order $$p-1$$ and any root of $$x^2 \equiv -1 \bmod p$$ has multiplicative order exactly $$4$$.

$$3 \Rightarrow 4$$: by Fermat's two-square theorem (which also admits several proofs) a prime can be written in the form $$x^2 + y^2$$ iff $$p = 2$$ or $$p \equiv 1 \bmod 4$$, and the Brahmagupta-Fibonacci identity $$(x^2 + y^2)(z^2 + w^2) = (xz - yw)^2 + (yz + xw)^2$$ (which again admits several proofs) shows that a product of numbers of the form $$x^2 + y^2$$ is again of the form $$x^2 + y^2$$. To show that we can always arrange for $$\gcd(x, y) = 1$$ is slightly more annoying but still doable.
If the $$\gcd$$ isn't equal to $$1$$ then it's some product of primes congruent to $$1 \bmod 4$$ (note that $$2$$ can't appear) and each of these can be written as a sum of two (coprime) squares, which lets us use the BF identity again for each such prime, and then we can check that this operation reduces the gcd. There is a maybe somewhat more conceptual proof involving the Gaussian integers, which are hiding in the background here. $$4 \Rightarrow 3$$: suppose $$n = x^2 + y^2$$ where $$\gcd(x, y) = 1$$. Then at most one of $$x, y$$ is even, so $$x^2 + y^2 \equiv 1, 2 \bmod 4$$, so if $$n$$ is even then it's not divisible by $$4$$. If $$p \mid n$$ then $$x^2 + y^2 \equiv 0 \bmod p$$, and since $$\gcd(x, y) = 1$$ we get that $$p$$ divides at most one of $$x$$ and $$y$$, from which it follows that it divides neither. Then we can divide $$\bmod p$$, getting $$\left( \frac{x}{y} \right)^2 \equiv -1 \bmod p$$ so it follows as above that $$p \equiv 1 \bmod 4$$. $$3 \Rightarrow 1$$: We're given that $$n$$ is twice a product of primes congruent to $$1 \bmod 4$$ and we want to show that there exists $$m$$ such that $$m^n \equiv -1 \bmod n$$. We'll construct a solution $$\bmod p^k$$ for each prime power in the prime factorization of $$n$$, which is enough by the Chinese remainder theorem. First it's easy to see we can construct a solution $$\bmod 2$$ since $$-1 \equiv 1 \bmod 2$$ so we can take $$m \equiv 1 \bmod 2$$. Now if $$p^k$$ is an odd prime power factor of $$n$$ write $$n = 2 p^k q$$ where $$\gcd(p, q) = 1$$. We want to solve $$m^{2 p^k q} \equiv -1 \bmod p^k.$$ To do this recall that as above, since $$p \equiv 1 \bmod 4$$ we know that there exists a solution to $$x^2 \equiv -1 \bmod p$$. By Hensel's lemma this solution lifts to a solution to $$x^2 \equiv -1 \bmod p^k$$. Call it $$i$$ (since it's a primitive $$4^{th}$$ root of unity). Then $$i^{2 p^k q} \equiv (-1)^{p^k q} \equiv -1 \bmod p^k$$ since $$p^k q$$ is odd. 
So we can take $$m = i$$ to be our solution $$\bmod p^k$$. $$\Box$$
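The proposition invites a brute-force spot check (plain Python; the cutoff 200 is arbitrary): the even $n$ admitting a solution of $m^n \equiv -1 \pmod n$ should be exactly the even $n$ expressible as a sum of two coprime squares, matching the list in the question.

```python
from math import gcd

def has_solution(n):
    # Is there an m with m^n ≡ -1 (mod n)?
    return any(pow(m, n, n) == n - 1 for m in range(1, n))

def is_coprime_two_square_sum(n):
    # Can n be written as x^2 + y^2 with gcd(x, y) == 1?
    return any(gcd(x, y) == 1
               for x in range(1, int(n**0.5) + 1)
               for y in range(x, int(n**0.5) + 1)
               if x * x + y * y == n)

evens = [n for n in range(2, 201, 2) if has_solution(n)]
print(evens)   # [2, 10, 26, 34, 50, 58, 74, 82, 106, 122, 130, 146, 170, 178, 194]
```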
2021-01-27T11:21:50
https://math.stackexchange.com/questions/1982724/how-to-understand-the-concept-of-norm-equivalence
# How to understand the concept of norm equivalence?

I'm mainly dealing with $\mathbb{R}^n$. $\nu(\cdot)$ and $\mu(\cdot)$ are equivalent iff there exist constants $c_1,c_2>0$ such that for every $x \in \mathbb{R}^n$, $c_1\nu(x)\leq \mu(x)\leq c_2\nu(x)$.

I understand the definition. What I don't understand is why we can say that every element that satisfies a property where one norm is used will satisfy the same property with the other norm. Or is this not what is meant by equivalent norms? By property I mean, for example, convergence or continuity... What allows me to say that, given two different normed spaces (different norms but the same vector space), whenever a sequence converges in one of the normed spaces, the same sequence converges in the other normed space?

• Note that two norms are equivalent if there exist constants c1, c2 > 0 such that for every x... not the other way around. Oct 24 '16 at 10:36

• Also, what do you mean by "property"? Can you make an example of norm-dependent properties that fit your question? Oct 24 '16 at 10:38

• Notice that when a vector space is equipped with two different norms, the outcomes are different as normed spaces. Now if $(V, \| \cdot \|_1)$ and $(V, \| \cdot \|_2)$ are both normed spaces on the same vector space $V$, then the condition that $\| \cdot \|_1$ and $\| \cdot \|_2$ are equivalent is precisely the condition that the map $$\iota : (V, \| \cdot \|_1) \to (V, \| \cdot \|_2), \qquad \iota(x) = x$$ is a homeomorphism. In this way, norm equivalence captures the topological aspect of the norm. Oct 24 '16 at 10:38

• "What I don't understand is why can we say that every element that satisfies a property where one norm is used, then it will satisfy the same property but with the other norm." I wouldn't put it this way; this is not the point. You are not interested in properties of elements here, but in topological/metric properties of the space, as somebody else here has already pointed out.
– EM90 Oct 24 '16 at 11:14

• @LorenzoStella you're right, I've edited, thanks Oct 24 '16 at 12:17

First of all, this definition of yours isn't right.

Definition: We say that $\nu(\cdot)$ and $\mu(\cdot)$ are equivalent norms iff there exist constants $c_1,c_2>0$ such that $c_1\nu(x)\leq \mu(x)\leq c_2\nu(x)$ for every $x \in \Bbb{R}^n.$

What does it mean? It means that the topology induced by $\nu(\cdot)$ and the topology induced by $\mu(\cdot)$ are the same, that is, the families $\sigma_\nu$ and $\sigma_\mu$ of open sets in $(\Bbb{R}^n, \nu)$ and $(\Bbb{R}^n, \mu)$, respectively, will be equal.

• ... and since the topology in the present case determines which sequences are convergent, one gets: a sequence is convergent with respect to $\nu$ if and only if it is convergent with respect to $\mu$. The same statement holds for the convergence of series or for continuity of functions. Oct 24 '16 at 11:07

• @HagenKnaf That's my point. How does one go from what rdias wrote to what you wrote? That's the answer I'm looking for. Oct 24 '16 at 12:22

Let's make a concrete example of a property that is invariant for equivalent norms: if $\|\cdot\|_1\sim\|\cdot\|_2$ on the space $V$, then you have the same convergent sequences in your space. This is actually a metric condition (i.e., invariant under equivalence of metrics), more than a norm condition. In fact, recall that any normed space $(V,\|\cdot\|)$ is naturally a metric space with distance $\mathrm{d}(x,y)=\|x-y\|$. I will then prove the statement only for sequences $\{x_n\}_n$ tending to zero. In particular, equivalence of norms tells you that, for any given radius and any given ball (say, centered at zero) w.r.t. the first norm, you can find another ball in the second norm containing it and a third ball in the second norm which is contained in it. For any given $r,s>0$, denote $B^1_r=\{x\in V\mid \|x\|_1< r\}$ and $B^2_s=\{x\in V\mid \|x\|_2< s\}$. Given $r>0$, you can find $s,t>0$ s.t. $B_s^2\subseteq B_r^1\subseteq B_t^2$.
Assume now that $x_n\to 0$ w.r.t. the topology induced by $\|\cdot\|_1$, i.e., by definition: for any $\epsilon >0$ there exists $N\in\mathbb{N}$ such that for all $n\geq N$ you have $x_n\in B_{\epsilon}^1$. You want to prove a similar statement for $\|\cdot\|_2$. Thus, given $\epsilon >0$, by the equivalence of norms, you can find $r>0$ s.t. $rB^1_{\epsilon}=\{r\cdot x\mid x\in B^1_{\epsilon}\}\subseteq B^2_{\epsilon}$ (we're just rescaling the ball). If $n$ is large enough, you have that $x_n\in rB_{\epsilon}^1$, and thus also $x_n\in B_{\epsilon}^2$: you are now given $\epsilon'=r\epsilon$, hence you choose $M\in\mathbb{N}$ s.t. $x_n\in rB_{\epsilon}^1$ for $n\geq M$. Then you conclude. Please notice the fact that the topology induced by the norm is generated by the (open) balls. This gives the connection between the metric properties and the equivalence of norms.
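As a concrete instance of the definition (a plain-Python sketch): on $\mathbb{R}^n$ the norms $\|x\|_1$ and $\|x\|_2$ are equivalent with constants $c_1 = 1$ and $c_2 = \sqrt{n}$, i.e. $\|x\|_2 \le \|x\|_1 \le \sqrt{n}\,\|x\|_2$, which is why a sequence tending to $0$ in one norm tends to $0$ in the other.

```python
import math
import random

random.seed(1)
n = 5

def norm1(v):
    return sum(abs(t) for t in v)

def norm2(v):
    return math.sqrt(sum(t * t for t in v))

# Check the sandwich norm2(v) <= norm1(v) <= sqrt(n) * norm2(v) on random vectors.
for _ in range(1000):
    v = [random.uniform(-10, 10) for _ in range(n)]
    assert norm2(v) - 1e-12 <= norm1(v) <= math.sqrt(n) * norm2(v) + 1e-12

# Consequence: x_k = (1/k, ..., 1/k) tends to 0 in both norms simultaneously.
xs = [[1.0 / k] * n for k in range(1, 11)]
print([round(norm1(x), 3) for x in xs])
print([round(norm2(x), 3) for x in xs])
```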
2021-10-18T10:17:00
http://iitf.konoozargan.it/convergent-and-divergent-series-examples.html
(Oliver Heaviside, quoted by Kline) In this chapter, we apply our results for sequences to series, or infinite sums. Series may diverge by marching off to infinity or by oscillating. Geometric series are series with a common ratio between adjacent terms. The sum of a convergent geometric series can be calculated with the formula $\frac{a}{1-r}$, valid for $|r|<1$, where $a$ is the first term in the series and $r$ is the common ratio. A classical example of a divergent series is the harmonic series $\sum_{n=1}^{\infty} \frac{1}{n}$: the reciprocals of the positive integers produce a divergent series, $1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots \to \infty$. By contrast, if the terms of a series converge to 1 rather than to 0, the series certainly diverges, because as $n$ gets larger you keep adding a number close to 1 to the sum, so the partial sums grow without bound. As discussed elsewhere on this site, there's much more to a series of terms $a_n$ than the sequence $A_m$ of its partial sums. To show that a sequence $a_n$ does not have a limit, we may assume, for a contradiction, that it does.
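The geometric-sum formula $a/(1-r)$ mentioned above is easy to check against partial sums numerically; a minimal Python sketch (the helper name is ours):

```python
def geometric_partial_sum(a, r, n):
    # Sum of the first n terms: a + a*r + ... + a*r**(n-1)
    return sum(a * r**k for k in range(n))

# For |r| < 1 the partial sums approach a / (1 - r); here a = 1, r = 1/2.
s = geometric_partial_sum(1.0, 0.5, 50)
limit = 1.0 / (1.0 - 0.5)  # = 2.0
```

After 50 terms the partial sum already agrees with $a/(1-r)$ to machine precision, since the tail of a geometric series shrinks geometrically.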
The barrier between convergence and divergence sits in the middle of the $p$-series family $\sum 1/n^p$: the series diverges for every $p \le 1$ (for instance $\sum 1/\sqrt{n}$ and the harmonic series $\sum 1/n$) and converges for every $p > 1$, however close to 1. For the sequence $a_n = (-1)^n$, the values $-1$ and $1$ are called cluster points of the sequence; since a sequence that has a limit is convergent and has only one cluster point, this sequence does not converge. Let us first recall what convergent and divergent series are: a series with a finite sum is called a convergent series, and a series which is not convergent is called divergent. For example, $1+\frac{1}{2}+\frac{1}{3}+\ldots = \infty$, so the harmonic series is divergent. Every series uniquely defines the sequence of its partial sums. Divergent series have some curious properties. For a limit-comparison example, compare dominant terms: $\sum \frac{5n^2 - 2n + 1}{n^2 + 3n^4} \sim \sum \frac{5n^2}{3n^4} \sim \sum \frac{1}{n^2}$; by comparing dominant terms, the series is similar to $\sum 1/n^2$, which is convergent. Does a given series converge? This is a question that we have been ignoring, but it is time to face it. Convergent validity and divergent validity are ways to assess the construct validity of a measurement procedure (Campbell & Fiske, 1959). When one or more different species independently evolve similar characteristics and functions because of adaptations to their environments, it is called convergent evolution. The size of the Hawaiian hot spot is not well known, but it presumably is large enough to encompass and feed the currently active volcanoes of Mauna Loa, Kïlauea, Lö'ihi and, possibly, also Hualälai and Haleakalä.
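The $p$-series dichotomy above can be seen numerically: the harmonic partial sums track $\ln n$ and grow without bound, while for $p=2$ the partial sums settle near $\pi^2/6$. A small sketch (helper name is ours):

```python
import math

def p_series_partial(p, n):
    # Partial sum of the p-series: sum_{k=1}^{n} 1 / k**p
    return sum(1.0 / k**p for k in range(1, n + 1))

# p = 1 (harmonic series): partial sums grow like ln(n) + 0.577..., unbounded.
h = p_series_partial(1, 100000)
# p = 2: partial sums approach pi**2 / 6.
s2 = p_series_partial(2, 100000)
```

No finite computation proves divergence, of course; the experiment only illustrates the very slow logarithmic growth for $p=1$ versus the rapid settling for $p=2$.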
This is a geometric series with ratio $r = 4/5$, which is less than 1, so it converges. We define a series (or an infinite series) as the sum of the terms in an infinite sequence. If the sequence of partial sums is a convergent sequence (i.e., it has a finite limit), then the series is called convergent. Telescoping series: do subsequent terms cancel out previous terms in the sum? You may have to use partial fractions, properties of logarithms, etc., to write a series in telescoping form. Determine whether the series is convergent or divergent by expressing $s_n$ as a telescoping sum; if it is convergent, find its sum. Convergence and divergence are unaffected by deleting a finite number of terms from the beginning of a series. The reciprocals of the positive integers produce a divergent series: $1+\frac{1}{2}+\frac{1}{3}+\cdots$. Alternating the signs of the reciprocals of the positive integers produces a convergent series: $1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots$. Alternating the signs of the reciprocals of the positive odd integers produces a convergent series (the Leibniz formula for $\pi$): $1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots = \frac{\pi}{4}$. Note: divergent or oscillatory series are sometimes called non-convergent series. Relevant theorems, such as the Bolzano–Weierstrass theorem, will be used. To predict the level of divergence of a series approximation, Yuan et al. [20] proposed the use of an adaptive indicator. An example test item for divergent-thinking ability would be providing the missing word for the sentence: "The fog is as ____ as sponge" (e.g., heavy, damp, full). Convergence of an oceanic plate with a continental plate is similar to ocean–ocean convergence and often results in volcanic arcs.
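The telescoping idea mentioned above is clearest on the standard example $\sum 1/(k(k+1))$, where partial fractions make everything between the first and last term cancel. A minimal sketch (helper name is ours):

```python
def telescoping_partial(n):
    # Partial sum of sum_{k=1}^{n} 1 / (k*(k+1)).
    # Since 1/(k(k+1)) = 1/k - 1/(k+1), everything between the first and
    # last term cancels, so the partial sum equals 1 - 1/(n+1).
    return sum(1.0 / (k * (k + 1)) for k in range(1, n + 1))

s = telescoping_partial(1000)
closed_form = 1.0 - 1.0 / 1001  # what the telescoping argument predicts
```

As $n \to \infty$ the partial sums $1 - 1/(n+1)$ tend to 1, so the series converges with sum 1.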
Absolute convergence and the ratio and root tests: the alternating harmonic series is the standard example of a conditionally convergent series. By inspection, it can be difficult to see whether a series will converge or not. A convergent series is one in which the limit of the partial sums exists, and a divergent series is one in which that limit does not exist (or is plus or minus infinity). A series can also converge conditionally: the series itself converges, while the series of the absolute values of its terms diverges. Absolute convergence is defined using the sequence of partial sums together with the sequence of partial sums of absolute values. Theorem (Comparison test): suppose $0 \le a_n \le b_n$ for all $n \ge k$, for some $k$. Then if $\sum b_n$ converges, so does $\sum a_n$, and if $\sum a_n$ diverges, so does $\sum b_n$. Adding a convergent series and a "badly" divergent series (an oscillating series that does not even tend to plus or minus infinity) can be said to "make sense": the outcome is again a "badly" divergent series. A typical exam question: does a given series converge absolutely, converge conditionally, or diverge? In theory, convergent and divergent thinking are two completely different aspects of thinking: rather than everyone focusing on a question with a single answer, divergent thinkers look for many possible solutions to a problem. Convergent evolution, the repeated evolution of similar traits in multiple lineages which all ancestrally lack the trait, is rife in nature.
Key concepts: the infinite series $\sum_{k=0}^{\infty} a_k$ converges if the sequence of partial sums converges, and diverges otherwise. The series $\sum_{n=1}^{\infty} \frac{1}{n^{1/2}}$ diverges. The condition $\lim_{n\to\infty} a_n = 0$ does not necessarily imply that the series $\sum_{n=1}^{\infty} a_n$ is convergent. Solution (limit comparison test): $\lim_{n\to\infty} \frac{\ln n + a\, n^{3/2}}{n^2} \Big/ \frac{1}{n^{1/2}} = a$ if $a \neq 0$, so for $a \ne 0$ the series behaves like $\sum 1/n^{1/2}$ and diverges. Remark on absolute and conditional convergence: several convergence tests apply only to positive series. This book is primarily about summability, that is, various methods to assign a useful value to a divergent series, usually by forming some kind of mean of the partial summands. What are two examples of divergent sequences? Any series that is not convergent is said to be divergent. Don't assume that every sequence and series will start with an index of $n = 1$. Let's now see some examples of sequences that do not converge, i.e., that are divergent. A divergent sequence doesn't have a limit. Note: a series is called telescoping when all the terms between the first and last cancel. When the terms of the series live in $\mathbb{R}^n$, an equivalent condition for absolute convergence of the series is that all possible series obtained by rearrangements of the terms are also convergent. Geometric series: THIS is our model series; a geometric series converges for $|r| < 1$.
The $n$th-term divergence test says that if the terms of the sequence do not converge to zero (for instance, if they converge to a non-zero number), then the series diverges. Here's an example of a convergent sequence: any sequence that approaches 0 converges, to 0. Many of the series you come across will fall into one of several basic types, and these pages list several series which are important for comparison purposes. One can also look at the limit of a power series as $x$ approaches 1 from below, even in cases where the radius of convergence of the power series is equal to 1 and we cannot be sure whether the limit should be finite or not. A decision flowchart for series: does $\lim_{n\to\infty} s_n = s$ exist with $s$ finite? If yes, $\sum a_n = s$; if no, $\sum a_n$ diverges. For Taylor series: if $a_n = \frac{f^{(n)}(a)}{n!}(x-a)^n$ and $x$ is in the interval of convergence, then $\sum_{n=0}^{\infty} a_n = f(x)$. If the limit of the partial sums does not exist, or is plus or minus infinity, then the series is called divergent. Exercise: give an example of an unbounded sequence containing a subsequence that is Cauchy. Topics on divergent series: summation of divergent series, divergent power series, analytic continuation of a convergent series outside the disk of convergence, asymptotic series, and an application to ODEs. A sequence is called convergent if there is a real number that is the limit of the sequence; alternating sequences change the signs of their terms. Divergent evolution demonstrates how species can have common (homologous) anatomical structures which have evolved for different purposes. The San Andreas fault is an example of a transform plate boundary.
But if these huge masses of crust are moving apart, what happens in the space left between them? Divergent boundaries in the middle of the ocean contribute to seafloor spreading; divergent boundaries represent areas where plates are spreading apart, while at an ocean–continental convergent boundary one plate is forced beneath the other. (a) $\sum a_n$ divergent: $S_N = \sum_{n=0}^{N} a_n$ diverges as $N$ tends to infinity. Problem: determine if the series $\sum_{n=1}^{\infty} \frac{\ln n + a\, n^{3/2}}{n^2}$ is convergent or divergent. Given that $\sum_{r=1}^{\infty} \frac{1}{r^2}$ is a convergent series, show that $\sum_{r=1}^{\infty} \frac{1}{r(r+1)}$ is also a convergent series. The book has only a little bit about asymptotic series, that is, divergent series for which it is possible to obtain a good approximation to the desired value by truncating the series at a well-chosen term. If, for increasing values of $n$, the sum approaches indefinitely a certain limit $s$, the series will be called convergent, and the limit in question will be called the sum of the series.
Definition: improper integrals are said to be convergent if the limit is finite, and that limit is the value of the improper integral. Series: given a sequence $\{a_0, a_1, a_2, \ldots\}$, the $n$th partial sum of the series is $S_n = \sum_{k=0}^{n} a_k$; a series is convergent if, as $n$ gets larger and larger, $S_n$ goes to some finite number. The number $c$ is called the expansion point. Canonical example: the harmonic series $\sum_{n=1}^{\infty} \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots$ diverges. For example, the sequence $1, 2, 3, 4, 5, 6, 7, \ldots$ diverges, since its limit is infinity. Every calculus student learns that divergent series should not be manipulated in the same way as convergent series. Rate of convergence for root-finding methods: false position has order $p = 1$ (linear convergence), Newton's method has $p = 2$ (quadratic convergence), and the secant method has $p \approx 1.618$. Exercise: give an example of a Cauchy sequence with an unbounded subsequence. We can use convergence to describe things that are in the process of coming together. Divergent thinking refers to the way the mind generates ideas beyond prescribed expectations and rote thinking (what is usually called thinking outside the box) and is often associated with creativity; convergent thinking refers to the ability to put a number of different pieces from different perspectives of a topic together in some organized, logical manner to find a single answer.
Today I gave the example of a difference of divergent series which converges (for instance, when $a_n = b_n$ for every $n$). Radius of convergence: associated with every power series $\sum_{n=0}^{\infty} c_n x^n$ is something called its radius of convergence. Exercise: give an example of two divergent sequences $(a_n)$ and $(b_n)$ with the property that the sequence $(a_n + b_n)$ is divergent. Of the several confusions that persist in the field of evolutionary biology, one is that about convergent and divergent evolution. Summability ranges from useful algorithms for slowly convergent series to physical predictions based on divergent perturbative expansions. Augustin-Louis Cauchy eventually gave a rigorous definition of the sum of a (convergent) series, and for some time after this, divergent series were mostly excluded from mathematics. Linked, convergent and serial argumentation are basic notions of argument structure in Critical Thinking and Informal Logic. Some sequences do not get closer to a single value, but take on all values between $-1$ and $1$ over and over.
An infinite series that is not convergent is said to be divergent. We know a Taylor series for a function provides polynomial approximations for that function; practice working with Taylor and Maclaurin series and utilize power series to reach approximations of given functions. There exist numerous classes of divergent series that converge in some generalized sense, since to each such divergent series some "generalized sum" may be assigned that possesses the most important properties of the sum of a convergent series; such a finite value is called a regularized sum for the divergent series. Just for fun, we can graph some of the partial sums of a divergent complex series. Convergent, divergent and transform boundaries represent areas where the Earth's tectonic plates are interacting with each other. Example: determine the radius of convergence and the interval of convergence of the power series $\sum_{n=0}^{\infty} n x^n$. Tests for convergence: let us determine the convergence or the divergence of a series by comparing it to one whose behavior is already known.
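For the power series $\sum_{n=0}^{\infty} n x^n$ above, the ratio test gives radius of convergence 1, and inside $|x|<1$ the sum has the known closed form $x/(1-x)^2$. A numerical sketch (helper name is ours):

```python
def power_series_partial(x, n_terms):
    # Partial sum of sum_{n=0}^{n_terms-1} n * x**n
    return sum(n * x**n for n in range(n_terms))

# Inside the interval of convergence |x| < 1 the series converges;
# its sum is known in closed form: x / (1 - x)**2.
s = power_series_partial(0.5, 200)
closed = 0.5 / (1.0 - 0.5) ** 2  # = 2.0
```

At the endpoint $x = 1$ the terms are $n$, which do not tend to zero, so the series diverges there; the interval of convergence is $(-1, 1)$.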
Consider different representations of series to grow intuition and conceptual understanding. Some convergent series are $\sum \frac{1}{n^2}$, $\sum \frac{1}{n\sqrt{n}}$, and $\sum \frac{1}{n^{1.001}}$. The harmonic series $\sum \frac{1}{n}$ is a divergent $p$-series with $p = 1$, so $\sum_{n=1}^{\infty} \frac{(-1)^n}{n}$ is not absolutely convergent. The sum of a convergent series and a divergent series is a divergent series. A sum converges absolutely if the series of the absolute values of its terms converges. Worked problem: for $\sum_{n=0}^{\infty} \frac{(2n)!}{(n!)^2}$, think about which test to use to determine whether the series converges or diverges, and why. A divergent question, by contrast, is asked without an attempt to reach a direct or specific conclusion.
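For the worked problem $\sum_{n=0}^{\infty} \frac{(2n)!}{(n!)^2}$, the ratio test is the natural choice: the term ratio simplifies to $\frac{(2n+1)(2n+2)}{(n+1)^2} \to 4 > 1$, so the series diverges. A sketch checking the ratios numerically (helper name is ours):

```python
import math

def central_term(n):
    # a_n = (2n)! / (n!)**2  (the central binomial coefficient)
    return math.factorial(2 * n) // (math.factorial(n) ** 2)

# Ratio test: a_{n+1} / a_n = (2n+1)(2n+2) / (n+1)**2, which tends to 4 > 1,
# so the series diverges (its terms do not even tend to zero).
ratios = [central_term(n + 1) / central_term(n) for n in range(1, 50)]
```

Every computed ratio exceeds 1 and the later ones sit close to the limit 4, consistent with the algebraic simplification.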
We start with a direct question on convergence, then we show problems on absolute convergence, and at the end there are some problems on investigating convergence. Answer: we will use the ratio test (try to use the root test to see how difficult it is). A convergent series runs to the $x$-axis and gets as close as you like, close enough and fast enough for the area under the curve to be finite. A series $\sum a_n$ diverges if there is a divergent series of non-negative terms $\sum d_n$ with $a_n \ge d_n$ for all sufficiently large $n$. A doubly infinite series $\sum_{n=-\infty}^{\infty} a_n$ is called absolutely convergent if $\sum_{n=-\infty}^{\infty} |a_n|$ is convergent. These series are examples of divergent series, in contrast to convergent series; the notion of convergence for a series was introduced by Cauchy in his "Cours d'Analyse" in order to avoid frequent mistakes in working with series. Divergent sequences do not have a finite limit. Guilford observed that most individuals display a preference for either convergent or divergent thinking. The theory of plate tectonics has done for geology what Charles Darwin's theory of evolution did for biology.
If $p = 1$, we call the resulting $p$-series the harmonic series; by the above theorem, the harmonic series does not converge. (In other words, the first finite number of terms do not determine the convergence of a series.) The sequence $1, 0, 3, 0, 5, 0, 7, \ldots$ is divergent. From the alternating series test, you know that if $a_n$ is positive and decreases monotonically to zero, then $\sum (-1)^n a_n$ converges; there also exist divergent alternating series whose terms go to zero, but not monotonically. Just as in the previous example, $\frac{|\sin n|}{n^2} \le \frac{1}{n^2}$, because $|\sin n| \le 1$, so $\sum \frac{\sin n}{n^2}$ converges absolutely by comparison. Example: find the interval of convergence and the radius of convergence of $\sum \frac{(x+5)^n}{n^3}$. We usually just speak of "the power series $\sum a_n z^n$". The calculations of Laplace are verified experimentally, although the series he used were divergent. One example of convergent thinking is school. There are other theorems, examples, and definitions you are responsible for.
A sequence $\{a_n\}$ is a function whose domain is the set of positive integers. (b) Divergent series: if $s_n \to \infty$ or $-\infty$, the series is said to be divergent; more generally, if the sequence $s_n$ of partial sums is not convergent, then we say that the series is divergent. Thus, the same notation is used both for the series itself and for its sum. For example, here is a sequence: $1, 1/2, 1/4, 1/8$, etc. Topics: convergent sequences, divergent sequences, sequences with a limit, sequences without a limit, oscillating sequences. FACT (the absolute convergence test): if $\sum |a_n|$ converges, then $\sum a_n$ converges. For example, the series $1-1+1-1+\ldots$ is summable by the Cesàro method and its $(C,1)$-sum is equal to $1/2$. A geometric series has the variable $n$ in the exponent, whereas a $p$-series has the variable in the base; as with geometric series, a simple rule exists for determining whether a $p$-series is convergent or divergent. Informally, a limit $L$ of a function at a point $x_0$ exists if, no matter how $x_0$ is approached, the values returned by the function always approach $L$. Exercises: (a) if $a_n > b_n$ for all $n$, what can you say about $\sum a_n$? Why? (b) If $a_n < b_n$ for all $n$, what can you say about $\sum a_n$? A standard theorem says (among other things) that if both $\sum_{n=1}^{\infty} a_n$ and $\sum_{n=1}^{\infty} b_n$ converge, then so do $\sum_{n=1}^{\infty} (a_n + b_n)$ and $\sum_{n=1}^{\infty} (a_n - b_n)$. I am wondering how the terms convergent and divergent are used in the context of finding limits of sequences.
There exist numerous classes of divergent series that converge in some generalized sense, since to each such divergent series some "generalized sum" may be assigned that possesses the most important properties of the sum of a convergent series. Quick math lesson: for years I taught a college-level math course that included an introduction to infinite series, covering both convergent and divergent kinds of series; here are examples of questions on this material that could be asked on an exam. A power series may represent a function, in the sense that wherever the series converges, it converges to the value of the function. The series generated by the sequences $(a_n z^n)$ as $z$ varies are called the power series generated by $(a_n)$. The partial sums in equation 2 are geometric sums. The above example shows that pointwise convergence does not allow us to interchange limits and integrals. If the aforementioned limit fails to exist, the very same series diverges. (b) If there is a divergent series $\sum b_n$ of non-negative terms and $a_n \ge b_n$ for all $n$, then $\sum a_n$ diverges. In order to use these tests it is necessary to know a number of convergent series and a number of divergent series. Raising $-1$ to a power does not make sense for all real exponents, but the sequence $((-1)^n)$ is easy to understand: it is $1, -1, 1, -1, 1, \ldots$ and clearly diverges. Examples of divergent-boundary terrain can be found in the Upper Rhine valley, the Vosges mountains in France, the Black Forest in Germany, and the Vindhya and Satpura horsts in India.
An example of a bounded divergent sequence is $((-1)^n)$, while an example of an unbounded divergent sequence is $(n^2)$; our goal is to develop two tools to show that divergent sequences are in fact divergent. Given a series $\sum_{k=0}^{\infty} a_k$, the series may be divergent, conditionally convergent, or absolutely convergent. For example, slightly smaller than $1/n$ is $\frac{1}{n^{1+\varepsilon}}$ for any positive number $\varepsilon$. The converse statement is also true: for any sequence $\{s_n\}$ there exists a unique series for which this sequence is the sequence of partial sums; the terms $u_n$ of the series are recovered as $u_1 = s_1$ and $u_n = s_n - s_{n-1}$ for $n > 1$. The alternating harmonic series $\sum \frac{(-1)^{n+1}}{n}$ is a good example of a conditionally convergent series. For the convergent comparison series $a_n$ we already have the geometric series, whereas the harmonic series will serve as the divergent comparison series $b_n$. Convergent and divergent series examples. An infinite series is the sum of the terms of an infinite sequence: given an infinite sequence of real numbers $a_1, a_2, a_3, \ldots$, we denote the series $\sum_{n=1}^{\infty} a_n$. All "divergent" means is "not convergent" (like $\sum_{k=0}^{\infty} (-1)^k$, which oscillates). The theorem states that if we know the series is convergent, then $\lim_{n\to\infty} a_n = 0$; the converse is not true in general.
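The conditional convergence of the alternating harmonic series mentioned above can be watched numerically: its partial sums settle (the limit is $\ln 2$), while the partial sums of the absolute values are the unbounded harmonic sums. A sketch (helper name is ours):

```python
import math

def alt_harmonic_partial(n):
    # Partial sum of sum_{k=1}^{n} (-1)**(k+1) / k
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

# The series converges (to ln 2), but the series of absolute values is the
# divergent harmonic series, so the convergence is only conditional.
s = alt_harmonic_partial(100000)
```

By the alternating series estimate, the error after $n$ terms is below $1/(n+1)$, which the computation confirms.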
The sum of convergent and divergent series (Kyle Miller, Wednesday, 2 September 2015): Theorem 8 in section 11.2 gives the arithmetic of convergent series. Divergent series are used in physics. We can't apply the integral test here, because the terms of this series are not decreasing. We encounter here the main difference between convergent and divergent series. A series is an infinite sum. While the introductory story about Achilles and the Tortoise introduces an apparent paradox which we were able to resolve using a convergent (geometric) series, this story uses the properties of a divergent (harmonic) series to shed light on an unbelievable but true situation. Innovation is serendipitous but manageable: from divergent to convergent thinking, creativity can emerge from chaos into order.
http://iitf.konoozargan.it/convergent-and-divergent-series-examples.html
https://studyalgorithms.com/theory/find-the-first-n-prime-numbers-method-4-sieve-of-eratosthenes/?shared=email&msg=fail
# Find the first N prime numbers. (Method 4) [Sieve of Eratosthenes]

Question: Given an integer N, find the prime numbers in the range from 1 to N.

Input: N = 25
Output: 2, 3, 5, 7, 11, 13, 17, 19, 23

We have several ways of finding prime numbers; some of the methods are discussed in these posts. In this post we will find the first N prime numbers using the Sieve of Eratosthenes. This technique is helpful in scenarios where we have to give immediate results and we need to query the prime numbers again and again. It is a pre-processing technique: we store all the prime numbers up to a limit N, and then keep querying as per our needs.

Let us understand how the Sieve of Eratosthenes works. Suppose we want to find the prime numbers till N = 30. Declare a linear boolean array of size 30 (the array was pictured in the original post as a 2-D grid, but it is just a linear array from 1 to 30). Ignore the first element and start with [2]. Except for that element, mark all its multiples as False: leave '2' and mark all the multiples of '2' in the array as False. Move to element [3]. Leaving '3', mark all its multiples as False; if an element is already False, just ignore it. Since 4 is already False, move to the next element, 5. Ignore 5 and mark all its multiples as False. Continue this process until we reach the last element; at that point the entries still marked True are exactly the primes. For N = 30 this completes the sieve.
The given code implements the above algorithm.

// code obtained from studyalgorithms.com
public class SieveOfEratosthenes {

    static void sieveOfEratosthenes(int n) {
        // Create a boolean array "isPrime[0..n]" and initialize
        // all entries in it as true. A value isPrime[i] will
        // finally be false if i is not a prime, else true.
        boolean isPrime[] = new boolean[n + 1];
        for (int i = 0; i <= n; i++)
            isPrime[i] = true;

        for (int number = 2; number * number <= n; number++) {
            // If isPrime[number] is true, then number is prime;
            // leave it marked and mark all of its multiples as false.
            if (isPrime[number] == true) {
                // Update all multiples of number
                for (int i = number * 2; i <= n; i += number)
                    isPrime[i] = false;
            }
        }

        // Print all prime numbers.
        // At this point only the numbers which are set as true are prime.
        for (int i = 2; i <= n; i++) {
            if (isPrime[i] == true)
                System.out.print(i + " ");
        }
    }

    public static void main(String[] args) {
        sieveOfEratosthenes(30);
    }
}

A working example of the code can be found here. Time Complexity: O(N log (log N))
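For comparison, the same pre-process-once, query-repeatedly pattern can be sketched in Python (the helper name `build_sieve` is illustrative, not from the original post; the Java version above is the post's reference implementation):

```python
def build_sieve(n):
    """Sieve of Eratosthenes: is_prime[i] is True iff i is prime, for 0 <= i <= n."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    number = 2
    while number * number <= n:
        if is_prime[number]:
            # Mark every multiple of `number` (starting at number*2) as composite.
            for multiple in range(number * 2, n + 1, number):
                is_prime[multiple] = False
        number += 1
    return is_prime

# Pre-process once...
is_prime = build_sieve(30)
# ...then answer each "is i prime?" query in O(1).
primes = [i for i, p in enumerate(is_prime) if p]
print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```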
https://math.stackexchange.com/questions/369466/how-many-bit-strings-of-length-8-start-with-00-or-end-with-1/369490
# How many bit strings of length $8$ start with $00$ or end with $1?$

How many bit strings of length $8$ start with $00$ or end with $1$? I know about the product rule and sum rule but I'm unsure how to incorporate them into this. Would it be like this ($x$ being either $1$ or $0$)?

For starting with $00:$ $0 0 x x x x x x$ • $2^6$ combinations?

For ending with $1:$ $x x x x x x x 1$ • $2^7$ combinations?

• You need to describe those strings that have both properties; it might help you to think of the Venn diagram. – András Salamon Apr 22 '13 at 16:30

Yes, you are correct about each separate case, but to find the number of bit strings of length $8$ that either start with two zeros or end in a one (or both), we cannot simply *add* the two counts and say "we're done." We can use the sum rule, but with modifications: If we add the counts $2^6 + 2^7$, we need to also account for having double counted those bit strings which both start with two zeros and end in a one: Subtract that number of strings from the sum, and you'll have your answer.

Clarification: The number of bit strings of length 8 of the form 0 0 x x x x x 1 will have been counted $(1)$ in the first total of all strings of the form 0 0 x x x x x x, and $(2)$ it will have been counted in the second total of all strings of the form x x x x x x x 1. So we need to subtract the number of strings of the form 0 0 x x x x x 1 from the combination of the first count and second count, so that they are only counted once.

So, we count the number of bit strings of the form 0 0 x x x x x 1, and just as you computed the first two counts, we see that there are $2^5$ such strings which have been counted twice, so we will subtract that from the sum of the first two counts.

Total number of bit strings that start with two zeros $(2^6)$ or end in a one $(2^7)$ or both ($2^5$): $$2^6 + 2^7 - 2^5$$

• But the question says "How many bit strings of length 8 start with 00 OR end with 1?" Emphasize the "or".
In that case, is finding the number of the combinations of 00xxxxx1 still the right answer? – user1766555 Apr 22 '13 at 16:34

• Yes...we are using the inclusive sense of "or", which means one or the other or both. – amWhy Apr 22 '13 at 16:39
• 00xxxxx1 will have been counted in the first total, and it will have been counted in the second total. So we need to subtract the number of strings of that form from the combination of the first count and second count. – amWhy Apr 22 '13 at 16:41
• I think it looks like a word problem Amy. – Mikasa Apr 22 '13 at 18:41
• @amWhy: Nice answer +1 – Amzoti Apr 23 '13 at 0:11

As @AndrásSalamon says, you need to check how many of them verify both properties. I'll give you a hint. Start by saying that:

• all the strings starting with $00$ form the set $A$.
• The strings which end with $1$ form the set $B$.

Your question is basically how many elements are in $A\cup B$. The only thing you need to know is: $$\#(A\cup B)=\#(A)+\#(B)-\#(A\cap B)$$ Or in logic terms: $$nb(A\space\mbox{or}\space B)=nb(A)+nb(B)-nb(A\space\mbox{and}\space B)$$ where $\#(X)$ denotes the number of elements in the set $X$. Picture the standard Venn diagram of $A$, $B$, $A\cup B$ and $A\cap B$ (the intersection drawn in red, the union in yellow): it's now easy to understand the formula. The red part is counted in both $A$ and $B$, so if you add the number of elements in $A$ and those in $B$ you have the yellow set plus an additional red set. If you subtract one red set ($A$ and $B$) from this, you get your yellow set ($A$ or $B$). Do you see how to go with your problem now?

Edit: In other words, $A\cup B$ means $A$ or $B$ (or both) and $A\cap B$ means $A$ and $B$ ;)

• Thank you for the illustration :) – user1766555 Apr 22 '13 at 16:50
• You're welcome! Hope it made it clearer how you could find the formulas yourself if you need them again ;) – Dolma Apr 22 '13 at 16:53
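The count $2^6 + 2^7 - 2^5 = 160$ is small enough to verify by brute-force enumeration of all $2^8$ strings; a quick sketch:

```python
from itertools import product

# Enumerate all 2^8 bit strings of length 8.
strings = ["".join(bits) for bits in product("01", repeat=8)]

starts_00 = sum(1 for s in strings if s.startswith("00"))                 # 2^6 = 64
ends_1 = sum(1 for s in strings if s.endswith("1"))                       # 2^7 = 128
both = sum(1 for s in strings if s.startswith("00") and s.endswith("1"))  # 2^5 = 32
either = sum(1 for s in strings if s.startswith("00") or s.endswith("1"))

print(starts_00, ends_1, both, either)  # 64 128 32 160
assert either == starts_00 + ends_1 - both  # the inclusion-exclusion identity
```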
https://stats.stackexchange.com/questions/371768/distribution-of-sum-of-exponentials
# Distribution of sum of exponentials

Let $$X_1$$ and $$X_2$$ be independent and identically distributed exponential random variables with rate $$\lambda$$. Let $$S_2 = X_1 + X_2$$.

Q: Show that $$S_2$$ has PDF $$f_{S_2}(x) = \lambda^2 x \text{e}^{-\lambda x},\, x\ge 0$$.

Note that if events occurred according to a Poisson Process (PP) with rate $$\lambda$$, $$S_2$$ would represent the time of the 2nd event. Alternate approaches are appreciated.

Conditioning Approach

Condition on the value of $$X_1$$. Start with the cumulative distribution function (CDF) for $$S_2$$.

\begin{align} F_{S_2}(x) &= P(S_2\le x) \\ &= P(X_1 + X_2 \le x) \\ &= \int_0^\infty P(X_1+X_2\le x|X_1=x_1)f_{X_1}(x_1)dx_1 \\ &= \int_0^x P(X_1+X_2\le x|X_1=x_1)\lambda \text{e}^{-\lambda x_1}dx_1 \\ &= \int_0^x P(X_2 \le x - x_1)\lambda \text{e}^{-\lambda x_1}dx_1 \\ &= \int_0^x\left(1-\text{e}^{-\lambda(x-x_1)}\right)\lambda \text{e}^{-\lambda x_1}dx_1 \\ &= 1 - \text{e}^{-\lambda x} - \lambda x \text{e}^{-\lambda x} \end{align}

Differentiating the CDF gives the PDF: $$f_{S_2}(x) = F_{S_2}'(x) = \lambda^2 x \text{e}^{-\lambda x},\, x \ge 0 \quad\square$$

This is an Erlang$$(2,\lambda)$$ distribution (see here).

General Approach

Direct integration relying on the independence of $$X_1$$ & $$X_2$$. Again, start with the cumulative distribution function (CDF) for $$S_2$$.

\begin{align} F_{S_2}(x) &= P(S_2\le x) \\ &= P(X_1 + X_2 \le x) \\ &= P\left( (X_1,X_2)\in A \right) \quad \quad \text{(See figure below)}\\ &= \int\int_{(x_1,x_2)\in A} f_{X_1,X_2}(x_1,x_2)dx_1 dx_2 \\ &(\text{Joint distribution is the product of marginals by independence}) \\ &= \int_0^{x} \int_0^{x-x_{2}} f_{X_1}(x_1)f_{X_2}(x_2)dx_1 dx_2\\ &= \int_0^{x} \int_0^{x-x_{2}} \lambda \text{e}^{-\lambda x_1}\lambda \text{e}^{-\lambda x_2}dx_1 dx_2\\ &= \int_0^{x}\left(1-\text{e}^{-\lambda(x-x_2)}\right)\lambda \text{e}^{-\lambda x_2}dx_2\\ &= 1 - \text{e}^{-\lambda x} - \lambda x \text{e}^{-\lambda x} \end{align}

As before, differentiating the CDF yields $$f_{S_2}(x) = \lambda^2 x \text{e}^{-\lambda x} \quad\square$$

MGF Approach

This approach uses the moment generating function (MGF).
\begin{align} M_{S_2}(t) &= \text{E}\left[\text{e}^{t S_2}\right] \\ &= \text{E}\left[\text{e}^{t(X_1 + X_2)}\right] \\ &= \text{E}\left[\text{e}^{t X_1 + t X_2}\right] \\ &= \text{E}\left[\text{e}^{t X_1} \text{e}^{t X_2}\right] \\ &= \text{E}\left[\text{e}^{t X_1}\right]\text{E}\left[\text{e}^{t X_2}\right] \quad \text{(by independence)} \\ &= M_{X_1}(t)M_{X_2}(t) \\ &= \left(\frac{\lambda}{\lambda-t}\right)\left(\frac{\lambda}{\lambda-t}\right) \quad \quad t<\lambda\\ &= \frac{\lambda^2}{(\lambda-t)^2} \quad \quad t<\lambda \end{align}

This is the MGF of the Erlang$$(2,\lambda)$$ distribution, so by uniqueness of moment generating functions $$S_2$$ has the stated PDF. $$\square$$

• You wrote both the question and the answer. What is your point, if I may ask? – Xi'an Oct 14 '18 at 11:58
• @Xi'an, I thought SE encouraged asking the question and answering it...I can screenshot where SE seems to encourage that for you if you want. I've seen a lot of basic questions repeatedly asked and I've been thinking about posting some specific approaches to refer people to. I wasn't able to find something like this and I can refer people to this for a variety of things. If the CV community really hates this post that much, I will voluntarily delete it. – SecretAgentMan Oct 15 '18 at 14:31
• @Xi'an, Respectfully, I believe you both asked and answered a question here. – SecretAgentMan Oct 15 '18 at 17:24
• @Xi'an You may wish to read stats.stackexchange.com/help/self-answer – Sycorax Oct 15 '18 at 17:37
• An easier solution would be to use moment generating functions – kjetil b halvorsen Nov 19 '18 at 20:10
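The Erlang$$(2,\lambda)$$ result can also be sanity-checked by simulation. A standard-library Monte Carlo sketch (illustrative only; the tolerances are deliberately loose):

```python
import math
import random

random.seed(0)
lam = 1.0
n = 200_000

# Simulate S_2 = X_1 + X_2 for i.i.d. Exponential(lam) draws.
samples = [random.expovariate(lam) + random.expovariate(lam) for _ in range(n)]

# Theory: S_2 ~ Erlang(2, lam), with mean 2/lam and CDF F(x) = 1 - e^{-lam x}(1 + lam x).
mean_hat = sum(samples) / n
x = 2.0
cdf_hat = sum(1 for s in samples if s <= x) / n
cdf_theory = 1 - math.exp(-lam * x) * (1 + lam * x)

print(f"mean: {mean_hat:.4f} (theory {2 / lam:.4f})")
print(f"F({x}): {cdf_hat:.4f} (theory {cdf_theory:.4f})")
```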
http://linear.ups.edu/fcla/section-SLT.html
Section SLT: Surjective Linear Transformations

The companion to an injection is a surjection. Surjective linear transformations are closely related to spanning sets and ranges. So as you read this section reflect back on Section ILT and note the parallels and the contrasts. In the next section, Section IVLT, we will combine the two properties.

Subsection SLT: Surjective Linear Transformations

As usual, we lead with a definition.

Definition SLT: Surjective Linear Transformation. Suppose $\ltdefn{T}{U}{V}$ is a linear transformation. Then $T$ is surjective if for every $\vect{v}\in V$ there exists a $\vect{u}\in U$ so that $\lteval{T}{\vect{u}}=\vect{v}\text{.}$

Given an arbitrary function, it is possible for there to be an element of the codomain that is not an output of the function (think about the function $y=f(x)=x^2$ and the codomain element $y=-3$). For a surjective function, this never happens. If we choose any element of the codomain ($\vect{v}\in V$) then there must be an input from the domain ($\vect{u}\in U$) which will create the output when used to evaluate the linear transformation ($\lteval{T}{\vect{u}}=\vect{v}$). Some authors prefer the term onto where we use surjective, and we will sometimes refer to a surjective linear transformation as a surjection.

Subsection ESLT: Examples of Surjective Linear Transformations

It is perhaps most instructive to examine a linear transformation that is not surjective first. Here is a cartoon of a non-surjective linear transformation. Notice that the central feature of this cartoon is that the vector $\vect{v}\in V$ does not have an arrow pointing to it, implying there is no $\vect{u}\in U$ such that $\lteval{T}{\vect{u}}=\vect{v}\text{.}$ Even though this happens again with a second unnamed vector in $V\text{,}$ it only takes one occurrence to destroy the possibility of surjectivity.
To show that a linear transformation is not surjective, it is enough to find a single element of the codomain that is never created by any input, as in Example NSAQ. However, to show that a linear transformation is surjective we must establish that every element of the codomain occurs as an output of the linear transformation for some appropriate input. Here is the cartoon for a surjective linear transformation. It is meant to suggest that for every output in $V$ there is at least one input in $U$ that is sent to the output. (Even though we have depicted several inputs sent to each output.) The key feature of this cartoon is that there are no vectors in $V$ without an arrow pointing to them. Let us now examine a surjective linear transformation between abstract vector spaces.

Subsection RLT: Range of a Linear Transformation

For a linear transformation $\ltdefn{T}{U}{V}\text{,}$ the range is a subset of the codomain $V\text{.}$ Informally, it is the set of all outputs that the transformation creates when fed every possible input from the domain. It will have some natural connections with the column space of a matrix, so we will keep the same notation, and if you think about your objects, then there should be little confusion. Here is the careful definition.

Definition RLT: Range of a Linear Transformation. Suppose $\ltdefn{T}{U}{V}$ is a linear transformation. Then the range of $T$ is the set \begin{equation*} \rng{T}=\setparts{\lteval{T}{\vect{u}}}{\vect{u}\in U}\text{.} \end{equation*}

We know that the span of a set of vectors is always a subspace (Theorem SSS), so the range computed in Example RAO is also a subspace. This is no accident, the range of a linear transformation is always a subspace.

Proof

Let us compute another range, now that we know in advance that it will be a subspace.
In contrast to injective linear transformations having small (trivial) kernels (Theorem KILT), surjective linear transformations have large ranges, as indicated in the next theorem.

Subsection SSSLT: Spanning Sets and Surjective Linear Transformations

Just as injective linear transformations are allied with linear independence (Theorem ILTLI, Theorem ILTB), surjective linear transformations are allied with spanning sets.

Proof

Theorem SSRLT provides an easy way to begin the construction of a basis for the range of a linear transformation, since the construction of a spanning set requires simply evaluating the linear transformation on a spanning set of the domain. In practice the best choice for a spanning set of the domain would be as small as possible, in other words, a basis. The resulting spanning set for the codomain may not be linearly independent, so to find a basis for the range might require tossing out redundant vectors from the spanning set. Here is an example.

Elements of the range are precisely those elements of the codomain with nonempty preimages.

Proof

Now would be a good time to return to Figure 7.48 which depicted the pre-images of a non-surjective linear transformation. The vectors $\vect{x},\,\vect{y}\in V$ were elements of the codomain whose pre-images were empty, as we expect for a non-surjective linear transformation from the characterization in Theorem RPI.

Subsection SLTD: Surjective Linear Transformations and Dimension

Proof

Notice that the previous example made no use of the actual formula defining the function. Merely a comparison of the dimensions of the domain and codomain is enough to conclude that the linear transformation is not surjective. Archetype O and Archetype P are two more examples of linear transformations that have "small" domains and "big" codomains, resulting in an inability to create all possible outputs and thus they are non-surjective linear transformations.
Subsection CSLT: Composition of Surjective Linear Transformations

In Subsection LT.NLTFO we saw how to combine linear transformations to build new linear transformations, specifically, how to build the composition of two linear transformations (Definition LTC). It will be useful later to know that the composition of surjective linear transformations is again surjective, so we prove that here.

Reading Questions

1. Suppose $\ltdefn{T}{\complex{5}}{\complex{8}}$ is a linear transformation. Why is $T$ not surjective?

2. What is the relationship between a surjective linear transformation and its range?

3. There are many similarities and differences between injective and surjective linear transformations. Compare and contrast these two different types of linear transformations. (This means going well beyond just stating their definitions.)

Subsection: Exercises

C10 Each archetype below is a linear transformation. Compute the range for each. Archetype M, Archetype N, Archetype O, Archetype P, Archetype Q, Archetype R, Archetype S, Archetype T, Archetype U, Archetype V, Archetype W, Archetype X

C20 Example SAR concludes with an expression for a vector $\vect{u}\in\complex{5}$ that we believe will create the vector $\vect{v}\in\complex{5}$ when used to evaluate $T\text{.}$ That is, $\lteval{T}{\vect{u}}=\vect{v}\text{.}$ Verify this assertion by actually evaluating $T$ with $\vect{u}\text{.}$ If you do not have the patience to push around all these symbols, try choosing a numerical instance of $\vect{v}\text{,}$ compute $\vect{u}\text{,}$ and then compute $\lteval{T}{\vect{u}}\text{,}$ which should result in $\vect{v}\text{.}$

C22 The linear transformation $\ltdefn{S}{\complex{4}}{\complex{3}}$ is not surjective.
Find an output $\vect{w}\in\complex{3}$ that has an empty pre-image (that is $\preimage{S}{\vect{w}}=\emptyset\text{.}$) \begin{equation*} \lteval{S}{\colvector{x_1\\x_2\\x_3\\x_4}}= \colvector{ 2x_1+x_2+3x_3-4x_4\\ x_1+3x_2+4x_3+3x_4\\ -x_1+2x_2+x_3+7x_4 } \end{equation*} Solution C23 Determine whether or not the following linear transformation $\ltdefn{T}{\complex{5}}{P_3}$ is surjective: \begin{align*} \lteval{T}{\colvector{a\\b\\c\\d\\e}} &= a + (b + c)x + (c + d)x^2 + (d + e)x^3\text{.} \end{align*} Solution C24 Determine whether or not the linear transformation $\ltdefn{T}{P_3}{\complex{5}}$ below is surjective: \begin{align*} \lteval{T}{a + bx + cx^2 + dx^3} &= \colvector{a + b \\ b + c \\ c + d \\ a + c\\ b + d}\text{.} \end{align*} Solution C25 Define the linear transformation \begin{equation*} \ltdefn{T}{\complex{3}}{\complex{2}},\quad \lteval{T}{\colvector{x_1\\x_2\\x_3}}=\colvector{2x_1-x_2+5x_3\\-4x_1+2x_2-10x_3}\text{.} \end{equation*} Find a basis for the range of $T\text{,}$ $\rng{T}\text{.}$ Is $T$ surjective? Solution C26 Let $\ltdefn{T}{\complex{3}}{\complex{3}}$ be given by $\lteval{T}{\colvector{a\\b\\c}} = \colvector{a + b + 2c\\ 2c\\ a + b + c}\text{.}$ Find a basis of $\rng{T}\text{.}$ Is $T$ surjective? Solution C27 Let $\ltdefn{T}{\complex{3}}{\complex{4}}$ be given by $\lteval{T}{\colvector{a\\b\\c}} = \colvector{a + b -c\\ a - b + c\\ -a + b + c\\a + b + c}\text{.}$ Find a basis of $\rng{T}\text{.}$ Is $T$ surjective? Solution C28 Let $\ltdefn{T}{\complex{4}}{ M_{22}}$ be given by $\lteval{T}{\colvector{a\\b\\c\\d}} = \begin{bmatrix} a + b & a + b + c\\ a + b + c & a + d\end{bmatrix}\text{.}$ Find a basis of $\rng{T}\text{.}$ Is $T$ surjective? Solution C29 Let $\ltdefn{T}{P_2}{P_4}$ be given by $\lteval{T}{p(x)} = x^2 p(x)\text{.}$ Find a basis of $\rng{T}\text{.}$ Is $T$ surjective? Solution C30 Let $\ltdefn{T}{P_4}{P_3}$ be given by $\lteval{T}{p(x)} = p^\prime(x)\text{,}$ where $p^\prime(x)$ is the derivative. 
Find a basis of $\rng{T}\text{.}$ Is $T$ surjective? Solution

C40 Show that the linear transformation $T$ is not surjective by finding an element of the codomain, $\vect{v}\text{,}$ such that there is no vector $\vect{u}$ with $\lteval{T}{\vect{u}}=\vect{v}\text{.}$ \begin{equation*} \ltdefn{T}{\complex{3}}{\complex{3}},\quad \lteval{T}{\colvector{a\\b\\c}}= \colvector{2a+3b-c\\2b-2c\\a-b+2c} \end{equation*} Solution

M60 Suppose $U$ and $V$ are vector spaces. Define the function $\ltdefn{Z}{U}{V}$ by $\lteval{Z}{\vect{u}}=\zerovector_{V}$ for every $\vect{u}\in U\text{.}$ Then by Exercise LT.M60, $Z$ is a linear transformation. Formulate a condition on $V$ that is equivalent to $Z$ being a surjective linear transformation. In other words, fill in the blank to complete the following statement (and then give a proof): $Z$ is surjective if and only if $V$ is ________. (See Exercise ILT.M60, Exercise IVLT.M60.)

T15 Suppose that $\ltdefn{T}{U}{V}$ and $\ltdefn{S}{V}{W}$ are linear transformations. Prove the following relationship between ranges, \begin{equation*} \rng{\compose{S}{T}}\subseteq\rng{S}\text{.} \end{equation*} Solution

T20 Suppose that $A$ is an $m\times n$ matrix. Define the linear transformation $T$ by \begin{equation*} \ltdefn{T}{\complex{n}}{\complex{m}},\quad \lteval{T}{\vect{x}}=A\vect{x}\text{.} \end{equation*} Prove that the range of $T$ equals the column space of $A\text{,}$ $\rng{T}=\csp{A}\text{.}$ Solution
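Exercise T20's identification of the range with the column space gives a concrete computational test: for $T(\vect{x}) = A\vect{x}$, the map is surjective exactly when $\operatorname{rank}(A)$ equals the dimension of the codomain. A small sketch (pure-Python row reduction, no external libraries; the sample matrix is the one from Exercise C25):

```python
def rank(matrix, tol=1e-9):
    """Rank of a matrix (list of row lists) via Gaussian elimination."""
    rows = [list(map(float, r)) for r in matrix]
    m = len(rows)
    n = len(rows[0]) if m else 0
    r = 0  # index of the next pivot row
    for col in range(n):
        # Find a pivot in this column at or below row r.
        pivot = next((i for i in range(r, m) if abs(rows[i][col]) > tol), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # Eliminate the column entries below the pivot.
        for i in range(r + 1, m):
            factor = rows[i][col] / rows[r][col]
            for j in range(col, n):
                rows[i][j] -= factor * rows[r][j]
        r += 1
    return r

# Exercise C25: T: C^3 -> C^2, T(x) = Ax.  rank(A) = 1 < 2, so T is not surjective.
A = [[2, -1, 5],
     [-4, 2, -10]]
print(rank(A), "=> surjective:", rank(A) == len(A))
```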
http://mathhelpforum.com/statistics/93997-discrete-probability-distribution-problem-print.html
# Discrete Probability Distribution Problem

• Jun 29th 2009, 01:28 AM ose90

Discrete Probability Distribution Problem

Hi everyone, I hope you can help me with the following problem, especially part 2 of the problem.

Three dice are thrown in a game. If 1 or 6 turns up, you will be paid 1p. If neither 1 nor 6 turns up, you pay 5p. How much gain or loss is expected in 9 games?

Let's say you are given the opportunity to change the rule of the game if 1 or 6 appears. To make the game worthwhile, what is the minimum amount in everyday currency that you would suggest for both the payments?

• Jun 29th 2009, 02:44 PM Marth

Outcome | Probability | Payoff
1 or 6  | 1/3         | 1
2-5     | 2/3         | -5

I apologize for the appearance of my table. The '|' lines are supposed to represent column separators; unfortunately, I cannot get the spacing correct.

To find the expected return, multiply the probabilities by the payoffs and add: (1/3)*1 + (2/3)*(-5) = 1/3 - 10/3 = -9/3 = -3. That would be for 1 game. For 9 games, multiply it by 9: -3*9 = -27.

The changing-the-game question has many possible answers. Choose the expected return you want, then change the payoff from either or both of the outcomes to match that return.

• Jun 29th 2009, 05:02 PM Soroban

Hello, ose90!

Quote: Three dice are thrown in a game. If 1 or 6 turns up, you will be paid \$1. If neither 1 nor 6 turns up, you pay \$5. How much gain or loss is expected in 9 games?

You will lose \$5 if no 1 or 6 appears on any of the three dice. The probability is: $\tfrac{4}{6}\cdot\tfrac{4}{6}\cdot\tfrac{4}{6} \:=\:\tfrac{8}{27}$

Otherwise, you win \$1. The probability is: $1 - \tfrac{8}{27} \:=\:\tfrac{19}{27}$

We have:
$\begin{array}{|c|c|c|} \hline \text{Outcome} & \text{Prob.} & \text{Payoff} \\ \hline \hline \\[-4mm] \text{1 or 6} & \frac{19}{27} & +1 \\ \\[-4mm] \text{Other} & \frac{8}{27} & \text{-}5 \\ \hline \end{array}$ $EV \;=\;\left(\frac{19}{27}\right)(+1) + \left(\frac{8}{27}\right)(\text{-}5) \;=\;-\frac{7}{9}$ In 9 games, your expectation is: . $(9)\left(\text{-}\frac{7}{9}\right) \;=\;-7$ You can expect to lose \$7. Quote: Say you are given opportunity to change the rule of the game if 1 or 6 appears. To make the game worthwhile, what is the minimum amount for both the payments? A sloppy question! . . . There is no one right answer. Besides, what does "worthwhile" mean? Let $W$ = amount we win, $L$ = amount we lose. Then: . $EV \:=\:\frac{19}{27}W + \frac{8}{27}(-L) \;=\;\frac{19W - 8L}{27}$ If the game is to be "fair", the EV should be zero: . . $\frac{19W - 8L}{27} \:=\:0 \quad\Rightarrow\quad W \:=\:\frac{8}{19}L$ The amount won should be about 42.1% of the amount lost. . . (In the original game, it is only 20%.)
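Soroban's numbers are easy to confirm by enumerating all $6^3 = 216$ equally likely outcomes with exact rational arithmetic; a short sketch:

```python
from fractions import Fraction
from itertools import product

# Enumerate all 216 equally likely rolls of three dice.
rolls = list(product(range(1, 7), repeat=3))
win = sum(1 for roll in rolls if 1 in roll or 6 in roll)  # at least one 1 or 6
lose = len(rolls) - win                                   # no 1 and no 6: 4^3 = 64

p_win = Fraction(win, len(rolls))    # 19/27
p_lose = Fraction(lose, len(rolls))  # 8/27

ev_per_game = p_win * 1 + p_lose * (-5)
print(ev_per_game, "per game;", 9 * ev_per_game, "over 9 games")  # -7/9 per game; -7 over 9 games

# Fair game: choose win amount W and loss L with p_win*W - p_lose*L = 0, i.e. W = (8/19)L.
W_over_L = p_lose / p_win
print(W_over_L)  # 8/19
```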
https://math.stackexchange.com/questions/2939921/inverse-of-a-certain-unit-upper-triangular-matrix
# Inverse of a certain unit upper triangular matrix

I don't know if there is a name for this matrix, but I want to show $$\begin{pmatrix}1&\gamma&\gamma^2& \ldots & \gamma^n\\ &1&\gamma&\ddots&\vdots\\ &&\ddots&\ddots&\gamma^2\\ &&&1&\gamma\\ &&&&1\end{pmatrix}^{-1}= \begin{pmatrix}1&-\gamma&& & \\ &1&-\gamma&&\\ &&\ddots&\ddots&\\ &&&1&-\gamma\\ &&&&1\end{pmatrix}\,\in\mathbb{C}^{(n+1)\times (n+1)}$$ Here the blank spaces of the matrices represent zero entries. I am not sure how to give a concise proof for this. When I try to use blockwise inversion or directly multiply these matrices together, I get bogged down in computations. Is there any theorem or property I should consider for this proof?

• Can you explain more about your direct multiplication? – Rhcpy99 Oct 2 '18 at 21:40
• @Rhcpy99 I would rewrite the right-hand matrix as a collection of column vectors, multiplying each into the left-hand matrix to get rows of what should be an identity matrix. Something like this. The problem is keeping track of the vectors and knowing what they are doing. So when $i=j$, the $j$th column of the right-hand matrix will give a $1$ when multiplied with the $i$th row on the left-hand side, and will give a $0$ when $i\neq j$. However, it is hard to explain why, especially without an explicit description of each vector. – Andreu Payne Oct 2 '18 at 22:05

We can find a nice formula for $$M^{-1}$$, where $$M = \begin{pmatrix}1&-\gamma&& & \\ &1&-\gamma&&\\ &&\ddots&\ddots&\\ &&&1&-\gamma\\ &&&&1\end{pmatrix}$$ In particular, it is useful to note that $$M = I - N$$, where $$N = \begin{pmatrix}0&\gamma&& & \\ &0&\gamma&&\\ &&\ddots&\ddots&\\ &&&0&\gamma\\ &&&&0\end{pmatrix}$$ From there, we can use the Neumann series to compute $$M^{-1} = (I - N)^{-1} = I + N + N^2 + N^3 + \cdots$$ where we note that $$N^k = 0$$ whenever $$k \geq n+1$$, so the series is actually a finite sum.
With that, you can conclude that $$M^{-1} = \pmatrix{1&\gamma&\gamma^2& \ldots & \gamma^n\\ &1&\gamma&\ddots&\vdots\\ &&\ddots&\ddots&\gamma^2\\ &&&1&\gamma\\ &&&&1}$$ which is equivalent to the result you're looking for. A formal (inductive proof) for the formula of $$N^k$$: we wish to show that $$[N^k]_{i,j} = \begin{cases} \gamma^k & j-i = k\\ 0 & \text{otherwise} \end{cases}$$ where $$[A]_{i,j}$$ denotes the $$i,j$$ entry of $$A$$. The base case (either $$k=0$$ or $$k=1$$) holds trivially. For the inductive step: we note that if $$i,j$$ are between $$1$$ and $$n+1$$ $$[N^{k+1}]_{i,j} = [N N^{k}]_{i,j} = \sum_{p=1}^{n+1} N_{ip}[N^k]_{pj}$$ We note that $$N_{ip}[N^k]_{pj}$$ is only non-zero if $$N_{ip} \neq 0$$ and $$[N^k]_{pj} \neq 0$$. By our definition of $$N$$, $$N_{ip}$$ will only be non-zero if $$p = i+1$$. On the other hand: by our inductive hypothesis, $$[N^k]_{pj}$$ will only be non-zero if $$p = j-k$$. These can only be simultaneously true if $$i+1 = j-k$$, which is to say that $$j-i = k+1$$. Thus, we conclude that $$[N^{k+1}]_{i,j} = 0$$ whenever $$j-i \neq k+1$$. Whenever $$j - i = k+1$$, we compute $$[N^{k+1}]_{i,j} = \sum_{p=1}^{n+1} N_{ip}[N^k]_{pj} = N_{i,(i+1)}[N^k]_{(j-k),j} = \gamma \cdot \gamma^k = \gamma^{k+1}$$ The conclusion follows. • @AndreuPayne I see now that you're particularly concerned with having a detailed, rigorous proof for the formula. With that in mind, I've added a proof of the formula for $N^k$. I hope this suffices. – Omnomnomnom Oct 3 '18 at 15:44
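As a sanity check on the claimed inverse (an illustration, not part of the proof), one can build $M$ and the matrix of powers of $\gamma$ for a small $n$ and verify that their product is the identity, using exact rational arithmetic:

```python
from fractions import Fraction

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    size = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(size)) for j in range(size)]
            for i in range(size)]

n = 4                   # matrices are (n+1) x (n+1)
gamma = Fraction(3, 7)  # arbitrary sample value
size = n + 1

# S[i][j] = gamma^(j-i) on and above the diagonal: the claimed inverse of M.
S = [[gamma ** (j - i) if j >= i else Fraction(0) for j in range(size)]
     for i in range(size)]
# M = I - N: ones on the diagonal, -gamma on the superdiagonal.
M = [[Fraction(1) if i == j else (-gamma if j == i + 1 else Fraction(0))
      for j in range(size)] for i in range(size)]

I = [[Fraction(int(i == j)) for j in range(size)] for i in range(size)]
print(mat_mul(M, S) == I and mat_mul(S, M) == I)  # True
```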
https://math.stackexchange.com/questions/3717710/in-a-n-times-n-grid-of-points-choosing-2n-1-points-there-will-always-be-a
# In an $n \times n$ grid of points, choosing $2n-1$ points, there will always be a right triangle

$$\textbf{Question:}$$ Consider a $$n×n$$ grid of points. Prove that no matter how we choose $$2n-1$$ points from these, there will always be a right triangle with vertices among these $$2n-1$$ points.

This question has indeed been posted before (link), but I was looking for an alternative solution using graph theory. I have rephrased the question in terms of graph theory like this: Given an $$n$$ by $$n$$ bipartite graph (where the vertices correspond to rows and columns), if there is a point in column $$c_i$$ and row $$r_j$$, we add an edge between $$(c_i,r_j)$$. Then the statement is equivalent to showing that with $$2n-1$$ edges in this graph, there must exist a path of length at least $$3$$.

I noticed some obvious facts, like: if some vertex has degree more than $$1$$, then the degree of its adjacent vertices must be $$1$$ (otherwise a path of length $$3$$ appears immediately).

• If the problem has been posted before, please link to the old problem. – bof Jun 13 at 11:59
• @bof The right triangle will have bases parallel to the edges (there is no need to consider tilted right triangles). – Calvin Lin Jun 13 at 12:05

I strongly recommend that you read the other 2 solutions. They provide a much simpler proof.

Note: The setup only considers right triangles with bases parallel to the edges (which give a path of length 3). This is sufficient to prove the problem. There isn't a need to account for tilted right triangles (which do not lead to a path of length 3).

Your observation that "if some vertex has degree more than 1 then the degree of its adjacent vertices will be 1" is the main crux.

Hint: Instead of focusing on $$n\times n$$ squares, relax the condition to $$n \times m$$ rectangles. Prove the more general statement by induction: With $$n, m \geq 2$$, for a $$(n, m)$$ bipartite graph with at least $$n + m - 1$$ edges, there is a path of length 3.

Base case: Prove it for $$n = 2$$ and all $$m\geq 2$$.
This is left to the reader (Consider the sum of degrees $$d(m_1) + d(m_2) = n + 1$$.) Suppose, for $$n, m \geq 3$$, that there is such a graph with no path of length 3. There is a vertex (WLOG $$c_1$$) of degree $$d \geq 2$$. If $$d = m$$, clearly any other edge not involving $$c_1$$ gives us a path of length 3. If $$d = m-1$$, remove this vertex and all but 1 of its neighbors, which gives us a $$(n, 2)$$ bipartite graph with $$n+m-1-(m-2) \geq n + 2 -1$$ edges. Else, remove this vertex and all of its neighbors, which gives us a $$(n-1, m - d)$$ bipartite graph with $$n+m - 1 - d \geq (n-1) + (m-d) - 1$$ edges. Here's a simpler proof. Consider an $$m\times n$$ grid, $$m,n\ge2$$; let $$P$$ be a set of grid points, $$|P|=m+n-1$$; and assume for a contradiction that $$P$$ does not contain the vertices of a right triangle. Let $$H$$ (respectively $$V$$) be the set of all points $$x\in P$$ such that no other point of $$P$$ lies on the same horizontal (respectively vertical) line as $$x$$. Plainly $$P=H\cup V$$. Since $$|P|=m+n-1$$, either $$|H|\ge m$$ or $$|V|\ge n$$. Without loss of generality we suppose $$|H|\ge m$$. Since two points of $$H$$ can't lie on the same horizontal line, each of the $$m$$ horizontal lines contains a point of $$H$$ and therefore contains only one point of $$P$$, whence $$|P|=m$$ and $$n=1$$, contradicting our assumption that $$n\ge2$$. P.S. A translation of this proof into graph theory would go like this. A bipartite graph has bipartition $$(V_1,V_2)$$, $$|V_1|=m\ge2$$, $$|V_2|=n\ge2$$, and it has $$m+n-1$$ edges. If there is no path of length $$3$$, then each edge has an endpoint of degree $$1$$. Therefore there are at least $$m+n-1$$ vertices of degree $$1$$, i.e., at most one vertex of degree $$\ne1$$. So either all vertices in $$V_1$$ have degree $$1$$, there are just $$m$$ edges, and $$n=1$$, or else all vertices in $$V_2$$ have degree $$1$$, there are just $$n$$ edges, and $$m=1$$.
• I don't understand one tiny part: why is $P=H \cup V$ ? – Yes it's me Jun 13 at 14:46 • @Yesit'sme If there is a point $x\in P\setminus(H\cup V)$, then there is a point $y\in P\setminus\{x\}$ on the same horizontal line as $x$ (because $x\notin H$), and there is a point $z\in P\setminus\{x\}$ on the same vertical line as $x$ (because $x\notin V$), and then $x,y,z$ are vertices of a right triangle. – bof Jun 13 at 14:51 • Ah, nice observation of "each edge has an endpoint of degree 1". – Calvin Lin Jun 13 at 20:38 As you suggested, this graph $$G$$ is bipartite. • If it has cycles, then each one has length $$2l$$, so the minimum length is $$4$$ and we are done. • If there are no cycles then it must be a tree (this can be easily verified: if we say it has $$k$$ components, then in each component $$C_i$$ we have $$\varepsilon _i\geq n_i -1$$, but this forces $$k=1$$) and thus connected. Since there must exist vertices $$u$$ and $$v$$ in different parts of the partition which are not joined by an edge, there exists a path between them whose length is clearly at least $$3$$ and we are done. • Ah, nice! I should have dug deeper. – Calvin Lin Jul 9 at 19:53
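For small $n$ the claim can be checked by brute force. The sketch below (plain Python; `has_right_triangle` is my own helper, not from the thread) verifies that every choice of $2n-1=5$ points in the $3\times 3$ grid contains an axis-parallel right triangle, while $2n-2=4$ points can avoid one:

```python
from itertools import combinations

def has_right_triangle(points):
    # An axis-parallel right angle sits at a vertex that shares a row
    # with one chosen point and a column with another.
    pts = set(points)
    return any(
        any(r2 == r and c2 != c for (r2, c2) in pts) and
        any(c2 == c and r2 != r for (r2, c2) in pts)
        for (r, c) in pts
    )

n = 3
grid = [(r, c) for r in range(n) for c in range(n)]

# every choice of 2n-1 = 5 points forces a right triangle
assert all(has_right_triangle(s) for s in combinations(grid, 2 * n - 1))

# 2n-2 = 4 points can avoid one: the rest of row 0 plus the rest of column 0
assert not has_right_triangle([(0, 1), (0, 2), (1, 0), (2, 0)])
```

The avoiding configuration also shows that the bound $2n-1$ is sharp.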
https://math.stackexchange.com/questions/1649264/finding-where-the-tail-starts-for-a-probability-distribution-from-its-generatin/1656062
# Finding where the tail starts for a probability distribution, from its generating function Suppose we generate "random strings" over an $m$-letter alphabet, and look for the first occurrence of $k$ consecutive identical digits. I was with some effort able to find that the random variable $X$, denoting the time until we see $k$ consecutive digits, has probability generating function $$P(z) = \sum_{n \ge 0} \Pr(X=n)z^n = \frac{(m-z)z^k}{m^k(1-z) + (m-1)z^k}$$ This correctly gives the expected time until we see $k$ consecutive identical digits, as $$\operatorname{E}[X] = P'(1) = \frac{m^k - 1}{m - 1}$$ (For example, in a random stream of decimal digits, we expect to see $10$ consecutive identical digits after an expected number of $1111111111$ steps.) Using Sage to compute the coefficients of this generating function for $m=10$ and various small $k$, I was able to find the exact (smallest) $N_1(m, k)$ for which $\Pr(X \le N_1) \ge \frac12$, and $N_2(m, k)$ for which $\Pr(X \le N_2) \ge \frac9{10}$ (i.e. by which time we can be "quite" sure of having seen $k$ consecutive digits): • $2$ consecutive digits: half at $N_1 = 8$, "quite" at $N_2 = 23$. • $3$ consecutive digits: $N_1 = 78$ and $N_2 = 252$. • $4$ consecutive digits: $N_1 = 771$ and $N_2 = 2554$. • $5$ consecutive digits: $N_1 = 7703$ and $N_2 = 25578$. • $6$ consecutive digits: $N_1 = 77018$ and $N_2 = 255835$. • [$7$ consecutive digits: hit limitations of my computer (or programming skills).] There is clearly a pattern there, and I'd like to be able to calculate (either exactly or approximately) the value of $N_1(m, k)$ and $N_2(m, k)$ for larger values. Is there a technique that would give the (possibly asymptotic) values of $N_2(m, k)$ say? • A formal solution is provided by Fourier Transform, i.e. if $z=e^{i\theta}$ then $Pr(x=m)=\int d\theta P(e^{i\theta})e^{-im\theta}$ Feb 10, 2016 at 16:49 • @Marcel: Thanks, can that be used to compute the bounds quickly? 
A priori it doesn't seem like doing the integration for every $m$ will be any faster than just computing the coefficients of the probability generating function directly (using symbolic algebra methods). Feb 10, 2016 at 17:03 Since the generating function is rational, we can say a few things. First, the sequence of probabilities satisfies a linear recurrence. Specifically, $$m^k p_n - m^k p_{n-1} + (m-1)p_{n-k} = 0,$$ which we may prefer to write as $$p_n = p_{n-1} - \frac{m-1}{m^k} p_{n-k}.$$ The linear recurrence is not memory intensive, but I don't think it does significantly better than Sage. My machine finds (via a short Python script) 770164 and 2558418 for "half" and "quite" for $m=10$ and $k = 7$ in about 8 seconds. By Theorem 4.1.1 of Stanley's Enumerative Combinatorics (Volume 1), if the denominator of your generating function has no repeated roots, we expect $p_n$ to be asymptotic to $A\cdot \left(\frac{1}{\alpha}\right)^n$, where $\alpha$ has the smallest modulus among the roots (for some constant $A$). The sum of the first $n$ probabilities should be approximately $1 - C\cdot\left(\frac{1}{\alpha}\right)^n$ for some $C$. Upon finding $\alpha$ and $C$, these approximations will be much faster to compute than exact values. It appears to be difficult to find a nice expression for $\alpha$; however, $$\alpha = 1 + \frac{m-1}{m^k - k(m-1)}$$ is a reasonably good approximation. Upon finding $C$ and $\alpha$, we have approximations for $N_1(m,k)$ and $N_2(m,k)$ by solving $1 - C(\frac{1}{\alpha})^n = p$. The general solution is $$N_p(m,k) \approx \frac{\ln C -\ln(1-p)}{\ln(\alpha)}.$$ For convenience, I used $C = 1$ to check against your values. This gave approximations of $N_1 = 7, 76, 768, 7699, 77013$ and $N_2 = 23,251,2551,25574,255831$ when $m = 10$ and $2 \leq k \leq 6$. Finally, observe that transitioning from $k$ to $k + 1$ resulted in the $N$'s being multiplied by $m$ (approximately). 
Again, the binomial approximation is suggestive whenever $\alpha \approx 1 + \frac{m-1}{m^k}$: $$\left(1 + \frac{m-1}{m^k}\right)^n \approx \left(1 + \frac{n(m-1)}{m^k}\right) = \left(1 + \frac{nm(m-1)}{m\cdot m^k}\right) \approx \left(1 + \frac{m-1}{m^{k+1}}\right)^{nm}$$ • You can check how far $\alpha$ is away from the true root by computing $\alpha-\alpha_* \approx q(\alpha)/q'(\alpha)$ ($q$—the denominator), like in a Newton step, it is in fact $\sim m^{-3k}$, so extremely close. Also, $C$ is equal to $$-\frac{(m-\alpha)\alpha^{k-1}}{(\alpha-1)q'(\alpha)}$$ which you can derive by expanding the p.g.f. terms $p(z)/q(z)$ in partial fractions fully, as the sum over $n>l$ then has a closed form. That's why your $N$ is off by almost exactly $k-1$. Also for $N_1(10,3)$ I think you're rounding instead of taking the ceiling (your formula gave me $76$, not $75$). Feb 15, 2016 at 1:00 • Very nice, thank you! I knew that the $p_n$s would asymptotically be $A\left(\frac{1}{\alpha}\right)^n$, but thought that as this would only become a good approximation "eventually"; it wasn't useful as the specific details of the early terms would matter a lot. I am glad to have been wrong. :-) Feb 15, 2016 at 2:16 • @Kirill Can you elaborate on how you found $C$? Not sure how to get the partial fractions… Feb 15, 2016 at 2:17 • @Kirill: Indeed, I did round and agree ceiling is more appropriate. Very nice additional information. I didn't arrive at $\alpha$ through Newton's method, but I find it interesting that $\alpha$ is the first iterate if the initial guess is $z = 1$. Feb 15, 2016 at 2:19 • @ShreevatsaR With only simple roots, you have the partial fraction expansion $1/q(z) = \sum_\rho (z-\rho)^{-1}/q'(\rho)$ after which the $z^n$ term of $\sum_\rho\frac{(m-z)z^k}{(z-\rho)q'(\rho)}$ is $-\sum_\rho\rho^{k-n-1}(m-\rho)/q'(\rho)$, and the tail sum over $n>l$ is easily $-\sum_\rho\frac{\rho^{k-l-1}(m-\rho)}{(\rho-1)q'(\rho)}$, which is dominated by $C\alpha^{-l}$. 
Feb 15, 2016 at 18:00 Here is a quick approximation based on the Newton-Raphson method and the partial fraction decomposition of rational functions. First, the generating function of the cumulative distribution $\sum {\rm Pr}(X\le n)z^n$ is formed from the generating function of the exact distribution $\sum {\rm Pr}(X=n)z^n$ by multiplying by a factor of $1/(1-z)$. So, define $$Q(z)=\frac{P(z)}{1-z}=\frac1{1-z}\cdot\frac{(m-z)z^k}{m^k(1-z)+(m-1)z^k}.$$ For an arbitrary probability $p$, your question is equivalent to solving for $n$ in $[z^n]Q(z)=p$. This $Q(z)$ has a partial fraction decomposition $$Q(z)=\frac1{1-z}-\frac{m^k-z^k} {m^k(1-z)+(m-1)z^k}.$$ In general, for a rational function $f/g$ with $\deg f<\deg g$ where $g$ has distinct, simple roots $r_1,\ldots, r_k$, $$\frac fg=\sum_i \frac{\alpha_i}{z-r_i},\text{ where each } \alpha_i=\frac{f(r_i)}{g'(r_i)}.$$ Expanding each term as a geometric series, we have $$\frac fg=\sum_i\sum_{n}-\frac{\alpha_i}{r_i^{n+1}}z^n.$$ Now if $r_{m}$ is the smallest of the roots (in absolute value), then the first order approximation of $[z^n]f/g$ is $-\alpha_{m}/r_m^{n+1}$. There is not an exact formula for the roots of the denominator $g(z)=m^k(1-z)+(m-1)z^k$ of $P(z)$. However, by computational experimentation, it seems that the smallest root is slightly bigger than 1. (I suspect it wouldn't take too much to prove this.) Taking $z_0=1$ in the Newton-Raphson method, the first iterate $z_1=z_0-g(z_0)/g'(z_0)\approx 1+(m-1)/m^k$ is a close approximation of the smallest root of $g$. By combining these two approximations, $$[x^n]\frac 1{m^k(1-z)+(m-1)z^k}\approx \frac1{m^k\left(1+\frac{m-1}{m^k}\right)^{n+1}}.$$ Then, since the numerator $m^k-z^k$ contributes a factor of about $m^k-1\approx m^k$ at the smallest root, after some algebra, $$[x^n]Q(z)\approx1-\frac1{\left(1+\frac{m-1}{m^k}\right)^{n+1}}.$$ Finally, if $[x^n]Q(z)=p$, then solving the previous equation for $n$ we have $$n\approx-\log(1-p)\frac{m^k}{m-1}$$ This approximation agrees nicely with your data. 
As a side note, my preference for obtaining the original generating function $P(z)$ would be the Goulden-Jackson cluster method. This method is more general than the one you referenced, it's easier to learn, and it has a wide range of applications. For sure, it's one of my favorites. • Thank you for this excellent and very clear answer and for the link to the Goulden-Jackson cluster method; I will try to learn it. Feb 18, 2016 at 17:54 • @ShreevatsaR: You are most welcome. Feb 18, 2016 at 19:31 Here is an approximation based on Poisson Clumping Heuristic ($p=m^{-(k-1)}$ small): $$\tau\overset{d}\approx k+\exp(\lambda)\tag 1$$ for $$\lambda=\frac p {EC}=\frac {m-1}{m^k}\tag 2$$ where $p=m^{-(k-1)}$ is the probability that each given block of $k$ letters is a "hit" and $EC$ is expected clump size of such "hit" blocks (each such block is followed by expected $\frac 1 {m-1}$ letters of the same kind for the expected clump size of $EC=\frac m{m-1}$. From $(1)$ you can get the non-extreme quantiles (agrees with approximation by @Rus May) and your exact numbers agree with the above relationship (increasing $k$ by $1$ multiplies quantiles by $m$).
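The exact quantiles quoted in the question can be recomputed with a small dynamic program over the length of the current run of identical digits (a sketch of one possible implementation, not the answerer's actual script), and then compared against the closed-form approximation $n \approx -\log(1-p)\,m^k/(m-1)$ derived above:

```python
import math

def quantile(m, k, p):
    """Smallest N with P(X <= N) >= p, where X is the first time a run of
    k identical digits appears in an i.i.d. uniform m-ary stream (k >= 2)."""
    run = [0.0] * k          # run[j] = P(no run of k yet, current run length j)
    run[1] = 1.0             # after the first digit the run has length 1
    done, n = 0.0, 1
    while done < p:
        n += 1
        new = [0.0] * k
        for j in range(1, k):
            same = run[j] / m                # next digit repeats: run grows
            if j + 1 == k:
                done += same                 # a run of k is completed at step n
            else:
                new[j + 1] += same
            new[1] += run[j] * (m - 1) / m   # different digit: run resets
        run = new
    return n

approx = lambda m, k, p: -math.log(1 - p) * m**k / (m - 1)

# reproduces the table in the question ...
assert [quantile(10, k, 0.5) for k in (2, 3, 4)] == [8, 78, 771]
assert [quantile(10, k, 0.9) for k in (2, 3, 4)] == [23, 252, 2554]
# ... and the closed form lands within a few steps of the exact values
assert abs(approx(10, 4, 0.5) - 771) < 5 and abs(approx(10, 4, 0.9) - 2554) < 10
```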
http://math.stackexchange.com/questions/92697/recurrence-relations-and-generating-functions-how-to-find-the-initial-conditio
# Recurrence relations and Generating functions - how to find the initial conditions? Best to ask by example. Given the recurrence relation $a_{n}=a_{n-1}+a_{n-2}$, and some given initial conditions, we can find a similar relation for the generating function for the sequence, $f(x)=\sum_{n=0}^\infty a_nx^n$: $$f(x)=xf(x)+x^2f(x)+c(x)$$ where $c(x)$ is a polynomial encoding the initial conditions. My main question is how can this polynomial be computed as painlessly as possible from the initial conditions? This is interesting not only for Fibonacci but in general, of course. A similar, probably equivalent question is this: If for some sequence we have a rational generating function $\frac{p(x)}{q(x)}$ then the coefficients of $q(x)$ are exactly the coefficients of the recurrence relation for the sequence. Also, $p(x)$ is dependent on the initial conditions - but again, it's not clear to me how to compute those initial conditions from $p(x)$. - I added the $x^n$ that you left out of the generating function. –  Brian M. Scott Dec 19 '11 at 8:10 We know $f(0)$ and $f'(0)$. That's enough to determine $c(x)$. –  André Nicolas Dec 19 '11 at 8:12 André, I'm not sure I understand. Would you like to elaborate in an answer? –  Gadi A Dec 19 '11 at 8:47 If your initial conditions are $a_0$ and $a_1$, $p(x)=c(x)=(a_1-a_0)x+a_0$. Look at it this way. You have $$a_n=a_{n-1}+a_{n-2}\tag{1}$$ for $n\ge 2$. Assume that $a_n=0$ for $n<0$. Then $(1)$ is valid for all integers $n$ except possibly $0$ and $1$. To make it valid for all integers, add a couple of terms using the Iverson bracket to get $$a_n=a_{n-1}+a_{n-2}+(a_1-a_0)[n=1]+a_0[n=0]\;.\tag{2}$$ Note that while $a_0$ is straightforward, you have to be careful for $n>0$, since the earlier initial values are automatically built into the basic recurrence. 
Now multiply $(2)$ through by $x^n$ and sum: \begin{align*} \sum_na_nx^n&=\sum_na_{n-1}x^n+\sum_na_{n-2}x^n+(a_1-a_0)\sum_n[n=1]x^n+a_0\sum_n[n=0]x^n\\ &=x\sum_na_nx^n+x^2\sum_na_nx^n+(a_1-a_0)x+a_0\;. \end{align*} Thus, if your generating function is $A(x)=\displaystyle\sum_na_nx^n$, you have $$A(x)=xA(x)+x^2A(x)+(a_1-a_0)x+a_0\;,$$ and hence $$A(x)=\frac{(a_1-a_0)x+a_0}{1-x-x^2}\;.$$ This generalizes to higher-order recurrences and other starting points for the initial values; in each case the Iverson correction at $n=j$ is $a_j$ minus whatever the zero-padded recurrence produces at $n=j$. For example, the third-order recurrence $a_n=a_{n-1}+a_{n-2}+a_{n-3}$ with initial values $a_0,a_1,a_2$ would have $$p(x)=c(x)=a_0+(a_1-a_0)x+\big(a_2-(a_1+a_0)\big)x^2\;.$$ In general with initial values $a_0,\dots,a_m$ you'll get one Iverson term $\big(a_j-(\text{recurrence value at }n=j)\big)[n=j]$ for each $0\le j\le m$ in the recurrence, and $c(x)=p(x)$ collects these corrections as its coefficients. - Your $c(x)$ is incorrect because (2) does not hold for $n=1$. As a consequence, the ratio $(a_1x+a_0)/(1-x-x^2)$ is $a_0+(a_1+a_0)x+$higher terms and not $a_0+a_1x+$higher terms. –  Did Dec 19 '11 at 8:30 As pointed out, this naive c(x) is tempting but incorrect. I had the same mistake myself, and this is the reason I'm asking the question. –  Gadi A Dec 19 '11 at 8:44 For example (copied from Sage): sage: f(x) = (x+1)/(1-x-x^2) sage: f.taylor('x', 0, 5) x |--> 13*x^5 + 8*x^4 + 5*x^3 + 3*x^2 + 2*x + 1 –  Gadi A Dec 19 '11 at 8:45 @Didier: Fixed. Not the first time I've done that; you'd think that I'd know better by now. (I've worked too many classroom examples with $a_0=0$, I think!) –  Brian M. Scott Dec 19 '11 at 9:01 Look at a slightly more general problem $a_n=pa_{n-1}+qa_{n-2}$. As in your Fibonacci case, we get $$f(x)=pxf(x)+qx^2f(x)+c(x).$$ Since $f(0)=a_0$, we have $c(0)=a_0$. Now differentiate, and note that $a_1=f'(0)$. (In general, $a_n n!=f^{(n)}(0)$.) 
We get $$f'(x)=pxf'(x)+pf(x)+qx^2f'(x)+2qxf(x)+c'(x),$$ so $$c'(0)=f'(0)-pf(0)=a_1-pa_0.$$ Since everything is determined once $a_0$ and $a_1$ are known, $c(x)$ is a polynomial of degree $\le 1$, and therefore $c(x)=c'(0)x+c(0)=(a_1-pa_0)x+a_0$. Repeated differentiation also has to work for your more general question, since the derivatives at $0$ determine the coefficients. - If you have: $$A(z) = \sum_{n \ge 0} a_n z^n$$ then also: $$\frac{A(z)}{1 - z} = \sum_{n \ge 0} \left(\sum_{0 \le k \le n} a_k \right) z^n$$ (multiply out, the inner sums telescope away leaving just the last terms). -
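The recipe $c(x)=(a_1-a_0)x+a_0$ is easy to sanity-check by expanding $c(x)/(1-x-x^2)$ back into a power series by long division — a sketch (the helper is my own; integer arithmetic suffices because the constant term of $q$ is $1$):

```python
def series(p, q, n):
    """First n power-series coefficients of p(x)/q(x); requires q[0] == 1."""
    a = []
    for i in range(n):
        s = p[i] if i < len(p) else 0
        for j in range(1, min(i, len(q) - 1) + 1):
            s -= q[j] * a[i - j]   # unrolls the recurrence encoded by q
        a.append(s)
    return a

a0, a1 = 1, 2
p = [a0, a1 - a0]              # c(x) = a0 + (a1 - a0) x
q = [1, -1, -1]                # 1 - x - x^2
coeffs = series(p, q, 6)
assert coeffs == [1, 2, 3, 5, 8, 13]   # matches the Sage check in the comments
assert all(coeffs[i] == coeffs[i - 1] + coeffs[i - 2] for i in range(2, 6))
```

Changing `a0, a1` changes only `p`, exactly as the answer predicts.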
http://mathhelpforum.com/number-theory/183704-remainder-when-large-numbers-divided-print.html
# Remainder when large numbers divided • June 27th 2011, 01:13 PM Zalren Remainder when large numbers divided What is the remainder when $2001^{2001}$ is divided by $26$? I think I have to do something with $26$, like break it up into $13 * 2$. Try to use Fermat's Theorem somehow. I know that $(2001, 13) = 1$ and $(2001, 2) = 1$. Any help to get rolling would be appreciated. • June 27th 2011, 02:24 PM VinceW Re: Remainder when large numbers divided $2001 \equiv 25 \pmod {26}$ $2001^2 \equiv 25^2 \equiv 1 \pmod {26}$ $2001^4 \equiv 1^2 \equiv 1 \pmod {26}$ $2001^8 \equiv 1 \pmod {26}$ $2001^{16} \equiv 1 \pmod {26}$ $2001^{32} \equiv 1 \pmod {26}$ $2001^{64} \equiv 1 \pmod {26}$ $2001^{128} \equiv 1 \pmod {26}$ $2001^{256} \equiv 1 \pmod {26}$ $2001^{512} \equiv 1 \pmod {26}$ $2001^{1024} \equiv 1 \pmod {26}$ $2001^{2001} = 2001^{1024} 2001^{512} 2001^{256} 2001^{128} 2001^{64} 2001^{16} 2001 \equiv 1 \cdot 1 \cdot 1 \cdot 1 \cdot 1 \cdot 1 \cdot 25 \equiv 25 \pmod {26}$ • June 27th 2011, 06:10 PM Soroban Re: Remainder when large numbers divided Hello, Zalren! Quote: $\text{What is the remainder when }2001^{2001}\text{ is divided by }26\,?$ . . . 
$2001 \;=\;77(26) -1$ $\begin{array}{ccccc} 2001 & \equiv & \text{-}1 & \text{(mod 26)} \\ \\[-3mm] 2001^{2001} & \equiv & (\text{-}1)^{2001} & \text{(mod 26)} \\ \\[-3mm] 2001^{2001} & \equiv & \text{-}1 & \text{(mod 26)} \\ \\[-3mm] 2001^{2001} & \equiv & 25 & \text{(mod 26)}\end{array}$ $\text{Therefore, }2001^{2001} \div 26\text{ has a remainder of }25.$ • June 27th 2011, 06:13 PM Drexel28 Re: Remainder when large numbers divided Quote: Originally Posted by VinceW $2001 \equiv 25 \pmod {26}$ $2001^2 \equiv 25^2 \equiv 1 \pmod {26}$ $2001^4 \equiv 1^2 \equiv 1 \pmod {26}$ $2001^8 \equiv 1 \pmod {26}$ $2001^{16} \equiv 1 \pmod {26}$ $2001^{64} \equiv 1 \pmod {26}$ $2001^{256} \equiv 1 \pmod {26}$ $2001^{512} \equiv 1 \pmod {26}$ $2001^{1024} \equiv 1 \pmod {26}$ $2001^{2001} = 2001^{1024} 2001^{512} 2001^{256} 2001^{128} 2001^{64} 2001^{16} 2001 \equiv 1 \cdot 1 \cdot 1 \cdot 1 \cdot 1 \cdot 1 \cdot 25 \equiv 25 \pmod {26}$ I think you made it harder than it has to be (both of you). Isn't it true that $\displaystyle 2001^{2001}\equiv 25^{2001}\equiv (-1)^{2001}\equiv -1\equiv 25\text{ mod }26$.
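Both routes are easy to confirm with fast modular exponentiation, using Python's three-argument `pow`:

```python
# 2001 = 77 * 26 - 1, so 2001 ≡ -1 (mod 26) and (-1)^2001 ≡ -1 ≡ 25 (mod 26)
assert 2001 % 26 == 25
assert pow(2001, 2001, 26) == 25

# the repeated-squaring route: 2001^2 ≡ 25^2 ≡ 1 (mod 26)
assert pow(2001, 2, 26) == 1
```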
http://mathhelpforum.com/algebra/132146-double-inequalities-print.html
# Double Inequalities • March 5th 2010, 12:46 AM anonymous_maths Double Inequalities Need help to solve this: 0 < √1-x^2 < 1 < x < 1 How does that work? Thanks. • March 5th 2010, 01:27 AM Prove It Quote: Originally Posted by anonymous_maths Need help to solve this: 0 < √1-x^2 < 1 < x < 1 How does that work? Thanks. I take it that this is $0 \leq \sqrt{1 - x^2} \leq 1$. Squaring everything maintains the inequality... $0 \leq 1 - x^2 \leq 1$ $-1 \leq -x^2 \leq 0$ $1 \geq x^2 \geq 0$ $0 \leq x^2 \leq 1$ Now look at each inequality separately. $x^2 \leq 1$ $|x| \leq 1$ $-1 \leq x \leq 1$. But since $0\leq x^2$ $0 \leq x$. Putting it together, that means $0 \leq x \leq 1$. • March 5th 2010, 01:32 AM Haven Quote: Originally Posted by Prove It But since $0\leq x^2$ $0 \leq x$. That is false. I.e., $x = \frac{-1}{2}$. $x < 0$ yet $x^2 > 0$. $0 \leq x^2 \leq 1$ All you have to do from here is square root everything $0 \leq x \leq 1$ • March 5th 2010, 02:01 AM anonymous_maths yes, but that's different to the answer supplied of -1< x < 1 • March 5th 2010, 02:18 AM Prove It Quote: Originally Posted by Haven That is false. I.e., $x = \frac{-1}{2}$. $x < 0$ yet $x^2 > 0$. $0 \leq x^2 \leq 1$ All you have to do from here is square root everything $0 \leq x \leq 1$ Correct in finding the mistake in my logic. However, since all square numbers are nonnegative, that means if $0 \leq x^2$ Then $-\infty \leq x \leq \infty$. So NOW putting the inequalities together you find that $-1 \leq x \leq 1$. • March 5th 2010, 03:34 AM HallsofIvy Quote: Originally Posted by anonymous_maths Need help to solve this: 0 < √1-x^2 < 1 < x < 1 How does that work? Thanks. The crucial point with double inequalities is that you must have $0\le \sqrt{1- x^2}$ and $\sqrt{1- x^2}\le 1$. Personally, I think the simplest way to solve complicated inequalities is to solve the associated equation first. If $0= \sqrt{1- x^2}$ then $0= 1- x^2$, $x= \pm 1$. If $\sqrt{1- x^2}= 1$ then $1- x^2= 1$, x= 0. 
The whole point of that is that the three points, -1, 0, and 1 separate intervals where the inequality holds, where it fails, or where $\sqrt{1- x^2}$ does not exist. Try one value of x in each of the intervals x< -1, -1< x< 0, 0< x< 1, and x> 1. For example, if x= -2< -1, then $\sqrt{1- x^2}= \sqrt{-3}$ which is not a real number so does not satisfy the inequality. No value of x less than -1 satisfies the inequality. If x= -1/2, between -1 and 0, then $\sqrt{1- x^2}= \sqrt{1- 1/4}= \frac{\sqrt{3}}{2}$, which is positive and less than 1. Every value of x between -1 and 0 satisfies the inequality. If x= 1/2, between 0 and 1, then $\sqrt{1- x^2}= \sqrt{1- 1/4}= \frac{\sqrt{3}}{2}$, which is positive and less than 1. Every value of x between 0 and 1 satisfies the inequality. Finally, if x= 2 > 1, then $\sqrt{1- x^2}= \sqrt{-3}$ which is not a real number so does not satisfy the inequality. No value of x larger than 1 satisfies the inequality. Since the numbers -1, 0, and 1 make the two sides equal, the solution set is $-1\le x\le 1$. • March 5th 2010, 07:22 AM Krizalid $\sqrt{1-x^2}\ge0$ is true as long as $|x|\le1.$ as for $\sqrt{1-x^2}\le1,$ (1) fix $|x|\le1$ and by squaring we get $x^2\ge0$ which holds for any $x,$ thus the solution set for (1) is again $|x|\le1,$ and the final solution set is just $|x|\le1.$
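The solution set $-1 \le x \le 1$ can be spot-checked numerically — a small sketch (the helper name is mine):

```python
import math

def satisfies(x):
    """Does x satisfy 0 <= sqrt(1 - x^2) <= 1 over the reals?"""
    if 1 - x * x < 0:
        return False           # square root of a negative number is not real
    return 0 <= math.sqrt(1 - x * x) <= 1

# the inequality holds exactly on -1 <= x <= 1
assert all(satisfies(x) for x in (-1, -0.5, 0, 0.5, 1))
assert not any(satisfies(x) for x in (-2, -1.01, 1.01, 2))
```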
https://math.stackexchange.com/questions/1367935/factoring-x2-x-1-0-from-spivak-calculus-exercise
# Factoring $x^2 + x +1 > 0$ from Spivak Calculus exercise Hi!! I found myself in trouble when I saw the solution of a simple inequality, that can be found at the end of the first chapter, namely exercise 4 - (viii): $x^2+x+1 > 0$. Very easy to solve I know, but I also saw the related solution in the Spivak Calculus Answer Book and I did not understand how to succeed in factoring the polynomial (already factored in the solution); I show what I mean: $$x^2 + x + 1 > 0 = \left( x + \frac{ 1}{2} \right)^2 + \frac{3}{4}$$ On the RHS there is the factored expression and now my question is: how can I get this form for every second degree polynomial? Is there a specific formula to get this form? I hope someone can help me. • Look up: completing the square. – Zain Patel Jul 20 '15 at 19:06 • Among the various methods for motivating how one might discover how to complete the square, here's one you might not come across. Note that $x^2+x+1=x(x+1)+1,$ where the two variable factors are "not balanced" (the zeros of $x$ and $x+1$ are $0$ and $-1,$ which are not symmetric with respect to the origin). To balance the variables, change variables by letting $x=u-\frac{1}{2}$ and $x+1=u+\frac{1}{2},$ so that $x(x+1)+1$ becomes $(u-\frac{1}{2})(u+\frac{1}{2}) + 1 = u^2-\frac{1}{4}+1=u^2+\frac{3}{4}.$ Now convert back to $x$'s using $u=x+\frac{1}{2}.$ – Dave L. Renfro Jul 20 '15 at 19:56 • By the way, the expression you got is not "factored". Factoring an expression generally means to reduce it to a product of polynomial factors each of lesser degree than the original expression. The expression $x^2 + x + 1$ cannot be factored in the reals, however, it is possible to find a factorisation in the complex numbers, i.e. $x^2 + x + 1 = (x - \omega)(x - \overline{\omega})$, where $\omega$ and its conjugate are complex cube roots of $1$. Completing the square is not the same as factorisation, however, it can help you simplify expressions and solve equations/inequalities. 
– Deepak Jul 21 '15 at 1:37 • Thank you all for the answers and the comments! Now everything's clear! – Michele Jul 21 '15 at 21:24 Completing the square is the process you're looking for. Basically, for a quadratic $ax^2 + bx + c = a(x^2 + \frac{b}{a}x + \frac{c}{a})$. The term inside the bracket can be written as $$\left(x + \frac{b}{2a}\right)^2 - \left(\frac{b}{2a}\right)^2 + \frac{c}{a}$$ so that for any general quadratic expression we have $$ax^2 + bx + c\equiv a\left(\left(x + \frac{b}{2a}\right)^2 - \left(\frac{b}{2a}\right)^2 + \frac{c}{a}\right ).$$ In fact, writing it in this form is a useful tool because $x = -\frac{b}{2a}$ is the minimum/maximum point of the quadratic. As suggested, this was achieved through completing the square: $$x^2 +x+1 = x^2 + x +\left(\frac{1}{2}\right)^2 + 1 -\frac{1}{4}$$ $$= \left(x+\frac{1}{2}\right)^2 + \frac{3}{4}$$ You need to go and read up completing the square to understand how I got the above. $$ax^2+bx+c=a\left(x^2+\frac{b}{a}x+\frac{c}{a}\right)=a\left(\left(x+\frac{b}{2a}\right)^2+\frac{c}{a}-\frac{b^2}{4a^2}\right)\\=a\left(x+\frac{b}{2a}\right)^2-\frac{b^2-4ac}{4a}$$ • (I think maybe you need to adjust the last term in your answer.) – user84413 Jul 20 '15 at 22:20 For $a\not =0$, we have\begin{align}ax^2+bx+c&=a\left(x^2+\frac{b}{a}x\right)+c\\&=a\left(\color{red}{x^2+\frac{b}{a}x+\left(\frac{b}{2a}\right)^2}-\left(\frac{b}{2a}\right)^2\right)+c\\&=a\left(\color{red}{\left(x+\frac{b}{2a}\right)^2 }-\left(\frac{b}{2a}\right)^2\right)+c\\&=a\left(x+\frac{b}{2a}\right)^2-a\cdot\frac{b^2}{4a^2}+c\\&=a\left(x+\frac{b}{2a}\right)^2-\frac{b^2}{4a}+c\end{align} We see that $x^2+x+1=(x^2+x+{1\over 4})+{3\over 4}=(x+{1\over 2})^2+{3\over4}$. Since $(x+{1\over 2})^2\ge 0$ for all $x\in \mathbb{R}$, then it follows that $(x+{1\over 2})^2+{3\over4}>0$ and so $x^2+x+1>0$. Simply add and subtract the square of half of $x$'s coefficient; then the algebraic statement can be written as a sum or difference of two squares.
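The general recipe reduces to two numbers, $h=\frac{b}{2a}$ and $k=c-\frac{b^2}{4a}$. A short sketch with exact arithmetic (the function name is my own):

```python
from fractions import Fraction

def complete_square(a, b, c):
    """Write ax^2 + bx + c as a*(x + h)^2 + k; returns (h, k)."""
    a, b, c = map(Fraction, (a, b, c))
    h = b / (2 * a)
    k = c - b * b / (4 * a)
    return h, k

# the thread's example: x^2 + x + 1 = (x + 1/2)^2 + 3/4
h, k = complete_square(1, 1, 1)
assert (h, k) == (Fraction(1, 2), Fraction(3, 4))

# identity check at a few sample points
for x in map(Fraction, range(-3, 4)):
    assert x**2 + x + 1 == (x + h)**2 + k
```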
http://math.stackexchange.com/questions/167329/f-g-entire-functions-with-f2-g2-equiv-1-implies-exists-h-entire-wit
# $f, g$ entire functions with $f^2 + g^2 \equiv 1 \implies \exists h$ entire with $f(z) = \cos(h(z))$ and $g(z) = \sin(h(z))$ I am studying for a qualifier exam in complex analysis and right now I'm solving questions from old exams. I am trying to prove the following: Prove that if $f$ and $g$ are entire functions such that $f(z)^2 + g(z)^2 = 1$ for all $z \in \mathbb{C}$, then there exists an entire function $h$ such that $f(z) = \cos(h(z))$ and $g(z) = \sin(h(z))$. My Attempt The approach that occurred to me is the following. Since $f(z)^2 + g(z)^2 = 1$ then we have $(f(z) + ig(z))(f(z) - ig(z)) = 1$. Then each factor is nonvanishing everywhere in $\mathbb{C}$ and thus by the "holomorphic logarithm theorem" we know that since $\mathbb{C}$ is simply connected, there exists a holomorphic function $H:\mathbb{C} \to \mathbb{C}$ such that $$e^{H(z)} = f(z) + ig(z)$$ and then we can write $\exp(H(z)) = \exp\left(i\dfrac{H(z)}{i} \right) = \exp(ih(z))$, where $h(z) := \dfrac{H(z)}{i}$. Thus so far we have an entire function $h(z)$ that satisfies $$e^{ih(z)} = f(z) + ig(z)$$ On the other hand, we also know that $e^{iz} = \cos{z} + i \sin{z}$ for any $z \in \mathbb{C}$, thus we see that $$e^{ih(z)} = \cos{(h(z))} + i \sin{(h(z))} = f(z) + ig(z)$$ Thus at this point I would like to conclude somehow that we must have $f(z) = \cos(h(z))$ and $g(z) = \sin(h(z))$, but I can't see how and if this is possible. My questions 1. Is the approach I have outlined a correct way to proceed, and if so how can I finish my argument? 2. If my argument does not work, how can this be proved? Thanks for any help. - Since $f-ig$ is the reciprocal of $f+ig$, you know that $e^{-ih(z)}=f(z)-ig(z)$. Average this with $e^{ih(z)}$ to finish. (I am not familiar with the holomorphic logarithm theorem you mention though.) –  anon Jul 6 '12 at 4:21 @anon Maybe that name is not the usual one. I have only seen it by that name in Greene and Krantz's book Function Theory of One Complex Variable. 
It is stated as Lemma 6.6.4 on page 197 of the book. –  Adrián Barquero Jul 6 '12 at 4:25 Just in case the page view from google books doesn't work, it basically says that if $U$ is a simply connected open set and if $f: U \to \mathbb{C}$ is holomorphic and nowhere zero on $U$, then there exists a holomorphic function $h$ on $U$ such that $e^h \equiv f$ on $U$. –  Adrián Barquero Jul 6 '12 at 4:28 Ah, of course. ${}$ –  anon Jul 6 '12 at 4:32 @anon Thank you very much for the help. I tried something like that at first but I was messing things up by trying to take the conjugate of $f + ig$ when it really was the reciprocal that I needed. Would you please add your comment as an answer so that I can accept it and upvote it? –  Adrián Barquero Jul 6 '12 at 4:35 ## 1 Answer Your approach appears to be correct, and it can be finished with the following thought: not only do complex exponentials split into combinations of trigonometric functions, but trig functions also split into combinations of complex exponentials. Indeed: $$\cos\alpha=\frac{e^{i\alpha}+e^{-i\alpha}}{2},\quad \sin\alpha=\frac{e^{i\alpha}-e^{-i\alpha}}{2i}.$$ This is applicable for not just real $\alpha$, but complex as well. You've deduced $e^{ih(z)}=f(z)+ig(z)$ for some entire function $h$, and taking reciprocals gives $e^{-ih(z)}=f(z)-ig(z)$, so averaging these two will give you $\cos h(z)=f(z)$ (and similarly, $\sin h(z)=g(z)$). - Thank you very much for the help. I'm happy that my argument worked in the end ;-) –  Adrián Barquero Jul 6 '12 at 4:40
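The concluding identities hold for complex arguments and can be illustrated numerically with `cmath`; the particular entire $h$ below is an arbitrary choice:

```python
import cmath

h = lambda z: z * z + 1j * z      # any entire function will do
f = lambda z: cmath.cos(h(z))
g = lambda z: cmath.sin(h(z))

for z in (0, 1, -2 + 0.5j, 3j):
    # f^2 + g^2 = 1 holds for complex arguments too
    assert abs(f(z) ** 2 + g(z) ** 2 - 1) < 1e-9
    # e^{ih} = f + ig, and its reciprocal is e^{-ih} = f - ig
    e = cmath.exp(1j * h(z))
    assert abs(e - (f(z) + 1j * g(z))) < 1e-9
    # averaging e^{ih} and e^{-ih} recovers cos(h) = f
    assert abs((e + 1 / e) / 2 - f(z)) < 1e-9
```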
2015-09-04T04:37:00
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/167329/f-g-entire-functions-with-f2-g2-equiv-1-implies-exists-h-entire-wit", "openwebmath_score": 0.9407651424407959, "openwebmath_perplexity": 160.22667943963717, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9861513918194609, "lm_q2_score": 0.8539127473751341, "lm_q1q2_score": 0.8420872443163682 }
https://buboflash.eu/bubo5/show-dao2?d=150891175
Tags #m249 #mathematics #open-university #statistics #time-series Question The 1-step ahead forecast error at time t, which is denoted et, is the difference between the observed value and the 1-step ahead forecast of Xt: et = xt - $$\hat{x}_t$$ The sum of squared errors, or SSE, is given by SSE = $$\large \sum_{t=1}^ne_t^2 = \sum_{t=1}^n(x_t-\hat{x}_t)^2$$ Given observed values x1 ,x2 ,...,xn ,the optimal value of the smoothing parameter α for simple exponential smoothing is the value that [...]. minimizes the sum of squared errors Tags #m249 #mathematics #open-university #statistics #time-series Question The 1-step ahead forecast error at time t, which is denoted et, is the difference between the observed value and the 1-step ahead forecast of Xt: et = xt - $$\hat{x}_t$$ The sum of squared errors, or SSE, is given by SSE = $$\large \sum_{t=1}^ne_t^2 = \sum_{t=1}^n(x_t-\hat{x}_t)^2$$ Given observed values x1 ,x2 ,...,xn ,the optimal value of the smoothing parameter α for simple exponential smoothing is the value that [...]. ? Tags #m249 #mathematics #open-university #statistics #time-series Question The 1-step ahead forecast error at time t, which is denoted et, is the difference between the observed value and the 1-step ahead forecast of Xt: et = xt - $$\hat{x}_t$$ The sum of squared errors, or SSE, is given by SSE = $$\large \sum_{t=1}^ne_t^2 = \sum_{t=1}^n(x_t-\hat{x}_t)^2$$ Given observed values x1 ,x2 ,...,xn ,the optimal value of the smoothing parameter α for simple exponential smoothing is the value that [...]. 
minimizes the sum of squared errors If you want to change selection, open original toplevel document below and click on "Move attachment" #### Parent (intermediate) annotation Open it is given by SSE = $$\large \sum_{t=t}^ne_t^2 = \sum_{t=t}^n(x_t-\hat{x}_t)^2$$ Given observed values x 1 ,x 2 ,...,x n ,the optimal value of the smoothing parameter α for simple exponential smoothing is the value that <span>minimizes the sum of squared errors.<span><body><html> #### Original toplevel document (pdf) cannot see any pdfs #### Summary status measured difficulty not learned 37% [default] 0 No repetitions
2021-09-20T15:13:06
{ "domain": "buboflash.eu", "url": "https://buboflash.eu/bubo5/show-dao2?d=150891175", "openwebmath_score": 0.8339410424232483, "openwebmath_perplexity": 3995.280183672095, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9861513914124559, "lm_q2_score": 0.853912747375134, "lm_q1q2_score": 0.8420872439688214 }
http://www.initiativlaksevag.no/d5i4fpvx/convex-optimization-example-661944
# convex optimization example In Lecture 1 of this course on convex optimization, we will talk about the following points: 00:00 Outline 05:30 What is Optimization? We use these as examples to highlight the power of optimization-based inference and to help you get a feel for what modeling with optimization layers is like. The first step is to find the feasible region on a graph. Consequently, convex optimization has broadly impacted several disciplines of science and engineering. Some examples of convex functions of one variable are: • f (x)=ax + b • f (x)=x2 + bx + c • f (x)=|x| • f (x)=− ln(x)forx> 0 • f (x)= 1 for x>0 x • f (x)=ex 5.2 Concave Functions and Maximization The “opposite” of a convex function is a concave function, defined below: Definition 5.12 … Flying the vertices of a 2-D 1 sec reachability set: Long-term projections indicate an expected demand of at least 100 digital and 80 mechanical watches each day. When the constraint set consists of an entire Euclidean space such problems can be easily solved by classical Newton-type methods, and we have nothing to say about these uncon- Optimization is the science of making a best choice in the face of conflicting requirements. (Econometrica 84(6):2215–2264, 2016) and Shi (J Econom 195(1):104–119, 2016). 1.1 Topology Review Let Xbe a nonempty set in R n. A point x 0 is called an interior point if Xcontains a small ball around x 0, i.e. 9r>0, such that B(x 0;r) := fx: kx x 0 k 2 rg X. Clearly from the graph, the vertices of the feasible region are. Equivalently, feasible sets are convex sets. method: eg. A constraint is convex if convex combinations of feasible points are also feasible. The conic combination of infinite set of vectors in $\mathbb{R}^n$ is a convex cone. Convex.jl allows you to use a wide variety of functions on variables and on expressions to form new expressions. The kidney shaped set is not convex, since the line segment between the tw opointsin . . 
Wishing a great success once more, I am. Not for re-distribution, re-sale or use in derivative works. . Convex optimization problems 4–8. square (x) <= sum (y) <= constraint (convex) ├─ qol_elem (convex; positive) │ ├─ real variable (id: 806…655) │ └─ [1.0] └─ sum (affine; real) └─ 4-element real variable (id: 661…933) M = Z for i = 1:length (y) global M += rand (size (Z)...)*y [i] end M ⪰ 0. Convex Optimization — Boyd & Vandenberghe 2. Convex Optimization Examples: Filter Design and Equalization: Disciplined Convex Programming and CVX ( 0, 0) ( 0, 2) ( 1, 0) ( 1 2, 3 2) Let f ( x, y) = 5 x + 3 y. According to the question, at least 100 digital watches are to be made daily and maximaum 200 digital watches can be made. Any linear function is a convex cone. The problem is called a convex optimization problem if the objective function is convex; the functions defining the inequality constraints , are convex; and , define the affine equality constraints. I. CVX also supports geometric programming (GP) through the use of a special GP mode. The maximum value of the objective function is obtained at $\left ( 100, 170\right )$ Thus, to maximize the net profits, 100 units of digital watches and 170 units of mechanical watches should be produced. Download the syllabus (pdf) Outline. Dr. R. K. Verma Convexity a) convex sets b) closest point problem and its dual the basic nature of Linear Programming is to maximize or minimize an objective function with subject to some constraints. Convex sets (convex/conic/a ne hulls) Examples of convex sets Calculus of convex sets Some nice topological properties of convex sets. Examples are the calibration of option pricing models to market data or the optimization of an agent’s utility. I appreciate your examples on Convex Optimization in R. My suggestion: You release a series on ‘Optimization Methods in R’ ranging from linear programming thru to non-linear programming. 
Convex sets • affine and convex sets • some important examples • operations that preserve convexity • generalized inequalities • separating and supporting hyperplanes • dual cones and generalized inequalities 2–1 Note that, in the convex optimization model, we do not tolerate equality constraints unless they are affine. A function $${\displaystyle f}$$ mapping some subset of $${\displaystyle \mathbb {R} ^{n}}$$into $${\displaystyle \mathbb {R} \cup \{\pm \infty \}}$$ is convex if its domain is convex and for all $${\displaystyle \theta \in [0,1]}$$ and all $${\displaystyle x,y}$$ in its domain, the following condition holds: $${\displaystyle f(\theta x+(1-\theta )y)\leq \theta f(x)+(1-\theta )f(y)}$$. There are great advantages to recognizing or formulating a problem as a convex optimization problem. Following are further examples of these ideas and methods in test flights with our custom built quad-rotor in our lab. … To satisfy a shipping contract, a total of at least 200 watches much be shipped each day. In other words, convex constraints are of the form, call a MathProgBase solver suited for your problem class, to solve problem using a different solver, just import the solver package and pass the solver to the solve! At long last, we are pleased to announce the release of CVXR!. •Known to be NP-complete. Solution −. Clearly from the graph, the vertices of the feasible region are, $\left ( 0, 0 \right )\left ( 0, 2 \right )\left ( 1, 0 \right )\left ( \frac{1}{2}, \frac{3}{2} \right )$, Putting these values in the objective function, we get −, $f\left ( \frac{1}{2}, \frac{3}{2} \right )$=7, Therefore, the function maximizes at $\left ( \frac{1}{2}, \frac{3}{2} \right )$. In mathematics, a real-valued function defined on an n-dimensional interval is called convex if the line segment between any two points on the graph of the function lies above the graph between the two points. The hexagon, which includes its boundary (shown darker), is convex. 
Step 1 − Maximize 5 x + 3 y subject to. Convex functions; common examples; operations that preserve convexity; quasiconvex and log-convex functions. •Yes, non-convex optimization is at least NP-hard •Can encode most problems as non-convex optimization problems •Example: subset sum problem •Given a set of integers, is there a non-empty subset whose sum is zero? The objective function is a linear function which is obtained from the mathematical model of the problem. Convex optimization seeks to minimize a convex function over a convex (constraint) set. . Because of limitations on production capacity, no more than 200 digital and 170 mechanical watches can be made daily. Since a hyperplane is linear, it is also a convex cone. Convex optimization basics I Convex sets I Convex function I Conditions that guarantee convexity I Convex optimization problem Looking into more details I Proximity operators and IST methods I Conjugate duality and dual ascent I Augmented Lagrangian and ADMM Ryota Tomioka (Univ Tokyo) Optimization 2011-08-26 14 / 72. . Convex Optimization Problems Definition An optimization problem is convex if its objective is a convex function, the inequality constraints fj are convex, and the equality constraints hj are affine minimize x f0(x) (Convex function) s.t. Optimization is the science of making a best choice in the face of conflicting requirements. Portfolio Optimization - Markowitz Efficient Frontier, « Portfolio Optimization - Markowitz Efficient Frontier. That is a powerful attraction: the ability to visualize geometry of an optimization problem. Introduction to optimization, example problems. find the value of the objective function at these vertices. Combining R and the convex solver MOSEK achieves speed gain and accuracy, demonstrated by examples from Su et al. If each digital watch sold results in a $\$2$loss, but each mechanical watch produces a$\$5$ profit, how many of each type should be made daily to maximize net profits? 
A point x 0 is called a Since each digital watch sold results in a $\$2$loss, but each mechanical watch produces a$\$5$ profit, And we have to maximize the profit, Therefore, the question can be formulated as −. Algorithms for Convex Optimization Nisheeth K. Vishnoi This material will be published by Cambridge University Press as Algorithms for Convex Optimization by Nisheeth K. Vishnoi. The above videos of rocket test flights with JPL and Masten Aerospace are examples of convexification and real-time optimization based control. 2016, CVXR is an R package that provides an object-oriented language for convex optimization, similar to CVX, CVXPY, YALMIP, and Convex.jl. applications of convex optimization are still waiting to be discovered. Examples least-squares minimize kAx−bk2 2 That is a powerful attraction: the ability to visualize geometry of an optimization problem. All of the examples can be found in Jupyter notebook form here. Convexity, along with its numerous implications, has been used to come up with efficient algorithms for many classes of convex programs. Any convex optimization problem has geometric interpretation. Geometric programs are not convex, but can be made so by applying a certain transformation. 'Nisheeth K. Vishnoi 2020. fact, the great watershed in optimization isn't between linearity and nonlinearity, but convexity and nonconvexity.\"- R We will discuss mathematical fundamentals, modeling (how to set up optimization algorithms for different applications), and algorithms. This pre-publication version is free to view and download for personal use only. OR/MS community in academia and industry will highly appreciate such a series, believe me. Previously, we wrote about Monte Carlo Simulation and if you haven’t read yet, we strongly suggest you do so. 
f(x,y) is convex if f(x,y) is convex in x,y and C is a convex set Examples • distance to a convex set C: g(x) = infy∈Ckx−yk • optimal value of linear program as function of righthand side g(x) = inf. For more information on disciplined convex programming, see these resources; for the basics of convex analysis and convex optimization, see the book Convex Optimization. Plotting the above equations in a graph, we get, $\left ( 100, 170\right )\left ( 200, 170\right )\left ( 200, 180\right )\left ( 120, 80\right ) and \left ( 100, 100\right )$. Examples… Left. First introduced at useR! Similarly, at least 80 mechanical watches are to be made daily and maximum 170 mechanical watches can be made. Robust performance of convex optimization is witnessed across platforms. A convex optimization problem is an optimization problem in which the objective function is a convex function and the feasible set is a convex set. If a given optimization problem can be transformed to a convex equivalent, then this interpretive benefit is acquired. This document was generated with Documenter.jl on Friday 13 March 2020. Nonetheless, as mentioned in other answers, convex optimization is faster, simpler and less computationally intensive, so it is often easier to "convexify" a problem (make it convex optimization friendly), then use non-convex optimization. Optimization layers provide much more functionality than just subsuming standard activation functions as they can also be parameterized and learned. Closed half spaces are also convex cones. Step 2 − A watch company produces a digital and a mechanical watch. If a given optimization problem can be transformed to a convex equivalent, then this interpretive benefit is acquired. find the feasible region, which is formed by the intersection of all the constraints. Examples. •How do we encode this as an optimization … Convex optimization studies the problem of minimizing a convex function over a convex set. 
In finance and economics, convex optimization plays an important role. Tools: De nitions ofconvex sets and functions, classic examples 24 2 Convex sets Figure 2.2 Some simple convex and nonconvex sets. Any convex optimization problem has geometric interpretation. The first step is to find the feasible region on a graph. for all z with kz − xk < r, we have z ∈ X Def. \Convex calculus" makes it easy to check convexity. From the given question, find the objective function. 4: Convex optimization problems. The most basic advantage is that the problem can then be solved, very reliably and efficiently, using interior-point methods or other special methods for convex optimization. A vector x0 is an interior point of the set X, if there is a ball B(x0,r) contained entirely in the set X Def. Convex Optimization Examples: Filter Design and Equalization: Disciplined Convex Programming and CVX A set S is convex if for all members $${\displaystyle x,y\in S}$$ and all $${\displaystyle \theta \in [0,1]}$$, we have that $${\displaystyle \theta x+(1-\theta )y\in S}$$. Let $x$ be the number of digital watches produced, $y$ be the number of mechanical watches produced. Convex optimization is regarded to have a smooth output and whereas the non-convex optimization is a non-smooth output. The vertice which either maximizes or minimizes the objective function (according to the question) is the answer. find the vertices of the feasible region. Lecture 2 Open Set and Interior Let X ⊆ Rn be a nonempty set Def. Using Julia version 1.0.5. # Let us first make the Convex.jl module available using Convex, SCS # Generate random problem data m = 4; n = 5 A = randn (m, n); b = randn (m, 1) # Create a (column vector) variable of size n x 1. x = Variable (n) # The problem is to minimize ||Ax - b||^2 subject to x >= 0 # This can be done by: minimize(objective, constraints) problem = minimize (sumsquares (A * x -b), [x >= 0]) # Solve the problem by calling solve! Perspective. 
This page was generated using Literate.jl. Convex optimization problems; linear and quadratic programs; second-order cone and semidefinite programs; quasiconvex optimization problems; vector and multicriterion optimization. Wide variety of functions on variables and on expressions to form new expressions set... Videos of rocket test flights with JPL and Masten Aerospace are examples of and... X ≥ 0 the non-convex optimization is witnessed across platforms notebook form here projections indicate expected... The hexagon, which includes its boundary ( shown darker ), and.. Not for re-distribution, re-sale or use in derivative works fx: kx x 0 ; r ): fx. Waiting to be made daily and maximaum 200 digital and a mechanical watch programs quasiconvex... Are great advantages to recognizing or formulating a problem as a convex function over a convex,... Visualize geometry of an optimization … convex functions and a mechanical watch to Maximize or minimize an function. Given optimization problem can be transformed to a convex ( constraint ) set convex. Convex/Conic/A ne hulls ) examples of convexification and real-time optimization based control optimization we. Is formed by the intersection of all the constraints are the calibration of option pricing models to market data the! Properties of convex optimization are still waiting to be made daily and maximaum 200 watches... We wrote about Monte Carlo Simulation and if you haven ’ t yet. + 3 y subject to: Disciplined convex Programming and CVX applications of convex sets calculus of convex optimization.... So by applying a certain transformation GP mode the vertices of the of! Hyperplane is linear, it is also a convex cone by examples from Su al. Science of making a best choice in the face of conflicting requirements examples from Su al..., modeling ( how to set up optimization algorithms for different applications,! Once more, I am examples from Su et al consequently, convex optimization has impacted... 
( according to the question ) is the science of making a choice. Studies the problem gain and accuracy, demonstrated by examples from Su et al ).: De nitions ofconvex sets and functions, classic examples 24 2 convex sets calculus of convex optimization studies problem... Re-Distribution, re-sale or use in derivative works attraction: the ability to visualize geometry of an agent ’ utility! Be made daily and maximaum 200 digital and 170 mechanical watches produced previously we! Form new expressions a hyperplane is linear, it is also a convex over., such that B ( x 0 k 2 rg x: the to. Document was generated with Documenter.jl on Friday 13 March 2020 modeling ( how set! Attraction: the ability to visualize geometry of an agent ’ s utility to a. Examples 24 2 convex sets calculus of convex optimization are still waiting to be made daily and maximum mechanical... Z ∈ x Def, in the convex solver MOSEK achieves speed gain and accuracy, demonstrated by examples Su... In Jupyter notebook form here subsuming standard activation functions as they can also be parameterized and learned function which obtained... Industry will highly appreciate convex optimization example a series, believe me optimization, we do tolerate... And economics, convex optimization examples: Filter Design and Equalization: Disciplined convex Programming CVX! Gp ) through the use of a special GP mode total of at 200... Conditions for Global Optima, Karush-Kuhn-Tucker Optimality Necessary Conditions for Global Optima, Karush-Kuhn-Tucker Optimality Necessary for. Examples: Filter Design and Equalization: Disciplined convex Programming and CVX applications of convex optimization the... Linear Programming is to Maximize or minimize an objective function not for re-distribution, re-sale or use derivative... Supports geometric Programming ( GP ) through the use of a special GP mode Conditions for Optima! Sets and functions, classic examples 24 2 convex sets examples are the calibration option. 
Sets Some nice topological properties of convex programs ) set a total at. By applying a certain transformation free to view and download for personal use only these vertices question ) the... Convex sets the non-convex optimization is the science of making a best choice in face. Expressions to form new expressions is the science of making a best choice in the face of requirements... Have a smooth output and whereas the non-convex optimization is witnessed across platforms darker convex optimization example, is convex if combinations! Minimizes the objective function is a powerful attraction: the ability to visualize geometry an...:2215–2264, 2016 ) and Shi ( J Econom 195 ( 1 ):104–119, 2016 and! Form here, then this interpretive benefit is acquired conflicting requirements are pleased announce. Quad-Rotor in our lab that preserve convexity ; quasiconvex and log-convex functions of functions on variables and on to! Face of conflicting requirements region are vectors in $\mathbb { r } ^n$ is a powerful:! ; operations that preserve convexity ; quasiconvex and log-convex functions above videos of rocket test flights with and. Some nice topological properties of convex optimization model, we will talk about the following points: 00:00 05:30. Function with subject to be transformed to a convex cone convex sets calculus of convex sets calculus of sets! To view and download for personal use only unless they are affine 0 n! Points are also feasible strongly suggest you do so properties of convex plays! Number of mechanical watches can be found in Jupyter notebook form here and sets! Functions as they can also be parameterized and learned convex equivalent, then interpretive. Do we encode this as an optimization … convex functions which are imposed on the model and are also.! The value of the examples can be made daily r, we about! The first step is to Maximize or minimize an objective function at these vertices optimization problems ; vector and optimization! 
Wishing a great success once more, I am, the vertices the! Either maximizes or minimizes the objective function is a powerful attraction: the ability to geometry... Speed gain and accuracy, demonstrated by examples from Su et al up with Efficient algorithms for many classes convex! These ideas and methods in test flights with our custom built quad-rotor in lab! Convex programs step 2 − a watch company produces a digital and 170 mechanical can!, re-sale or use in derivative works y ≤ 3, x ≥ 0 J 195! Limitations on production capacity, no more than 200 digital and 170 mechanical can! Examples can be made so by applying a certain transformation pleased to announce the release of CVXR!,! Will talk about the following points: 00:00 Outline 05:30 What is optimization equivalent, then this interpretive is! New expressions non-convex optimization is regarded to have a smooth output and whereas the non-convex is. Least 200 watches are to be discovered because of limitations on production capacity, no more 200. Document was generated with Documenter.jl on Friday 13 March 2020 CVX also supports geometric Programming GP!, re-sale or use in derivative works x Def optimization model, we do not tolerate constraints! And the convex solver MOSEK achieves speed gain and accuracy, demonstrated by examples from Su et al '' it... So by applying a certain transformation the value of the objective function is a convex equivalent, then this benefit! Hexagon, which includes its boundary ( shown darker ), is convex if convex of. And CVX applications of convex optimization examples: Filter Design and Equalization: Disciplined convex and..., a convex optimization example of at least 100 digital and a mechanical watch a best choice in convex... And industry will highly appreciate such a series, believe me to use a wide variety functions. Which is formed by the intersection of all the constraints are the Conditions which are imposed on the model are! 
How to set up optimization algorithms for many classes of convex sets CVX also supports geometric (! Since a hyperplane is linear, it is also a convex function over a convex ( )... Success once more, I am constraint is convex convex and nonconvex sets activation functions as they can be! Algorithms for different applications ), is convex ( shown darker ), and.! $be the number of digital watches produced,$ y \$ be the number of watches... In finance and economics, convex optimization has broadly impacted several disciplines of science and engineering functions common. Function is a non-smooth output conflicting requirements Programming ( GP ) through the use of a special mode! Free to view and download for personal use only the vertices of the.. Be discovered, convex optimization plays an important role the ability to visualize geometry of an optimization convex! ; r ): = fx: kx x 0 ; r ): = fx: x. Produced each day with its numerous implications, has been used to come up with Efficient algorithms for many of. Optimality Necessary Conditions daily and maximum 170 mechanical watches can be made so by applying certain. Function which is obtained from the given question, at least 200 watches much be shipped day... 0 a n d y ≥ 0 CVX also supports geometric Programming ( GP ) through the use of special! We are pleased to announce the release of CVXR! the number of mechanical watches are to be produced day. Feasible points are also linear convex optimization are still waiting to be made algorithms many... Ne hulls ) examples of convexification and real-time optimization based control geometric programs are convex! Been used to come up with Efficient algorithms for different applications ), is convex to...: 00:00 Outline 05:30 What is optimization vector and multicriterion optimization a wide variety of functions on variables and expressions! ( x 0 ; r ): = fx: kx x 0 r. Efficient algorithms for many classes of convex sets Some nice topological properties of optimization...
2021-09-26T13:45:17
{ "domain": "initiativlaksevag.no", "url": "http://www.initiativlaksevag.no/d5i4fpvx/convex-optimization-example-661944", "openwebmath_score": 0.663661003112793, "openwebmath_perplexity": 1216.9361770754356, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9861513901914405, "lm_q2_score": 0.8539127473751341, "lm_q1q2_score": 0.8420872429261809 }
https://math.stackexchange.com/questions/2417728/toss-a-fair-coin-n-times-getting-n-heads-toss-it-n-more-times-what-is-the
# Toss a fair coin n times, getting $N$ heads. Toss it $N$ more times. What is the variance of the total number of heads? I've been stuck on this problem for a while; I can get the expected number of heads, but the solution set I saw gives $\frac{11n}{16}$ for the variance, whereas I get $\frac{9n}{16}$: Each coin flip, $X_i$ for $0,\ldots,n$, can be represented by a Bernoulli random variable with $E(X_i) = p = 0.5$. The total number of heads for the initial round of throws is then a binomial random variable, $N$, with $E(N) = np = \frac{1}{2}n$. The expected total number of heads for the second set of throws, $N'$ is conditional on $N$, $$E(N') = E(E(N' \vert N)) = E(N)E(X_i) = \frac{1}{4}n$$ To get the expected total number of heads we sum these two expectations, $$E(N + N') = E(N) + E(N') = \frac{1}{4}n$$ So the expected total number of heads is a function of a random variable, $h(N) = N + E(X_i)N$. Using the fact that $Var(Y) = Var[E(Y \vert X)] + E[Var(Y \vert X)]$ $$Var(N + E(X_i)N) = Var[E(N + E(X_i)N \vert N)] + E[Var(N + E(X_i)N \vert N)] \\ = Var\left[E\left(\frac{3}{2}N \mid N \right)\right] + E\left[Var\left(\frac{3}{2}N \mid N \right) \right] \\ = \left(\frac{3}{2}\right)^2Var[E(N \mid N)] + \left( \frac{3}{2}\right)^2E[Var(N\mid N)]$$ Factoring out the $\left(\frac{3}{2}\right)^2$, we're left with an expression that is equivalent to $Var(N) = \frac{n}{4}$ (per the equality above), hence $Var(N + N') = \frac{9n}{16}$. What am I missing? For $i\in\{1,\dots,n\}$ define $$H_i=\begin{cases} 0\text{ if tails on the }i^{\text{th}}\text{ toss,}\\ 1\text{ if heads on the }i^{\text{th}}\text{ toss, tails on the corresponding second round toss,}\\ 2\text{ if heads on the }i^{\text{th}}\text{ toss, heads on the corresponding second round toss}. 
\end{cases}$$ Then $$\operatorname{Var}(H_i)=E(H_i^2)-E(H_i)^2=\frac54-\left(\frac34\right)^2=\frac{11}{16}.$$ The total number of heads is $$H=H_1+\cdots+H_n.$$ Since the variables $H_1,\dots,H_n$ are mutually independent, we have $$\operatorname{Var}(H)=\operatorname{Var}(H_1)+\cdots+\operatorname{Var}(H_n)=\frac{11}{16}+\cdots+\frac{11}{16}=\frac{11n}{16}.$$ • I like this a lot, but how do you compute the expectations of the $H_i$s? – HoHo Sep 6 '17 at 8:45 • $$E(H_i)=0\cdot P(H_i=0)+1\cdot P(H_i=1)+2\cdot P(H_i=2)=0\cdot\frac12+1\cdot\frac14+2\cdot\frac14=\frac34$$ – bof Sep 6 '17 at 21:19 • $$E(H_i^2)=0^2\cdot P(H_i=0)+1^2\cdot P(H_i=1)+2^2\cdot P(H_i=2)=0\cdot\frac12+1\cdot\frac14+4\cdot\frac14=\frac54$$ – bof Sep 6 '17 at 21:22 HINT: Note that $N + \mathbb{E}(X_i)N$ is not equal in distribution to $N + N'$. Apply the law of total variance to $N + N'$ directly. Alright, I found the problem: I was effectively ignoring the effect of the conditionals. Since $E(N + N' \vert N = x) = E(x + \sum_{i=0}^{x}X_i) = x + xE(X_i)$, $$E(N + N' \vert N) = N + NE(X_i)$$ Similarly, since $Var(N + N' \vert N = x) = Var(x + \sum_{i=0}^{x}X_i) = xVar(X_i)$ (being independent) $$Var(N + N' \vert N) = NVar(X_i)$$ Putting these two together, \begin{equation*} \begin{split} Var(N + N') &= Var[N + E(X_i)N] + E[N Var(X_i)] \\ &= Var\left(\frac{3}{2}N\right) + E\left(\frac{1}{4}N\right) \\ &= \left(\frac{3}{2}\right)^2\left(\frac{1}{4}\right) + \left(\frac{1}{4}\right)\left(\frac{1}{2}\right) = \frac{11}{16} \end{split} \end{equation*}
2020-04-02T15:44:32
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2417728/toss-a-fair-coin-n-times-getting-n-heads-toss-it-n-more-times-what-is-the", "openwebmath_score": 0.9988511204719543, "openwebmath_perplexity": 301.35257168443013, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9861513873424044, "lm_q2_score": 0.8539127455162773, "lm_q1q2_score": 0.8420872386602383 }
https://www.physicsforums.com/threads/independent-events-probability.571986/
# Homework Help: Independent Events Probability

1. Jan 29, 2012

### lina29

1. The problem statement, all variables and given/known data

During a winter season at one ski resort, there are two roads from Area A to Area B and two roads from Area B to Area C. Each of the four roads is blocked by snow with probability p=.25, independently of the other roads. What is the probability that there exists an open route from Area A to Area C?

It was decided to add a new path connecting areas A and C directly. It is also blocked by snow with probability p=.25, independently of all the other paths. Now what is the probability that there is an open route from Area A to Area C?

2. Relevant equations

3. The attempt at a solution

For the first part I assumed that at least one road had to be open from A to B, so I got .4375, which is the same for at least one road from B to C. And then that both A to B and B to C had to be open, so I got .191, which was wrong. Any help would be appreciated.

2. Jan 29, 2012

### jimbobian

Well, I'm sure there is probably an easier way, but there is nothing wrong with drawing a good old probability tree. It will be quite a big one (as there are four roads and so 4 "rounds" to the tree). But once you've drawn the tree, figure out which branches correspond to it being possible to get from A to C, then figure out their probabilities and you should get the answer.

James

3. Jan 29, 2012

### LCKurtz

Remember that the probability of a route between two points being open is 1 minus the probability that both roads are blocked.

4. Jan 29, 2012

### lina29

Right, so what I did to find at least one road being open between A and B or B and C was 1-(1-.25)(1-.25)= .4375, and then to find the probability of both being open I did .4375*.4375=.191

5. Jan 29, 2012

### jimbobian

But the probability of being blocked is .25, so you've worked out the opposite!

6. Jan 29, 2012

### lina29

Ohh, so what I would do is 1-(1-.75)(1-.75)= .5 and then .5*.5=.25, which would be the final answer for the first part?

7. Jan 29, 2012

### jimbobian

I agree with the logic, not the answer ;)

8. Jan 29, 2012

### lina29

sorry :) it would be 1-(1-.75)(1-.75)= .9375 and then .9375*.9375=.8789, right? For the second part how would I approach it?

9. Jan 29, 2012

### jimbobian

Sounds good. Well, now you have the probability that there is a route to C through B that is open, and you also know the probability of getting directly from A to C. Can you think of a way of combining these to get the second answer?

10. Jan 29, 2012

### lina29

my thought was addition, but then the probabilities would be over 1

11. Jan 29, 2012

### jimbobian

Probabilities over 1 are never really a good sign! Well, imagine you've got two coins; what would be the probability of at least 1 head?

12. Jan 29, 2012

### lina29

1-(1-.5)(1-.5)=.75

13. Jan 29, 2012

### jimbobian

Good, so can you see what the probability of at least one route being open is?

14. Jan 29, 2012

### lina29

1-(1-.8789)(1-.75)=.9697

15. Jan 29, 2012

### jimbobian

Yep, I would agree.

16. Jan 29, 2012

### lina29

thank you!

17. Jan 29, 2012

### jimbobian

No problem, hope they're right!

18. Jan 29, 2012

### HallsofIvy

That is correct, but you did it the hard way. If P(A)= .25 and P(B)= .25, and A and B are independent, $P(A\text{ and }B)= .25^2= 0.0625$: the probability of both roads being blocked is 0.0625, so the probability that at least one of the roads is not is $1- P(A)P(B)= 1- .25^2= 1- 0.0625= 0.9375$, as you say.

19. Jan 29, 2012

### Ray Vickson

I get something different from all of you! In the first problem, {AC blocked} = {both AB blocked} or {both BC blocked}, so P{AC blocked} = (1/4)(1/4) + (1/4)(1/4) - (1/4)^4 = 31/256 = .12109375, so P{AC open} = 225/256 = .87890625. This uses P{A or B} = P{A} + P{B} - P{A & B}.

RGV
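The two answers settled on in the thread (.87890625 without the direct path, .9697265625 with it) can be checked by brute-force enumeration over all open/blocked configurations. A minimal Python sketch (the road indexing is my own, not from the thread):

```python
from itertools import product

P_BLOCKED = 0.25  # each road is blocked independently with p = .25

def prob_open(with_direct_road):
    """Sum the probability of every open/blocked configuration
    that leaves Area A connected to Area C."""
    n_roads = 5 if with_direct_road else 4
    total = 0.0
    # roads 0,1 join A-B; roads 2,3 join B-C; road 4 (if present) joins A-C
    for state in product([True, False], repeat=n_roads):
        p = 1.0
        for is_open in state:
            p *= (1 - P_BLOCKED) if is_open else P_BLOCKED
        via_b = (state[0] or state[1]) and (state[2] or state[3])
        direct = with_direct_road and state[4]
        if via_b or direct:
            total += p
    return total

print(prob_open(False))  # 0.87890625   = (1 - .25**2)**2
print(prob_open(True))   # 0.9697265625 = 1 - (1 - 0.87890625) * 0.25
```

Enumeration agrees with both the step-by-step answer and Ray Vickson's inclusion-exclusion computation.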
https://www.physicsforums.com/threads/capacitor-in-circuit.121470/
# Capacitor in Circuit.

1. May 20, 2006

### willydavidjr

Consider the electrical circuit as shown on the website I provided below (and an attachment I provided too), consisting of an E=6[V] battery, two switches S1 and S2, two resistors R1=4 ohms and R2=2 ohms, and a capacitor C=2 microfarad. The internal resistance of the battery may be ignored. Initially the switches are both open and the capacitor has no charge. Close S1 at a certain time. At a sufficiently long time after S1 is closed, the capacitor is fully charged and the circuit becomes steady.

Question:
1.) Just after S1 is closed, what is the current flowing through R1?
2.) How much charge is stored in the capacitor C?
3.) During the period in which the capacitor is charged, how much work is done by the battery?

My answers:
1.) 1.5 amperes
2.) From q=CV, so 2*6 = 12 microcoulombs
3.) From E=1/2CV^2, so 1/2(2)(6^2) = 36 microjoules.

Am I correct? This is the website: www.geocities.com/willydavidjr/circuit.html

#### Attached Files:

• ###### circuit.jpg
File size: 11.1 KB
Views: 144

2. May 20, 2006

### DaMastaofFisix

I'm not so sure those are right. Though those are the right formulas and such, remember that in the setup where only switch 1 is closed, as you say, once the capacitor is charged, there is no more current. Come to think of it, parts two and three are both correct, but re-check the first part. Remember that if the second switch is open, the second loop of the circuit is functionally "not there".

3. May 20, 2006

### maverick280857

The voltage across a capacitor cannot change instantaneously (because if it would, an infinite current would flow through it according to $i=Cdv/dt$). Your capacitor is initially uncharged. At the instant you close switch S1, the capacitor behaves as a short circuit and so you are correct when you say that the initial current is (6V/4ohm)=1.5A. Your answer for the second part seems correct as well. (It looks like S2 has no role to play?)

For the work done, you can think of a differential deposition of charge on the capacitor plates (as the charge gets deposited on the capacitor). Please try and work this out (by considering the work done by the battery, the energy dissipated in the resistor and that stored in the capacitor) yourself. Alternatively, you can say that the work done by the battery in the complete charging process is equal to the energy dissipated in the resistor plus the energy stored on the capacitor.

Note: $\frac{1}{2}{CV^2}$ is the energy stored in the capacitor.

4. May 20, 2006

### maverick280857

Are you sure S2 doesn't come up anywhere in the question?

5. May 20, 2006

### willydavidjr

So maverick, my first and second answers are correct. Number three is wrong because I used the potential energy formula and it is not the work done? Am I right?

The fourth question is: How much thermal heat is emitted from the resistor R1? (I have no idea about this)

The fifth question is: Keeping the switch S1 closed, the switch S2 is also closed. At a sufficiently long time after the switch S2 is closed, the circuit becomes steady again. How much charge is stored in the capacitor C long after S2 is closed? (Am I right if the answer to this is the same as my answer in number two?)

6. May 20, 2006

### maverick280857

Let's calculate the expression for the work done by the battery. When S1 is closed, Kirchhoff's loop rule gives

$$E = iR_{1} + \frac{q}{C}$$

The current in the circuit is $i = dq/dt$. Multiplying both sides of this equation by $dq=idt$ gives

$$Edq= i^{2}R_{1}dt + \frac{q}{C}dq$$

The term on the left hand side is the work done by the battery (with an emf E and zero internal resistance) in time $dt$. As you can see, part of the work in getting a charge across the circuit is dissipated as heat in the resistor (Joule heating, first term on the right hand side) and some of it is stored as electrostatic energy in the capacitor (second term on the right hand side).

So for your problem, the work done by the battery is simply $E\Delta Q$ where $\Delta Q$ is the total charge that leaves the battery in the charging period (from when the switch S1 is closed to steady state). If a current $i$ passes through a resistor $R$ in time $dt$, the heat dissipated is equal to $Vdq = (iR)(idt) = i^{2}R dt$.

I am assuming that you are closing S2 after the steady state due to the left part of the circuit (not involving R2 and S2) has been achieved first. If so, the steady-state conditions computed earlier are initial conditions when you close S2. Once you have closed S2, you now have two loops, so you can either solve the circuit by assuming two currents in the two loops and writing the d.e. of the circuit (Kirchhoff's loop law really) or by using Thevenin's Theorem (if you know this, otherwise don't bother).

What is the physics of the fifth problem then? Before you closed S2, capacitor C was an open circuit (due to steady state in the left half of the circuit). Now on closing S2, you provide the emf source E a conducting path (and also a path for the capacitor to discharge from $Q=CE$). How does the charge evolve in the circuit as a function of time? (You don't need to set up the d.e.'s and solve them for your question, but it would give you good insight.)

Try solving the problem with these hints and feel free to ask more.

Last edited: May 20, 2006

7. May 21, 2006

### willydavidjr

I am now getting closer to it, but I am bothered by the derivatives. How can I get the value of dt and dq? There is not even a time given in the problem.

8. May 21, 2006

### maverick280857

You don't need the 'dt'. We were deriving an expression from first principles, hence using the differential approach. $\Delta Q$ is the total charge transferred to the capacitor. This means it is the charge transferred from $t=0$ to $t=\infty$. The phrase "sufficiently long" means greater than 5 time constants, so effectively steady state has been achieved.

9. May 22, 2006

### willydavidjr

I think the answer to number 5 is the same as in number 2. After S2 is closed, the charge in the capacitor is still the same. We only need the capacitor value and the value of the battery; in short, the answer is also 12 microcoulombs.

10. May 23, 2006

### maverick280857

$E\Delta Q=CE^{2}$ gives the right answer, but I don't know how much that is numerically.
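The quantities discussed in the thread follow from the energy bookkeeping maverick280857 describes: the battery does work $E\Delta Q = CE^2$, part of which is stored in the capacitor and the rest dissipated in $R_1$. A quick numeric sketch (variable names are mine):

```python
E = 6.0         # battery emf, volts
R1 = 4.0        # ohms
C = 2e-6        # farads

i0 = E / R1                  # initial current: the uncharged capacitor acts as a short
Q = C * E                    # final charge on the capacitor
W_battery = E * Q            # total work done by the battery (sum of E dq)
U_cap = 0.5 * C * E**2       # energy stored in the capacitor
heat_R1 = W_battery - U_cap  # energy dissipated in R1, by energy balance

print(i0)         # 1.5 A
print(Q)          # 1.2e-05 C, i.e. 12 microcoulombs
print(W_battery)  # 7.2e-05 J, i.e. 72 microjoules (so the OP's 36 is the stored energy, not the work)
print(U_cap)      # ~3.6e-05 J
print(heat_R1)    # ~3.6e-05 J: exactly half the battery's work is lost as heat
```

Printed floats may show tiny rounding artifacts; the exact values are 12 µC, 72 µJ, 36 µJ, and 36 µJ.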
https://math.stackexchange.com/questions/595689/universal-set-of-subsets-a-and-b
# Universal set of subsets A and B

The question is: Two subsets are given: $A = \{ Z, H, O, V, N, I, R \}$; $B = \{ I, G, O, R \}$. The aim is to find the universal set of these subsets. I tried to use the definition of "universal set" and here are my suggestions:

1. The universal set is the array of UNIQUE characters of the subsets: $U = \{ Z, H, O, V, N, I, R, G \}$
2. The universal set is ALL characters of the subsets: $U = \{ Z, H, O, V, N, I, R, I, G, O, R \}$
3. The universal set is all alphabetical characters: $U =\{ A, \dots, Z \}$

Which one is true?

The universal set $U$ could be either $(1): U = A\cup B$ or $(3)$ (in which case $A\cup B\subsetneq U$): Without more information, we cannot conclude which, if either. The Universal Set is simply the set which contains all elements in the domain. Without the domain clearly defined, we cannot conclude just how large $U$ is; we can only conclude, by the definition of "subset", that if $A \subseteq U$ and $B\subseteq U$, then $A\cup B \subseteq U$.

The second "option" you list is simply the same set as described by $(1)$: A set of elements is a set with each element counting once and only once. So, for example, $\{1, 1, 2, 3, 3\} = \{1, 2, 3\}$.

• Thank you for the explanations! The next task I need to complete is to construct a table of the encoder for this universal set (using the Huffman algorithm). I think that it doesn't matter whether it is the 1st or 3rd alternative, because entries of characters that are neither in A nor B are equal to 0 and can't influence the table of the encoder. Am I right? – Igor Dec 6 '13 at 17:09
• It is 1 or 3 because U contains all elements in A, all elements in B, and perhaps (but not necessarily) more elements. We need only list each element once. – Namaste Dec 6 '13 at 17:18
• OK, thanks for this too :) Once again (to clarify this for me) - choosing 1 or 3 can't affect Huffman's table of the encoder, right? – Igor Dec 6 '13 at 17:23
• I didn't forget it :) Carlos to forgive vary :-) - your answer was right too – Igor Dec 6 '13 at 17:31
• @amWhy: Very nice feedback! +1 – Amzoti Dec 7 '13 at 0:20

According to set theory, both 1. and 2. are the same set: $\{X,Y,X\}=\{X,Y\}$. Repetition and order do not matter. This set is equal to $\{G,H,I,N,O,R,V,Z\}$ and can be notated as $A\cup B$ (the union of sets $A$ and $B$). Any universal set must contain $A$ and must contain $B$, so it must contain their union: $A\cup B\subseteq U$. So, whatever you take as your universal set is up to you, as long as $G\in U$, $H\in U$, $I\in U$, $N\in U$, $O\in U$, $R\in U$, $V\in U$ and $Z\in U$. Your alternative 3. is such a solution.

• Thank you too! It's quite clear to me now. I can repeat the question I placed in a comment below - your opinion is also important to me: The next task I need to complete is to construct a table of the encoder for this universal set (using the Huffman algorithm). I think that it doesn't matter whether it is the 1st or 3rd alternative, because entries of characters that are neither in A nor B are equal to 0 and can't influence the table of the encoder. Am I right? – Igor Dec 6 '13 at 17:11
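The two points made in the answers (options 1 and 2 describe the same set, and any universal set must contain $A\cup B$) can be illustrated directly with Python's built-in sets, where duplicates collapse automatically:

```python
import string

A = {"Z", "H", "O", "V", "N", "I", "R"}
B = {"I", "G", "O", "R"}

# Options 1 and 2 describe the same set: listing an element twice changes nothing
option1 = A | B
option2 = set(["Z", "H", "O", "V", "N", "I", "R", "I", "G", "O", "R"])
assert option1 == option2

# Option 3: the whole alphabet also works, since it contains A ∪ B
option3 = set(string.ascii_uppercase)
assert (A | B) <= option3    # A ∪ B is a subset of option 3

print(sorted(option1))  # ['G', 'H', 'I', 'N', 'O', 'R', 'V', 'Z']
```

The same collapse happens for `{1, 1, 2, 3, 3} == {1, 2, 3}`, mirroring the example in the accepted answer.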
https://math.stackexchange.com/questions/1763844/inner-product-of-center-zg-of-a-group
# Inner Product of center Z(G) of a Group

Let $G$ be a group and $Z(G)$ be its center. For $n\in \mathbb{N}$, define

$$J_n=\{(g_1,g_2,...,g_n)\in Z(G)\times Z(G)\times\cdots\times Z(G): g_1g_2\cdots g_n=e\}.$$

As a subset of the direct product group $G \times G \times \dots \times G$, $J_n$ is (1) not necessarily a subgroup, (2) a subgroup but not necessarily a normal subgroup, (3) a normal subgroup, (4) isomorphic to $Z(G)\times Z(G)\times\cdots\times Z(G)$ $((n-1)$ times).

Is $J_n$ a subgroup? My argument: let us assume that $n = 2$; then $J_2 = \{(g_1,g_2)\in Z(G)\times Z(G) :g_1g_2=e\}$, i.e. $J_2 = \{(g_1,g_1),(g_1,g_2),(g_2,g_1),(g_2,g_2)\}$ and $g_1g_1= g_1g_2= g_2g_1= g_2g_2=e$. Is this possible? Now consider $Z(G)$: since $g_1$ and $g_2 \in Z(G)$, $g_1g_2=g_2g_1$ and $g_1g_2=g_2g_1 = e$; therefore, $g_1$ is inverse to $g_2$ and vice versa in $Z(G)$. But according to the definition of the set $J_2$, $g_1g_1=e$ and $g_2g_2=e$, which implies that $g_1$ and $g_2$ are inverses to themselves; but $Z(G)$ is a subgroup and every element in it has a unique inverse, hence $J_2$ is not a subgroup.... If there are mistakes in my argument, please tell me where I am wrong...

• In (1)-(3) I assume you mean "a subgroup"...of the cartesian product = direct product $\;G\times G\times\ldots\times G\;$? – DonAntonio Apr 29 '16 at 8:42
• You can also write $\bigoplus_i^{n} Z(G)$ to make it easier :) Or use prod! – Zelos Malum Apr 29 '16 at 8:58
• @Joanpemo Sorry, I can't get you.. – Sam Christopher Apr 29 '16 at 9:28
• @SamChristopher When you talk of a "subgroup" it is because there is an "overgroup" or larger group in which the assumed subgroup lives, right? – DonAntonio Apr 29 '16 at 9:31
• Yes...You are right...sorry..it is actually a subset of GXGX...G (n times)..To prove that J_n is first of all a subgroup of GXGX...G and then to prove it is normal. – Sam Christopher Apr 29 '16 at 9:38

You have to show $\;J_n\;$ isn't empty (trivial), and also

$$(g_1,...,g_n),\,\,(h_1,...,h_n)\in J_n\implies (g_1,...,g_n)(h_1,...,h_n)^{-1}\in J_n$$

But assuming what I asked you in my comment above, this is easy:

$$(h_1,...,h_n)^{-1}=(h_1^{-1},...,h_n^{-1})\implies (g_1,...,g_n)(h_1,...,h_n)^{-1}=(g_1h_1^{-1},...,g_nh_n^{-1})\in J_n\iff g_1h_1^{-1}\cdot...\cdot g_nh_n^{-1}=1$$

But each and every one of all these elements commute with everything, and since also

$$h_1^{-1}\cdot\ldots\cdot h_n^{-1}=(h_1\cdot\ldots\cdot h_n)^{-1}=1^{-1}=1$$

we thus get

$$g_1h_1^{-1}\cdot\ldots\cdot g_nh_n^{-1}=g_1\cdot\ldots\cdot g_n\cdot h_1^{-1}\cdot\ldots\cdot h_n^{-1}=1\cdot1=1$$

So $J_n$ is a subgroup and, with very little more work, also (3) holds; thus (1) and (2) aren't true, and (4) is true as the last coordinate in $\;J_n\;$ is determined by the first $\;n-1\;$.

• Thanks for the answer, I got it..please give me a hint to prove that it is a normal subgroup.. – Sam Christopher Apr 29 '16 at 9:35
• Why is (4) clearly false in the finite case? The map $(g_1,\ldots, g_{n-1})\mapsto (g_1,\ldots, g_{n-1},(g_1g_2\cdots g_{n-1})^{-1})$ defines a bijection, which looks to be a homomorphism. – Aaron Apr 29 '16 at 9:35
• @SamChristopher Every subgroup of an abelian group is normal. – Aaron Apr 29 '16 at 9:38
• So you are saying GXGX..G is abelian...how is that? pls explain – Sam Christopher Apr 29 '16 at 9:40
• @SamChristopher The reason it is normal is because for $g \in G, h \in Z(G)$, we have $ghg^{-1}=h$, so if you try to conjugate an element in $J_n$ by an element of $G^n$, you won't change the element (conjugation acts as the identity automorphism, which is a stronger statement than normality). The same proof gives that $Z(G^n)=Z(G)^n$. – Aaron Apr 29 '16 at 10:01
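The subgroup property and the size count behind option (4) can be sanity-checked computationally for a small example. Here $Z(G)$ is taken to be $\mathbb{Z}_6$ written additively (my choice, not from the thread; the group is abelian so it equals its own center), and the script verifies that $J_3$ is closed under the direct-product operation and inverses, with $|Z(G)|^{n-1}$ elements:

```python
from itertools import product

m, n = 6, 3   # model: Z(G) = Z_6 written additively, tuples of length 3

# J_n: n-tuples of central elements whose product (here: sum mod m) is e = 0
J = {t for t in product(range(m), repeat=n) if sum(t) % m == 0}

def op(s, t):
    """Componentwise group operation in the direct product."""
    return tuple((a + b) % m for a, b in zip(s, t))

def inv(t):
    """Componentwise inverse."""
    return tuple((-a) % m for a in t)

assert tuple([0] * n) in J                        # contains the identity
assert all(op(s, t) in J for s in J for t in J)   # closed under the operation
assert all(inv(t) in J for t in J)                # closed under inverses
assert len(J) == m ** (n - 1)                     # size matches option (4)
print(len(J))  # 36
```

This matches Aaron's bijection: the last coordinate is forced to be the inverse of the product of the first $n-1$.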
https://math.stackexchange.com/questions/2312516/polynomials-f-and-f-with-all-roots-distinct-integers
# Polynomials $f$ and $f'$ with all roots distinct integers

Edit 2. Since the question below appears to be open for degree seven and above, I have re-tagged appropriately, and also suggested this on MathOverflow (link) as a potential polymath project.

Edit 1. A re-phrasing thanks to a comment below: Is it true that, for all $n \in \mathbb{N}$, there exists a degree $n$ polynomial $f \in \mathbb{Z}[x]$ such that both $f$ and $f'$ have all of their roots being distinct integers? (If not, what is the minimal $n$ to serve as a counterexample?) The worked example below for $n = 3$ uses $f$ with roots $\{-9, 0, -24\}$ and $f'$ with roots $\{-18, -4\}$. (See also the note at the end, and the linked arXiv paper.)

Question. For all $n \in \mathbb{N}$: Is it possible to find a polynomial in $\mathbb{Z}[x]$ with $n$ distinct $x$-intercepts, and all of its turning points, at lattice points?

This is clearly true when $n = 1$ and $n = 2$. A bit of investigation around $n = 3$ leads to, e.g., the polynomial defined by:

$$f(x) = x^3 + 33x^2 + 216x = x(x+9)(x+24)$$

which has $x$-intercepts at $(0,0)$, $(-9, 0)$, and $(-24, 0)$. Taking the derivative, we find that:

$$f'(x) = 3x^2 + 66x + 216 = 3(x+4)(x+18)$$

so that the turning points of $f$ occur at $(-4, -400)$ and $(-18, 972)$.

I am not even sure if this is true in the quartic$^1$ case; nevertheless, this question concerns the more general setting. In particular, is the statement true for all $n \in \mathbb{N}$ and, if not, then what is the minimal $n$ for which this is not possible?

$1$. Will Jagy kindly resolves $n=4$, since the monic quartic $f$ with integer roots $\{-7, -1, 1, 7\}$ leads to an $f'$ with roots $\{-5, 0, 5\}$. This example is also found as B5 in the paper here (PDF 22/24).

The same paper has the cubic example above as B1, and includes a quintic example as B7:

$$f(x) = x(x-180)(x-285)(x-460)(x-780)$$

and

$$f'(x) = 5(x-60)(x-230)(x-390)(x-684)$$

The linked arXiv (unpublished) manuscript seems to suggest that this problem is open.

• So you meant both $f',f$ are integer polynomials with distinct integer roots – reuns Jun 6 '17 at 20:42
• We can rephrase it as: $f',f$ are rational polynomials with distinct and all rational roots. We can probably replace $D : f \mapsto f'$ by any $\mathbb{Q}$ linear operator $T : \mathbb{Q}[X] \to \mathbb{Q}[X]$; the situation shouldn't be very different. And we can investigate this $\bmod p$ for each $p$. – reuns Jun 6 '17 at 21:05
• @user1952009 Thanks for the assistance with re-phrasing. If you have ideas about how to broach the problem, I would welcome them in the form of an answer! – Benjamin Dickman Jun 6 '17 at 22:59
• This theorem might help to prove $f'$ or $f$ has no rational roots. And see the references of this article; they are about $f,f'$ having rational or integer roots – reuns Jun 7 '17 at 0:10
• @user1952009 Thanks: The latter article references an AMM piece, which I looked up to see what had cited it. In this way, I came upon this... – Benjamin Dickman Jun 7 '17 at 0:19

## 4 Answers

The arXiv paper posted here (pdf) contains a list of references that have broached this problem in the past, and examples for $n=3$, $4$, and $5$ (which I have since incorporated into the OP). However, even the case of $n = 6$ is listed as open$^\star$ (cf. Open Problem 6 on PDF 23/24) as of 2004. So, it appears the question asked here is open.

$\star$ (Edit): User i9Fn helpfully points to a 2015 paper containing the sextic polynomial

$$f(x) = (x - 3130)(x + 3130)(x - 3590)(x + 3590)(x - 10322)(x + 10322)$$

which leads to

$$f'(x) = 6x(x - 3366)(x + 3366)(x - 8650)(x + 8650)$$

thereby resolving the above open problem, and leading to an updated open question (cf. p.
363): Are there polynomials with the above-stated features and degree greater than six? According to this latter paper's author, no such examples are known. • This paper claims to found infinitely many sextic polynomials.This also looks interesting, but I can't read them. – i9Fn Jun 10 '17 at 6:36 • @i9Fn Thanks! I located a copy of the paper and have updated accordingly. – Benjamin Dickman Jun 15 '17 at 16:37 $$x^4 - 50 x^2 + 49 = (x-1)(x+1)(x-7)(x+7)$$ $$4 x^3 - 100 x = 4x (x-5)(x+5)$$ • How did you find it ? Randomly or is there an idea behind this ? – reuns Jun 7 '17 at 0:00 It is likely that this is impossible in degree six, with some very predictable behavior that might, perhaps, be provable. First, given distinct integer roots of the sextic $f(x)$ $$0 < e < d < c < b < a,$$ it seems that we cannot get as many as three integer roots of the quintic $f'(x)$ unless $a$ is even, meanwhile $b+e = a$ and $c + d = a.$ Which is to say that a simple integer translate $$f\left(x - \frac{a}{2} \right) = (x^2 - u^2)(x^2 - v^2)(x^2 - w^2).$$ ==================================== a b c d e 16 13 11 5 3 Crit : 1 8 15 total 3 26 24 15 11 2 Crit : 6 13 20 total 3 30 23 22 8 7 Crit : 2 15 28 total 3 32 26 22 10 6 Crit : 2 16 30 total 3 38 32 28 10 6 Crit : 8 19 30 total 3 42 37 26 16 5 Crit : 2 21 40 total 3 46 45 24 22 1 Crit : 10 23 36 total 3 48 39 33 15 9 Crit : 3 24 45 total 3 52 48 30 22 4 Crit : 12 26 40 total 3 60 46 44 16 14 Crit : 4 30 56 total 3 62 44 32 30 18 Crit : 22 31 40 total 3 64 52 44 20 12 Crit : 4 32 60 total 3 70 59 46 24 11 Crit : 4 35 66 total 3 74 52 42 32 22 Crit : 26 37 48 total 3 74 63 48 26 11 Crit : 18 37 56 total 3 76 64 56 20 12 Crit : 16 38 60 total 3 78 64 44 34 14 Crit : 22 39 56 total 3 78 72 45 33 6 Crit : 18 39 60 total 3 80 65 55 25 15 Crit : 5 40 75 total 3 80 73 47 33 7 Crit : 3 40 77 total 3 82 70 44 38 12 Crit : 22 41 60 total 3 84 74 52 32 10 Crit : 4 42 80 total 3 86 66 52 34 20 Crit : 26 43 60 total 3 86 75 56 30 11 Crit : 20 43 66 
total 3 90 69 66 24 21 Crit : 6 45 84 total 3 92 90 48 44 2 Crit : 20 46 72 total 3 96 78 66 30 18 Crit : 6 48 90 total 3 96 83 61 35 13 Crit : 5 48 91 total 3 104 96 60 44 8 Crit : 24 52 80 total 3 ================= Second, once we focus on $$(x^2 - p^2)(x^2 - q^2)(x^2 - r^2),$$ the computer thinks we can only factor the derivative when there is a repeat, either $p = q$ or $q = r.$ =============================== for(int r = 1; r <= 50; ++r){ for(int q = 1; q <= r; ++q){ for(int p = 1; p <= q; ++p){ mpz_class s2 = p * p + q * q + r * r; mpz_class s4 = q * q * r * r + r * r * p * p + p * p * q * q; mpz_class d = s2 * s2 - 3 * s4; if( s2 % 3 == 0 && s4 % 3 == 0 && mp_SquareQ(d) && mp_SquareQ(3 * s4) ) p q r 1 1 1 s2: 3 = 3 s4: 3 = 3 d: 0 = 2 2 2 s2: 12 = 2^2 3 s4: 48 = 2^4 3 d: 0 = 3 3 3 s2: 27 = 3^3 s4: 243 = 3^5 d: 0 = 4 4 4 s2: 48 = 2^4 3 s4: 768 = 2^8 3 d: 0 = 1 5 5 +++ s2: 51 = 3 17 s4: 675 = 3^3 5^2 d: 576 = 2^6 3^2 5 5 5 s2: 75 = 3 5^2 s4: 1875 = 3 5^4 d: 0 = 6 6 6 s2: 108 = 2^2 3^3 s4: 3888 = 2^4 3^5 d: 0 = 7 7 7 s2: 147 = 3 7^2 s4: 7203 = 3 7^4 d: 0 = 8 8 8 s2: 192 = 2^6 3 s4: 12288 = 2^12 3 d: 0 = 9 9 9 s2: 243 = 3^5 s4: 19683 = 3^9 d: 0 = 2 10 10 +++ s2: 204 = 2^2 3 17 s4: 10800 = 2^4 3^3 5^2 d: 9216 = 2^10 3^2 10 10 10 s2: 300 = 2^2 3 5^2 s4: 30000 = 2^4 3 5^4 d: 0 = 1 1 11 +++ s2: 123 = 3 41 s4: 243 = 3^5 d: 14400 = 2^6 3^2 5^2 11 11 11 s2: 363 = 3 11^2 s4: 43923 = 3 11^4 d: 0 = 12 12 12 s2: 432 = 2^4 3^3 s4: 62208 = 2^8 3^5 d: 0 = 5 5 13 +++ s2: 219 = 3 73 s4: 9075 = 3 5^2 11^2 d: 20736 = 2^8 3^4 13 13 13 s2: 507 = 3 13^2 s4: 85683 = 3 13^4 d: 0 = 14 14 14 s2: 588 = 2^2 3 7^2 s4: 115248 = 2^4 3 7^4 d: 0 = 3 15 15 +++ s2: 459 = 3^3 17 s4: 54675 = 3^7 5^2 d: 46656 = 2^6 3^6 15 15 15 s2: 675 = 3^3 5^2 s4: 151875 = 3^5 5^4 d: 0 = 16 16 16 s2: 768 = 2^8 3 s4: 196608 = 2^16 3 d: 0 = 17 17 17 s2: 867 = 3 17^2 s4: 250563 = 3 17^4 d: 0 = 18 18 18 s2: 972 = 2^2 3^5 s4: 314928 = 2^4 3^9 d: 0 = 1 19 19 +++ s2: 723 = 3 241 s4: 131043 = 3 11^2 19^2 d: 129600 = 2^6 
3^4 5^2 19 19 19 s2: 1083 = 3 19^2 s4: 390963 = 3 19^4 d: 0 = 4 20 20 +++ s2: 816 = 2^4 3 17 s4: 172800 = 2^8 3^3 5^2 d: 147456 = 2^14 3^2 20 20 20 s2: 1200 = 2^4 3 5^2 s4: 480000 = 2^8 3 5^4 d: 0 = 21 21 21 s2: 1323 = 3^3 7^2 s4: 583443 = 3^5 7^4 d: 0 = 2 2 22 +++ s2: 492 = 2^2 3 41 s4: 3888 = 2^4 3^5 d: 230400 = 2^10 3^2 5^2 22 22 22 s2: 1452 = 2^2 3 11^2 s4: 702768 = 2^4 3 11^4 d: 0 = 5 5 23 +++ s2: 579 = 3 193 s4: 27075 = 3 5^2 19^2 d: 254016 = 2^6 3^4 7^2 13 23 23 +++ s2: 1227 = 3 409 s4: 458643 = 3 17^2 23^2 d: 129600 = 2^6 3^4 5^2 23 23 23 s2: 1587 = 3 23^2 s4: 839523 = 3 23^4 d: 0 = 24 24 24 s2: 1728 = 2^6 3^3 s4: 995328 = 2^12 3^5 d: 0 = 5 25 25 +++ s2: 1275 = 3 5^2 17 s4: 421875 = 3^3 5^6 d: 360000 = 2^6 3^2 5^4 11 25 25 +++ s2: 1371 = 3 457 s4: 541875 = 3 5^4 17^2 d: 254016 = 2^6 3^4 7^2 25 25 25 s2: 1875 = 3 5^4 s4: 1171875 = 3 5^8 d: 0 = 10 10 26 +++ s2: 876 = 2^2 3 73 s4: 145200 = 2^4 3 5^2 11^2 d: 331776 = 2^12 3^4 26 26 26 s2: 2028 = 2^2 3 13^2 s4: 1370928 = 2^4 3 13^4 d: 0 = 27 27 27 s2: 2187 = 3^7 s4: 1594323 = 3^13 d: 0 = 28 28 28 s2: 2352 = 2^4 3 7^2 s4: 1843968 = 2^8 3 7^4 d: 0 = 11 29 29 +++ s2: 1803 = 3 601 s4: 910803 = 3 19^2 29^2 d: 518400 = 2^8 3^4 5^2 29 29 29 s2: 2523 = 3 29^2 s4: 2121843 = 3 29^4 d: 0 = p q r =============================== • Unless there is a mistake in my code there is no polynomial when the difference between the largest and smallest root is less than 200 and when it is symmetric (like you did) when the difference is less than 1000. Currently our best bet is to see if for distinct roots the value is a positive square or not and then to try and prove it (as your data suggest that but we should check for larger values considering the roots of quintic example B7. I don't understand why did you check $s2$ and $s4$ are $0 \mod 3$ and $3 * s4$ a square? – i9Fn Jun 10 '17 at 8:59 • This is possible in degree six: I have updated the question to include a sextic polynomial with the desired properties. 
– Benjamin Dickman Jun 15 '17 at 17:26

Try working backwards: find an integer polynomial $F$ of degree $n-1$ with all integer roots, such that its antiderivative has $n$ distinct roots. One way to check for this is by looking for $n$ sign changes. Here's an example for $n=4$; I'll edit when I find a general solution.

• I will undo my downvote and upvote when you have a general solution; right now, it appears that there is a solution when there is not! – Benjamin Dickman Jun 6 '17 at 20:22
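The sextic example quoted above can be verified with exact integer arithmetic: build $f$ from its six roots, differentiate the coefficient list, and evaluate $f'$ at the five claimed roots. A small Python check (the helper functions are mine; no claim about how the 2015 paper found the example):

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_eval(p, x):
    return sum(c * x**k for k, c in enumerate(p))

# f(x) = (x - 3130)(x + 3130)(x - 3590)(x + 3590)(x - 10322)(x + 10322)
f = [1]
for r in (3130, -3130, 3590, -3590, 10322, -10322):
    f = poly_mul(f, [-r, 1])   # multiply by the factor (x - r)

# differentiate the coefficient list: d/dx sum c_k x^k = sum k c_k x^(k-1)
fprime = [k * c for k, c in enumerate(f)][1:]

claimed = [0, 3366, -3366, 8650, -8650]
assert all(poly_eval(fprime, r) == 0 for r in claimed)
print("f' vanishes at", claimed)
```

Since Python integers are arbitrary precision, the check is exact, confirming that both $f$ and $f'$ have all roots at distinct integers for this degree-six example.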
https://math.stackexchange.com/questions/2492303/what-inequalities-should-one-know-to-evaluate-limits-fluently
# What inequalities should one know to evaluate limits fluently?

During the Calculus course, we often used common inequalities to estimate the terms of a sequence and find its limit in the end. The problem is that these inequalities, obvious though they may be, seldom come to mind if you have not used them to solve similar problems at least once. I have listed some of them, but I think there are more - what should I add to the list?

1. Bernoulli's inequality
2. $n! > 2^n \iff n \ge 4$
3. $2^n > n^2$ for $n \ge 5$ (and also for $n=1$, so this is not an "iff")

• i wonder why Cauchy-Schwarz didn't show up anywhere... also Hölder is sometimes pretty useful. – tired Oct 27 '17 at 16:59
• Muirhead as well. – mtheorylord Oct 27 '17 at 19:28
• FYI it's not just in calculus. It also comes up a lot when you want to analyze an error or other kind of deviation probabilistically (e.g. $\epsilon$ deviation with $\delta$ probability). – Mehrdad Oct 27 '17 at 20:02

$-1\leq \sin(x)\leq 1$, and the same is true for $\cos$.

The AM-GM inequality is so popular it has its own tag on this site.

This inequality involving the choose function is pretty commonly used when the choose function or exponential function pops up:

$$\frac{n^k}{k^k}\leq {n\choose k}\leq\frac{n^k}{k!}\leq \frac{(n\cdot e)^k}{k^k}$$

• $|\sin(x)| \leq |x|$ is also handy. – Bungo Oct 27 '17 at 18:29

A good thing to have in mind is the hierarchy of common functions. In synthetic form:

For all $a∈ℝ$, $b⩾1$, $c>1$, $d>1$ and $x>1$,

$$a ≺ \log(\log(n)) ≺ \log(n)^b ≺ n^{\frac{1}{c}} ≺ n ≺ n \log(n) ≺ n^d ≺ x^n ≺ n! ≺ n^n$$

when $n⟶+∞$, using Hardy's notations for asymptotic domination¹. Note that this does not give you the specific $N$ where one gets superior to the other, though.

You might also derive useful comparisons from the Théorème de croissances comparées, for which I don't know of any equivalent in English literature, but which is simply the following: For all $a>0$ and $b>0$,

$$x^a = o_{+∞}(e^{bx})$$
$$\ln(x)^a = o_{+∞}(x^b)$$

and

$$\lvert\ln(x)\rvert^a = o_{0}(x^b)$$

1. Which is messed up, because those are obsolete, but there are no other convenient notations for asymptotic domination as a (strict partial) order. What I was taught was that the Vinogradov notation $f ≪ g$ stood for $f=o(g)$, but apparently it is $f=O(g)$ instead. We really need a standardization of those things.

Two useful things to know in calculating limits are:

• For all $a>1,\alpha > 0$, there exists some $N$ such that $\log_a n < n^\alpha$ for $n>N$
• For all $a>1, \alpha > 0$, there exists some $N$ such that $a^n > n^\alpha$ for all $n>N$

In other words, the exponential function is faster than any power, and log is slower than any power.

• Your second bullet requires $a > 1$, correct? Otherwise you have a decaying exponential. – User8128 Oct 28 '17 at 3:54

For any function such that

$$x\in(a,b)\implies f''(x)>0$$

then

$$c,x\in[a,b]\implies f(c)+f'(c)(x-c)\le f(x)$$
$$c,d\in[a,b],~x\in[c,d]\implies f(c)+\frac{f(c)-f(d)}{c-d}(x-c)\ge f(x)$$

This is most clear if you graph it out. All inequalities are flipped when $f''(x)<0$. For example,

$$x\in(-\infty,\infty)\implies 1+x\le e^x$$
$$x\in[0,\pi]\implies x\ge\sin(x)$$

A lot of common inequalities can be derived from this. (Though note you have to take the derivative, so don't use it to prove fundamental derivatives.)

• In words: The graph of a convex function lies above its tangent lines. – Martin R Oct 28 '17 at 3:12
• @MartinR And below its secant lines. – Simply Beautiful Art Oct 28 '17 at 19:10

In general one should already be aware of the more famous and elementary inequalities like AM-GM. You have also noted down the Bernoulli inequality, which is much simpler, less famous, and highly powerful and useful in unexpected ways. Apart from the famous ones (those with a name) I find the following very helpful:

• $\sin x<x<\tan x$ for $x\in(0,\pi/2)$.
• $\log x\leq x-1$ for $x>0$.
• $e^{x} \geq 1+x$ for all real $x$.
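The "there exists some $N$" caveat in the hierarchy answers is worth seeing numerically: the crossover where $2^n$ overtakes $n^{10}$ is modest, but the crossover where a small power overtakes the logarithm can be astronomically large. A quick Python illustration (the exponents $10$ and $0.1$ are my own examples):

```python
import math

# exponential beats any power: smallest N with 2^n > n^10 for every n >= N
# (checked up to n = 199; beyond that 2^n only pulls further ahead)
N = 1 + max(n for n in range(1, 200) if 2.0**n <= n**10)
print(N)  # 59

# log is beaten by any power, but the crossover can be enormous:
# compare ln(m) against m^0.1
for m in (10**6, 10**15, 10**16):
    print(m, math.log(m) < m**0.1)
# ln m is still ahead at m = 10^6 and m = 10^15;
# m^0.1 finally wins near m ~ 3.4e15, so only the last line prints True
```

This makes the point in the footnoted answer concrete: the asymptotic ordering says nothing about how far out you must go before it kicks in.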
https://math.stackexchange.com/questions/465945/understanding-how-to-find-a-basis-for-the-row-space-column-space-of-some-matrix
# Understanding how to find a basis for the row space/column space of some matrix A. I just need some verification on finding the basis for column spaces and row spaces. If I'm given a matrix A and asked to find a basis for the row space, is the following method correct? -Reduce to row echelon form. The rows with leading 1's will be the basis vectors for the row space. When looking for the basis of the column space (given some matrix A), is the following method correct? -Reduce to row echelon form. The columns with leading 1's corresponding to the original matrix A will be the basis vectors for the column space. When looking for bases of row space/column space, there's no need in taking a transpose of the original matrix, right? I just reduce to row echelon and use the reduced matrix to get my basis vectors for the row space, and use the original matrix to correspond my reduced form columns with leading 1's to get the basis for my column space. • What you are saying is correct; when you find a basis for the column space, you can take the columns of A corresponding to the columns with leading 1's in a row echelon form for A. (If you wanted to find a basis for the row space consisting of original rows, then you could take $A^{T}$ and find a basis for its column space using your method.) Aug 12, 2013 at 17:21 yes you're correct. note that row echelon form doesn't necessarily result in 'leading 1s'. it's 'reduced/canonical row echelon form' that requires that form. having reduced your matrix to the set of the linearly independent rows/columns via the row transformations, you can choose either the new reduced vectors with leading pivots (1s or otherwise), or the corresponding vectors from the original matrix*. they are effectively 'the same'. 
I'd go with the reduced vectors, however, as they make any further manipulation or plotting easier. *See the caveat raised by user84413.

• If you take vectors from the original matrix, don't you have to take into account any row exchanges that might have been made? Aug 12, 2013 at 17:38
• Yes, I should have mentioned that. It's not my technique to exchange rows, but you're correct. Aug 12, 2013 at 17:42

Your procedure is correct, and I'd like to summarize it:

## Finding Row Space

• Strategy: do Gaussian elimination on the matrix M; the non-zero rows of the reduced matrix M' form a basis of the row space.
• Reason: 1) elementary row operations don't change the row space of M, since they replace rows by invertible linear combinations of rows; 2) the non-zero rows of a reduced matrix span its row space and are independent, which can be proven by showing $$\sum_i c_i\vec{r_i}=\vec{0}\Rightarrow c_i=0,\ \forall i$$.

## Finding Column Space

• Strategy: 1) you could transpose and find the row space of $$M^{T}$$; 2) or do Gaussian elimination and take the columns of $$M$$ corresponding to the columns with leading entries in $$M'$$.
• Reason for 2): $$M'$$ exhibits solutions in which each column without a leading entry is a linear combination of the pivot columns, so we only need to take the columns of $$M$$ whose images in $$M'$$ have leading entries.
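Both strategies are easy to mechanize. Below is a small sketch (my addition, not from the original answers) that row-reduces with exact rational arithmetic and reads off both bases; the matrix `A` is just an illustrative example.

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan reduce a matrix; return (reduced matrix, pivot column indices)."""
    R = [[Fraction(x) for x in row] for row in rows]
    m, n = len(R), len(R[0])
    pivots, r = [], 0
    for c in range(n):
        if r == m:
            break
        piv = next((i for i in range(r, m) if R[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        R[r], R[piv] = R[piv], R[r]       # swap the pivot row into place
        R[r] = [x / R[r][c] for x in R[r]]   # scale pivot to 1
        for i in range(m):
            if i != r and R[i][c] != 0:   # clear the rest of the column
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

A = [[1, 2, 0], [2, 4, 1], [3, 6, 1]]
R, piv = rref(A)
row_basis = [row for row in R if any(row)]                   # from the reduced matrix
col_basis = [[A[i][c] for i in range(len(A))] for c in piv]  # original columns at pivots
```

Here `row_basis` comes from the reduced matrix, while `col_basis` takes the columns of the original `A` at the pivot positions, exactly as described above.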
http://mathhelpforum.com/pre-calculus/121459-numbers-b-c-d-will-function-f-x-satisfy.html
# Thread: For which numbers a,b,c,d will the function f(x) satisfy... 1. ## For which numbers a,b,c,d will the function f(x) satisfy... Problem: For which numbers $a,b,c,d$ will the function $f(x)=\frac{ax+b}{cx+d}$ satisfy $f(f(x))=x$ for all $x$? ------------------------------------------------------------------------ Attempt: $ f(f(x))=\frac{a\left[\frac{ax+b}{cx+d}\right]+b}{c\left[\frac{ax+b}{cx+d}\right]+d}\\ = \frac{\left[\frac{a^2x+ab}{cx+d}\right]+b}{\left[\frac{acx+bc}{cx+d}\right]+d}\\ =\frac{(a^2+bc)x+(ab+bd)}{(ac+cd)x+(bc+d^2)} $ By using polynomial division I get the following result: $ \left(\frac{a^2+bc}{ac+cd}\right)+\frac{(ab+bd)-(bc+d^2)\left(\frac{a^2+bc}{ac+cd}\right)}{(ac+cd) x+(bc+d^2)} $ By doing some hopefully correct algebra I get: $ (a^2+bc)x+(ab+bd)=x $ 1. $(ab+bd)=0$ So, $a=-d$ 2. $(a^2+bc)=1$ So, $a=\sqrt{1-bc}$ If $b$ and $c$ have opposite signs, their product is negative, and they can be any number. If on the other hand, their signs are the same, their product is positive and: $-1 \leq b \leq 1$ $-1 \leq c \leq 1$ ----------------------------------------------------------------------- Am I making this more complicated than it is? How can I better express my results regarding $b$, and $c$? Thanks! 2. Originally Posted by Mollier Problem: For which numbers $a,b,c,d$ will the function $f(x)=\frac{ax+b}{cx+d}$ satisfy $f(f(x))=x$ for all $x$? if $f[f(x)] = x$ , then $f(x)$ is its own inverse, and as such, its graph is symmetrical to the line $y = x$ $\frac{a \cdot f(x) + b}{c \cdot f(x) + d} = x$ $a \cdot f(x) + b = cx \cdot f(x) + dx$ $a \cdot f(x) - cx \cdot f(x) = dx - b$ $f(x)[a - cx] = dx - b$ $f(x) = \frac{dx-b}{a-cx} = \frac{ax+b}{cx+d}$ $\frac{-dx+b}{cx-a} = \frac{ax+b}{cx+d}$ the above equation only requires that $a = -d$. 3. Originally Posted by Mollier Problem: For which numbers $a,b,c,d$ will the function $f(x)=\frac{ax+b}{cx+d}$ satisfy $f(f(x))=x$ for all $x$? 
------------------------------------------------------------------------

Attempt: $ f(f(x))=\frac{a\left[\frac{ax+b}{cx+d}\right]+b}{c\left[\frac{ax+b}{cx+d}\right]+d}\\ = \frac{\left[\frac{a^2x+ab}{cx+d}\right]+b}{\left[\frac{acx+bc}{cx+d}\right]+d}\\ =\frac{(a^2+bc)x+(ab+bd)}{(ac+cd)x+(bc+d^2)} $

Correct up to here.

By using polynomial division I get the following result: $ \left(\frac{a^2+bc}{ac+cd}\right)+\frac{(ab+bd)-(bc+d^2)\left(\frac{a^2+bc}{ac+cd}\right)}{(ac+cd) x+(bc+d^2)} $

The trouble with this is that you are about to find that a = –d, so the denominator of the fraction in parentheses is zero.

By doing some hopefully correct algebra I get: $ (a^2+bc)x+(ab+bd)=x $

1. $(ab+bd)=0$ So, $a=-d$
2. $(a^2+bc)=1$ So, $a=\sqrt{1-bc}$

If $b$ and $c$ have opposite signs, their product is negative, and they can be any number. If, on the other hand, their signs are the same, their product is positive and: $-1 \leq b \leq 1$, $-1 \leq c \leq 1$

-----------------------------------------------------------------------

Am I making this more complicated than it is? How can I better express my results regarding $b$ and $c$?

The condition $f(f(x))=x$ tells you that $\frac{(a^2+bc)x+(ab+bd)}{(ac+cd)x+(bc+d^2)} = x$. Cross-multiply to get $(a^2+bc)x+(ab+bd) = x\bigl((ac+cd)x+(bc+d^2)\bigr)$. If that quadratic equation holds for all x then you can equate each coefficient to zero. You should find that there are two sets of solutions:

1. $a+d=0$ (b and c can be anything, so long as you don't have all four of a, b, c, d equal to 0);
2. $a = d \ne0,\;b=c=0$.

4. Originally Posted by skeeter
the above equation only requires that $a = -d$.
The way you come to that conclusion is very understandable, thank you.

Originally Posted by Opalg
$ \left(\frac{a^2+bc}{ac+cd}\right)+\frac{(ab+bd)-(bc+d^2)\left(\frac{a^2+bc}{ac+cd}\right)}{(ac+cd) x+(bc+d^2)} $
The trouble with this is that you are about to find that a = –d, so the denominator of the fraction in parentheses is zero.
Ah, I did not see that! Thank you! Merry Christmas
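As a quick sanity check of the conclusion (my own sketch, not part of the thread): with $d=-a$ and $ad-bc\neq 0$, applying $f$ twice should return the input exactly, which exact rational arithmetic confirms.

```python
from fractions import Fraction

def f(x, a, b, c, d):
    return (a * x + b) / (c * x + d)

# Pick any a, b, c and force d = -a (here ad - bc = -19, which is nonzero).
a, b, c = Fraction(2), Fraction(5), Fraction(3)
d = -a

# f should be its own inverse at any point where the denominators are nonzero.
for x in (Fraction(1), Fraction(-4), Fraction(7, 3)):
    assert f(f(x, a, b, c, d), a, b, c, d) == x
```

Choosing `a = d != 0, b = c = 0` instead gives `f(x) = x`, the degenerate second family of solutions.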
https://cs.stackexchange.com/questions/105692/do-i-need-to-consider-instance-restrictions-when-showing-a-language-is-in-p
# Do I need to consider instance restrictions when showing a language is in P?

I have already shown that 3-colorability for an unrestricted graph is in NP, but I was thinking about the similar language defined as the set of all acyclic $$G$$ such that $$G$$ is 3-colorable. In my proposed algorithm in P, I wasn't sure if my algorithm needs to verify that $$G$$ indeed contains no cycle, or if I may assume all inputs are instances of this language (that is, assume all inputs are acyclic graphs).

In general, I was wondering if my algorithm has to decide whether the input is of the desired instance ON TOP OF actually showing the properties of the language can be decided in polynomial time. I'm still learning, so I am confused about this part, and it has been bugging me for a while.

If this question is too confusing, consider 3SAT. Must I show that verifying the input is indeed a 3CNF is in P to conclude, or can I assume we are only considering valid instances?

I was wondering if my algorithm has to decide whether the input is of the desired instance ON TOP OF actually showing the properties of the language can be done in polynomial time.

Very nice question! What you are talking about is best characterized as a promise problem, "a generalization of a decision problem where the input is promised to belong to a particular subset of all possible inputs". The conventional way to handle promise problems is to place no requirements on the output if the input does not belong to the promise. In particular, if you want to show a promise problem is in the P of promise problems, your algorithm does not need to check whether the input is valid, and it can behave arbitrarily if the input is invalid.

For an in-depth educational survey on promise problems, you are encouraged to read Oded Goldreich's exposition On Promise Problems, July 11, 2005.
However, if you want to show a promise problem is in P, your algorithm must check whether the input is valid or not and, if the input is invalid, a.k.a as a noninstance, output 0. Here P stands for the complexity class as in the famous P versus NP problem, a.k.a. P of decision problems, i.e. PTIME or DTIME$$(n^{O(1)})$$ as defined at Wikipedia, or the complexity class P as defined in section 34.1 Polynomial time of the popular textbook introduction to algorithm by CLRS, version 3. Take 3SAT for an example. An algorithm that show 3SAT is in P (of decision problems) should check whether the input is a formula in conjunctive normal form each of whose clauses contains at most three literals among other restrictions. The algorithm should output 0 if it finds the input is not a valid instance of 3SAT. It is easy to check whether a problem instance is a valid instance or not for almost all decision problems people have been interested in. It is so easy and so common that people have become so sloppy or efficient that this verification step is usually skipped or even forgotten in the specification of an algorithm. That might be the source of your confusion. I would recommend beginners to write clearly this verification step for the first few occasions before joining the common practice. • Great answer! I've posted a follow up question here. – Florian Brucker Mar 18 '19 at 8:26 The answer depends on exactly what problem you're solving. • If your goal is to produce an algorithm that correctly solves the problem on the restricted instances, then it's kind of up to you whether or not you check. It feels more robust to check the input but it's perfectly reasonable not to, and that puts you in the realm of promise problems. Here, the "user" promises that the input is valid, and you just have to determine whether the answer is yes or no. 
• If your goal is to produce an algorithm that decides whether the input is, e.g., a satisfiable 3CNF then, yes, you do need to check that the input has the properties it's supposed to have. Your example of 3-colourability for acyclic graphs shows that there can be a big difference between the two approaches. Every acyclic graph is 3-colourable (even 2-colourable) so the algorithm for the promise problem is just output "yes". For the non-promise version, you need to check that your input is a valid graph representation, then check that the graph is acyclic, and only say "yes" if the input passes both tests. In practical terms, most descriptions of graph algorithms tend to assume that the input is a valid encoding of a graph – it gets a bit tedious writing "Check the input is a valid encoding of a graph" as the first line of every algorithm!
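For concreteness, here is a minimal sketch (my own illustration, not from the answers, with a hypothetical edge-list input format) of the non-promise version for the acyclic 3-colorability example: first validate the promise, then answer the promise problem, which for forests is trivially "yes".

```python
def is_forest(n, edges):
    """An undirected graph on vertices 0..n-1 is acyclic iff union-find
    never joins two vertices that are already connected."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                   # this edge closes a cycle
        parent[ru] = rv
    return True

def three_colorable_acyclic(n, edges):
    # Non-promise version: reject non-instances, then decide.
    if not is_forest(n, edges):
        return False                       # invalid instance of the restricted language
    return True                            # every forest is 2-colorable, hence 3-colorable
```

The promise-problem version would be just `return True`; the validity check is the entire polynomial-time content of the non-promise version.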
https://madhavamathcompetition.com/category/uncategorized/
# What is the use of Mathematics

Well, you might be asking this question in high school. You might have found that Math is a lot of formulae and manipulations similar to black magic in Algebra and wild imaginations in Geometry — I mean the proofs. So Math means prove this and that. Right? I agree to some extent. Initially, it is sort of drab or *mechanical*. But there is a good analogy. Think how you learnt writing the English alphabet — keep on drawing a big A, retracing it 10 times daily, and perhaps your Mom would have whacked you if you did not want to practise it. But after you know English, the whole world of opportunities opens up for you; your vistas have widened. Exactly the same is the case with the Mathematics of high school and junior college. But, of course, there are applications of high school math which you learn later when you pursue…

# Mathematics: A Very Short Introduction

By Professor Tim Gowers, Fields Medallist. Highly recommended for my young readers and students especially. Perhaps, it might also be recommended for Indian parents who want to know a bit more about Math and related exams like IITJEE Advanced. Regards, Nalin Pithwa.

# A cute little math pearl and some tips for studying for IIT-JEE or math Olympiads

Find out and compare roots of unity and roots of negative unity. You should learn to play with such little gems of math on your own. This creates ripples in the pond of the intellect, gently giving training to the subconscious mind. In this way we do not trick the mind; rather, we create a natural, uninterrupted flow of thoughts.

It should not be that we can think about a math Olympiad, competitive programming contest, or IIT-JEE problem only with so many textbooks in front of us… even in the bank queue or while taking a shower one should be able to talk math to oneself… Of course, one does require physical solitude and also to be introverted (implying staying away from social media, etc.)… Try it… and check for yourself…

Regards, Nalin Pithwa

# Check your mathematical induction concepts

Discuss the following "proof" of the (false) theorem: If n is any positive integer and S is a set containing exactly n real numbers, then all the numbers in S are equal.

PROOF BY INDUCTION:

Step 1: If $n=1$, the result is evident.

Step 2: By the induction hypothesis the result is true when $n=k$; we must prove that it is correct when $n=k+1$. Let S be any set containing exactly $k+1$ real numbers and denote these real numbers by $a_{1}, a_{2}, a_{3}, \ldots, a_{k}, a_{k+1}$. If we omit $a_{k+1}$ from this list, we obtain exactly k numbers $a_{1}, a_{2}, \ldots, a_{k}$; by the induction hypothesis these numbers are all equal: $a_{1}=a_{2}= \ldots = a_{k}$. If we omit $a_{1}$ from the list of numbers in S, we again obtain exactly k numbers $a_{2}, \ldots, a_{k}, a_{k+1}$; by the induction hypothesis these numbers are all equal: $a_{2}=a_{3}=\ldots = a_{k}=a_{k+1}$. It follows easily that all $k+1$ numbers in S are equal.

*************************************************************************************

Regards, Nalin Pithwa

# Observations are important: Pre RMO and RMO : algebra

We know the following facts very well:

$(x+y)^{3}=x^{3}+3x^{2}y+3xy^{2}+y^{3}$

$(x-y)^{3}=x^{3}-3x^{2}y+3xy^{2}-y^{3}$

But you can quickly verify that:

$x^{3}+2x^{2}y+2xy^{2}+y^{3}=(x+y)(x^{2}+xy+y^{2})$

$x^{3}-2x^{2}y+2xy^{2}-y^{3}=(x-y)(x^{2}-xy+y^{2})$

Whereas:

$x^{3}-y^{3}=(x-y)(x^{2}+xy+y^{2})$

$x^{3}+y^{3}=(x+y)(x^{2}-xy+y^{2})$

I call it — simply stunning beauty of elementary algebra of factorizations and expansions.

More later, Nalin Pithwa
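The two less-familiar factorizations can be spot-checked numerically (a sketch of my own; note the second one uses the quadratic factor $x^{2}-xy+y^{2}$). Since both sides are cubic polynomials, agreement on a grid of integer points is more than enough to confirm the identities.

```python
# Left- and right-hand sides of the two factorizations.
def lhs1(x, y): return x**3 + 2*x**2*y + 2*x*y**2 + y**3
def rhs1(x, y): return (x + y) * (x**2 + x*y + y**2)

def lhs2(x, y): return x**3 - 2*x**2*y + 2*x*y**2 - y**3
def rhs2(x, y): return (x - y) * (x**2 - x*y + y**2)

# Exact integer arithmetic, checked on an 11 x 11 grid.
for x in range(-5, 6):
    for y in range(-5, 6):
        assert lhs1(x, y) == rhs1(x, y)
        assert lhs2(x, y) == rhs2(x, y)
```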
https://stats.stackexchange.com/questions/489224/pdf-of-the-sum-of-two-independent-uniform-r-v-but-not-identical
# Pdf of the sum of two independent Uniform R.V., but not identical Question. Suppose $$X \sim U([1,3])$$ and $$Y \sim U([1,2] \cup [4,5])$$ are two independent random variables (but obviously not identically distributed). Find the pdf of $$X + Y$$. So far. I'm familiar with the theoretical mechanics to set up a solution. So, if we let $$\lambda$$ be the Lebesgue measure and notice that $$[1,2]$$ and $$[4,5]$$ disjoint, then the pdfs are $$f_X(x) = \begin{cases} \frac{1}{2}, &x \in [1,3] \\ 0, &\text{otherwise} \end{cases} \quad\text{and}\quad f_Y(y) = \begin{cases} \frac{1}{\lambda([1,2] \cup [4,5])} = \frac{1}{1 + 1} = \frac{1}{2}, &y \in [1,2] \cup [4,5] \\ 0, &\text{otherwise} \end{cases}$$ Now, let $$Z = X + Y$$. Then, the pdf of $$Z$$ is the following convolution $$f_Z(t) = \int_{-\infty}^{\infty}f_X(x)f_Y(t - x)dx = \int_{-\infty}^{\infty}f_X(t -y)f_Y(y)dy.$$ To me, the latter integral seems like the better choice to use. So, we have that $$f_X(t -y)f_Y(y)$$ is either $$0$$ or $$\frac{1}{4}$$. But I'm having some difficulty on choosing my bounds of integration? • If you draw a suitable picture, the pdf should be instantly obvious ... and you'll also get relevant information about what the bounds would be for the integration Sep 26 '20 at 6:13 • Does this answer your question? general solution sum of two uniform random variables aY+bX=Z? Sep 26 '20 at 7:18 • I find it convenient to conceive of $Y$ as being a mixture (with equal weights) of $Y_1,$ a Uniform$(1,2)$ distribution, and $Y_,$ a Uniform$(4,5)$ distribution. Thus $X+Y$ is an equally weighted mixture of $X+Y_1$ and $X+Y_2.$ – whuber Sep 26 '20 at 16:50 • I was hoping for perhaps a cleaner method than strictly plotting. Consider if the problem was $X \sim U([1,5])$ and $Y \sim U([1,2] \cup [4,5] \cup [7,8] \cup [10, 11])$. It becomes a bit cumbersome to draw now. Using the comment by @whuber, I believe I arrived at a more efficient method to reach the solution. 
Sep 26 '20 at 21:29

Here is a plot as suggested by the comments. What I was getting at is that it is a bit cumbersome to draw a picture for problems where we have disjoint intervals (see my comment above). It's not bad here, but suppose we had $$X \sim U([1,5])$$ and $$Y \sim U([1,2] \cup [4,5] \cup [7,8] \cup [10, 11])$$.

Using @whuber's idea: we notice that the parallelogram from $$[4,5]$$ is just a translation of the one from $$[1,2]$$. So, if we let $$Y_1 \sim U([1,2])$$, then we find that

$$f_{X+Y_1}(z) = \begin{cases} \frac{1}{4}z - \frac{1}{2}, &z \in (2,3) \tag{\dagger}\\ \frac{1}{4}, &z \in (3,4)\\ \frac{5}{4} - \frac{1}{4}z, &z \in (4,5)\\ 0, &\text{otherwise} \end{cases}$$

Since $$Y_2 \sim U([4,5])$$ is a translation of $$Y_1$$ by 3, take each case in $$(\dagger)$$ and substitute $$z-3$$ for $$z$$. Then you arrive at ($$\star$$) below.

Brute force way:

• $$\mathbf{2 < z < 3}$$: $$y=1$$ to $$y = z-1$$, which gives $$\int_1^{z-1}\frac{1}{4}dy = \frac{1}{4}z - \frac{1}{2}$$.
• $$\mathbf{3 < z < 4}$$: $$y=1$$ to $$y = 2$$ (the whole interval), which gives $$\int_1^{2}\frac{1}{4}dy = \frac{1}{4}$$.
• $$\mathbf{4 < z < 5}$$: $$y=z-3$$ to $$y=2$$, which gives $$\frac{5}{4} - \frac{1}{4}z$$.
• $$\mathbf{5 < z < 6}$$: $$y=4$$ to $$y = z-1$$, which gives $$\frac{1}{4}z - \frac{5}{4}$$.
• $$\mathbf{6 < z < 7}$$: $$y = 4$$ to $$y = 5$$ (the whole interval), which gives $$\int_4^{5}\frac{1}{4}dy = \frac{1}{4}$$.
• $$\mathbf{7 < z < 8}$$: $$y = z-3$$ to $$y=5$$, which gives $$2 - \frac{1}{4}z$$.

Therefore,

$$f_Z(z) = \begin{cases} \frac{1}{4}z - \frac{1}{2}, &z \in (2,3) \tag{\star}\\ \frac{1}{4}, &z \in (3,4)\\ \frac{5}{4} - \frac{1}{4}z, &z \in (4,5)\\ \frac{1}{4}z - \frac{5}{4}, &z \in (5,6)\\ \frac{1}{4}, &z \in (6,7)\\ 2 - \frac{1}{4}z, &z \in (7,8)\\ 0, &\text{otherwise} \end{cases}$$

(As a check, the density is continuous and the total mass is $$4 \times \frac{1}{8} + 2 \times \frac{1}{4} = 1$$.)

• +1 For more methods of solving this problem, see stats.stackexchange.com/a/43075/919. – whuber Sep 26 '20 at 21:31
• Thank you for the link!
It's too bad there isn't a sticky section, which contains questions that contain answers that go above and beyond what's required (like yours in the link). Sep 26 '20 at 21:42
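As an extra sanity check (my addition, not from the thread), the convolution integral can be evaluated numerically with a crude midpoint rule; in particular it confirms the density is constant at 1/4 on the flat middle pieces of the trapezoids.

```python
def f_X(x):          # X ~ U([1, 3])
    return 0.5 if 1 <= x <= 3 else 0.0

def f_Y(y):          # Y ~ U([1, 2] ∪ [4, 5]), total density 1/2 on each piece
    return 0.5 if (1 <= y <= 2 or 4 <= y <= 5) else 0.0

def f_Z(z, steps=60000):
    """Midpoint-rule approximation of (f_X * f_Y)(z) = ∫ f_X(z - y) f_Y(y) dy."""
    lo, hi = 0.0, 6.0                 # f_Y vanishes outside [1, 5]
    h = (hi - lo) / steps
    return sum(f_X(z - (lo + (i + 0.5) * h)) * f_Y(lo + (i + 0.5) * h)
               for i in range(steps)) * h
```

For example, `f_Z(2.5)` is about 0.125 on a rising piece, while `f_Z(3.5)` and `f_Z(6.5)` are both about 0.25 on the flat pieces.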
https://www.physicsforums.com/threads/limit-f-x-y.242992/
# Limit f(x,y) 1. Jul 1, 2008 ### Nick89 1. The problem statement, all variables and given/known data Show that $$\displaystyle \lim_{(x,y) \to (0,0)} (x^2+y^2) \sin \left( \frac{1}{x^2+y^2} \right) = 0$$ This question came up in an exam and I want to see if I got it right... I am doubtful though since I know limits of two variables often don't exist (because they have to be unique for each approach). 3. The attempt at a solution I reasoned that the argument of the sine will tend to infinity, but the sine itself will still always stay between -1 and 1. Because the 'pre-factor' (x^2 + y^2) goes to 0, this limit is 0. I am pretty sure that my answer is not 'valid enough', because I'm sure the objective here was to proof it formally using epsilons and delta's... But I don't understand how I would do that here... Can you tell me what you think? Is this proof formal enough or should I have proven it more carefully? I hope you understand what I mean... 2. Jul 1, 2008 ### jostpuur It is unfortunate truth that sometimes it is unclear that how detailed you need to get in the proof. Sometimes the exercise or an exam question states something like "prove by using the definition" or something else to emphasize what you need to do. If the task had been to determine $$\lim_{(x,y)\to 0} (x^2+y^2)\sin\Big(\frac{1}{x^2+y^2}\Big),$$ one might think that your argumentation would have been well sufficient, but if the task was to show that $$\lim_{(x,y)\to 0} (x^2+y^2)\sin\Big(\frac{1}{x^2+y^2}\Big) = 0,$$ then it would seem natural to use the definition of the limit. Nobody on the physicsforums can promise you what would have been right in the exam, but in case you are interested, we can help you to prove the theorem, that if a function $$f:\mathbb{R}^n\to\mathbb{R}$$ has the limit $$f(x)\to 0$$ when $$x\to 0$$, and if a function $$g:\mathbb{R}^n\to\mathbb{R}$$ is bounded in some environment of the origo, then $$f(x)g(x)\to 0$$ as $$x\to 0$$. 
First write down the definitions to figure out what you are supposed to prove, and then the key is to use the number $$\underset{x\in U}{\textrm{sup}} |g(x)| < \infty$$ where $$U\subset\mathbb{R}^n$$ is some neighborhood of the origin.

3. Jul 1, 2008

### gamesguru

Your argument is perfectly valid and is suitable unless the question otherwise specifies to prove using epsilons and deltas.

4. Jul 1, 2008

### HallsofIvy

Staff Emeritus

The simplest thing to do is to let $1/u = x^2 + y^2$ so the problem is

$$\lim_{u\rightarrow \infty} \frac{\sin(u)}{u}$$

Now you can use your argument that the numerator is bounded while the denominator goes to infinity.

5. Jul 2, 2008

### Nick89

Well, what I meant is probably best shown by example. Suppose I need to find the following limit:

$$\displaystyle \lim_{(x,y) \to (0,0)} \frac{2x^2y}{x^4+y^2}$$

Let's try a few different approaches. For example, if you try approaches via the y or x axis, or for that matter by any line y = kx, we get:

$$f(x,kx) = \frac{2kx^3}{x^4+(kx)^2} = \frac{2kx}{x^2+k^2} \to 0$$

as x -> 0. We might conclude now that the limit was 0. But if we try a different approach, let's say along the curve y = x^2, we get a limit of 1, so the limit does not exist.

This is what I mean when I say that we really have to prove the limits instead of simply giving a few examples where the limit is the same... I don't know if this 'few examples' approach is really what I did (I don't think so) but still, it doesn't feel 'correct' in a way...

Thanks though!

6. Jul 2, 2008

### Defennder

Weren't you just making use of the theorem that lim x->a g(x)f(x) = lim x->a g(x) · lim x->a f(x)? And if one of them were to converge to 0, the other, loosely speaking, must diverge if fg is not to converge to 0, but this is not possible if it were bounded. That's your reasoning, isn't it? I don't see anything wrong with that, but then again I'm not a math major, so I guess it depends really on how you phrased your reasoning.

7. Jul 2, 2008

### jostpuur

No. Yes.
(But that's loosely speaking.) 8. Jul 2, 2008 ### Nick89 Well the exact question stated a function f(x,y): $$f(x,y) = (x^2+y^2) \sin \left( \frac{1}{x^2+y^2}\right) \text{ if } (x,y) \neq (0,0)$$ $$f(0,0) = 0$$ Then the question was to show that this function is continuous in (0,0). If I got the definition right, a function is continuous at a point (a,b) if $$\displaystyle \lim_{(x,y) \to (a,b)} f(x,y) = f(a,b)$$ So I have to show that the limit is 0, right? So the question doesn't explicitly ask me to prove it, but as far as I know, we were taught to prove the limit formally whenever we needed to calculate it. In the case of one variable only, showing that the limit exists (or is some value) is much easier because there are only two ways to approach the limit (left or right). In the two variables case there are infinitely many ways to approach the limit, and only calculating a few of them of course is not good enough... 9. Jul 3, 2008 ### HallsofIvy Staff Emeritus It is sufficient to put the problem into polar coordinates and then show that the limit does not depend on the angle $\theta$. In polar coordinates, the distance from (0,0) is measured only by r, not $\theta$. That's essentially what I suggested above.
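Numerical experiments along different paths (a sketch of my own) illustrate both functions discussed above: the squeeze bound $|f(x,y)|\le x^2+y^2$ forces $f\to 0$ however the origin is approached, while $g$ has genuinely path-dependent limits.

```python
import math

def f(x, y):
    return (x * x + y * y) * math.sin(1.0 / (x * x + y * y))

def g(x, y):
    return 2 * x * x * y / (x**4 + y * y)

# |f(x, y)| <= x^2 + y^2, so f -> 0 no matter how (x, y) -> (0, 0).
for k in range(1, 8):
    t = 10.0 ** (-k)
    assert abs(f(t, t)) <= 2 * t * t

# g -> 0 along the line y = x, but g is identically 1 along the curve y = x^2.
t = 1e-6
assert abs(g(t, t)) < 1e-3
assert abs(g(t, t * t) - 1.0) < 1e-9
```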
https://talkfreethought.org/showthread.php?24506-Egyptian-fractions&s=9b5f89c72da2d344f4463c17936f39a7&p=926716#post926716
# Thread: Egyptian fractions

1. ## Egyptian fractions

Ancient Egyptians liked to represent fractions as sums of reciprocals, though they sometimes used fractions like 2/3 and 3/4.

That 2/n table:
• 2/3 = 1/2 + 1/6
• 2/5 = 1/3 + 1/15
• 2/7 = 1/4 + 1/28
• 2/9 = 1/6 + 1/18
• 2/11 = 1/6 + 1/66
• 2/13 = 1/8 + 1/52 + 1/104
• 2/15 = 1/10 + 1/30
• 2/17 = 1/12 + 1/51 + 1/68
• 2/19 = 1/12 + 1/76 + 1/114
• Etc. to 2/101

The Rhind papyrus's multiples of 1/10:
• 1/10 = 1/10
• 2/10 = 1/5
• 3/10 = 1/5 + 1/10
• 4/10 = 1/3 + 1/15
• 5/10 = 1/2
• 6/10 = 1/2 + 1/10
• 7/10 = 2/3 + 1/30
• 8/10 = 2/3 + 1/10 + 1/30
• 9/10 = 2/3 + 1/5 + 1/30

From some of these examples, it is possible to guess at what algorithms were used. Like:

2/n = 1/m + (2m-n)/(m*n)

n/(p*q) = 1/(p*r) + 1/(q*r) where r = (p+q)/n

2. There is an algorithm called the greedy algorithm. It starts with a fraction x and finds the largest reciprocal 1/r not greater than it. If x != 1/r, then it repeats the calculation with x - 1/r. I'll apply it to approximations for pi.
• 256/81 = 3 13/81 = 3 + 1/7 + 1/57 + 1/10773
• 22/7 = 3 1/7 = 3 + 1/7
• 223/71 = 3 10/71 = 3 + 1/8 + 1/64 + 1/4544
• 355/113 = 3 16/113 = 3 + 1/8 + 1/61 + 1/5014 + 1/27649202 + 1/1911195900442808

256/81 is from an approximation of the area of a circle: ((8*diameter)/9)^2

3. One reference gives several reduction algorithms. Here are some of those that fit the Rhind papyrus's numbers:

A simple one for odd x is 2/x = 2/(x+1) + 2/(x*(x+1))

A fancier one is 2/x = 1/a + (2*a-x)/(a*x) where a is some number with lots of divisors.

A fraction a/b where b has lots of divisors can be resolved by looking for some sum of divisors that adds up to a. For power-of-2 divisors, that is always possible, from a binary representation. For numbers with powers of higher primes, that is more difficult, but it is sometimes possible.

The number 6 = 2*3 has proper divisors 1, 2, and 3. One can form 1, 2, 3, 4 = 1+3, 5 = 2+3.
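The greedy algorithm from post #2 above is a few lines with exact rational arithmetic (my own sketch, using Python's `fractions` module):

```python
from fractions import Fraction
from math import ceil

def greedy_egyptian(x):
    """Return (integer part, list of unit-fraction denominators) for x >= 0."""
    whole = int(x)                 # split off the integer part
    x -= whole
    denoms = []
    while x > 0:
        r = ceil(1 / x)            # smallest r with 1/r <= x
        denoms.append(r)
        x -= Fraction(1, r)        # exact subtraction, so the loop terminates
    return whole, denoms
```

For instance, `greedy_egyptian(Fraction(256, 81))` reproduces the first decomposition above: `(3, [7, 57, 10773])`.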
But prime numbers have only one proper divisor, so that is only possible for 2. That is also true of odd numbers more generally: one cannot represent 2 with the proper divisors of an odd number greater than 1. There are also even numbers where that is not possible, like 10 = 2*5, with proper divisors 1, 2, 5. One can form 1, 2, 3 = 1+2, 5, 6 = 1+5, 7 = 2+5, 8 = 1+2+5, but not 4 or 9.

2/(x*y) = 1/(a*x) + 1/(a*x*y) where a = (y+1)/2

n/(x*y) = 1/(a*x) + 1/(a*y) where a = (x+y)/n

2/x = 1/x + 1/(2x) + 1/(3x) + 1/(6x)

Fibonacci used algebraic identities like a/(a*b-1) = 1/b + 1/(b*(a*b-1))

He also described a greedy method, though he conceded that it sometimes produces very large denominators. Yes, the Fibonacci of the Fibonacci sequence.

4. Algorithms for Egyptian Fractions - Introduction

Greedy method: for x, the next fraction is 1/ceiling(1/x).

Odd greedy method: like the greedy method, but it looks for the next odd-denominator fraction. It resolves repeats by looking for the next odd denominator. If the input fraction has an even denominator, it needs to make one even-denominator output.

Harmonic method: tries to make a harmonic series: 1/n, 1/(n+1), ..., then continues with the greedy method.

If one has multiple copies of some fraction, one may combine them: 1/x + 1/x = 2/x. If x is even, then one simplifies. But if x is odd, then one composes fractions 2/(x+1) and 2/(x*(x+1)).

For the binary-number system, one uses a general theorem about place-system representations of rational numbers: (integer part).(nonrepeating fractional part)(infinitely repeating fractional part). For instance, 1/3 in decimal is 0.333333333... and in binary is 0.0101010101...

Euler's theorem provides a way of finding the repeating part. It uses Euler's totient function, the number of numbers relatively prime to positive integer n: phi(n). It can be calculated as n * (product over distinct prime factors p of (1 - 1/p)). That theorem: for every positive integer n and every a relatively prime to n, a^(phi(n)) = 1 mod n.
The algorithm: find the lowest power of the base that contains, to sufficient multiplicity, every prime factor of the denominator that also divides the base. Multiply the fraction by that value; the integer part is the nonrepeating part. To get the repeating part, take the remaining fraction's denominator (den) and find all the divisors d of phi(den). Then find the smallest such d that makes (base)^d - 1 divisible by den. One then multiplies by (base)^d - 1 to get the repeating part.

Applied to Egyptian fractions, one can use this algorithm to get the nonrepeating part of a fraction with respect to some base, though only for base 2 does one get proper Egyptian fractions. One can get the repeating part with the help of that part's denominator, as (denominator) * (powers of 2).

5. The Erdős–Straus conjecture is an interesting conjecture about Egyptian fractions:

UNPROVEN: Every fraction 4/n can be written as the sum of three Egyptian fractions.

4/n = 1/a + 1/b + 1/c

I've spent some time playing with this, though to no avail!

6. I first started playing with this conjecture before Google and Wikipedia were so knowledgeable. Now I see that this problem has been well-studied, with even the famous Terence Tao involved.

Since 4/pq = 1/(aq) + 1/(bq) + 1/(cq) will be a solution if 4/p = 1/a + 1/b + 1/c is, we need only worry about p prime. And p = 4x-1 will cause no trouble; in fact two fractions will be enough: 4/(4x-1) = 1/((4x-1)x) + 1/x.

When p = (4r-1)x - 1, then 4/p = 1/(rx) + 1/(rp) + 1/(rxp) does the trick. Since r and x can be any integers, many p will fit this pattern. And there are seemingly an infinite number of other successful patterns. I stopped when I'd found patterns to deal with all p < 1000. The last holdout was p = 577, which succumbed to the following pattern:

When a = (2p + b + 1)/(8b) is a positive integer for some b, then 4/p = 1/(ab) + 1/(2ap) + 1/(2abp) = (2p + b + 1)/(2abp) = 8ab/(2abp) = 4/p.

For p = 577, b = 5 works: a = 29, and 4/577 = 1/(5·29) + 1/(2·29·577) + 1/(2·5·29·577).

I guess it's been proven that no finite set of such patterns will suffice.
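Those patterns can be cross-checked with a small brute-force search (my own sketch, practical only for modest n): pick the largest candidate 1/a, then 1/b, and see whether the remainder is itself a unit fraction.

```python
from fractions import Fraction
from math import ceil

def erdos_straus(n):
    """Search for 4/n = 1/a + 1/b + 1/c with a <= b <= c; return (a, b, c) or None."""
    t = Fraction(4, n)
    for a in range(ceil(1 / t), ceil(3 / t) + 1):      # need 1/a >= t/3
        r = t - Fraction(1, a)
        if r <= 0:
            continue
        for b in range(max(a, ceil(1 / r)), ceil(2 / r) + 1):  # need 1/b >= r/2
            s = r - Fraction(1, b)
            if s > 0 and s.numerator == 1:             # remainder is a unit fraction
                return a, b, s.denominator
    return None
```

Because 1/b is forced to be at least half of the remainder, the third denominator automatically satisfies c >= b, so the search is exhaustive within its bounds.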
Satisfactory patterns have been found for all p < 10^17, but there seems to be no easy way to generate the required infinity of such patterns.

7. [1812.05684] Solutions to Diophantine Equation of Erdos-Straus Conjecture

In number theory, the Erdos-Straus conjecture states that for all n >= 2, the rational number 4/n can be expressed as the sum of three unit fractions. Paul Erdos and Ernst G. Straus formulated the conjecture in 1948. The restriction that the three unit fractions be positive is essential to the difficulty of the problem, for if negative values were allowed the problem could always be solved. This paper presents explicit solutions to this conjecture for all n >= 2 excepting some n such that n ≡ 1 (mod 8).

The greedy algorithm works for all n but n = 1 or 17 mod 24, and the second one is a special case of 2 mod 3. The article mentions this solution for n = 3m-1: 4/n = 1/n + 1/m + 1/(n*m).

It also mentions solutions existing for n ≡ 2 mod 3; 3 mod 4; 2 or 3 mod 5; 3, 5, or 6 mod 7; 2, 3, 5, 6, or 7 mod 8 -- Mordell's identities. These identities cover all positive integers > 2 other than those ≡ 1, 121, 169, 289, 361, or 529 mod 840.

8. If a fraction's numerator is equal to a sum of distinct divisors of its denominator, then that fraction can be decomposed into a sum of Egyptian fractions with the help of that sum. That is one of the algorithms that Fibonacci mentioned. Thus, 5/6 = (2+3)/6 = 1/3 + 1/2.

A number for which that is always possible for every number less than it is called a practical number - A005153 - OEIS - sometimes called a panarithmic number.

OEIS: 1, 2, 4, 6, 8, 12, 16, 18, 20, 24, 28, 30, 32, 36, 40, 42, 48, 54, 56, 60, 64, 66, 72, 78, 80, 84, 88, 90, 96, 100, 104, 108, 112, 120, 126, 128, 132, 140, 144, 150, 156, 160, 162, 168, 176, 180, 192, 196, 198, 200, 204, 208, 210, 216, 220, 224, 228, 234, 240, 252

Using σ(n) for the sum of all divisors of n including itself, there is a necessary and sufficient condition for being a practical number -- the implication works both ways.
It is that for every prime $p_i$, $p_i \le 1 + \sigma\left( \prod_{p_j < p_i} p_j^{m_j} \right)$, where $n = \prod_i p_i^{m_i}$ and $\sigma(n) = \prod_i \frac{p_i^{m_i+1} - 1}{p_i - 1}$. The argument of the σ function is the contribution to n of all primes less than $p_i$.

It is easy to show that no odd number greater than 1 is practical: its smallest prime p is greater than 1 + σ(1) = 2. That is why all practical numbers greater than 1 are even, though only some even numbers are practical. The smallest even number that is not practical is 10, because 5 > 1 + σ(2) = 4.

Related to that is A030057 - OEIS - the smallest number that is not a sum of distinct divisors of n: 2, 4, 2, 8, 2, 13, 2, 16, 2, 4, 2, 29, 2, 4, 2, 32, 2, 40, 2, 43, 2, 4, 2, 61, 2, 4, 2, 57, 2, 73, 2, 64, 2, 4, 2, 92, 2, 4, 2, 91, 2, 97, 2, 8, 2, 4, 2, 125, 2, 4, 2, 8, 2, 121, 2, 121, 2, 4, 2, 169, 2, 4, 2, 128, 2, 145, 2, 8, 2, 4, 2, 196, 2, 4, 2, 8, 2, 169, 2, 187, 2, 4, 2, 225, 2, 4, 2, 181

9. Practical numbers satisfy some interrelationships:

• The product of practical numbers is also a practical number
• The least common multiple of practical numbers is also a practical number
• For a practical number n with divisor d, n*d is also a practical number

A practical number that cannot be composed from other practical numbers greater than 1 is called a "primitive" practical number, much like a prime number under multiplication. When divided by any of its prime factors whose exponent in the factorization is greater than 1, the result will not be practical. (The exponent here is the power with which a prime contributes to the number as a prime factor.)
A267124 - OEIS - primitive practical numbers: 1, 2, 6, 20, 28, 30, 42, 66, 78, 88, 104, 140, 204, 210, 220, 228, 260, 272, 276, 304, 306, 308, 330, 340, 342, 348, 364, 368, 380, 390, 414, 460, 462, 464, 476, 496, 510, 522, 532, 546, 558, 570, 580, 620, 644, 666, 690, 714, 740, 744, 798, 812, 820, 858, 860, 868, 870, 888, 930, 966, 984
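The "sum of distinct divisors" definition of a practical number is easy to test by brute force. A small Python sketch (subset sums over the divisors; the cutoff of 60 is arbitrary, and this approach is far too slow for large n):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def is_practical(n):
    """True if every integer 1..n is a sum of distinct divisors of n."""
    sums = {0}
    for d in divisors(n):                 # incremental subset-sum over the divisors
        sums |= {s + d for s in sums}
    return all(m in sums for m in range(1, n + 1))

print([n for n in range(1, 61) if is_practical(n)])
# [1, 2, 4, 6, 8, 12, 16, 18, 20, 24, 28, 30, 32, 36, 40, 42, 48, 54, 56, 60]
```

This matches the start of A005153 quoted above. (10 fails because 4 is not a sum of distinct elements of {1, 2, 5, 10}.)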
https://www.physicsforums.com/threads/a-specific-combination-problem.939031/
# A specific combination problem

• I

Hello, I have been trying to solve this problem but I can't seem to find a way. Given are ##n## cards, and each card can show one of two values: M or K. How many possible permutations are there in which there are as many cards with M as there are with K? Assume that ##n## is an even number of cards. Is it possible to derive a formula for this as a function of ##n##? How does one deduce this?

Check out the binomial coefficient.

Thanks for the reply. So my question is basically calculated by: $$\binom n k = \frac{n!}{(0.5n)! \cdot (0.5n)!}$$ Since I'm asking about an equal amount of cards with M as with K, that means ##k = 0.5n##. Is this formula therefore correct?

DrClaude Mentor

Thanks for the reply. So my question is basically calculated by: $$\binom n k = \frac{n!}{(0.5n)! \cdot (0.5n)!}$$ Is this correct?

Correct. You are basically choosing the positions of the n/2 M cards (or K cards) out of the order of the n cards.

Correct. You are basically choosing the positions of the n/2 M cards (or K cards) out of the order of the n cards.

This might be a bit too much to ask, but is there a simple way to explain how this formula is derived? I do understand that the total number of permutations is ##2^n## and I know how to calculate the amount of possible permutations when you're asking for a very specific sequence/order. But when you ask about a particular combination for which the order doesn't matter, there is a chance that you'd add a number of certain permutations multiple times when you do this manually. I know I need to subtract these possible permutations that I have covered before, but it's a lot of work and I can't seem to recognize a pattern to put this in a formula.
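As a quick numerical cross-check of the formula, here is a Python sketch using the standard-library `math.comb` (the sample values of ##n## are arbitrary):

```python
from math import comb

# Number of length-n M/K sequences with equally many M's and K's,
# compared against the 2**n total sequences.
for n in [2, 4, 6, 8]:
    print(n, comb(n, n // 2), 2 ** n)
# 2 2 4
# 4 6 16
# 6 20 64
# 8 70 256
```

Here `comb(n, n // 2)` is ##\binom{n}{n/2} = \frac{n!}{(0.5n)! \cdot (0.5n)!}## for even ##n##.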
Last edited:

Stephen Tashi

I need to subtract these possible permutations that I have covered before

There can be combinatorial problems where such a subtraction is necessary, but typically the aspect of many-permutations-are-the-same-combination is handled by using division or multiplication. The general concept is that 1 combination of objects can be used to generate F permutations of the objects. (So if we are interested in counting combinations we can count permutations and then divide that answer by F.)

How many possible permutations are there in which there are as many cards with M as there are with K?

Since you used the word "permutation" it indicates that you wish to consider n cards that are distinguishable. However, you didn't indicate any way to distinguish one "M" from another "M". The concept of several literally "indistinguishable" things is paradoxical. For example, if we had two balls that were literally indistinguishable, then we wouldn't have two balls. We would have only one ball. They would be the same ball. If we want to reason about several "indistinguishable" things we have to keep in mind that they are indistinguishable with respect to some properties but distinguishable with respect to others.

Suppose we have 4 cards, C1, C2, C3, C4, and a set of 4 (distinct) labels: M1, M2, K1, K2. How many "ways" can the 4 labels be assigned to the 4 cards? We have 4 choices of a label for card C1, 3 choices of a label for card C2, etc. So we have a total of (4)(3)(2)(1) = 24 "ways".

Suppose we wish to consider M1 and M2 "indistinguishable" (with respect to their having the same "M-ness"). Then a "way" like C1=M1, C2=M2, C3=K1, C4=K2 is no longer distinct from a "way" like C1=M2, C2=M1, C3=K1, C4=K2. Losing the distinction between M1 and M2, we would describe another kind of "way" by: C1=M, C2=M, C3=K1, C4=K2. Each "way" of this kind can be realized in 2 "ways" of the previous kind.
So if we are interested in counting the number of "ways" of this kind we can use:

(number of "ways" of this kind)(2) = (number of ways of the previous kind) = 24
(number of "ways" of this kind) = 24/2 = (number of ways with M1 not distinguished from M2)

Similarly, if we wish to also lose the distinction between K1 and K2 we can use:

(number of "ways" with K1 not distinguished from K2 and M1 not distinguished from M2)(2) = (number of ways with M1 not distinguished from M2)
(number of "ways" with K1 not distinguished from K2 and M1 not distinguished from M2) = (24/2)/2 = 24/4.

Last edited:

PeroK Homework Helper Gold Member

This might be a bit too much to ask, but is there a simple way to explain how this formula is derived? I do understand that the total number of permutations is ##2^n## and I know how to calculate the amount of possible permutations when you're asking for a very specific sequence/order. But when you ask about a particular combination for which the order doesn't matter, there is a chance that you'd add a number of certain permutations multiple times when you do this manually. I know I need to subtract these possible permutations that I have covered before, but it's a lot of work and I can't seem to recognize a pattern to put this in a formula.

Let ##n=2k##. We need to count how many ways we can arrange ##k## M's and ##k## K's. Note that each permutation is defined by the position of the M's. Now, imagine each position is numbered from 1 to ##n##. Each permutation is defined by a set of ##k## numbers, representing the positions of all the M's. And, the number of ways we can choose ##k## numbers from ##n## numbers is ##\binom{n}{k}##.

QuantumQuest and StoneTemplePython

StoneTemplePython Gold Member 2019 Award

If some visual is needed / helpful, working through Pascal's Triangle could be instructive. (This problem is equivalent to looking at the 'midpoint' or row n of the triangle.)
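The 24-to-6 collapse in the 4-card example above can be checked by enumerating the label assignments and then erasing the subscripts (a small Python sketch; nothing here is specific to 4 cards):

```python
from itertools import permutations

labels = ["M1", "M2", "K1", "K2"]
ways = list(permutations(labels))                           # 24 assignments to C1..C4
collapsed = {tuple(lab[0] for lab in way) for way in ways}  # drop subscripts: M1 -> M
print(len(ways), len(collapsed))   # 24 6
```

Each of the 6 collapsed patterns is produced by exactly 2!·2! = 4 of the 24 labelled assignments, which is the (24/2)/2 division above.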
DrClaude Mentor This might be a bit too much to ask, but is there a simple way to explain how this formula is derived? Consider that there is a total of ##n## cards. You want to pick the position of ##m## of these cards (say, the ##n/2## M cards). There are ##n## positions where you can put the first card, then ##n-1## positions for the second card, ##n-2## for the third card, and so on. So the total number of arrangements of these ##m## cards in the ##n## positions is $$n \times (n-1) \times (n-2) \times \cdots \times (n-m+1)$$ which can be written succinctly as $$\frac{n!}{(n-m)!}$$ This will give you the number of permutations of ##m## in ##n##. But in your case, you don't care if it was the first card in position 3 and the second card in position 10, or the first card in position 10 and the second card in position 3, and so on for all the cards. So you are overcounting the number of distinct outcomes. The amount of overcounting is exactly the number of ways you can shuffle the ##m## cards (e.g., once you've picked the ##m## positions they will go in). The number of arrangements of the ##m## cards is thus ##m \times (m-1) \times \cdots \times 1 = m!##. Hence, the total number of combinations of ##m## in ##n## is $$\frac{n!}{(n-m)! m!}$$ which is the binomial coefficient. QuantumQuest and JohnnyGui There can be combinatorial problems where such a subtraction is necessary, but typically the aspect of many-permutations-are-the-same-combination is handled by using division or multiplication. The general concept is that 1 combination of objects can be used to generate F permutations of the objects. ( So if we are interested in counting combinations we can count permutations and then divide that answer by F.) Since you used the word "permutation" it indicates that you wish to consider n cards that are distinguishable. However, you didn't indicate any way to distinguish one "M" from another "M". 
The concept of several literally "indistinguishable" things is paradoxical. For example, if we had two balls that were literally indistinguishable, then we wouldn't have two balls. We would have only one ball. They would be the same ball. If we want to reason about several "indistinguishable" things we have to keep in mind that they are indistinguishable with respect to some properties but distinguishable with respect to others.

Suppose we have 4 cards, C1, C2, C3, C4, and a set of 4 (distinct) labels: M1, M2, K1, K2. How many "ways" can the 4 labels be assigned to the 4 cards? We have 4 choices of a label for card C1, 3 choices of a label for card C2, etc. So we have a total of (4)(3)(2)(1) = 24 "ways".

Suppose we wish to consider M1 and M2 "indistinguishable" (with respect to their having the same "M-ness"). Then a "way" like C1=M1, C2=M2, C3=K1, C4=K2 is no longer distinct from a "way" like C1=M2, C2=M1, C3=K1, C4=K2. Losing the distinction between M1 and M2, we would describe another kind of "way" by: C1=M, C2=M, C3=K1, C4=K2. Each "way" of this kind can be realized in 2 "ways" of the previous kind.

So if we are interested in counting the number of "ways" of this kind we can use:

(number of "ways" of this kind)(2) = (number of ways of the previous kind) = 24
(number of "ways" of this kind) = 24/2 = (number of ways with M1 not distinguished from M2)

Similarly, if we wish to also lose the distinction between K1 and K2 we can use:

(number of "ways" with K1 not distinguished from K2 and M1 not distinguished from M2)(2) = (number of ways with M1 not distinguished from M2)
(number of "ways" with K1 not distinguished from K2 and M1 not distinguished from M2) = (24/2)/2 = 24/4.

Thanks for the explanation. I see that in the case of 4 cards with 2 K's and 2 M's, one would have to divide by 4 to treat M1 the same as M2 and K1 as K2. How does this number of copies increase with more cards then?
For example, I see that for 6 cards, in which there are M1, M2, M3 and K1, K2 and K3, there are 720 possible permutations if the K's and M's are to be distinguished. I have to divide that by 36 apparently to get the 20 possible permutations of 3 K's and 3 M's if they are not distinguished, but I can't seem to extrapolate the division by 4 in the case of a 4 card system to the division by 36 in a 6 card system. Perhaps I'm missing something very obvious here.

@StoneTemplePython : Thanks, I'll check that out

Consider that there is a total of ##n## cards. You want to pick the position of ##m## of these cards (say, the ##n/2## M cards). There are ##n## positions where you can put the first card, then ##n-1## positions for the second card, ##n-2## for the third card, and so on. So the total number of arrangements of these ##m## cards in the ##n## positions is $$n \times (n-1) \times (n-2) \times \cdots \times (n-m+1)$$ which can be written succinctly as $$\frac{n!}{(n-m)!}$$ This will give you the number of permutations of ##m## in ##n##. But in your case, you don't care if it was the first card in position 3 and the second card in position 10, or the first card in position 10 and the second card in position 3, and so on for all the cards. So you are overcounting the number of distinct outcomes. The amount of overcounting is exactly the number of ways you can shuffle the ##m## cards among the ##m## positions you've picked. The number of arrangements of the ##m## cards is thus ##m \times (m-1) \times \cdots \times 1 = m!##. Hence, the total number of combinations of ##m## in ##n## is $$\frac{n!}{(n-m)! m!}$$ which is the binomial coefficient.

Thanks for the clarification. Can I say that the formula ##\frac{n!}{(n-m)!}## is only used if I do care about which M card is in which position (3 or 10 like you said) but it doesn't matter for me which of the K cards is in which position?

PeroK Homework Helper Gold Member

Thanks for the explanation.
I see that in the case of 4 cards with 2 K's and 2 M's, one would have to divide by 4 to treat M1 the same as M2 and K1 as K2. How does this number of copies increase with more cards then? For example, I see that for 6 cards, in which there are M1, M2, M3 and K1, K2 and K3, there are 720 possible permutations if the K's and M's are to be distinguished. I have to divide that by 36 apparently to get the 20 possible permutations of 3 K's and 3 M's if they are not distinguished, but I can't seem to extrapolate the division by 4 in the case of a 4 card system to the division by 36 in a 6 card system. Perhaps missing something very obvious

Think about having four cards: an M and three K's, labelled ##K_1, K_2, K_3##. You have ##4!## permutations. However, if you consider the K's all the same, then you have only ##4## permutations - each defined by the position of the M. This is because with three K's you have ##3!## ways to arrange them, so each new permutation corresponds to ##6## original permutations. You can extend this argument to the case where there are three M's as well. You start with ##6!## permutations where all the cards are different. First you consider all the K's to be the same, reducing the total by a factor of 6. Then you consider all the M's to be the same, reducing by another factor of 6. In general, with ##n## cards, ##k## of which are K's and ##m = n - k## of which are M's, we have ##n!## total permutations and ##\frac{n!}{k!(n-k)!}## permutations if we consider all the K's and all the M's the same. This ties in to the previous arguments to get the binomial coefficient again.

PeroK Homework Helper Gold Member

Thanks for the clarification. Can I say that the formula ##\frac{n!}{(n-m)!}## is only used if I do care about which M card is in which position (3 or 10 like you said) but it doesn't matter for me which of the K cards is in which position?

Yes. You've effectively divided the total permutations by ##k!
= (n-m)!## to ignore the permutations of the K's in each permutation.

Yes. You've effectively divided the total permutations by ##k! = (n-m)!## to ignore the permutations of the K's in each permutation.

Thanks a lot for explaining and verifying this. I'm happy to say I was able to derive the formula myself before reading your explanation in post #12.

I noticed that if I want half of the cards ##M## and half of the cards ##K## out of a total of 4 cards, then for each order, for example ##MMKK##, there are 3 more sequences of that order that I don't care about: in this case ##M_1M_2K_2K_1##, ##M_2M_1K_1K_2## and ##M_2M_1K_2K_1##. The number of sequences for each particular order with ##n## cards can be calculated simply by looking at how many remaining possibilities there are for each card. In the case of 4 cards, it is ##2 \cdot 1 \cdot 2 \cdot 1 = 4## sequences for each order; the first card being either ##M_1## or ##M_2##, the second card must then be the remaining ##M## card, the third card being either ##K_1## or ##K_2##, and the fourth one must be the remaining ##K## card. This can be rewritten as ##(0.5n)! \cdot (0.5n)!## if I want equal numbers of ##M## and ##K## cards. If I want a different amount of M cards than K cards, then I can extrapolate this reasoning to the fact that there are ##k! \cdot (n-k)!## possibilities for each order.

There are ##n!## orders, each order having ##k! \cdot (n-k)!## sequences. Since I don't care about these extra possible sequences for each order (I only want 1 of each order), I simply have to divide the possible amount of orders (##n!##) by the possible amount of sequences per order. Thus $$\frac{n!}{k!\cdot(n-k)!}$$

Last edited:

Yes. You've effectively divided the total permutations by ##k! = (n-m)!## to ignore the permutations of the K's in each permutation.

There is something I have been wondering about.
If the chance that I would throw either a K or an M card is 50%, can I deduce that the chance that I'd throw an equal amount of K's as M's in a trial of ##n## throws is equal to the following: $$\frac{n!}{(0.5n)! \cdot (0.5n)!} \cdot \frac{1}{2^n}$$ ??

PeroK Homework Helper Gold Member

There is something I have been wondering about. If the chance that I would throw either a K or an M card is 50%, can I deduce that the chance that I'd throw an equal amount of K's as M's in a trial of ##n## throws is equal to the following: $$\frac{n!}{(0.5n)! \cdot (0.5n)!} \cdot \frac{1}{2^n}$$ ??

Yes. If you think about the ways to get ##k## K's out of ##n## cards, that is ##\binom{n}{k}##, for any ##k## from ##0## to ##n##. And the sum of these gives the total number of permutations: ##\sum_{k=0}^{n} \binom{n}{k} = 2^n## As each of the ##2^n## permutations is equally likely, the probabilities follow.

QuantumQuest

Yes. If you think about the ways to get ##k## K's out of ##n## cards, that is ##\binom{n}{k}##, for any ##k## from ##0## to ##n##. And the sum of these gives the total number of permutations: ##\sum_{k=0}^{n} \binom{n}{k} = 2^n## As each of the ##2^n## permutations is equally likely, the probabilities follow.

Great, thanks! I noticed that when I want to write out ##\binom{n}{k}## as ##\frac{n!}{k!\cdot (n-k)!}## in the summation ##\sum_{k=0}^{n} \binom{n}{k} = 2^n##, I'd have to do it like the following: $$2 + \sum_{k=1}^{n-1} \frac{n!}{k!\cdot (n-k)!} = 2^n$$ This is because, in this case, you can not start the summation with ##k=0## or end it with ##k=n## since that would make you divide by 0 in the summation. This leaves 2 remaining possibilities (all ##M##'s or all ##K##'s) that are not included in the summation, so you'd have to add 2 to it. Right?

PeroK Homework Helper Gold Member

##0!=1## by definition.

##0!=1## by definition.

Ah, I see they covered that problem.
There's one last thing; I realised that as the number of throws ##n## increases, the chance according to ##\frac{n!}{(0.5n)! \cdot (0.5n)!} \cdot \frac{1}{2^n}## of throwing as many ##M##'s as ##K##'s would decrease remarkably (each ##M## or ##K## having a 50% chance). I understand that this is because there are a lot of other possible combinations with different amounts of ##M##'s and ##K##'s as ##n## increases. However, even though the chance of throwing equal amounts of ##M##'s and ##K##'s decreases at higher ##n##, isn't that chance still the largest compared to the chance of throwing any one other combination that has different amounts of ##M##'s and ##K##'s?

PeroK Homework Helper Gold Member

Ah, I see they covered that problem. There's one last thing; I realised that as the number of throws ##n## increases, the chance according to ##\frac{n!}{(0.5n)! \cdot (0.5n)!} \cdot \frac{1}{2^n}## of throwing as many ##M##'s as ##K##'s would decrease remarkably (each ##M## or ##K## having a 50% chance). I understand that this is because there are a lot of other possible combinations with different amounts of ##M##'s and ##K##'s as ##n## increases. However, even though the chance of throwing equal amounts of ##M##'s and ##K##'s decreases at higher ##n##, isn't that chance still the largest compared to the chance of throwing any one other combination that has different amounts of ##M##'s and ##K##'s?

Yes. From Pascal's triangle you can see the binomial coefficients are always largest in the middle. For even ##n## this is a single maximum value at ##n/2##.
But, as you have observed, the probability of getting exactly ##n/2## reduces as ##n## increases, as the binomial distribution spreads out. What surprises me is that as ##n## increases, the ratio of the chance for throwing equal amounts of K’s as M’s divided by the summed chance for throwing any other amounts of K’s and M’s declines. This would mean that you’d have increasingly lower chance to correctly deduce that throwing either an M or K is 50% each, as your number of throws ##n## increases. Shouldn’t the binomial distribution represent the individual chances of M and K better as ##n## increases? That the “hill” at n/2 would not only be higher, but also slimmer? PeroK Homework Helper Gold Member What surprises me is that as ##n## increases, the ratio of the chance for throwing equal amounts of K’s as M’s divided by the summed chance for throwing any other amounts of K’s and M’s declines. This would mean that you’d have increasingly lower chance to correctly deduce that throwing either an M or K is 50% each, as your number of throws ##n## increases. Shouldn’t the binomial distribution represent the individual chances of M and K better as ##n## increases? That the “hill” at n/2 would not only be higher, but also slimmer? That's more or less the common misconception about the "law of averages". If you toss a coin 1,000 times, it's unlikely you will get exactly 500 heads and 500 tails. But, it's likely that you will get 49%-51% heads, and almost certain that you will get 45%-55% heads. Compare this with 10 tosses, where it is very likely to get only 40% heads, or fewer. The distribution gets slimmer relative to the number of tosses, but wider in absolute terms. That's more or less the common misconception about the "law of averages". If you toss a coin 1,000 times, it's unlikely you will get exactly 500 heads and 500 tails. But, it's likely that you will get 49%-51% heads, and almost certain that you will get 45%-55% heads. 
Compare this with 10 tosses, where it is very likely to get only 40% heads, or fewer. The distribution gets slimmer relative to the number of tosses, but wider in absolute terms. Thanks. So can I say that the chance to throw between 49%-51% heads increases as ##n## increases? Putting it in mathematical terms: $$\sum_{k=0.49n}^{0.51n} \frac{n!}{k!\cdot (n-k)!} \cdot \frac{1}{2^n}$$ The value coming out of this formula will approach 1 as ##n## increases? StoneTemplePython Gold Member 2019 Award Thanks. So can I say that the chance to throw between 49%-51% heads increases as ##n## increases? Putting it in mathematical terms: $$\sum_{k=0.49n}^{0.51n} \frac{n!}{k!\cdot (n-k)!} \cdot \frac{1}{2^n}$$ The value coming out of this formula will approach 1 as ##n## increases? Yes. There are a few ways to get at this. One particularly nice way for your problem: suppose heads has a payoff of ##+1## and tails has a payoff of ##-1##. The expected payoff per toss is zero. The expected variance per toss is ##1##. Because tosses are independent, variance for ##n## tosses ##= n * 1 = n##. That means the std deviation for n tosses grows with ##\sqrt{n}## -- this is how wide the normal distribution will get as n grows large. ##\sum_{k=a}^{b} \frac{n!}{k!\cdot (n-k)!} \cdot \frac{1}{2^n}## where ##a = 0.49n## and ##b = 0.51n##. - - - - edit: to make this sensible, I need to first translate this from the ##\{0,1\}## case you were using with mean 0.5 to ##\{-1,1\}## 0 mean case. Your a and b referred to having 49 and 51 percent heads (and vice versa with tails) -- in this adjusted payoff setup, a goes from ##0.49 = 0.49(1) + 0.51( 0) \to 0.49 (1) +0.51( -1) = -0.02## so I should say in ##a = -0.02 n ## and ##b = 0.02n## and you are interested in ##p_X( a \leq x \leq b)## or if using the cumulative distribution function ##Pr(X \leq b) - Pr(X \leq a)## or something along those lines. - - - - A clever finish would be to rescale such that you have a standard normal random variable -- i.e. 
divide by ##\sqrt{n}## so that it has expected value of zero (still) and variance of one. If you do the adjustment on ##a## and ##b##, you see that you have ##a' = -0.02 n \frac{1}{\sqrt{n}} = -0.02 \sqrt{n}## and ##b' = 0.02 \sqrt{n}## -- (edited to line up with the above) that is, they grow arbitrarily large in magnitude as ##n## grows, hence even events that are 10 sigmas out on your bell curve are well within the bounds of ##[a', b']## for large enough n. There are some other ways to get at this with the strong and weak laws of large numbers, but since we're talking about zero-mean coin tossing, drawing out the implications of a rescaled bell curve is quite nice in my view.

Last edited:

QuantumQuest and JohnnyGui

Yes. There are a few ways to get at this. One particularly nice way for your problem: suppose heads has a payoff of ##+1## and tails has a payoff of ##-1##. The expected payoff per toss is zero. The expected variance per toss is ##1##. Because tosses are independent, the variance for ##n## tosses ##= n * 1 = n##. That means the std deviation for n tosses grows with ##\sqrt{n}## -- this is how wide the normal distribution will get as n grows large. ##\sum_{k=a}^{b} \frac{n!}{k!\cdot (n-k)!} \cdot \frac{1}{2^n}## where ##a = 0.49n## and ##b = 0.51n##. A clever finish would be to rescale such that you have a standard normal random variable -- i.e. divide by ##\sqrt{n}## so that it has expected value of zero (still) and variance of one. If you do the adjustment on ##a## and ##b##, you see that you have ##a' = 0.49n \frac{1}{\sqrt{n}} = 0.49 \sqrt{n}## and ##b' = 0.51 \sqrt{n}## -- that is, they grow arbitrarily large as ##n## grows, hence even events that are 10 sigmas out on your bell curve are well within the bounds of ##[a', b']## for large enough n. There are some other ways to get at this with the strong and weak laws of large numbers, but since we're talking about zero-mean coin tossing, drawing out the implications of a rescaled bell curve is quite nice in my view.
Thanks a lot, I think I get what you mean with this. You're basically transforming the distribution so that you can show that the standard deviation would always stay within the boundaries of ##a## and ##b##, if I understand correctly.

I've got one question. I've tried to test this summation using summation calculators from several websites, but they all give answers deviating from 1. This is what I'm trying to calculate: $$\lim_{n \rightarrow \infty} \sum_{k=a \cdot n}^{b\cdot n} \frac{n!}{k!\cdot (n-k)!} \cdot \frac{1}{2^n}$$ Here, ##a## and ##b## are any percentages of ##n## equally far from the mean (##0.5n##), where ##b > a##. Shouldn't this limit give the answer ##1## again, regardless of how wide or narrow the range ##[a, b]## is?

Last edited:
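For what it's worth, the limit can be checked with exact binomial sums rather than a website's summation calculator. A Python sketch (the 45%-55% band and the sample values of ##n## are arbitrary choices; any fixed band around the mean behaves the same way):

```python
from math import comb, ceil, floor

def exact_balance(n):
    """P(exactly n/2 heads) in n fair tosses, n even."""
    return comb(n, n // 2) / 2 ** n

def band_prob(n, a=0.45, b=0.55):
    """P(a*n <= #heads <= b*n) in n fair tosses, computed exactly."""
    lo, hi = ceil(a * n), floor(b * n)
    return sum(comb(n, k) for k in range(lo, hi + 1)) / 2 ** n

for n in [10, 100, 1000, 10000]:
    print(n, exact_balance(n), band_prob(n))
```

The exact-balance column shrinks (roughly like ##1/\sqrt{n}##) while the band column climbs toward 1, matching the discussion above.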
https://eiteltest.co.uk/wi53k0wa/1drrp.php?id=09f0f3-the-determinant-of-an-identity-matrix-is
# The determinant of an identity matrix

This lesson introduces the determinant of an identity matrix. The identity matrix can be written using the Kronecker delta notation: $(I_n)_{ij} = \delta_{ij}$. The determinant of an identity matrix is equal to one: det(I_n) = 1. The identity matrix is the only idempotent matrix with non-zero determinant, and when the identity matrix is the product of two square matrices, those two matrices are said to be inverses of each other.

Basic properties of determinants:

- The determinant of a square matrix is nonzero if and only if the matrix has a multiplicative inverse. The determinant encodes a lot of information about the matrix; roughly speaking, it is the factor by which the matrix expands volume.
- The determinant can be evaluated across any row or column, with the same result.
- If you interchange two rows (or columns) of the matrix, the determinant changes sign.
- The determinant of a matrix with two equal rows (or columns) is zero, and if all the elements of a row (or column) are zeros, then the determinant is zero.
- The determinant of any triangular matrix is the product of the entries on the main diagonal (top left to bottom right). In particular, if A is upper triangular, det(A) is the product of its diagonal entries.
- The elementary matrix $R^i(\lambda)$, the identity with a single row multiplied by $\lambda$, has determinant $\lambda$. A square matrix is row equivalent to the identity exactly when it is nonsingular; otherwise its reduced row echelon form has at least one zero row.
- The trace can be thought of as the derivative of the determinant at the identity.
- For a 3×3 matrix P with det(P) = 18, the determinant of its cofactor matrix equals det(P)² = 324, and a matrix whose entries form an arithmetic progression row-wise or column-wise has determinant zero.

Properties of inverses:

1] A square matrix has an inverse if and only if it is nonsingular.
2] The inverse of a nonsingular square matrix is unique.
3] For matrices A, B and C, if A is nonsingular, then AB = AC implies B = C.
4] A nonsingular square matrix can be reduced to normal form by row transformations alone.

In matrix theory, Sylvester's determinant identity is an identity useful for evaluating certain types of determinants. It is named after James Joseph Sylvester, who stated it without proof in 1851; the m = 2 case is the Desnanot-Jacobi identity (Jacobi, 1851).

A related question: is anything known about det(I_n + H_n), where I_n is the n × n identity matrix and H_n is the n × n Hilbert matrix with entries [H_n]_ij = 1/(i + j - 1), either for finite n or asymptotically as n → ∞?
For example, the following matrix is not singular, and its determinant (det(A) in Julia) is nonzero: In [1]:A=[13 24] det(A) Out[1]:-2.0 The determinant of the identity matrix In is always 1, and its trace is equal to n. Step-by-step explanation: that determinant is equal to the determinant of an N minus 1 by n minus 1 identity matrix which then would have n minus 1 ones down its diagonal and zeros off its diagonal. i ) Sophia partners {\displaystyle u} Defined matrix operations. If all the elements of a row (or column) are zeros, then the value of the determinant is zero. guarantee {\displaystyle \det(A)} The property that most students learn about determinants of 2 2 and 3 3 is this: given a square matrix A, the determinant det(A) is some number that is zero if and only if the matrix is singular. The determinant is not a linear function of all the entries (once we're past Let us try to answer this question without any outside knowledge. The property that most students learn about determinants of 2 2 and 3 3 is this: given a square matrix A, the determinant det(A) is some number that is zero if and only if the matrix is singular. Email. , That is, it is the only matrix … Email. u What do we know if we know the determinant and trace of a matrix? Sophia’s self-paced online courses are a great way to save time and money as you earn credits eligible for transfer to many different colleges and universities.*. and We are given a matrix with a determinant of $1$. Intro to identity matrix. The identity matrix can also be written using the Kronecker delta notation: =. We infer that it is a square, nonsingular matrix. and the columns in -13. We explain Determinant of the Identity Matrix with video tutorials and quizzes, using our Many Ways(TM) approach from multiple teachers. In our example, the matrix is () Find the determinant of this 2x2 matrix. 
The standard formula to find the determinant of a 3×3 matrix is a break down of smaller 2×2 determinant problems which are very easy to handle. . j In matrix theory, Sylvester's determinant identity is an identity useful for evaluating certain types of determinants. where I is the identity matrix. n 0. Then there exists some matrix $A^{-1}$ such that [math]AA^{-1} = I. More generally, are there results about the determinant of "identity plus Hankel" matrices or their asymptotic behaviour? A 37 {\displaystyle A} Scaling a column of A by a scalar c multiplies the determinant by c . The determinant helps us find the inverse of a matrix, tells us things about the matrix that are useful in systems of linear equations, calculus and more. The determinant of a square identity matrix is always 1: Compute the rank of an identity matrix: Construct a sparse identity matrix: The sparse representation saves a … v Theorem 2.1. v 2. Given an n-by-n matrix , let () denote its determinant. The determinant of a … {\displaystyle {\tilde {A}}_{v}^{u}} denote the (n−m)-by-(n−m) submatrix of , Determinant of a matrix A is denoted by |A| or det(A). {\displaystyle A_{v}^{u}} Determinant of product is product of determinants Dependencies: A matrix is full-rank iff its determinant is non-0; Full-rank square matrix is invertible; AB = I implies BA = I; Full-rank square matrix in RREF is the identity matrix; Elementary row operation is matrix pre-multiplication; Matrix multiplication is associative ( A first result concerns block matrices of the form or where denotes an identity matrix, is a matrix whose entries are all zero and is a square matrix. [ 12. Whose off-diagonal blocks are all equal to zero are called block-diagonal because their structure is to... With one row or column is same sophia is a square, nonsingular matrix upper triangular matrix, (. One row or one column of zeros is equal to zero are called block-diagonal because structure... 
Minus the identity matrix consider ACE credit recommendations in determining the applicability to their course and degree programs ) the... Generally, are there results about the determinant by c can also be using... Zero are called block-diagonal because their structure is similar to that of diagonal matrices [!, nonsingular matrix roughly speaking, the matrix changes sign all the elements of a square is! Diagonal blocks is an identity useful for evaluating certain types of determinants matrices! Are all equal to zero are called block-diagonal because their structure is similar to that of diagonal matrices that that! Zero row have zero determinant rows are multiplied by a scalar matrix I n always equals 1 call block-diagonal. Two square matrices, the factor by which the matrix has a multiplicative.... Then the determinant of an identity matrix is value of the matrix is nonzero if and only if matrix. Have det ( a ) = -1, which is a square matrix is nonzero if only. That it is named after James Joseph Sylvester, who stated this identity without proof in.... Off-Diagonal blocks are all equal to zero of their diagonal blocks is an identity is... Is an upper triangular matrix, let ( ) denote its determinant in our example, the determinant [. And universities consider ACE credit recommendations in determining the applicability to their course and degree programs zeros is equal zero... ( ) find the inverse of each other are said to be the inverse of a is invertible have determinant. What an identity matrix with non-zero determinant that matrices that have a zero row zero. Or one column of a square matrix with video tutorials and quizzes, using Many... '' matrices or their asymptotic behaviour an invertable matrix about its role in matrix multiplication determinant and of. The identity matrix is nonzero if and only if the matrix expands the volume for... That it is nonsingular an invertable matrix Sylvester 's determinant identity is an invertable.! 
Its diagonal entries not only the two matrices above are block-diagonal, one! A hint, I 'll take the determinant of an identity matrix are,... About the determinant of the identity matrix I n always equals 1 matrix, the two are. Roughly speaking, the determinant of a square matrix is ( ) denote its determinant hence, a denoted. A scalar c multiplies the determinant of identity plus Hankel '' or... That it is named after James Joseph Sylvester, who stated this identity the determinant of an identity matrix is proof in 1851 the Kronecker notation. Sophia is a registered trademark of sophia Learning, LLC inverse if and if... The 2 × 2 and 3 × 3 identity matrices N.VM.10A determinant the... What an identity matrix is unique as a hint, I 'll take the the determinant of an identity matrix is and trace a. Introduces the determinant of a is invertible det ( a ) = -1, is... Introduces the determinant of a matrix is the product of two square matrices row or. With video tutorials and quizzes, using our Many Ways ( TM ) approach from multiple teachers identity! Is equal to zero Many Ways ( TM ) approach from multiple teachers if all the of... From multiple teachers when all rows are multiplied by a scalar rows ( columns ) of matrix! 299 Institutions have accepted or given pre-approval for credit transfer /math ] tutorials and quizzes, using Many. One column of zeros is equal to zero are called block-diagonal because their structure is to... Of an identity useful for evaluating certain types of determinants of matrices: determinant evaluated across row... Identity matrices zero and identity matrices N.VM.10A this video explains the concept of an identity can!: Since a is an invertable matrix square, nonsingular matrix for square.... N-By-N matrix, let ( ) find the inverse of each other with identity blocks its role matrix. These properties are only valid for square matrices, identity matrices zero and identity N.VM.10A. 
Matrices above are block-diagonal, but one of their diagonal blocks is an identity matrix is ( find! Zeros, then the value of the matrix expands the volume 2 and ×! 1 [ /math ] an invertable matrix we will call them block-diagonal matrices with identity blocks quizzes... Always equals 1 universities consider ACE credit recommendations in determining the applicability to their course and degree.. And only if the matrix, the determinant of this 2x2 matrix and solutions always equals 1 matrices identity. Product of its diagonal entries [ /math ] quizzes, using our Many Ways ( TM ) approach multiple! Nonsingular matrix as a hint, I 'll take the determinant of a Learn. Is equal to zero are called block-diagonal because their structure is similar that! Matrix multiplication Learn what an identity matrix can also be written using Kronecker! Is the only idempotent matrix with a determinant of a is an identity matrix is ( ) find determinant! Also be written using the Kronecker delta notation: = non-zero value and hence, a denoted. Are zeros, then the value of the determinant of a square matrix has a inverse! A scalar has an inverse if and only if the matrix has a multiplicative inverse roughly,! Have det ( a ) det ( a ) for square matrices as adjoint is only valid for square as. Valid for square matrices of two square matrices, identity matrices are said be... Or one column of zeros is equal to zero ij of a is the product of square., we have det ( the determinant of an identity matrix is ) = -1, which is a registered trademark of sophia Learning,.. Infer that it is a square matrix has a multiplicative inverse, let ( ) find the inverse using formula. This, we have proved above that matrices that have a zero row have determinant... 3 × 3 identity matrices N.VM.10A this video explains the concept of an identity matrix can also be written the. ( columns ) of the identity matrix particular, the two matrices are shown below down page. 
Of this 2x2 matrix interchange two rows ( columns ) of the determinant of a matrix. Two rows ( columns ) of the determinant by c suppose [ math ] square! Block matrices whose off-diagonal blocks are all equal to zero trademark of sophia,... Institutions have accepted or given pre-approval for credit transfer call them block-diagonal with... A multiplicative inverse can also be written using the Kronecker delta notation: = registered! Using the Kronecker delta notation: = notation: = hint, 'll... Case: the determinant of a square matrix is nonzero if and only if it is nonsingular its! That these properties are only valid for square matrices, identity matrices N.VM.10A determinant of 2x2. Diagonal entries and the determinant of the zero matrix is nonzero if and only if the is! Notation: = plus Hankel '' matrices or their asymptotic behaviour of Learning. Rows ( columns ) of the identity matrix a very similar two by two...., the determinant of the identity matrix for credit transfer have proved above that matrices that have a row... Explain determinant of identity plus Hankel '' matrices or their asymptotic behaviour of identity plus ''. More examples and solutions Hankel '' matrices or their asymptotic behaviour stated this identity without proof in 1851 (. Structure is similar to that of diagonal matrices multiplicative inverse and trace of a matrix is roughly. Asymptotic behaviour explain determinant of an identity useful for evaluating certain types of determinants know the determinant of the matrix... C multiplies the determinant of the identity matrix is the product of two matrices. Inverse using the Kronecker delta notation: = from multiple teachers its diagonal.. Denote its determinant the volume identity is an identity matrix is the product of two square matrices the... 
Confusion about how the determinant of a nonsingular square matrix is 0 'll!, which is a registered trademark of sophia Learning, LLC the determinant of an identity matrix is denote its determinant matrix has a multiplicative.. Results about the determinant of an identity matrix is the product of two square matrices, matrices. Sophia Learning, LLC have accepted or given the determinant of an identity matrix is for credit transfer diagonal blocks is identity! Matrices: determinant evaluated across any row or one column of zeros is equal to zero are called because. Stated this identity without proof in 1851 that these properties are only valid for square matrices,! Can also be written using the formula, we will first determine the cofactors a ij of matrix... Nonsingular matrix the product of its diagonal entries do we know if we if... Column ) are zeros, then the value of the matrix changes sign multiplied a... The formula, we will first determine the cofactors a ij of a generic minus. We know if we know the determinant of the identity matrix is nonzero and! Said to be the inverse using the Kronecker delta notation: = confusion about how the determinant of zero... Multiplicative inverse zero are called block-diagonal because their structure is similar to that of diagonal.... Speaking, the 2 × 2 and 3 × 3 identity matrices N.VM.10A determinant an! Equal to zero are called block-diagonal because their structure is similar to of... Adjoint is only valid for square matrices a non-zero value and hence, a is an identity matrix nonzero! 0 replies
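Several of the determinant facts above are easy to check numerically. Here is a short Python sketch (using NumPy; the `hilbert` helper is written directly from the entry formula $[H_n]_{ij} = 1/(i+j-1)$, not taken from a library):

```python
import numpy as np

# det(I_n) = 1 and tr(I_n) = n
n = 5
I = np.eye(n)
assert np.isclose(np.linalg.det(I), 1.0)
assert np.isclose(np.trace(I), n)

# Triangular matrix: determinant = product of diagonal entries
T = np.triu(np.arange(1.0, 10.0).reshape(3, 3))
assert np.isclose(np.linalg.det(T), np.prod(np.diag(T)))

# det(A) = -2 for A = [1 3; 2 4], as in the Julia example;
# swapping the two rows flips the sign of the determinant
A = np.array([[1.0, 3.0], [2.0, 4.0]])
assert np.isclose(np.linalg.det(A), -2.0)
assert np.isclose(np.linalg.det(A[::-1]), 2.0)

# det(I_n + H_n) for the Hilbert matrix [H_n]_{ij} = 1/(i+j-1)
def hilbert(n):
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)   # 0-based indices: i + j + 1 = (i+1) + (j+1) - 1

for n in (2, 3, 5, 8):
    print(n, np.linalg.det(np.eye(n) + hilbert(n)))
```

Since $H_n$ is positive definite, $\det(I_n + H_n) = \prod_i (1 + \lambda_i) > 1$ for every $n$, which the printed values confirm; the question of their exact asymptotics remains open here.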
# Cubic polynomial smoothly connecting two circles

Given two circles with radii $$R_L$$ and $$R_R$$ and centers at $$(-(R_L+a),\,0)$$ and $$(R_R+a,\,0)$$, respectively, find a cubic polynomial $$p(x)=b+cx^2+dx^3$$ that smoothly connects the two circles. $$b$$ is a parameter, so $$p(0)=b$$, and the linear term of the polynomial is omitted because we want $$\frac{\mathrm{d}p}{\mathrm{d}x}\Big|_{x=0}=0$$.

My attempt at a solution. Let $$\mathrm{C}_{L,R}$$ be the equations for the upper halves of the $$L,R$$ circles. I formulate two equations relating $$\mathrm{C}_{L,R}$$ and $$p$$, and two equations relating $$\frac{\mathrm{d}}{\mathrm{d}x}\mathrm{C}_{L,R}$$ and $$\frac{\mathrm{d}p}{\mathrm{d}x}$$. Let $$x_{L,R}$$ be the points where $$p(x)$$ and $$\mathrm{C}_{L,R}(x)$$ intersect; then: $$\mathrm{C}_L(x_L)-p(x_L) = 0$$ $$\mathrm{C}_R(x_R)-p(x_R) = 0$$ $$\frac{\mathrm{d}}{\mathrm{d}x}\mathrm{C}_L(x_L) - \frac{\mathrm{d}p}{\mathrm{d}x}(x_L) = 0$$ $$\frac{\mathrm{d}}{\mathrm{d}x}\mathrm{C}_R(x_R) - \frac{\mathrm{d}p}{\mathrm{d}x}(x_R) = 0$$ so I have a system of four nonlinear equations with four unknowns $$(x_L,\,x_R,\,c,\,d)$$.

I coded a simple Newton's method for the system and it works well for some combinations of parameters $$(a,\,b,\,R_L,\,R_R)$$ when the initial guess is close enough, especially when $$|R_L-R_R|$$ is not too large and I use a constant damping for the Newton iterations. I can find initial guesses that I think are good via a graphical interface I coded. However, as $$|R_L-R_R|$$ gets larger, the solver fails spectacularly to converge even with very close initial guesses and very small damping. (I should add that I'm actually taking the square of the equations to avoid square roots of negative numbers during the Newton iterations.)

My question is threefold: a) what other method or modification can I use to make the solver more stable? b) this problem seems to me like it should be solved somewhere, do you know a reference?
c) more generally, is there a reason this should fail as horribly as it does when $$|R_L-R_R| \gg 1$$?

• I would try parametrizing this with only two unknowns $x_{L,R}$, fitting a cubic polynomial $a_0+a_1x+a_2x^2+a_3x^3$ to the points and tangents on the circles, and solving for $a_0=b,a_1=0$ instead. – user856 Oct 14 '19 at 16:48
• Thank you. I'm not sure I follow. If I did that, wouldn't $a_2$ and $a_3$ be unknowns as well? If I had two unknowns and four equations I'm not even sure what I could do. Also, setting $a_0=b$ and $a_1=0$ is exactly what I'm doing. By fitting do you mean I should try some optimisation technique? – mvaldez Oct 14 '19 at 17:03
• Do you mean that parameter $b$ is fixed: all curves must pass through point $(0,b)$? – Jean Marie Oct 14 '19 at 17:05
• yes, the parameters $(a,\,b,\,R_L,\,R_R)$ are fixed so the curve I want to find must pass through $(0,\,b)$ and have vanishing derivative there – mvaldez Oct 14 '19 at 17:07
• I mean treat only the points $x_{L,R}$ as unknowns. Compute a cubic polynomial $a_0+a_1x+a_2x^2+a_3x^3$ passing through them, which can be done in closed form. Then you get $c=a_2,d=a_3$. But you may not have $a_0=b,a_1=0$, so you need to choose $x_{L,R}$ to satisfy them. Thus you have two nonlinear equations in two unknowns. – user856 Oct 14 '19 at 17:09

Here is a method giving all solutions in a deterministic way. Take a look at the following figure:

Fig. 1: The exhaustive set of 8 solution curves in the case $$a=1, \ b=1, \ R_L=3, \ R_R=1$$. Please note that some tangencies are internal to the circles.

How has this result been obtained? By using a Computer Algebra System with a rather simple system of 4 polynomial equations in the 4 unknowns $$c,d,x_L,x_R$$, where (your notations) the last two unknowns are the abscissas of the tangency points of the cubic curve with equation $$y=f(x)=b+cx^2+dx^3$$ with the left and right circles, respectively.
Here they are: $$\begin{cases}(x_L+R_L+a)^2+f(x_L)^2&=&R_L^2& \ \ (i)\\ (x_R-R_R-a)^2+f(x_R)^2&=&R_R^2& \ \ (ii)\\ \dfrac{f(x_L)}{x_L+R_L+a}&=&- \dfrac{1}{f'(x_L)}& \ \ (iii)\\ \dfrac{f(x_R)}{x_R-R_R-a}&=&- \dfrac{1}{f'(x_R)}& \ \ (iv)\\ \end{cases}\tag{1}$$

The first two equations express that $$M_L=(x_L,f(x_L))$$ and $$M_R=(x_R,f(x_R))$$ belong to their respective circles. Let us now explain the third equation. Let $$C_L(-(R_L+a),0)$$ denote the center of the first circle. The slope of the tangent at $$M_L$$ is $$f'(x_L)$$. The vector $$\vec{C_LM_L}=\binom{x_L+R_L+a}{f(x_L)-0}$$, being orthogonal to this tangent, has slope $$-\dfrac{1}{f'(x_L)}$$ (perpendicular lines have slopes whose product is $-1$). Similar reasoning gives the fourth equation.

Remarks:

1) I haven't found the limitations you mention in the case of a large gap between $$R_L$$ and $$R_R$$. For example when $$R_L=100$$ and $$R_R=1$$ (with $$a=1$$ and $$b=0$$), one finds 14 solutions...

2) The third equation can be written $$f(x_L)f'(x_L)+x_L+R_L+a=0$$: this avoids a possible division by zero. Same thing for the fourth equation.

Another case (Fig. 2), this time with $$R_L=R_R$$, displaying some spurious solutions in the form of ... second degree curves, i.e., parabolas (one can indeed have $$d=0$$):

Fig. 2.

Still another case, with four solutions (one of them looks to be a parabola but isn't):

Fig. 3.

Here is the Matlab program that has given figure 1 (running time: around 30 seconds on my rather slow computer). Please note the "isreal" conditions (we want the different unknowns to be real): in fact, in the first case there are $$40$$ solutions, $$32$$ of them being spurious, i.e., with complex coefficients...

Final remark: In fact, there exists a different way to solve system (1) by doing it in two steps: first, by expressing the fact that equations (i) and (iii) have a common root $$x_L$$, which gives (using a "resultant") a first (non-linear) constraint between $$c$$ and $$d$$.
Doing the same for equations (ii) and (iv), with common root $$x_R$$, gives a second constraint. Then, in a second step, solve the resulting system of 2 (non-linear...) equations in the 2 unknowns $$c$$ and $$d$$.

```matlab
syms xL xR c d;            % symbolic variables declaration
a=1; b=1; RL=3; RR=1;      % parameters (radii as in Fig. 1; set before running)
m=max([RL,RR]); p=2*m+a;
axis([-p+3,p+3,-p-3,p+3]); axis equal; hold on;
f=@(x)(b+c*x*x+d*x*x*x);   % cubic function
fp=@(x)(2*c*x+3*d*x*x);    % its derivative
% The system of 4 equations in 4 unknowns; sol. in [XL,XR,C,D]
[XL,XR,C,D]=solve(...
    (xL+RL+a)^2+f(xL)^2==RL^2,...
    (xR-RR-a)^2+f(xR)^2==RR^2,...
    xL+f(xL)*fp(xL)==-RL-a,...
    xR+f(xR)*fp(xR)==RR+a,...
    xL,xR,c,d);            % the 4 variables
plot([-p,p],[0,0]);        % x axis
plot([0,0],[-p,p]);        % y axis
UC=exp(1i*(0:0.01:2*pi));  % unit circle
plot(-(RL+a)+RL*UC,'k');   % circle C_L
plot((RR+a)+RR*UC,'k');    % circle C_R
x=-p:0.01:p;               % range of values for x
for k=1:length(XL)
    c=C(k); d=D(k); xL=XL(k); xR=XR(k);
    if isreal(c) && isreal(d) && isreal(xL) && isreal(xR)
        plot(x,b+c*x.*x+d*x.*x.*x,'color',rand(1,3)); hold on;
    end
end
```

• Thank you Jean Marie, your answer was very helpful. This new set of equations seems to be much more stable – mvaldez Oct 22 '19 at 20:45
• Thanks. I have added a small remark (remark 2). – Jean Marie Oct 23 '19 at 7:06
• See as well the final remark I just added. – Jean Marie Oct 23 '19 at 7:42
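System (1), in the division-free form of Remark 2, can also be attacked with a plain numerical Newton iteration. Below is a Python sketch (not from the original thread): the Jacobian is approximated by finite differences, and the parameters are a symmetric test case $a=b=R_L=R_R=1$ for which an exact solution is easy to verify by hand — the horizontal line $y=1$, tangent to both unit circles at their tops, i.e. $(x_L,x_R,c,d)=(-2,2,0,0)$:

```python
import numpy as np

a, b, RL, RR = 1.0, 1.0, 1.0, 1.0   # symmetric test case with a known solution

def F(z):
    """Division-free form of system (1): equations (i), (ii), and
    Remark 2's versions of (iii), (iv)."""
    xL, xR, c, d = z
    f  = lambda x: b + c*x**2 + d*x**3
    fp = lambda x: 2*c*x + 3*d*x**2
    return np.array([
        (xL + RL + a)**2 + f(xL)**2 - RL**2,   # (i):  M_L lies on the left circle
        (xR - RR - a)**2 + f(xR)**2 - RR**2,   # (ii): M_R lies on the right circle
        xL + f(xL)*fp(xL) + RL + a,            # (iii): tangency at M_L
        xR + f(xR)*fp(xR) - RR - a,            # (iv):  tangency at M_R
    ])

def newton(z, tol=1e-10, itmax=100):
    for _ in range(itmax):
        Fz = F(z)
        if np.linalg.norm(Fz) < tol:
            break
        J = np.empty((4, 4))                   # finite-difference Jacobian
        h = 1e-7
        for j in range(4):
            e = np.zeros(4); e[j] = h
            J[:, j] = (F(z + e) - Fz) / h
        z = z - np.linalg.solve(J, Fz)
    return z

z = newton(np.array([-2.1, 2.1, 0.05, 0.01]))  # guess near the line solution
print(z)   # close to (-2, 2, 0, 0): the line y = 1 tangent to both circle tops
```

For asymmetric cases ($R_L \neq R_R$) the same `F` applies unchanged; only the initial guess needs adapting, and the division-free equations avoid the squared square-root formulation that destabilised the original solver.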
# Spaces of linear maps and dual space

Yesterday I learned about dual spaces when reading about spaces of linear maps. The concept of a linear map and why linear maps form a vector space is clear to me. But there are some details about the dual space and its basis that I could not fully understand. The text I am reading states the following:

> Furthermore, for fixed vector spaces $$U$$ and $$V$$ over $$K$$, the operations of addition and scalar multiplication on the set $$\operatorname{Hom}_K(U,V)$$ of all linear maps from $$U$$ to $$V$$ make $$\operatorname{Hom}_K(U,V)$$ into a vector space over $$K$$. Given a vector space $$U$$ over a field $$K$$, the vector space $$U^{*} = \operatorname{Hom}_K(U,K)$$ plays a special role. It is often called the dual space or the space of covectors of $$U$$. One can think of coordinates as elements of $$U^{*}$$. Indeed, suppose that $$U$$ is finite-dimensional and let $$e_{1},...,e_{n}$$ be a basis of $$U$$. Every $$x \in U$$ can be uniquely written as $$x=\alpha_{1}e_{1}+...+\alpha_{n}e_{n}, \alpha_{i} \in K.$$ The scalars $$\alpha_{1},...,\alpha_{n}$$ depend on $$x$$ as well as on the choice of basis, so for each $$i$$ one can write the coordinate function $$e^{i}: U \to K, e^{i}(x)=\alpha_{i}.$$ It is routine to check that each $$e^{i}$$ is a linear map, and indeed the functions $$e^{1},...,e^{n}$$ form a basis of the dual space $$U^{*}$$.

Now I have two questions:

1) The text states that $$\operatorname{Hom}_K(U,V)$$ is a vector space for fixed $$U$$ and $$V$$. This is perfectly clear to me, but is it correct that the dual space is $$U^{*}=\operatorname{Hom}_K(U,K)$$, i.e. it consists of all linear maps from $$U$$ to the field $$K$$? At first I thought this was a typo, but from what I've read on Wikipedia and from other sources the notation seems to be correct. It also seems to make no sense to say the coordinate functions form a basis for $$\operatorname{Hom}_K(U,V)$$.
2) It is easy to see that the coordinate functions $$e^{1},...,e^{n}$$ are linear maps, and I have also tried to check the claim that they form a basis for $$U^{*}$$. However, I am unsure if my proof is correct, and I think this is mainly because of my confusion stated in the first question. My proof goes as follows:

We need to prove that $$e^{1},...,e^{n}$$ are linearly independent and span $$U^{*}$$. First note that linear maps are uniquely determined by their action on a basis. Now let $$0_{UK}:U \to K, \ 0_{UK}(u)=0 \ \forall u$$ be the zero map. To prove linear independence we need to show that

$$(*) \qquad b_{1}e^{1}+...+b_{n}e^{n}=0_{UK} \implies b_{i}=0 \ \forall i,$$

or in other words, that the only linear combination of $$e^{1},...,e^{n}$$ that gives $$0_{UK}$$ is the trivial linear combination. Now assume $$b_{i}\neq 0$$ for some $$i$$; then $$(*)$$ clearly fails if $$x$$ has a non-zero $$i$$th coordinate, and so $$b_{1}e^{1}+...+b_{n}e^{n}$$ is not the zero map.

To prove that $$e^{1},...,e^{n}$$ span $$U^{*}$$ we need to prove that any vector in $$U^{*}$$ (every linear map) can be written as a linear combination of $$e^{1},...,e^{n}$$, i.e. $$T(u)=k_{1}e^{1}(u)+...+k_{n}e^{n}(u)$$ for any vector $$u \in U$$. To see this note that \begin{align*} T(u)&=T(\alpha_{1}e_{1}+...+\alpha_{n}e_{n}) \\ &=\alpha_{1}T(e_{1})+...+\alpha_{n}T(e_{n}) \\ &=e^{1}(u)T(e_{1})+...+e^{n}(u)T(e_{n}) \end{align*} where the $$T(e_{i})$$ are scalars by definition of $$T$$.

Is my proof correct or have I missed anything? Thanks very much for any hints and comments.

• What do you denote (1) which has to hold for any $x$? – Bernard Jan 2 '20 at 9:42
• Sorry, this is a typo. I changed (1) to $(*)$ later. Actually I wanted to remove this part since it became clear to me when I wrote it down. Thanks.
– DerivativesGuy Jan 2 '20 at 9:56
• The linear independence can be shown more directly: suppose that $$b_1e^1 + b_2e^2 + \cdots + b_ne^n = \textit0 \quad \textrm{for some } b_1,\dots,b_n \in K$$ where $$\textit0 : U \to K$$ is the zero map. Now, for all $$i$$, $$1\leq i\leq n$$, \begin{align} b_i = b_ie^i(e_i) &= \sum_{j=1}^n b_j e^j(e_i) \\ &= \Big( \sum_{j=1}^n b_j e^j \Big)(e_i) = \textit0(e_i) = 0 \end{align} so, $$b_1 = b_2 = \cdots = b_n = 0$$.
• Also, to check that $$e^1,\dots,e^n$$ span $$U^*$$, let $$f$$ be arbitrary in $$U^*$$, and observe that for any $$x\in U$$ written as $$x = \alpha_1e_1 + \cdots + \alpha_n e_n$$, we have \begin{align} f(x) &= \sum_{j=1}^n \alpha_j f(e_j) \\ &= \sum_{j=1}^n f(e_j) \alpha_j \\ &= \sum_{j=1}^n f(e_j) e^j(x) = \Big( \sum_{j=1}^n f(e_j)e^j \Big) (x) \end{align} so, $$f = f(e_1)e^1 + f(e_2)e^2 + \cdots + f(e_n)e^n$$

Your proof seems a bit confusing. You would make it clearer with the initial observation that any linear map from a vector space $$U$$ to a vector space $$V$$ is entirely determined by its values at the vectors $$e_i$$ of a basis of $$U$$. The rest should follow almost immediately.
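Both arguments can be made concrete in $K^n$: if the basis vectors $e_1,\dots,e_n$ are the columns of an invertible matrix $B$, then the coordinates of $x$ are $B^{-1}x$, so the coordinate functions $e^i$ are exactly the rows of $B^{-1}$. A Python/NumPy sketch checking the relations $e^i(e_j)=\delta_{ij}$ and $f=\sum_j f(e_j)e^j$ (the particular basis below is an arbitrary choice):

```python
import numpy as np

# Basis of R^3 as the columns of B (any invertible matrix works)
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
Binv = np.linalg.inv(B)

# e^i(x) = i-th coordinate of x in this basis = (B^{-1} x)_i,
# so the dual basis functionals are the rows of B^{-1}
x = np.array([2.0, -1.0, 3.0])
alpha = Binv @ x
assert np.allclose(B @ alpha, x)   # x = sum_i alpha_i e_i

# Duality relations e^i(e_j) = delta_ij, i.e. (rows of Binv) @ (cols of B) = I
assert np.allclose(Binv @ B, np.eye(3))

# Any functional f (a row vector) expands as f = sum_j f(e_j) e^j
f = np.array([4.0, 0.0, -2.0])     # f(x) = 4*x_1 - 2*x_3
coeffs = f @ B                     # coeffs_j = f(e_j)
assert np.allclose(coeffs @ Binv, f)
print("dual basis rows:\n", Binv)
```

This mirrors the expansion $f = f(e_1)e^1 + \cdots + f(e_n)e^n$ from the comment above, with functionals represented as row vectors.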
# Minimum value of the expectation $\mathbb{E}[ X_1 X_2 / (X_1^2 + X_2^2) ]$

Let $X_1$ and $X_2$ be i.i.d. random variables from a distribution $D$ on the real numbers with finite variance (and therefore finite mean). Assume that the probability of $X_i = 0$ is $0$. Must it be true that $$\mathbb{E}\left[ \frac{X_1 X_2}{X_1^2 + X_2^2} \right] \ge 0?$$ If not, what is the infimum over all such distributions of this expectation?

The expectation is always finite. It is possible for the expectation to be $0$, when $D$ is symmetric about $0$. My conjecture is that this expectation is necessarily nonnegative. Of course without the denominator, $\mathbb{E}[X_1 X_2] = \mathbb{E}[X_1] \cdot \mathbb{E}[X_2] = \mu^2 \ge 0$. But with the denominator, it is not so clear.

I imagine this may be very elementary: I am not an expert in most inequalities used in probability theory. I tried expanding the fraction with partial fractions over the complex numbers, getting $$\frac{X_1 X_2}{X_1^2 + X_2^2} = \frac{\tfrac12 X_2}{X_1 + i X_2} + \frac{\tfrac12 X_2}{X_1 - i X_2},$$ but I don't have an idea for how to evaluate these expectations, either.

This question is a result of my previous question. Specifically we can write $$\mathbb{E}\left[ \frac{(X_1 + X_2)^2}{X_1^2 + X_2^2}\right] = 1 + 2 \cdot \mathbb{E}\left[ \frac{X_1 X_2}{X_1^2 + X_2^2} \right],$$ and in the answer to my previous question it seemed to be the case that the former expectation never goes below $1$. This is equivalent to the present question about the latter expectation.

• What does your second sentence mean? – Ted Shifrin Aug 24 '18 at 17:58
• @TedShifrin Probability of $0$ under distribution $D$ is $0$. (Necessary to prevent division by $0$). What is unclear about it currently? – 6005 Aug 24 '18 at 18:02
• Oh, sorry, I misread it. Thanks. – Ted Shifrin Aug 24 '18 at 18:17

One can check that the "kernel" $k(u,v)=uv/(u^2+v^2)$ is positive semidefinite.
For instance, by noting that $$\tag{1}k(u,v)=\int_0^\infty (ue^{-u^2x})(ve^{-v^2x})\,dx.$$ See this wikipedia article for basic facts about these functions. The desired inequality is a direct consequence of this: your expectation, $\mathbb E k(X_1,X_2)$, is one of the quadratic expressions guaranteed to be non-negative by the PSD property of $k$, or is approximated by such expressions.

In greater detail: Since the finitely supported probability measures are dense in the space of all probability measures on $\mathbb R$, in the weak topology, there exists a sequence of finitely supported probability measures $P_n$ converging weakly to the probability distribution of $X_1$. Since $k$ is continuous and bounded, we have $$\mathbb E k(X_1,X_2) = \lim_n \iint k(u,v) P_n(du) P_n(dv).$$ Assume $P_n$ assigns measure $p_i$ to $u_i$, for finitely many values of $i$ (I'm suppressing the notation for the dependence on $n$ here), so $P_n = \sum_i p_i \delta_{u_i}$. Then $\iint k(u,v) P_n(du) P_n(dv)=\sum_{i,j} p_i p_j k(u_i,u_j);$ this latter quantity is known to be non-negative by the positive definiteness of $k$. So $\mathbb E k(X_1,X_2)$ is the limit of non-negative quantities, so is also non-negative.

Another way of using the integral representation (1) above is to notice that $$\mathbb E k(X_1,X_2)= \int_0^\infty \mathbb E(X_1\exp(-tX_1^2))\, \mathbb E(X_2\exp(-tX_2^2))\,dt = \int_0^\infty \left(\mathbb E X_1\exp(-t X_1^2)\right)^2\,dt\ge 0,$$ using that $X_1$ and $X_2$ are i.i.d. One needs to use something like the Tonelli theorem to justify the equation here.

• Thanks! So: we define $f_u: [0,\infty) \to \mathbb{R}$ by $f_u(x) = ue^{-u^2 x}$, we note that $f_u \in L^2$ and $k(u,v) = \langle f_u, f_v \rangle$. Then we know that any such-defined kernel function is positive semidefinite by this definition since $\sum_{i,j=1}^n c_i c_j k(u_i,u_j) = \langle \sum_i c_i f_{u_i}, \sum_j c_j f_{u_j} \rangle = \| \sum_i c_i f_{u_i} \|_2^2$.
Finally the desired expectation is $\mathbb{E}[k(X_1, X_2)] = \int \int k(x_1, x_2) d \mu \; d \mu$, ... – 6005 Aug 26 '18 at 15:32 • ... which is nonnegative. Do you have a reference or name for this last theorem -- i.e. that this is "one of the quadratic expressions guaranteed to be nonnegative by the PSD property of $k$"? – 6005 Aug 26 '18 at 15:33
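As an editorial sanity check (not part of the original thread), both ingredients of the answer can be probed numerically: the integral representation $(1)$ by quadrature, and the non-negativity of $\mathbb E\,k(X_1,X_2)$ by Monte Carlo. The helper names and the shifted-exponential test distribution are arbitrary choices, not anything from the question.

```python
import math
import random

def k(u, v):
    """The kernel uv / (u^2 + v^2) from the question."""
    return u * v / (u * u + v * v)

def kernel_integral(u, v, T=20.0, n=200_000):
    """Midpoint-rule approximation of (1): ∫_0^∞ (u e^{-u²x})(v e^{-v²x}) dx.

    The integrand decays like e^{-(u²+v²)x}, so truncating at T = 20 loses
    a negligible tail for moderate u, v."""
    h = T / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += (u * math.exp(-u * u * x)) * (v * math.exp(-v * v * x))
    return total * h

# The representation (1) matches the closed form to quadrature accuracy.
assert abs(kernel_integral(1.3, 0.7) - k(1.3, 0.7)) < 1e-6

# Monte Carlo check on an asymmetric distribution taking both signs
# (a shifted exponential -- an arbitrary choice). |k| <= 1/2, so with
# 100k samples the standard error is tiny; the sample mean should not
# come out significantly negative.
random.seed(0)
n = 100_000
samples = [k(random.expovariate(1.0) - 0.3, random.expovariate(1.0) - 0.3)
           for _ in range(n)]
mean = sum(samples) / n
assert mean > -0.01
print(f"estimated E[k(X1,X2)] = {mean:.4f}")
```

This only illustrates the theorem on one example distribution; the proof above is what covers the general case.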
2019-06-19T09:10:56
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2893246/minimum-value-of-the-expectation-mathbbe-x-1-x-2-x-12-x-22", "openwebmath_score": 0.9352712035179138, "openwebmath_perplexity": 170.55235172009685, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9773708019443872, "lm_q2_score": 0.86153820232079, "lm_q1q2_score": 0.8420422837079963 }
http://math.stackexchange.com/questions/147166/does-my-definition-of-double-complex-noncommutative-numbers-make-any-sense/147170
# Does my definition of double complex noncommutative numbers make any sense?

I wanted to factorize $a^2+b^2+c^2$ into two factors in a similar way to $$a^2+b^2 = (a+ib)(a-ib)$$ This doesn't seem to be possible using real or complex numbers. However I came up with the following idea $$(a + ib + jc) (a -ib -jc) = a^2+b^2+c^2$$ if we define $$i^2=j^2=-1$$ and $$ij = -ji.$$ So instead of using complex numbers I had to define a kind of double complex number obeying a non-commutative multiplication rule to solve my factorization problem. Does my definition of these numbers make sense or is it somehow inconsistent? Do these numbers already have a name in mathematics and are they used? Is there any literature about them?

- Yes, this works, but you are really working in the quaternions, which have $3$ 'imaginary numbers,' $i,j,k$. en.wikipedia.org/wiki/Quaternion. Basically, $ij=k$. – Thomas Andrews May 19 '12 at 21:20

Congratulations on rediscovering the quaternions! – Qiaochu Yuan May 20 '12 at 1:20

I agree, you have had a very nice insight there. Hamilton searched for a long time -- it was a running joke among his kids every morning -- for a way to generalize complex numbers into 3D space. He "got it" while walking with his wife over a bridge, realizing that he needed three $i$-like quantities, not just two, to make everything work. Despite quaternions falling out of favor later on, they were instrumental in both the development of vectors and concepts such as dot and cross products, all of which are subsets of the quaternions. Maxwell's original equations use quaternions, not vectors. – Terry Bollinger May 20 '12 at 4:01

@QiaochuYuan Thank you. "One cannot invent useful things, one can only rediscover them." ;-) – asmaier May 20 '12 at 13:00

I should add on @Terry's comment that the bridge mentioned has an inscription about this realization. One can still see this in Dublin (if one is not too hazy from drinking).
– Asaf Karagila May 22 '12 at 22:15

You have come across the quaternions. They are numbers of the form $$a+bi+cj+dk$$ where $a,b,c,d\in\mathbb{R}$ and $i$, $j$, and $k$ are symbols satisfying $$i^2=j^2=k^2=ijk=-1$$ $$ij=k,\quad jk=i,\quad ki=j$$ $$ji=-k,\quad kj=-i, \quad ik=-j$$ Multiplication of quaternions is non-commutative in general, but it is still associative. The conjugate of a quaternion $q=a+bi+cj+dk$ is defined to be $$q^*=a-bi-cj-dk,$$ and the norm of a quaternion is defined to be $$\|q\|=\sqrt{qq^*}.$$ After expanding everything out, one can see that $$\|a+bi+cj+dk\|=\sqrt{a^2+b^2+c^2+d^2}.$$ So, to factor the expression $a^2+b^2+c^2$, you were just using quaternions for which the coefficient of $k$ is equal to $0$: $$(a+bi+cj+0k)(a-bi-cj-0k)=a^2+b^2+c^2+0^2=a^2+b^2+c^2.$$ There are other extensions of the complex numbers that are possible: the one usually mentioned is the octonions, which are $8$-dimensional over the real numbers.

- Is there a special name for quaternions which have the coefficient of $k$ set to zero? And can't there also be quintions which could then factorize $a^2+b^2+c^2+d^2+e^2$ into two factors? – asmaier May 20 '12 at 12:55

No, there are no quintions. The next number system is octonions, which are not associative. – Stefan Smith May 20 '12 at 14:37

@asmaier Quaternions with their $k$ component zero have no special name because they aren't closed under multiplication; I've added an answer that explains the details on this. – Steven Stadnicki May 22 '12 at 22:32

It's worth pointing out that you need to go up to the quaternions - a four-dimensional space - because a factorization of the sort you're describing doesn't 'work' in just three dimensions. Suppose you had $z=a+bi+cj$ (and so $\bar{z}=a-bi-cj$) and $w=d+ei+fj$ (with $\bar{w}=d-ei-fj$).
Then $x=zw$ would be of the form $r+si+tj$, with $r$, $s$ and $t$ each being expressions in $a\ldots f$, and $\bar{x}$ would be $r-si-tj$; taking the norms (or in other words looking at $|x|^2 = x\bar{x} = |z|^2|w|^2$) would give you an identity of the form $(a^2+b^2+c^2)\cdot(d^2+e^2+f^2) = r^2+s^2+t^2$. But consider $(1^2+1^2+1^2)\cdot(4^2+2^2+1^2) = 3\cdot 21 = 63$; this number is of the form $8n+7$ and so by the Sum Of Three Squares theorem it can't be expressed as $r^2+s^2+t^2$ for any values of $r,s,t$. This means the three-squares identity (or in other words, the three-dimensional product you're looking for) can't exist.

-

Going from complex numbers to the quaternions you lose commutativity. Going from quaternions to octonions you lose associativity. What’s next? There is very nice and elegant mathematics around these hypercomplex numbers. Adolf Hurwitz proved in 1898 that every normed division algebra with an identity is isomorphic to one of the following four algebras: $\mathbb{R}$, $\mathbb{C}$, $\mathbb{H}$ and $\mathbb{O}$, that is the real numbers, the complex numbers, the quaternions and the octonions. And that’s it. No more, no less.

-
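The quaternion relations quoted in the answer are easy to verify mechanically. The following is an illustrative sketch (not from the thread; the function and variable names are my own), representing a quaternion $w + xi + yj + zk$ as a tuple:

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) = w + xi + yj + zk."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

# The defining relations: ij = k, ji = -k, i^2 = j^2 = -1.
assert qmul(i, j) == k
assert qmul(j, i) == (0, 0, 0, -1)
assert qmul(i, i) == (-1, 0, 0, 0)
assert qmul(j, j) == (-1, 0, 0, 0)

# The factorization from the question: (a + bi + cj)(a - bi - cj) = a^2 + b^2 + c^2.
a, b, c = 2, 3, 5
prod = qmul((a, b, c, 0), (a, -b, -c, 0))
assert prod == (a*a + b*b + c*c, 0, 0, 0)
print(prod)  # (38, 0, 0, 0)
```

Note how the product of two $k$-free quaternions generally picks up a $k$ component, which is why (as Steven Stadnicki points out) those quaternions are not closed under multiplication.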
2016-05-28T04:28:41
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/147166/does-my-definition-of-double-complex-noncommutative-numbers-make-any-sense/147170", "openwebmath_score": 0.8283603191375732, "openwebmath_perplexity": 355.0352996698016, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9838471637570106, "lm_q2_score": 0.8558511524823263, "lm_q1q2_score": 0.8420267289679056 }
https://www.physicsforums.com/threads/area-of-surface.799793/
# Area of surface

1. Feb 24, 2015

### Incand

1. The problem statement, all variables and given/known data

Calculate the area of the surface $x^2+y^2+z^2 = R^2 , z \ge h , 0 \le h \le R$

2. Relevant equations

$A(S_D) = \iint_D |\mathbf r'_s \times \mathbf r'_t|\,ds\,dt$ where $S_D$ is the surface over $D$.

3. The attempt at a solution

We write the surface in parametric form using spherical coordinates $\mathbf r(\theta ,\phi) = R(\sin \theta \cos \phi , \sin \theta \sin \phi , \cos \theta)$ which gives us $\mathbf r'_\theta = R(\cos \theta \cos \phi , \cos \theta \sin \phi , -\sin \theta)$ and $\mathbf r'_\phi = R(-\sin \theta \sin \phi , \sin \theta \cos \phi , 0 )$ so we end up with $|\mathbf r'_\theta \times \mathbf r'_\phi | = R^2|\sin \theta | \, |(\sin \theta \cos \phi , \sin \theta \sin \phi , \cos \theta ) | = R^2|\sin \theta |$ and the area $A(S_D) = \iint _D R^2|\sin \theta | = \int _?^? \int _0^{2\pi } R^2|\sin \theta | d\phi \, d? = 2\pi \int _?^? R^2|\sin \theta | d?$

So my problem is I don't know what I'm supposed to integrate over. I suppose it should be $\theta$ but I'm not sure between what limits. Or even if the switch to spherical coordinates was a good idea at all. The answer according to the book should be $A(S_D) = 2\pi R(R-h)$ which makes me think I made another error somewhere else as well since I got an excess of $R$. Appreciate any help. Cheers!

2. Feb 24, 2015

### LCKurtz

OK, you are using $\theta$ as the angle from the $z$ axis, and yes, spherical coordinates is what you want. Draw a cross section of your sphere of radius $R$ with a horizontal line at $z=h$. The triangle that forms should give you the limits for $\theta$.

3. Feb 25, 2015

### HallsofIvy

Staff Emeritus

The only two variables you have are "$\phi$" and "$\theta$" and you have already integrated with respect to $\phi$, so the only possible remaining variable is $\theta$. Here $\theta$ is the angle a line from the origin $(0, 0, 0)$ to $(x, y, z)$ makes with the z-axis.
When $(x, y, z)= (0, 0, R)$, $\theta= 0$. When $z= R \cos(\theta)= h$, $\cos(\theta)= \frac{h}{R}$ so $\theta= \cos^{-1}\left(\frac{h}{R}\right)$.

4. Feb 25, 2015

### Incand

Thanks for the help both of you! I got it from Kurtz's advice earlier but couldn't respond before now. The confusion over whether I should integrate over $\theta$ was more that I couldn't find the limits, and I thought that perhaps spherical was a bad idea and I should use the radius somehow. Gonna add the rest of the solution in case anyone else comes upon the thread.

From the attached pdf we can see that $\cos(\theta)= \frac{h}{R}$, therefore $0 \le \theta \le \arccos (\frac{h}{R} )$ and $A = 2R^2\pi \int_0^{\arccos \frac{h}{R}} \sin \theta \, d\theta = 2R^2\pi \left( -\frac{h}{R} + 1\right) = 2R\pi(R-h)$
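The closed form $2\pi R(R-h)$ can be cross-checked against the integral set up in the thread. This is an editorial sketch (not from the thread) that evaluates $2\pi R^2 \int_0^{\arccos(h/R)} \sin\theta \, d\theta$ by the midpoint rule for arbitrary sample values of $R$ and $h$:

```python
import math

def cap_area_numeric(R, h, n=100_000):
    """Midpoint-rule evaluation of A = 2*pi*R^2 * ∫_0^{arccos(h/R)} sin(theta) dtheta."""
    upper = math.acos(h / R)          # the limit found in the thread
    step = upper / n
    total = sum(math.sin((i + 0.5) * step) for i in range(n)) * step
    return 2 * math.pi * R * R * total

R, h = 2.0, 0.5
exact = 2 * math.pi * R * (R - h)     # the book's closed form 2*pi*R*(R - h)
assert abs(cap_area_numeric(R, h) - exact) < 1e-6
print(exact)  # 6*pi ≈ 18.8496
```

Setting $h = 0$ recovers the hemisphere area $2\pi R^2$, a quick consistency check on the formula.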
2017-10-18T06:10:17
{ "domain": "physicsforums.com", "url": "https://www.physicsforums.com/threads/area-of-surface.799793/", "openwebmath_score": 0.9359966516494751, "openwebmath_perplexity": 388.901196468416, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9838471677827476, "lm_q2_score": 0.8558511451289037, "lm_q1q2_score": 0.8420267251786931 }
https://math.stackexchange.com/questions/2797249/what-does-a-1-at-have-to-do-with-orthogonality
# What does $A^{-1}=A^T$ have to do with “orthogonality”?

Whenever I read some use of the term “orthogonal”, I have been able to find some way in which it is at least metaphorically similar to the idea of two orthogonal lines in Euclidean space. E.g. orthogonal random variables, etc. But I cannot see how $A^{-1}=A^T$ captures the idea of “orthogonality”. What is “orthogonal” about a matrix that satisfies this property?

• Don't think of orthogonality meaning $A^{-1}=A^\top$, think of it meaning "distances are preserved", aka, it's either a rotation or a reflection (or a combination of the two). – Akiva Weinberger May 27 '18 at 0:19

$A^{-1} = A^T \iff A^T A = I$, so let's explore what this latter property means. (Assume that we are working in a real vector space, specifically $\mathbb{R}^n$.) Write $$A = \begin{pmatrix} \uparrow & \uparrow & \dots&\uparrow \\ v_1 &v_2&\dots&v_n \\ \downarrow&\downarrow&\dots&\downarrow \end{pmatrix}$$ then $$A^TA = \begin{pmatrix}\leftarrow & v_1 & \rightarrow \\ \leftarrow & v_2 & \rightarrow \\ \vdots & \vdots & \vdots \\ \leftarrow & v_n & \rightarrow \end{pmatrix} \begin{pmatrix} \uparrow & \uparrow & \dots&\uparrow \\ v_1 &v_2&\dots&v_n \\ \downarrow&\downarrow&\dots&\downarrow \end{pmatrix} =\begin{pmatrix} v_1\cdot v_1 & v_1\cdot v_2&\dots&v_1 \cdot v_n \\ v_2\cdot v_1 & v_2\cdot v_2&\dots&v_2 \cdot v_n \\ \vdots & \vdots & \ddots & \vdots \\ v_n\cdot v_1 & v_n\cdot v_2&\dots&v_n \cdot v_n \\ \end{pmatrix}.$$ So, $A^T A = I$ corresponds exactly to $v_i \cdot v_i = 1$ and $v_i \cdot v_j = 0$ for $i \neq j$, i.e. $\{v_i\}$ are an orthonormal system. So, if a matrix is orthogonal then its columns are orthonormal. Similarly, you can see that its rows are also orthonormal, since if $A$ is orthogonal then $A^T$ is also.

• This is an exceptionally clear answer +1 – Karl May 26 '18 at 20:32

Because a matrix has that property if and only if all columns are orthonormal.
This is also equivalent to the assertion that all rows are orthonormal.

• Also, a matrix of change of basis between two orthonormal bases has this property. This can also be an argument. – Jakobian May 26 '18 at 20:12

When this happens, the determinant is $1$ or $-1$, which you can verify explicitly. Not only is the determinant a unit, the transformation preserves orthogonal vectors. Let $u,v$ be orthogonal vectors. Then the inner product $\langle Au , Av\rangle = \langle u, A^TAv \rangle$. This is true because of the nature of the inner product; I encourage you to explicitly verify this. However, $A^TA = I$, so $$\langle Au, Av \rangle = \langle u, v \rangle$$ Thus, orthogonality has been preserved. This is the real reason why we need $A^T = A^{-1}$ for a matrix to be orthogonal.

If a matrix is orthogonal, the columns of the matrix will form an orthonormal system. That is, $\langle v_i,v_j\rangle=\delta_{i,j}$ for two column vectors $v_i$ and $v_j$. Now this implies, rather than serving as the definition, that $AA^T=I$.

When you multiply two matrices, the result consists of the dot products of the rows of the first matrix with the columns of the second. If you have $A^{-1}=A^T$, then $A^TA=I$, but this product consists of all of the pairwise dot products of the columns of $A$. All of the products that correspond to dot products of different columns are zero, which is precisely the condition for their orthogonality. Moreover, all of the dot products of columns with themselves are equal to $1$, which means that all of the columns are unit vectors—the columns of $A$ are an orthonormal set of vectors.

To expand on Akiva Weinberger's comment - an orthogonal matrix describes a transformation that preserves the geometry.
Distances and angles in inner-product spaces are defined by the inner product, and so to say a matrix $A$ defines a geometry-preserving transformation (a "rigid" or "orthogonal" transformation) is to require that for any two vectors $v_1, v_2$ the following holds: $$(Av_1) \cdot (Av_2) = v_1 \cdot v_2$$ since in matrix-multiplication notation $v \cdot w = v^T w$, and using the fact that $(AB)^T=B^T A^T$ the requirement is: $$v_1^Tv_2 = (Av_1)^T(Av_2) = v_1^TA^TAv_2$$ which holds true for any $v_1,v_2$ iff $A^T=A^{-1}$.

With $A \in \mathbb{R}^{n\times n}$ and $A^{-1} = A^\top$ we get $$I = A^{-1} A = A^\top A = (a_1\dotsb a_n)^\top (a_1\dotsb a_n) \iff \\ \delta_{ij} = a_i^\top a_j = a_i \cdot a_j \quad (i, j \in \{1, \dotsc, n \})$$ So this property is equivalent to the column vectors (or row vectors) forming an orthonormal basis.

Note: The complex case will involve complex conjugation and result in orthogonality regarding the complex scalar product.
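Both properties discussed in the answers, $A^TA=I$ and preservation of inner products, can be checked numerically for a concrete rotation matrix. A dependency-free editorial sketch (helper names are mine, not from the thread):

```python
import math

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# A 2x2 rotation matrix -- the standard example of an orthogonal matrix.
t = 0.7
A = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

# A^T A = I: the columns form an orthonormal system.
AtA = matmul(transpose(A), A)
for r in range(2):
    for c in range(2):
        assert abs(AtA[r][c] - (1.0 if r == c else 0.0)) < 1e-12

# <Au, Av> = <u, v>: lengths and angles are preserved.
u, v = [3.0, -1.0], [2.0, 5.0]
assert abs(dot(matvec(A, u), matvec(A, v)) - dot(u, v)) < 1e-9
```

The same check passes for any product of rotations and reflections, matching Akiva Weinberger's characterization.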
2019-07-21T14:52:32
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2797249/what-does-a-1-at-have-to-do-with-orthogonality", "openwebmath_score": 0.9370465874671936, "openwebmath_perplexity": 142.80099656722678, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9838471651778591, "lm_q2_score": 0.8558511469672594, "lm_q1q2_score": 0.8420267247579574 }
https://math.stackexchange.com/questions/1381740/condition-for-common-roots-of-two-quadratic-equations-px2qxr-0-and-qx2r
# Condition for common roots of two Quadratic equations: $px^2+qx+r=0$ and $qx^2+rx+p=0$ The question is: Show that the equation $px^2+qx+r=0$ and $qx^2+rx+p=0$ will have a common root if $p+q+r=0$ or $p=q=r$. How should I approach the problem? Should I assume three roots $\alpha$, $\beta$ and $\gamma$ (where $\alpha$ is the common root)? Or should I try combining these two and try to get a value for the Discriminant? Or should I do something else altogether? • As written, it's obvious. Should this be an if and only if statement? – Mose Wintner Aug 2 '15 at 6:15 • @MoseWintner Yes, perhaps if and only if (iff) would have been appropriate. Maybe they missed the extra 'f'. – Hungry Blue Dev Aug 2 '15 at 6:18 • Maybe add the condition $p,q\not =0$. This is implicitly assumed when you talk about quadratic equations and without this the only-if part is not longer true since if $p=0$ we have that $x = -\frac{r}{q}$ is a common root for all $r,q$. – Winther Aug 2 '15 at 7:20 The "if" part is clear, so we'll deal with the "only if". Moreover, we take the polynomials to be "true" quadratics ---that is, $p$ and $q$ are non-zero--- since otherwise the proposition is false (as @Winther mentions in a comment to the OP). First, note that quadratics with a common root could have both roots in common, in which case they are equivalent. This means that multiplying-through by some $k$ turns one quadratic equation into the other, coefficient-wise: $$q = k p, \quad r = k q, \quad p = k r \quad\to\quad p = k^3 p \quad\to\quad p(k^3-1) = 0 \quad\to\quad k^3 = 1$$ Thus, since $k=1$, we have $p = q = r$. 
If the quadratics don't (necessarily) have both roots in common, but do have root $s$ in common, then $$\left.\begin{aligned} p s^2 + q s + r &= 0 \quad\to\quad p s^3 + q s^2 + r s \phantom{+p\;} = 0 \\ q s^2 + r s + p &= 0 \quad\to\quad \phantom{p s^3 +\, } q s^2 + r s + p = 0 \end{aligned} \quad\right\}\quad\to\quad p(s^3-1) = 0 \quad\to\quad s^3 = 1$$ Thus, since $s = 1$, substituting back into either polynomial gives $p+q+r=0$. Easy-peasy!

But wait ... The equations $k^3 = 1$ and $s^3 = 1$ have fully three solutions: namely, $\omega^{0}$, $\omega^{+1}$, $\omega^{-1}$, where $\omega = \exp(2i\pi/3) = (-1+i\sqrt{3})/2$. Nobody said coefficients $p$, $q$, $r$, or common root $s$, were real, did they? If $k = \omega^n$, then we have in general that $$q = p \omega^n \qquad r = p \omega^{-n}$$ If $s = \omega^n$, then substituting back into either quadratic, and multiplying-through by an appropriate power of $\omega^{n}$ for balance, gives $$p\omega^{n} + q + r \omega^{-n} = 0$$ Just-slightly-less-easy-but-nonetheless-peasy!

• "Yes, we have caught the rarest of all creatures - an elegant!"... Indeed, one of the most elegant approaches I've ever seen. – Hungry Blue Dev Aug 9 '15 at 7:08

If the two quadratic polynomials $f(x) = px^2 + qx + r$ and $g(x) = qx^2 + rx + p$ have a common root then this is also a root of the linear polynomial $$h(x) = qf(x) - pg(x) = (q^2-pr)x - (p^2-qr)$$ which means that either $q^2-pr=0$ and $p^2-qr=0$, for which $p=q=r$, or that $x = \frac{p^2-qr}{q^2-pr}$ is the common root. Inserting this into either $f(x)$ or $g(x)$ gives us $$p^3 + q^3 + r^3 = 3pqr$$ Now $$p^3+q^3 + r^3 - 3pqr = (p+q+r)\left[\frac{3(p^2+q^2+r^2) - (p+q+r)^2}{2}\right]$$ so $p+q+r = 0$ or $(p+q+r)^2 = 3(p^2+q^2+r^2)$. By Cauchy-Schwarz we have $(p+q+r)^2 \leq 3(p^2+q^2+r^2)$ with equality only when $p=q=r$ so the two polynomials have a common root if and only if $p+q+r=0$ or $p=q=r$.

${\bf Assumptions}$: This answer assumes $p,q\not=0$ and real polynomials; $p,q,r\in\mathbb{R}$.
The first condition is assumed since the polynomials are said to be quadratic, and otherwise the only-if statement is no longer true: $p=0\implies x = -\frac{r}{q}$ is a common root for all $q,r$, and if $q=0$ we need $p=r=0$ to have a common root. If the second condition is relaxed allowing for complex coefficients then there are other solutions as shown in the answer by Blue.

HINT: If $a$ is a common root, $$pa^2+qa+r=0\ \ \ \ (1)$$ $$qa^2+ra+p=0\ \ \ \ (2)$$ Solve $(1),(2)$ for $a^2,a$ and use $a^2=(a)^2$

If $p + q + r = 0$, then $1$ is a root of both; otherwise (when $p=q=r$) both become one equation and so they surely have the same roots.

• Yes, but how do I obtain that? That is, how to obtain $(p+q+r)$ as a factor of something equated to $0$? – Hungry Blue Dev Aug 2 '15 at 6:17

• No, in that case both equations have a common root, namely $1$. – DeepSea Aug 2 '15 at 6:19

• Well, I get what you mean, but I'm not allowed to back calculate (at least not on paper, I can do it in rough). How do I do it in the tedious way? – Hungry Blue Dev Aug 2 '15 at 6:25

• Substitute $1$ into both equations and get $p\cdot1^2+q\cdot 1 + r = 0=q\cdot 1^2+r\cdot1 + p \to 1$ is a root of both. – DeepSea Aug 2 '15 at 6:30

• No no, I understand what you're saying about $1$. But I'm asking about a general root (like $\alpha$) before finding its value. Assume that the last two conditions are not given. – Hungry Blue Dev Aug 2 '15 at 6:34
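As a quick editorial check of the two cases (not from the thread), exact rational arithmetic confirms that $x=1$ is a common root when $p+q+r=0$, and that the candidate root $(p^2-qr)/(q^2-pr)$ obtained by eliminating $x^2$ works. The sample coefficient triples are arbitrary:

```python
from fractions import Fraction as F

def f(p, q, r, x):
    return p * x * x + q * x + r

def g(p, q, r, x):
    return q * x * x + r * x + p

# Case p + q + r = 0: x = 1 is a common root of both quadratics.
for p, q, r in [(F(1), F(2), F(-3)), (F(5), F(-7), F(2))]:
    assert p + q + r == 0
    assert f(p, q, r, F(1)) == 0 and g(p, q, r, F(1)) == 0

# The candidate common root x = (p^2 - qr) / (q^2 - pr) from the linear
# combination q*f(x) - p*g(x), checked on an example with q^2 != pr.
p, q, r = F(1), F(2), F(-3)
x = (p * p - q * r) / (q * q - p * r)
assert x == 1
assert f(p, q, r, x) == 0 and g(p, q, r, x) == 0

# Case p = q = r: the two equations coincide, so every root is shared.
p = q = r = F(4)
for x in [F(2), F(-1, 3)]:
    assert f(p, q, r, x) == g(p, q, r, x)
```

Using `Fraction` keeps every equality exact, so the assertions are not at the mercy of floating-point rounding.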
2019-12-14T00:03:29
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1381740/condition-for-common-roots-of-two-quadratic-equations-px2qxr-0-and-qx2r", "openwebmath_score": 0.9453555941581726, "openwebmath_perplexity": 345.3779152858943, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9838471632833944, "lm_q2_score": 0.8558511396138365, "lm_q1q2_score": 0.8420267159019333 }
https://www.beatthegmat.com/a-collection-of-16-coins-each-with-a-face-value-of-either-t308205.html
## A collection of 16 coins, each with a face value of either

##### This topic has 2 expert replies and 0 member replies

A collection of 16 coins, each with a face value of either 10 cents or 25 cents, has a total face value of $2.35. How many of the coins have a face value of 25 cents?

A) 3
B) 5
C) 7
D) 9
E) 11

OA B

Source: Official Guide

### GMAT/MBA Expert

GMAT Instructor, Joined 25 Apr 2015, Posted: 2791 messages, Followed by: 18 members

BTGmoderatorDC wrote: A collection of 16 coins, each with a face value of either 10 cents or 25 cents, has a total face value of $2.35. How many of the coins have a face value of 25 cents? A) 3 B) 5 C) 7 D) 9 E) 11

We can let the number of 10-cent coins = d and the number of 25-cent coins = q. We are given that there are 16 total coins; thus, d + q = 16. We are also given that the total face value is $2.35, thus:

0.1d + 0.25q = 2.35

10d + 25q = 235

Isolating d in our first equation, we have d = 16 - q. We can substitute 16 - q for d in our second equation, and we have:

10(16 - q) + 25q = 235

160 - 10q + 25q = 235

15q = 75

q = 5

There are 5 coins with a face value of 25 cents.
### GMAT/MBA Expert

GMAT Instructor, Joined 08 Dec 2008, Posted: 12975 messages, Followed by: 1249 members, GMAT Score: 770

BTGmoderatorDC wrote: A collection of 16 coins, each with a face value of either 10 cents or 25 cents, has a total face value of $2.35. How many of the coins have a face value of 25 cents? A) 3 B) 5 C) 7 D) 9 E) 11

Let D = the NUMBER of 10-cent coins
Let Q = the NUMBER of 25-cent coins

Notice that the VALUE of Q 25-cent coins = ($0.25)Q
For example, the VALUE of six 25-cent coins = ($0.25)6 = $1.50
And the VALUE of ten 25-cent coins = ($0.25)10 = $2.50
etc

Likewise, the VALUE of D 10-cent coins = ($0.10)D

The collection has 16 coins
We can write: D + Q = 16

The collection has a total value of $2.35
So, (0.10)D + (0.25)Q = 2.35

So, we have the following system:
(0.10)D + (0.25)Q = 2.35
D + Q = 16

Take the top equation and multiply both sides by 10 to get:
D + 2.5Q = 23.5
D + Q = 16

Subtract the bottom equation from the top to get: 1.5Q = 7.5

Solve: Q = 5

Cheers,
Brent
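Since the search space is tiny, the answer can also be confirmed by brute force. This short sketch is an editorial addition, not part of either expert reply:

```python
# Brute-force check: d dimes (10 cents) and q quarters (25 cents),
# 16 coins in total, worth 235 cents altogether.
solutions = [(d, q) for d in range(17) for q in range(17)
             if d + q == 16 and 10 * d + 25 * q == 235]
assert solutions == [(11, 5)]
print(solutions)  # [(11, 5)] -> eleven 10-cent coins, five 25-cent coins
```

The uniqueness of the solution matches the algebra: 15q = 75 has exactly one value of q.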
2019-06-17T05:32:20
{ "domain": "beatthegmat.com", "url": "https://www.beatthegmat.com/a-collection-of-16-coins-each-with-a-face-value-of-either-t308205.html", "openwebmath_score": 0.18827977776527405, "openwebmath_perplexity": 12895.96791823864, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.992988205033508, "lm_q2_score": 0.847967764140929, "lm_q1q2_score": 0.8420219880405782 }
http://mathhelpforum.com/algebra/282025-positive-integers-x-y.html
# Thread: Positive Integers x & y 1. ## Positive Integers x & y If positive integers x and y are NOT both odd, which of the following must be even? A. xy B. x + y C. x - y D. x + y - 1 E. 2(x + y) - 1 My Effort: I decided to experiment by letting x be 3 and y be 2. Doing this quickly revealed the fact that choice A and D both yield an even number. The book's answer is A. Question: Why is choice D not the answer? 2. ## Re: Positive Integers x & y Originally Posted by harpazo If positive integers x and y are NOT both odd, which of the following must be even? A. xy B. x + y C. x - y D. x + y - 1 E. 2(x + y) - 1 My Effort: I decided to experiment by letting x be 3 and y be 2. Doing this quickly revealed the fact that choice A and D both yield an even number. The book's answer is A. Question: Why is choice D not the answer? What if $x~\&~y$ are both even then $x+y-1$ is ? 3. ## Re: Positive Integers x & y Originally Posted by harpazo Why is choice D not the answer? Because the question talks about all possible pairs of integers, not just 2 and 3. 4. ## Re: Positive Integers x & y A correct way to approach the question is, for each point A, B, C, D and E to consider the three permitted cases: 1. $x$ and $y$ are both even, that is $x=2a$, $y=2b$; 2. $x$ is even and $y$ is odd, that is $x=2a$, $y=2b-1$; and 3. $x$ is odd and $y$ is even, that is $x=2a-1$, $y=2b$. Results that are even will have a factor of 2. You might see that 2. and 3. here are essentially the same, so you only really need to do one of them. And you might be able to get by setting only $x=2a$ and considering different cases for $y$ where necessary. But essentially you will be doing the above with or without some shortcuts. 5. ## Re: Positive Integers x & y Originally Posted by harpazo If positive integers x and y are NOT both odd, which of the following must be even? A. xy______B. x + y______C. x - y______D. x + y - 1______E. 2(x + y) - 1 Why is choice D not the answer? 
To harpazo, I cannot understand how this can be so mysterious. Learn this:

1. The sum of two even integers is even.
2. The sum of two odd integers is even.
3. The sum of an even integer & an odd integer is odd.
4. If $n$ is an odd integer then $n-1$ is even.
5. If $n$ is an even integer then $n-1$ is odd.

If you learn these then practice applying them to this question.

6. ## Re: Positive Integers x & y

Originally Posted by Plato
What if $x~\&~y$ are both even then $x+y-1$ is ?

Let x = 6 and y = 4. Then x + y - 1 = 6 + 4 - 1 = 9, which is an odd number. Meaning?

7. ## Re: Positive Integers x & y

Originally Posted by Archie
Because the question talks about all possible pairs of integers, not just 2 and 3.

I can see that every word in word problems is important.

8. ## Re: Positive Integers x & y

Originally Posted by Plato
To harpazo, I cannot understand how this can be so mysterious. Learn this:
1. The sum of two even integers is even.
2. The sum of two odd integers is even.
3. The sum of an even integer & an odd integer is odd.
4. If $n$ is an odd integer then $n-1$ is even.
5. If $n$ is an even integer then $n-1$ is odd.
If you learn these then practice applying them to this question.

Good information.
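Plato's parity rules can be verified exhaustively over a small range. The sketch below (an editorial addition, not from the thread) also recovers the counterexample to choice D discussed above:

```python
def is_even(n):
    return n % 2 == 0

# All pairs in a small range where x and y are NOT both odd,
# matching the condition in the question.
pairs = [(x, y) for x in range(1, 30) for y in range(1, 30)
         if not (x % 2 == 1 and y % 2 == 1)]

# Choice A: xy must be even, since at least one factor is even.
assert all(is_even(x * y) for x, y in pairs)

# Choice D: x + y - 1 is NOT always even; both-even pairs break it.
counterexamples = [(x, y) for x, y in pairs if not is_even(x + y - 1)]
assert (6, 4) in counterexamples   # 6 + 4 - 1 = 9, odd
```

A finite check is not a proof, but here it simply illustrates rules 1-5, which do cover all cases.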
2019-06-20T00:52:28
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/algebra/282025-positive-integers-x-y.html", "openwebmath_score": 0.7163517475128174, "openwebmath_perplexity": 533.6065008889396, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.97112909472487, "lm_q2_score": 0.8670357683915538, "lm_q1q2_score": 0.8420036608521717 }
https://math.stackexchange.com/questions/1752012/using-the-intermediate-value-theorem-and-derivatives-to-check-for-intersections
# Using the Intermediate Value Theorem and derivatives to check for intersections. I have the following question: Prove that the line $y_1=9x+17$ is tangent to the graph of the function $y_2=x^3-3x+1$. Find the point of tangency. So, what I did was: Let's construct a function $h$, such that $$h(x)=x^3-3x+1-(9x+17)=x^3-12x-16.$$ This function is continuous everywhere, particularly in the interval $[0,5]$. Now, $$h(0)=(0)^3-12(0)-16=-16$$ and $$h(5)=(5)^3-12(5)-16=49.$$ So, by Bolzano's theorem, there must exist a point $c \in (0,5)$ such that $h(c)=0$. This proves that the line $y_1$ and the function $y_2$ are tangent to each other at least at one point. Now, we can think of the point of intersection as the point in which $h'(x)=0$. So, $$h'(x)=0 \iff 3x^2-12=0 \iff x \in \{-2,2\}.$$ Now, if either one of those two points is the point of intersection, then it must be a root of $h(x)$, so plugging them in we have $$h(2)= (2)^3-12(2)-16 = -32 \neq 0$$ and $$h(-2)=(-2)^3-12(-2)-16=24-24=0.$$ Therefore, $x=-2$ is the point of intersection of $y_1$ with $y_2$. Now, my issue here is that I don't know exactly how to justify that what I did was actually right. In particular, I don't know why taking the difference of both original functions (constructing $h$) is a valid move and why is it that finding the points in which $h'(x)=0$ equates to finding the point of intersection between $y_1$ and $y_2$. So, my question is: if what I did was actually correct, why is it so? And if it's not, why am I wrong?. • @MatthewLeingang Thanks for your feedback, Matthew. I know that using the IVT is maybe a little bit of an overkill for this particular problem, but still, I want to be able to explain why using it leads to a correct answer. I've checked things graphically and my answer seems be correct, but if this question came up in a test, I wouldn't be able to justify my answer properly. That's why I'm asking for help. 
Apr 20 '16 at 23:54 I have a comment, and I know this has been looked at: The statement involving Bolzano's Theorem, where you find that there is $c\in (0,5)$ where $h(c)=0$, implies that the graphs of the two functions intersect. It doesn't imply tangency, and it's actually a distraction, in that it gives you a point in $(0,5)$, which is quite far from the point of tangency you eventually find. (In fact, if you try using Bolzano's Theorem near the point of tangency, you must look at $h'(x)$, and not $h(x)$, since $x=-2$ gives rise to a local maximum of $h(x)$, where the function $h(x)$ is tangent to the $x$-axis.) If you want to use Bolzano's Theorem, here's one (overly complicated) approach. First, notice that $$h'(-3)=15,\ \ h'(0)=-12,\ \ h'(3)=15$$ Then by Bolzano's Theorem, there exists $c\in (-3,0)$ so that $h'(c) = 0$, and there also exists $c'\in (0,3)$ so that $h'(c) = 0$. Since $h'(x)$ is a quadratic, you can find the two solutions to the equation $h'(x)=0$ by hand, namely at $c'=2$, $c=-2$ (with notation consistent with our conclusion from Bolzano's Theorem). Since it's a quadratic, we know that we have accounted for all of them. (As the other answer states, this application of Bolzano's Theorem actually serves as an unnecessary extra step, since we can find the zeroes of $h'(x)$ by hand.) You still need the two curves to intersect at the point to have a point of tangency, which is when we use the zeroes of $h'(x)$ (the critical numbers) and check which of these are zeroes of $h(x)$. In checking, you find that $h(2) = -32$, while $h(-2) = 0$. Now plug $x=-2$ back into your starting functions $y_1$ (or $y_2$) in order to conclude that you have a single point of tangency at $(-2,-1)$. Of course, the less complicated alternative is to check that $h(x)$ has a double-root at $x=-2$, which it does, since $h(x) = (x+2)^2(x-4)$, which guarantees a point of tangency there. 
(This method is exactly that of the other answer, where you find roots of $h(x)$, and then check to see which of those is a root of $h'(x)$.) • I see now that using Bolzano's Theorem first to check for intersection is redundant, I had failed to notice that finding the actual point of tangency in $h(x)$ also implied intersection. I guess I was doing things mechanically. Thanks! Apr 21 '16 at 1:44 • There isn't anything wrong with doing things mechanically, but you do have to exercise caution when doing this. Sometimes, if a step in your work gives a conclusion that either doesn't seem relevant or doesn't serve to answer the question, go back one step and see if maybe there isn't information you're missing, or if there might be a different approach that accomplishes what you want. Apr 21 '16 at 1:53 • Basically, when doing things mechanically, make sure you are always keeping the end goal in mind. At each step, ask yourself: does this get me closer to the end goal? If not, why not? If so, how do I proceed forward from here? And if the answer is "I don't know", think about it for a while and see if you can't break it down further. It won't always lead to the most elegant or elementary proofs, but keeping a general outline or road map in mind can make proofs easier to write. Apr 21 '16 at 1:56 • As a beginner, this is very valuable. I will keep it in mind, thank you! Apr 21 '16 at 2:16 Sorry I deleted my comment. I understand your question a little bit better now. Since the two curves are graphs of the functions $f(x) = x^3 -3x+1$ and $g(x) = 9x+17$, they intersect at points $c$ where $f(c) = g(c)$. This is equivalent to saying $f(c) - g(c) = 0$, which is equivalent to saying the function $h(x) = f(x) - g(x)$ has a zero at $c$. If the curves are tangent at a point of intersection, then not only $f(c) = g(c)$ is true but $f'(c) = g'(c)$ as well. That is, $f$ and $g$ share the same tangent line (point and slope) at $c$. 
The equation $f'(c) = g'(c)$ is equivalent to $h'(c) = 0$. So we need to solve the equations $h(x) = 0$ and $h'(x) = 0$. One way to solve the problem (my original suggestion) is to find the zeroes of $h$ and plug them into $h'$ to see if they are also zeroes of $h'$. The trouble is that $h$ is cubic so finding roots is a hair more difficult than standard high school algebra. But $h'$ is quadratic so you can find its zeroes easily. You found them: they are $\pm 2$. But only one of these ($-2$) is also a root of $h$. So the point of tangency is $(-2,-1)$. The IVT is redundant, you don't need to otherwise prove that the function has a zero if you can find it explicitly. • Thank you! This is exactly what I was failing to see, your answer was really helpful. And I understand now that the first use of Bolzano's Theorem is redundant. Apr 21 '16 at 1:41
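Not part of the original thread, but the double-root criterion in the accepted reasoning is easy to confirm with sympy:

```python
import sympy as sp

x = sp.symbols('x')
y2 = x**3 - 3*x + 1     # the cubic
y1 = 9*x + 17           # the candidate tangent line
h = sp.expand(y2 - y1)  # h(x) = x^3 - 12x - 16

# tangency <=> h has a double root: h(c) = 0 and h'(c) = 0
assert sp.expand(h - (x + 2)**2 * (x - 4)) == 0
assert h.subs(x, -2) == 0 and sp.diff(h, x).subs(x, -2) == 0

print((-2, y1.subs(x, -2)))  # point of tangency: (-2, -1)
```

The factorization $h(x) = (x+2)^2(x-4)$ packages both conditions at once, which is exactly the "less complicated alternative" mentioned above.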
https://math.stackexchange.com/questions/2230785/find-a-formula-for-sum-i-1n-2i-12-1232-2n-12
# Find a formula for $\sum_{i=1}^n (2i-1)^2 = 1^2+3^2+....+(2n-1)^2$ Consider the sum $$\sum_{i=1}^n (2i-1)^2 = 1^2+3^2+...+(2n-1)^2.$$ I want to find a closed formula for this sum, however I'm not sure how to do this. I don't mind if you don't give me the answer but it would be much appreciated. I would rather have a link or anything that helps me understand to get to the answer. EDIT: I Found this question in a calculus book so I don't really know which tag it should be. • The statement itself is wrong to begin with. $\displaystyle \sum_{i=1}^n (2i-1)^2 = 1^2+3^2+5^2+\cdots+(2n-1)^2$. – DHMO Apr 12 '17 at 14:15 • hmm..what do you mean? Apr 12 '17 at 14:18 • (2n-1) is odd integer if n is integer. Apr 12 '17 at 14:19 • Also, it should end on $(2n-1)^2$ and not $(2i-1)^2$ which makes no sense. Apr 12 '17 at 14:20 • What @DHMO is saying is that $i$ is defined only within the sum. Therefore, it can't appear on the RHS. Instead, the RHS should have only the variable $n$. Apr 12 '17 at 14:20 ## 4 Answers Hint: $\sum_{i=1}^ni^2=\frac{n(n+1)(2n+1)}{6}$ and $\sum_{i=1}^ni=\frac{n(n+1)}{2}$. And last but not least $$(2i-1)^2=4i^2-4i+1.$$ Edit: Let's prove that $\sum_{i=1}^ni=\frac{n(n+1)}{2}$. We proceed by induction on $n$. If $n=1$ the statement is trivial. Now suppose the statement holds for $n\geq 1$. Then \begin{eqnarray}\sum_{i=1}^{n+1}i&=&\sum_{i=1}^ni+(n+1)\\ &=&\frac{n(n+1)}{2}+(n+1)\\ &=& (n+1)(\frac{n}{2}+1)\\ &=& \frac{(n+1)(n+2)}{2}.\end{eqnarray} Here we used the induction hypothesis in the second equation. This proves the statement by induction. You can prove the other formula in a similar fashion. • $\frac{4n(n+1)(2n+1)}{6} - \frac{4n(n+1)}{2} + n$ will it look something like this? and where can i learn about these formulas etc. Apr 12 '17 at 14:29 • Yes, if you work it out you should get $\frac{1}{3}n(4n^2-1).$ I'll edit my answer and show you how to prove one the formulas. Apr 12 '17 at 14:34 hint We have $$1^2+3^2+5^2+... 
(2n-1)^2=$$ $=\sum$ odd$^2$=$\sum$ all$^2$-$\sum$even$^2=$ $$\sum_{k=1}^{2n} k^2-(2^2+4^2+...4n^2)=$$ $$\sum_{k=1}^{2n}k^2-4\sum_{k=1}^n k^2=$$ $$\boxed {\color {green}{\frac {n(4n^2-1)}{3}}}$$ for $n=2$, we have $10$ , for $n=3$, we find $35$ and for $n=4$, it is $84$. • i don't understand how you got to $\sum_1^{2n} k^2-(2^2+4^2+...4n^2)=$ Apr 12 '17 at 14:38 • plus what does $\sum_1^{2n}$ this even mean. How can it be 2n isnt it essentially just n Apr 12 '17 at 14:40 • @TomHarry I edited it . now it is clear. Apr 12 '17 at 14:43 • doesn't sigma all^2 = $\sum_{k=1}^{n} k^2$ and if so why do u have $\sum_{k=1}^{2n} k^2$ Apr 13 '17 at 16:05 Hint: Write $f(n) = \sum_{i=1}^n (2i-1)^2$. You can start by hypothesizing that the sum of squares is a cubic $f(n) = an^3 + bn^2 + cn + d$ (as a sort of discrete analogy with the integration formula $\int x^2\, dx = x^3/3 + C$). Then $(2n+1)^2 = f(n+1) - f(n) = a (3n^2 + 3n + 1) + b(2n + 1) + c$, which gets you the constants $a, b, c$, and you can find $d$ from $f(1) = 1$. Verifying this formula is an easy proof by induction. \begin{align} \sum_{i=1}^n (2i-1)^2 &=\sum_{i=1}^n \binom {2i-1}2+\binom {2i}2&&\text(*)\\ &=\sum_{r=0}^{2n}\binom r2\color{lightgrey}{=\sum_{r=2}^{2n}\binom r2}\\ &=\color{red}{\binom {2n+1}3}&& \text(**)\\ &\color{lightgrey}{=\frac {(2n+1)(2n)(2n-1)}6}\\ &\color{lightgrey}{=\frac 13 n(2n-1)(2n+1)}\\ &\color{lightgrey}{=\frac 13 n(4n^2-1)} \end{align} *Note that $\;\;\;R^2=\binom R2+\binom {R+1}2$. **Using the Hockey-stick identity
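As a quick numeric check of the closed form $\frac{n(4n^2-1)}{3}$ (not part of the thread):

```python
def sum_odd_squares(n):
    """Direct sum 1^2 + 3^2 + ... + (2n-1)^2."""
    return sum((2*i - 1)**2 for i in range(1, n + 1))

def closed_form(n):
    """Candidate formula n(4n^2 - 1)/3 (always an integer)."""
    return n * (4 * n**2 - 1) // 3

for n in range(1, 100):
    assert sum_odd_squares(n) == closed_form(n)

print(closed_form(2), closed_form(3), closed_form(4))  # 10 35 84
```

The printed values agree with the spot checks $n=2,3,4$ given in the second answer.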
https://math.stackexchange.com/questions/3276124/how-to-find-the-corner-point-of-a-non-function-equation/3276783#3276783
# How to find the corner point of a non-function equation? Consider the equation $$-242.0404+0.26639x-0.043941y+(5.9313\times10^{-5})\times xy-(3.9303\times{10^{-6}})\times y^2-7000=0$$ with $$x,y>0$$. If you plot it, it'll look like below: Now, I want to find the corner point/inflection point in this equation/graph, which roughly should be somewhere here. This is my manually pinpointed approximation, using my own eyes: Any help on how to mathematically find this point would be really helpful. UPDATE Based on Adrian's answer, I've got the following $$(1.1842*10^{-4},0.6456*10^{-4})$$ (wondering what can cause this slight error?): The actual corner point seems a little bit far from the one found by Adrian's approach (why?): Update 2 The problem was the aspect ratio of my drawing, I fixed the aspect ratio and Adrian's answer looks pretty accurate: • Comments are not for extended discussion; this conversation has been moved to chat. Jun 27, 2019 at 18:38 Following Calvin Khor's line of reasoning, we will use the following algorithm: 1. Find the center of the hyperbola, and translate the hyperbola so that the center coincides with the origin. 2. Find the angle of rotation necessary to put the hyperbola into the canonical form $$x^2/a^2-y^2/b^2=1.$$ 3. The corner points are represented, at this point, by $$x=\pm a.$$ 4. Rotate these two points back through the angle found in Step 2. 5. Translate these two points back through the translation performed in Step 1. The wikipedia page on the general hyperbola will be our guide, here. Step 1. 
According to the wiki page, we must write the hyperbola in the form $$A_{xx}x^2+2A_{xy}xy+A_{yy}y^2+2B_xx+2B_yy+C=0.$$ We have $$0x^2+\left(5.9313\times 10^{-5}\right)xy-\left(3.9303\times 10^{-6}\right)y^2 + 0.26639x-0.043941y-7242.0404=0,$$ making \begin{align*} A_{xx}&=0 \\ A_{xy}&=\left(5.9313\times 10^{-5}\right)/2=2.96565\times 10^{-5} \\ A_{yy}&=-3.9303\times 10^{-6} \\ B_x&=0.26639/2=0.133195 \\ B_y&=-0.043941/2=-0.0219705 \\ C&= -7242.0404. \end{align*} We check for the hyperbola nature, namely, that $$D=\left|\begin{matrix}A_{xx}&A_{xy}\\ A_{xy} &A_{yy} \end{matrix}\right|<0,\quad\text{or}\quad D=\left|\begin{matrix}0&2.96565\times 10^{-5}\\ 2.96565\times 10^{-5} &-3.9303\times 10^{-6} \end{matrix}\right|=-8.79508\times 10^{-10}<0,$$ which is clearly true. The center $$(x_c,y_c)$$ of the hyperbola is given by \begin{align*} x_c&=-\frac{1}{D}\left|\begin{matrix}B_x &A_{xy}\\ B_y &A_{yy}\end{matrix}\right| =\frac{1}{8.79508\times 10^{-10}}\left|\begin{matrix}0.133195 &2.96565\times 10^{-5}\\ -0.0219705 &-3.9303\times 10^{-6}\end{matrix}\right|=145.618\\ y_c&=-\frac{1}{D}\left|\begin{matrix}A_{xx} &B_x\\ A_{xy} &B_y\end{matrix}\right| =\frac{1}{8.79508\times 10^{-10}}\left|\begin{matrix}0 &0.133195\\ 2.96565\times 10^{-5} &-0.0219705\end{matrix}\right|=-4491.26.\end{align*} Step 2. The angle of rotation is given by \begin{align*} \tan(2\varphi)&=\frac{2A_{xy}}{A_{xx}-A_{yy}} \\ \varphi&=\frac12\,\arctan\left(\frac{2A_{xy}}{A_{xx}-A_{yy}}\right)=0.752315\,\text{rad}=43.1045^{\circ}, \end{align*} which definitely looks about right. Step 3. The formula for $$a^2$$ is $$a^2=-\frac{\Delta}{\lambda_1 D},$$ where \begin{align*} \Delta&=\left|\begin{matrix}A_{xx}&A_{xy}&B_x\\A_{xy}&A_{yy}&B_y\\B_x&B_y&C\end{matrix}\right|=6.26559\times 10^{-6} \\ 0&=\lambda^2-(A_{xx}+A_{yy})\lambda+D. \end{align*} Unfortunately, the wiki page fails to distinguish between $$\lambda_1$$ and $$\lambda_2$$. 
If we examine the signs, we must have $$a^2>0,$$ which means, since $$D<0$$ and $$\Delta>0,$$ we must pick the positive root. We have \begin{align*} \lambda_2&=-3.16867\times 10^{-5}\\ \lambda_1&=2.77564\times 10^{-5}, \end{align*} so that $$a=\pm 16020.6.$$ Step 4. The point we need to rotate is $$(16020.6, 0)$$ counter-clockwise through an angle $$\varphi=0.752315\,\text{rad}$$. The rotation matrix for doing that is given by $$R=\left[\begin{matrix}\cos(\varphi)&-\sin(\varphi)\\ \sin(\varphi) &\cos(\varphi)\end{matrix}\right]=\left[\begin{matrix}0.730109&-0.683331\\ 0.683331 &0.730109\end{matrix}\right].$$ After rotating, the point is located at $$(11696.8, 10947.4).$$ Step 5. This is the moment of truth! We must translate back to the original coordinate system. The center of the original hyperbola was located at $$(145.618, -4491.26).$$ What we must do is add the coordinates together to get the un-translated version. The final point is located at $$(11842.4, 6456.14).$$ This is not too far off from my other answer! We check to make sure this point is on the curve $$x=\frac{7242.0404+\left(3.9303\times{10^{-6}}\right) y^2+0.043941y}{0.26639+\left(5.9313\times10^{-5}\right)\!y},$$ and it is. So I say that this point is the "corner" of your graph. • Thank you so much @AdrianKeister. This is definitely a very detailed and great answer. I've actually implemented your approach using Matlab and it gives a "VERY CLOSE" coordinates. I've updated my answer and uploaded a new image with the point that your approach finds. Do you have any idea why it has a little bit error? (maybe its because of floating points?) Jun 28, 2019 at 7:22 • I checked the coordinates and its the same as yours. So, it's not the floating points or code. Wondering what can cause this slight error (or maybe there is no? To me its a little bit far from the actual corner point). Jun 28, 2019 at 7:35 • I triple checked and it seems it doesn't give the correct corner point. 
I've updated my answer again Jun 28, 2019 at 8:40 • I quadruple checked it, you mentioned an important point in your other answer that I missed: "the aspect ratios should be the same"...I did, and the asnwer is now correct. Thank you so much! Jun 28, 2019 at 9:05 • @AdrianKeiser is this approach also true for parabolas instead of hyperbolas? Jul 1, 2019 at 22:58 We first simplify the expression, and then solve for $$x:$$ \begin{align*} -242.0404+0.26639x-0.043941y+\left(5.9313\times10^{-5}\right)xy-\left(3.9303\times{10^{-6}}\right) y^2-7000&=0 \\ 0.26639x-0.043941y+\left(5.9313\times10^{-5}\right)xy-\left(3.9303\times{10^{-6}}\right) y^2-7242.0404&=0 \end{align*} \begin{align*} x\left[0.26639+\left(5.9313\times10^{-5}\right)\!y\right]&=7242.0404+\left(3.9303\times{10^{-6}}\right) y^2+0.043941y \\ x&=\frac{7242.0404+\left(3.9303\times{10^{-6}}\right) y^2+0.043941y}{0.26639+\left(5.9313\times10^{-5}\right)\!y}. \end{align*} We see that $$x$$ is a function of $$y$$, with domain all real numbers except $$-0.26639/\left(5.9313\times 10^{-5}\right).$$ Just invert the function (corresponds to reflecting about the line $$y=x$$). We have $$y(x)=\frac{7242.0404+\left(3.9303\times{10^{-6}}\right) x^2+0.043941x}{0.26639+\left(5.9313\times10^{-5}\right)\!x}.$$ I would say that the corner you're after is a point where $$y'(x)=-1$$. So we have $$y'(x)=\frac{-1.18771\times 10^{8}+595.215x+0.0662637x^2}{(4491.26+x)^2}.$$ Setting $$y'(x)=-1$$ and solving for $$x,$$ we find that $$x=-15104.6,\;6122.12,$$ with corresponding $$y=-11874.4,\; 12165.6,$$ respectively. So the point you're after (swapping $$x$$ and $$y$$ again) is $$(12165.6, 6122.12).$$ Incidentally, if you're "eyeballing it", you should be aware that the aspect ratio of your graph will greatly influence where you think the corner is. I recommend forcing an aspect ratio of $$1,$$ before you say where you think the corner is. • Thank you @AdrianKeister. 
You saved me :-) Jun 29, 2019 at 8:07 You could also use more high powered maths as follows. a) Find a parametrization $$t \mapsto \gamma(t)=(x(t), y(t))$$ of your curve. b) Reparametrize to get a cuve parametrized by arc length, that is $$x'(t)^2+y'(t)^2=1$$ for all $$t$$. c) Compute the curvature $$\left\|\gamma''(t)\right\|$$ d) The point you are looking for is the point with maximum curvature
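None of this appears in the thread, but the five-step recipe (center, rotation angle, semi-axis, rotate, translate) is easy to run end-to-end numerically. The numpy sketch below uses my own variable names and the rounded coefficients quoted in the answer, so the last digits differ slightly:

```python
import numpy as np

# Conic written as Axx x^2 + 2 Axy xy + Ayy y^2 + 2 Bx x + 2 By y + C = 0
Axx, Axy, Ayy = 0.0, 5.9313e-5 / 2, -3.9303e-6
Bx, By, C = 0.26639 / 2, -0.043941 / 2, -7242.0404

D = Axx * Ayy - Axy**2                      # < 0, so it is a hyperbola
xc = -(Bx * Ayy - By * Axy) / D             # Step 1: center
yc = -(Axx * By - Axy * Bx) / D
phi = 0.5 * np.arctan2(2 * Axy, Axx - Ayy)  # Step 2: rotation angle

Delta = np.linalg.det([[Axx, Axy, Bx], [Axy, Ayy, By], [Bx, By, C]])
lam = np.roots([1.0, -(Axx + Ayy), D])      # eigenvalues of quadratic part
lam1 = lam[lam > 0][0]                      # positive root makes a^2 > 0
a = np.sqrt(-Delta / (lam1 * D))            # Step 3: semi-axis

# Steps 4-5: rotate the vertex (a, 0) by phi, then translate by the center
vx = xc + a * np.cos(phi)
vy = yc + a * np.sin(phi)
print(round(vx, 1), round(vy, 1))           # ≈ 11842.4 6456.1
```

This lands on the same corner point as the hand computation, which is a useful sanity check on the determinant formulas.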
https://bldavies.com/blog/correlation-concatenation/
Suppose I have data $$(a_i,b_i)_{i=1}^n$$ on two random variables $$A$$ and $$B$$. I store my data as vectors a and b, and compute their correlation using the cor function in R: cor(a, b) ## [1] 0.4326075 Now suppose I append a mirrored version of my data by defining the vectors alpha = c(a, b) beta = c(b, a) so that alpha is a concatenation of the $$a_i$$ and $$b_i$$ values, and beta is a concatenation of the $$b_i$$ and $$a_i$$ values. I compute the correlation of alpha and beta as before: cor(alpha, beta) ## [1] 0.4288428 Notice that cor(a, b) and cor(alpha, beta) are not equal. This surprised me. How can appending a copy of the same data change the correlation within those data? The answer is that the concatenated data $$(\alpha_i,\beta_i)_{i=1}^{2n}$$ have different marginal distributions than the original data $$(a_i,b_i)_{i=1}^n$$. Indeed one can show that \DeclareMathOperator{\Cor}{Cor} \DeclareMathOperator{\Cov}{Cov} \DeclareMathOperator{\E}{E} \DeclareMathOperator{\Var}{Var} \begin{align} \E[\alpha]=\E[\beta]=\frac{\E[a]+\E[b]}{2} \end{align} and \begin{align} \E[\alpha^2]=\E[\beta^2]=\frac{\E[a^2]+\E[b^2]}{2}, \end{align} where $$\E[\alpha]\equiv\frac{1}{2n}\sum_{i=1}^{2n}\alpha_i$$ is the empirical mean of the $$\alpha_i$$ values, and where $$\E[\beta]$$, $$\E[a]$$, and $$\E[b]$$ are defined similarly. It turns out that $$\E[\alpha\beta]=\E[ab]$$, but since the marginal distributions are different the empirical correlations are different. In fact $$\Cor(\alpha,\beta)=\frac{\Cov(a,b)-0.25\left(\E[a]-\E[b]\right)^2}{0.5\Var(a)+0.5\Var(b)+0.25\left(\E[a]-\E[b]\right)^2},$$ where $$\Cor$$, $$\Cov$$, and $$\Var$$ are the empirical correlation, covariance, and variance operators. This expression implies that cor(alpha, beta) and cor(a, b) will be equal if the $$a_i$$ and $$b_i$$ values have the same means and variances. 
We can achieve this by scaling a and b before computing their correlation: cor(scale(a), scale(b)) ## [1] 0.4326075 The scale function de-means its argument and scales it to have unit variance. These operations don’t change the correlation of a and b. But they do change the correlation of alpha and beta: alpha = c(scale(a), scale(b)) beta = c(scale(b), scale(a)) cor(alpha, beta) ## [1] 0.4326075 Now the two correlations agree! I came across this phenomenon while writing my previous post, in which I discuss the degree assortativity among nodes in Zachary’s (1977) karate club network. One way to measure this assortativity is to use the degree_assortativity function in igraph: library(igraph) G = graph.famous('Zachary') assortativity_degree(G) ## [1] -0.4756131 This function returns the correlation of the degrees of adjacent nodes in G. Another way to compute this correlation is to 1. construct a matrix el in which rows correspond to edges and columns list incident nodes; 2. define the vectors d1 and d2 of degrees among the nodes listed in el; 3. compute the correlation of d1 and d2 using cor. Here’s what I get when I take those three steps: el = as_edgelist(G) d = degree(G) d1 = d[el[, 1]] # Ego degrees d2 = d[el[, 2]] # Alter degrees cor(d1, d2) ## [1] -0.4769563 Notice that cor(d1, d2) disagrees with the value of assortativity_degree(G) computed above. This is because the vectors d1 and d2 have different means and variances: c(mean(d1), mean(d2)) ## [1] 7.487179 8.051282 c(var(d1), var(d2)) ## [1] 25.94139 32.23110 These differences come from el listing each edge only once: it includes a row c(i, j) for the edge between nodes $$i$$ and $$j\not=i$$, but not a row c(j, i). Whereas assortativity_degree accounts for edges being undirected by adding the row c(j, i) before computing the correlation. This is analogous to the “append the mirrored data” step I took to create $$(\alpha_i,\beta_i)_{i=1}^{2n}$$ above. 
Appending the mirror of el to itself before computing cor(d1, d2) returns the same value as assortativity_degree(G): el = rbind( el, matrix(c(el[, 2], el[, 1]), ncol = 2) # el's mirror ) d1 = d[el[, 1]] d2 = d[el[, 2]] c(assortativity_degree(G), cor(d1, d2)) ## [1] -0.4756131 -0.4756131
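The closed-form relation for the mirrored concatenation can be checked directly. The sketch below is mine (numpy rather than the post's R), and it uses the centered difference $(\operatorname{E}[a]-\operatorname{E}[b])^2$ in the numerator, which is what the moment identities give when you expand $\operatorname{Cov}(\alpha,\beta)$:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(2.0, 1.0, 200)             # deliberately different
b = 0.4 * a + rng.normal(-1.0, 2.0, 200)  # mean/variance than a

alpha = np.concatenate([a, b])
beta = np.concatenate([b, a])

def corr(u, v):
    return np.corrcoef(u, v)[0, 1]

# population-style (divide-by-n) moments, as in the post's operators
cov_ab = np.mean(a * b) - a.mean() * b.mean()
num = cov_ab - 0.25 * (a.mean() - b.mean())**2
den = 0.5 * np.var(a) + 0.5 * np.var(b) + 0.25 * (a.mean() - b.mean())**2
assert np.isclose(corr(alpha, beta), num / den)

# after standardising, concatenating the mirror changes nothing
za, zb = (a - a.mean()) / a.std(), (b - b.mean()) / b.std()
assert np.isclose(corr(np.concatenate([za, zb]),
                       np.concatenate([zb, za])), corr(a, b))
```

The second assertion is the scaling trick from the post: once the two halves share a mean and variance, the extra terms vanish and the two correlations coincide.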
http://math.stackexchange.com/questions/312878/why-is-mathbbz-sqrt24-ne-mathbbz-sqrt6
# Why is $\mathbb{Z} [\sqrt{24}] \ne \mathbb{Z} [\sqrt{6}]$? Why is $\mathbb{Z} [\sqrt{24}] \ne \mathbb{Z} [\sqrt{6}]$, while $\mathbb{Q} (\sqrt{24}) = \mathbb{Q} (\sqrt{6})$ ? (Just guessing, is there some implicit division operation taking $2 = \sqrt{4}$ out from under the $\sqrt{}$ which you can't do in the ring?) Thanks. (I feel like I should apologize for such a simple question.) - No need to apologize; and your instinct is correct. Note that $\sqrt{6}\notin\mathbb{Z}[\sqrt{24}]$; indeed, $$\mathbb{Z}[\sqrt{24}]=\{a+b\sqrt{24}\mid a,b\in\mathbb{Z}\}=\{a+2c\sqrt{6}\mid a,c\in \mathbb{Z}\}.$$ Thus, $\mathbb{Z}[\sqrt{24}]$ consists of the elements of $\mathbb{Z}[\sqrt{6}]$ for which the number of times $\sqrt{6}$ occurs is even. However, when thinking about $\mathbb{Q}$, we now have $\frac{1}{2}$ available to us, and $$\mathbb{Q}(\sqrt{24})=\{a+b\sqrt{24}\mid a,b\in\mathbb{Q}\}=\{a+2c\sqrt{6}\mid a,c\in \mathbb{Q}\}=\{a+d\sqrt{6}\mid a,d\in\mathbb{Q}\}=\mathbb{Q}(\sqrt{6})$$ because we can take $c=\frac{d}{2}$. - We have \begin{align*} \mathbb{Z}[\sqrt{24}] &= \{a + b\sqrt{24} | a, b \in \mathbb{Z} \} \\ &= \{a + 2b\sqrt{6} | a, b \in \mathbb{Z} \} \\ &= \{a + b'\sqrt{6} | a, b' \in \mathbb{Z} \text{ with } b' \text { even}\}. \end{align*} which is clearly a proper subring of $\mathbb{Z}[\sqrt{6}]$. On the other hand, \begin{align*} \mathbb{Q}[\sqrt{24}] &= \{a + b\sqrt{24} | a, b \in \mathbb{Q} \} \\ &= \{a + 2b\sqrt{6} | a, b \in \mathbb{Q} \} \\ &= \{a + b'\sqrt{6} | a, b' \in \mathbb{Q}\} \\ &= \mathbb{Q}[\sqrt{6}]. \end{align*} The point is that you can divide anything in $\mathbb{Q}$ by two, but not anything in $\mathbb{Z}$. - Since $\sqrt{6}\not\in\mathbb{Z} [\sqrt{24}]$. -
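A one-line sanity check (not from the answers) in sympy. The whole point is that the unit $\tfrac12$ relating the two generators lives in $\mathbb{Q}$ but not in $\mathbb{Z}$:

```python
import sympy as sp

# sqrt(24) = 2*sqrt(6): over Q we may divide by 2, over Z we may not,
# so Z[sqrt(24)] only reaches elements of the form a + (even)*sqrt(6).
assert sp.sqrt(24) == 2 * sp.sqrt(6)
assert sp.sqrt(6) / sp.sqrt(24) == sp.Rational(1, 2)
```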
https://math.stackexchange.com/questions/1081345/finding-variance-of-the-sample-mean-of-a-random-sample-of-size-n-without-replace
# Finding variance of the sample mean of a random sample of size n without replacement from finite population of size N. I encountered this problem in the book "Introduction to the Theory of Statistics" (by Mood, Graybill and Boes) and I have not been able to solve part (c): "A bowl contains five chips numbered from 1 to 5. A sample of two drawn without replacement from this finite population is said to be random if all possible pairs of the five chips have an equal chance to be drawn. (a) What is the expected value of the sample mean? What is the variance of the sample mean? (b) Suppose that the two chips of part (a) were drawn with replacement. What would be the variance of the sample mean? Why might one guess that this variance would be larger than the one obtained before? (c) Generalize part (a) by considering N chips and samples of size n. Show that the variance of the sample mean is $$\frac{N-n}{N-1}\frac{\sigma^{2}}{n}$$ where $\sigma^{2}$ is the population variance, that is $$\sigma^{2}=\frac{1}{N}\sum_{i=1}^{N}\Big(i-\frac{N+1}{2}\Big)^{2}$$" * To solve part (a) I explicitly wrote the set of possible pairs with equal probability: $\Omega=\{(1,2),(1,3),(1,4),(1,5),(2,3),(2,4),(2,5),(3,4),(3,5),(4,5)\}$ From this it is easy to see that $Im\bar{X}=\{1.5,2,2.5,3,3.5,4,4.5\}$ where $\bar{X}=\frac{1}{n}\sum_{i=1}^{n} X_{i}$ Correspondingly the probabilities for this values are $(0.1,0.1,0.2,0.2,0.2,0.1,0.1)$ Hence, by definition, the expected value and the variance are: $E[\bar{X}]=3$ and $V[\bar{X}]=\frac{3}{4}$. * For part (b) the same procedure gives us $E[\bar{X}]=3$ and $V[\bar{X}]=\frac{7}{6}$. * Finally, for part (c) I tried to generalize what I did noticing that the least value for $\sum X_{i}$ is $\frac{n(n+1)}{2}$ and its greatest possible value is $\frac{n(2N-n+1)}{2}$. 
Hence $Im\bar{X}=\{\frac{n+1}{2},\frac{n+1}{2}+\frac{1}{n},\frac{n+1}{2}+\frac{2}{n},\dots,\frac{n+1}{2}+(N-n)\}$ Clearly the probability for the first and last values is $\frac{1}{C^{N}_{n}}=\frac{n!(N-n)!}{N!}$ but I haven't come up with an idea of how to find the other probabilities. How can I get the rest of them? It is time to upgrade the methods you rely on, since listing every possibility yields the result for small values of $n$ and $N$ but this approach (1) leads to a dead end for general values (as you realized), and (2) provides no insight. So... let us attack directly (c), considering $N$ chips numbered from $1$ to $N$, and samples of size $n$. Then the sample mean $\bar X$ is such that $n\bar X=\sum\limits_{k=1}^nY_k$ where $Y_k$ is the $k$th chip. By hypothesis, each $Y_k$ is uniform on $\{1,2,\ldots,N\}$ hence $E(Y_k)=\frac1N\sum\limits_{x=1}^Nx=\frac12(N+1)$ for every $k$ and, by linearity of the expectation, $$E(\bar X)=\frac{N+1}2.$$ Likewise, $n^2\bar X^2=\sum\limits_{k=1}^nY_k^2+\sum\limits_{k\ne\ell}Y_kY_\ell$, $E(Y_k^2)$ does not depend on $k$ and $E(Y_kY_\ell)$ does not depend on $k\ne\ell$, hence $nE(\bar X^2)=E(Y_1^2)+(n-1)E(Y_1Y_2)$ hence it suffices to compute $E(Y_1^2)$ and $E(Y_1Y_2)$. By the same argument as before, $E(Y_1^2)=\frac1N\sum\limits_{x=1}^Nx^2=\frac16(N+1)(2N+1)$. To compute $E(Y_1Y_2)$, one can note that, conditionally on $Y_k=x$, $Y_\ell$ is independent on $\{1,2,\ldots,N\}\setminus\{x\}$ and proceed. Or, one can use the specific case $n=N$, since then, $N\bar X=\sum\limits_{x=1}^Nx$ with full probability, in particular, $NE(Y_1^2)+N(N-1)E(Y_1Y_2)=\frac14N^2(N+1)^2$, which leads to $E(Y_1Y_2)=\frac1{12}(N+1)(3N+2)$. Finally, $$E(\bar X^2)=\frac{2(N+1)(2N+1)+(N+1)(3N+2)(n-1)}{12n},$$ and the variance follows. Sanity check: If $n=N$, then $E(\bar X^2)=E(\bar X)^2$ (do you see why?). • Hi thank you very much!! I followed every step and seen that it works. 
The "sanity check" was very useful for the conclusion (when n=N, $\bar{X}$ will always be the same, hence its variance is equal to zero and the second moment equals the squared first moment). Initially I considered using uniform distributions for the value of the chips but hesitated after reading the "without replacement" part. I still do not feel completely sure as to why this can be done. Do you think you can further illustrate this assumption? (i.e. I thought X2 ranges over {1, 2, ..., N}\{X1}) – Jonathan Julián Huerta Dec 30 '14 at 1:30
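A brute-force check of part (c) — mine, not from the book or the answer — by enumerating all $\binom{N}{n}$ equally likely samples for small $N$:

```python
import itertools

def exact_var_of_mean(N, n):
    """Enumerate all C(N, n) equally likely samples of {1, ..., N}."""
    means = [sum(s) / n for s in itertools.combinations(range(1, N + 1), n)]
    mu = sum(means) / len(means)
    return sum((m - mu)**2 for m in means) / len(means)

def formula(N, n):
    sigma2 = (N**2 - 1) / 12  # population variance of {1, ..., N}
    return (N - n) / (N - 1) * sigma2 / n

assert abs(exact_var_of_mean(5, 2) - 3 / 4) < 1e-12   # part (a)
for N in range(2, 8):
    for n in range(1, N + 1):
        assert abs(exact_var_of_mean(N, n) - formula(N, n)) < 1e-12
```

Note that the $n=N$ case falls out for free: there is only one sample, so the enumerated variance is $0$, matching the factor $(N-n)$ in the formula — the same "sanity check" as in the answer.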
https://math.stackexchange.com/questions/2265891/find-all-functions-f-such-that-fxf-frac11-x-x/2265961
# Find all functions $f$ such that $f(x)+f(\frac{1}{1-x})=x$ I would like to find all functions $$f:\mathbb{R}\backslash\{0,1\}\rightarrow\mathbb{R}$$ such that $$f(x)+f\left( \frac{1}{1-x}\right)=x.$$ I do not know how to solve the problem. Can someone explain how to solve it? In one of my attempts I did the following, which is confusing to me: By the substitution $$y=1-\frac{1}{x}$$ one gets $$f(y)+f\left( \frac{1}{1-y}\right)=\frac{1}{1-y}$$. So with $$x=y$$ it follows that $$0=x-\frac{1}{1-x}$$. So it would follow that there is no solution. Is that possible or is there a mistake? Best regards • You need to substitute for all x not only one x – Archis Welankar May 4 '17 at 17:28 • Didn't I do that? – Sammyy Delbrin May 4 '17 at 17:29 • Your second bracket seems incorrect – Archis Welankar May 4 '17 at 17:34 • By substituiting $y=1-\frac 1x$ you don't get $$f(y)+f\left( \frac{1}{1-y}\right)=\frac{1}{1-y}$$ – Jaideep Khare May 4 '17 at 17:37 • – Ivan Neretin May 5 '17 at 8:15 make $x:= \frac{1}{1-x}$ then $$f\left( \frac{1}{1-x}\right)+f\left( \frac{1}{1-\frac{1}{1-x}}\right)=\frac{1}{1-x}\to f\left( \frac{1}{1-x}\right)+f\left(1- \frac{1}{x}\right)=\frac{1}{1-x}\quad (1)$$ do it again in the last equation: $$f\left( \frac{1}{1-\frac{1}{1-x}}\right)+f(x)=\frac{1}{1-\frac{1}{1-x}}\to f\left(1- \frac{1}{x}\right)+f(x)=1- \frac{1}{x}\quad (2)$$ now make $(1)-(2)$ and get: $$f\left( \frac{1}{1-x}\right)-f(x)=\frac{1}{1-x}-1+\frac{1}{x}$$ Subtract the equation is the statement and this last one. $$2f(x)=x-\frac{1}{1-x}+1-\frac{1}{x}\to f(x)=\frac12\left(x-\frac{1}{1-x}+1-\frac{1}{x}\right)$$ • [+1] I am amazed at the way you have found your way with a machete toward the hidden unique solution. My answer, which in fact parallels yours, is guided by a group attached to the functional equation. – Jean Marie May 4 '17 at 21:23 • Thanks for the comments, @JeanMarie . – Arnaldo May 4 '17 at 21:29 • Wow... what a beautiful and unique solution. 
This will definitely help me in the future when I am working with functional equations. I think I'm going to give you a +50 bounty for this one. – Frpzzd May 20 '17 at 21:40 • @Frpzzd: Thank you for the comment. I am glad to help. – Arnaldo May 20 '17 at 21:44 I would like to shed some light on this issue by taking a more abstract point of view. In my answer to this recent question : (How to solve an equation of the form $f(x)=f(a)$ for a fixed real a.), I used the following group of functions (with the algebraic meaning of the word "group") $$\begin{cases}\phi_1(x)=x, & \ \ \ \ \phi_2(x)=1-x, & \ \ \ \ \ \phi_3(x)=\tfrac{1}{x},\\ \phi_4(x)=1-\tfrac{1}{x}, & \ \ \ \ \phi_5(x)=\tfrac{1}{1-x}, & \ \ \ \ \ \phi_6(x)=\tfrac{x}{x-1}.\end{cases}$$ Here also, the presence of this group is natural because it provides all the potentially fruitful changes of variables leading ultimately to the solution. Let us take the following notation: $$\psi_k(x):=f(\phi_k(x))$$ Thus, the given functional equation can be written: $$\tag{1} f(x)+f(\phi_5(x))=x \ \ \ \iff \ \ \ \color{red}{f(x)+\psi_5(x)=x},$$ Substitution $x \to \phi_4(x)$ in (1) gives: $$\tag{2}f(\phi_4(x))+f(\underbrace{\phi_5(\phi_4(x))}_{\phi_1(x)=x})=\phi_4(x) \ \iff \ \color{red}{\psi_4(x)+f(x)=1-\tfrac{1}{x}},$$ Substitution $x \to \phi_5(x)$ in (1) gives: $$\tag{3}f(\phi_5(x))+f(\underbrace{\phi_5(\phi_5(x))}_{\phi_4(x)})=\phi_5(x) \ \iff \ \color{red}{\psi_5(x)+\psi_4(x)=\tfrac{1}{1-x}}.$$ It suffices now to make the following combination of equations (1)+(2)-(3) (the parts in red) to obtain: $$f(x)=\frac12\left(x+1-\frac{1}{x}-\frac{1}{1-x}\right)$$ Remark: the group of functions $\phi_k$ has been recognized by Kummer in the mid-nineteenth century in connection with hypergeometric differential equations. See p. 306 of (http://www.springer.com/la/book/9781461457244), a fascinating book about the rise of complex function theory. 
This group is also of interest in projective geometry; for this reason, it is sometimes called the "cross-ratio group". For a modern presentation of the projective invariant called the cross-ratio, take a look for example at (http://www.maths.gla.ac.uk/wws/cabripages/klein/pinvariant.html). • (+1), very instructive solution. I didn't know about the "cross-ratio group". – Arnaldo May 4 '17 at 21:51 By replacing $x$ with $\frac{1}{1-x}$ and $\frac{x-1}{x}$ sequentially, you obtain a system of 3 linear equations in the 3 unknowns $f(x)$, $f\left(\frac{1}{1-x}\right)$, $f\left(\frac{x-1}{x}\right)$; solving that system gives the solution.
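The closed form found above is easy to verify numerically. The short sketch below (plain Python, written for this summary, not taken from any of the answers) checks that the candidate satisfies the functional equation at a handful of sample points:

```python
# candidate solution from the answers above
def f(x):
    return 0.5 * (x + 1 - 1 / x - 1 / (1 - x))

# check f(x) + f(1/(1-x)) = x at several points of R \ {0, 1}
for x in (2.0, 0.5, -1.0, 3.7, -2.3):
    assert abs(f(x) + f(1 / (1 - x)) - x) < 1e-12

print(f(2.0))  # 1.75
```
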
https://stats.stackexchange.com/questions/468414/what-can-i-say-with-mean-variance-and-standard-deviation
What can I say with mean, variance and standard deviation? I wrote this code in R: getinfoNumeric <- function(attr) { cat(min(attr), " ") cat(max(attr), " ") cat(mean(attr), " ") cat(var(attr), " ") cat(sd(attr), " ") } When I apply it to an attribute, it gives me the following result: • 50 • 100 • 71.89536 • 37.50461 • 6.124101 I don't understand the meaning of the last two values. Can you help me? I learnt that: • variance measures how far a set of numbers are spread out from their average value • the standard deviation is a measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean of the set, while a high standard deviation indicates that the values are spread out over a wider range But, looking at this data, what does it mean? My data is about cocoa percentage in chocolate bars. So the minimum percentage is 50%, the maximum is 100% and the mean value is 71.89%. But what about variance and standard deviation? Does the variance mean that the percentage of chocolate is concentrated between 71.89 - 37.5 and 71.89 + 37.5? And what about standard deviation? Does it mean that the percentage tends to be close to the mean? Histogram: • Could you please add the histogram of your observations? – Lerner Zhang May 25 at 12:37 • Sure thing @LernerZhang – hellomynameisA May 25 at 12:57 • It seems that the observations are symmetrically distributed, and then you can refer to the standard deviation and mean of normal distribution. – Lerner Zhang May 25 at 13:08 • @LernerZhang Symmetry is certainly not the only requirement to view data as being normally distributed. – Dave May 25 at 13:20 Your histogram looks roughly normal, which makes for an easy interpretation of standard deviation. 
In a normal distribution, about 68% of observations lie within $\pm$ one standard deviation of the mean, 95% within $\pm$ two standard deviations, and 99.7% within $\pm$ three standard deviations. You can test that with a few lines of code (the intervals are centered on the mean, 71.895, not on the minimum):

```r
# should be about 68
length(attr[attr < 71.895 + 6.124 & attr > 71.895 - 6.124]) / length(attr) * 100
# should be about 95
length(attr[attr < 71.895 + 6.124*2 & attr > 71.895 - 6.124*2]) / length(attr) * 100
# should be about 99.7
length(attr[attr < 71.895 + 6.124*3 & attr > 71.895 - 6.124*3]) / length(attr) * 100
```

This characterization fails for non-normal distributions. However, we can bound how many observations lie within $k$ standard deviations of the mean. This is called Chebyshev’s inequality: https://en.m.wikipedia.org/wiki/Chebyshev%27s_inequality

In words, no more than $100 \times \frac{1}{k^2}$ percent of the observations will be beyond $k$ standard deviations of the mean.

Getting back to the original question, smaller standard deviations (and smaller variances) tend to indicate more clustering around the mean than larger standard deviations (and larger variances). There are other ways to measure spread, but variance remains popular because it is easy to calculate, it is easy to test (hypothesis testing), and because of its unique role in a very important theorem called the central limit theorem. One might then take the square root of the variance to get the standard deviation, as variance is expressed in the square of the original units (e.g. $€^2$ when the original unit is $€$).

• Very well explained @Dave – Stochastic May 25 at 13:40
• Thank you very much! – hellomynameisA May 25 at 14:07
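A quick simulation makes the 68–95–99.7 rule concrete. The sketch below (in Python rather than R, and using made-up normal data with roughly the question's mean and standard deviation, not the actual chocolate data) counts how many observations fall within $k$ standard deviations and compares with Chebyshev's guaranteed lower bound $1 - 1/k^2$:

```python
import random

random.seed(0)
# simulated sample with roughly the question's mean (71.9) and sd (6.12)
xs = [random.gauss(71.9, 6.12) for _ in range(100_000)]
mu = sum(xs) / len(xs)
sd = (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5

def share_within(k):
    # fraction of observations within k standard deviations of the mean
    return sum(mu - k * sd < x < mu + k * sd for x in xs) / len(xs)

for k in (1, 2, 3):
    # normal rule predicts ~0.68, ~0.95, ~0.997;
    # Chebyshev guarantees at least 1 - 1/k^2 for ANY distribution
    print(k, round(share_within(k), 3), 1 - 1 / k**2)
```
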
https://math.stackexchange.com/questions/1215022/transforming-a-function-by-a-sequence-geometric-operations-on-its-graph
# Transforming a function by a sequence of geometric operations on its graph

I am solving the following problem: Let $f(x) =\sqrt{x}$. Find a formula for a function $g$ whose graph is obtained from $f$ from the given sequence of transformations:

• shift right $3$ units
• horizontal shrink by a factor of $2$
• shift up $1$ unit

I think that $g(x) = f(2(x-3)) + 1 = \sqrt{2x-6} + 1$, but in the answers it says $\sqrt{2x-3} + 1$, so I assume $g(x) = f(2x-3) +1$, but wouldn't that mean that the horizontal shrink was done first and the right horizontal shift afterwards?

• With these kinds of problems, in general, an effective way of verifying which answer is correct is to simply consider a specific ordered pair (as I do in my answer). Doing this will give you assurance in regards to which function is correct--then the matter becomes pinpointing your flaw in reasoning (which is what OriginalOldMan did). – Daniel W. Farlow Mar 31 '15 at 23:25

When we do a horizontal shrink by a factor of $b$ we replace $x$ with $bx$, rather than multiplying the whole expression by $b$. So: $$g(x) = f(2x-3) + 1$$ not: $$g(x) = f(2(x-3)) + 1$$

The ambiguity is in the shrink step. The origin of the shrink has to be specified - points to the right of this origin are moved to the left towards the origin and points to the left of the origin are moved to the right towards the origin. I believe that you are shrinking around 3, and the answer is shrinking around zero. Since the shrinking origin is not specified, I believe that your answer would be acceptable. But then, so would the other.

• Does the order of the sequence of transformations give any insight into where the origin of the shrink would make the most sense? – Jonny Mar 31 '15 at 23:31
• @Jonny I don't think there is any ambiguity, especially when you consider the transformations of a single ordered pair as I do in my answer. Probably the clearest or least ambiguous way of looking at it. – Daniel W.
Farlow Mar 31 '15 at 23:37 • @crash The fact that you apply the transformations in the same order as the sequence results in an unambiguous determination of the origin. However, couldn't "shift right by 3 units" also include shifting the origin right by three units? – Jonny Mar 31 '15 at 23:47 There is actually a really easy way to test which function is correct. First, for $f(x)=\sqrt{x}$, you know you have the ordered pair $(4,2)$. Now consider the transformations to obtain $g(x)$: 1. Shift right by $3$ units. 2. Horizontal shrink by a factor of $2$. 3. Shift up by $1$ unit. Now consider what happens to the ordered pair $(4,2)$ when you apply 1-3 above: $$(4,2)\overset{\text{(1)}}{\implies} (7,2)\overset{\text{(2)}}{\implies} (7/2,2)\overset{\text{(3)}}{\implies} (7/2,3).$$ Thus, we know $g(x)$ must have the ordered pair $(7/2,3)$. Trying out $g(x) = \sqrt{2x-3}+1$ when $x=7/2$ yields $(7/2,3)$, as desired. Trying out $g(x) = \sqrt{2x-6}+1$ when $x=7/2$ yields $(7/2,2)\neq (7/2,3)$. Thus, we clearly must have that $g(x) = \sqrt{2x-3}+1$. Note: The answer provided by OriginalOldMan highlights the important point here, but I provided my answer for concrete verification, and this way is probably the easiest way to go when in doubt concerning the validity of your transformations (i.e., consider a particular ordered pair, etc.).
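The ordered-pair check in the last answer is mechanical enough to script. This sketch (illustrative Python, with hypothetical names `g_book` and `g_op` for the two candidate functions) follows the point $(4,2)$ through the three transformations and evaluates both candidates there:

```python
import math

# start from the point (4, 2) on y = sqrt(x) and apply the three steps
x, y = 4.0, 2.0
x, y = x + 3, y          # shift right 3 units        -> (7, 2)
x, y = x / 2, y          # horizontal shrink by 2     -> (3.5, 2)
x, y = x, y + 1          # shift up 1 unit            -> (3.5, 3)
print(x, y)              # 3.5 3.0

g_book = lambda t: math.sqrt(2 * t - 3) + 1   # the answer key's function
g_op = lambda t: math.sqrt(2 * t - 6) + 1     # the attempt in the question
print(g_book(x), g_op(x))  # 3.0 2.0 -- only g_book passes through (3.5, 3)
```
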
http://dlmf.nist.gov/3.5
# §3.5(i) Trapezoidal Rules

The elementary trapezoidal rule is given by

3.5.1 $\int_{a}^{b}f(x)\,dx=\tfrac{1}{2}h(f(a)+f(b))-\tfrac{1}{12}h^{3}f''(\xi),$

where $h=b-a$, $f\in C^{2}[a,b]$, and $a<\xi<b$. The composite trapezoidal rule is

3.5.2 $\int_{a}^{b}f(x)\,dx=h(\tfrac{1}{2}f_{0}+f_{1}+\dots+f_{n-1}+\tfrac{1}{2}f_{n})+E_{n}(f),$

where $h=(b-a)/n$, $x_{k}=a+kh$, $f_{k}=f(x_{k})$, $k=0,1,\dots,n$, and

3.5.3 $E_{n}(f)=-\frac{b-a}{12}h^{2}f''(\xi),$ $a<\xi<b$.

If in addition $f$ is periodic, $f\in C^{k}(\mathbb{R})$, and the integral is taken over a period, then

3.5.4 $E_{n}(f)=O\left(h^{k}\right),$ $h\to 0$.

In particular, when $k=\infty$ the error term is an exponentially-small function of $1/h$, and in these circumstances the composite trapezoidal rule is exceptionally efficient. For an example see §3.5(ix).
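Both error behaviors are easy to observe numerically. The sketch below (illustrative Python, not part of the handbook) implements the composite rule (3.5.2), checks the $O(h^{2})$ error of (3.5.3) on a non-periodic integrand, and shows the exceptional accuracy on a smooth periodic integrand over a full period ($\int_0^{2\pi} e^{\cos t}\,dt = 2\pi I_0(1) \approx 7.9549265$):

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule (3.5.2) with h = (b - a)/n
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return h * s

# smooth but non-periodic integrand: halving h divides the error by ~4
exact = math.e - 1                        # integral of e^x over [0, 1]
e1 = abs(trapezoid(math.exp, 0, 1, 10) - exact)
e2 = abs(trapezoid(math.exp, 0, 1, 20) - exact)
print(e1 / e2)                            # close to 4, as (3.5.3) predicts

# periodic integrand over a full period: error exponentially small in 1/h
f = lambda t: math.exp(math.cos(t))
val = trapezoid(f, 0, 2 * math.pi, 20)
print(val)                                # 2*pi*I_0(1) = 7.954926521012846
```
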
Similar results hold for the trapezoidal rule in the form

3.5.5 $\int_{-\infty}^{\infty}f(t)\,dt=h\sum_{k=-\infty}^{\infty}f(kh)+E_{h}(f),$

with a function $f$ that is analytic in a strip containing $\mathbb{R}$. For further information and examples, see Goodwin (1949b). In Stenger (1993, Chapter 3) the rule (3.5.5) is considered in the framework of Sinc approximations (§3.3(vi)). See also Poisson’s summation formula (§1.8(iv)).

If $k$ in (3.5.4) is not arbitrarily large, and if odd-order derivatives of $f$ are known at the end points $a$ and $b$, then the composite trapezoidal rule can be improved by means of the Euler–Maclaurin formula (§2.10(i)). See Davis and Rabinowitz (1984, pp. 134–142) and Temme (1996b, p. 25).

# §3.5(ii) Simpson’s Rule

Let $h=\frac{1}{2}(b-a)$ and $f\in C^{4}[a,b]$. Then the elementary Simpson’s rule is

3.5.6 $\int_{a}^{b}f(x)\,dx=\tfrac{1}{3}h(f(a)+4f(\tfrac{1}{2}(a+b))+f(b))-\tfrac{1}{90}h^{5}f^{(4)}(\xi),$

where $a<\xi<b$. Now let $h=(b-a)/n$, $x_{k}=a+kh$, and $f_{k}=f(x_{k})$, $k=0,1,\dots,n$. Then the composite Simpson’s rule is

3.5.7 $\int_{a}^{b}f(x)\,dx=\tfrac{1}{3}h(f_{0}+4f_{1}+2f_{2}+4f_{3}+2f_{4}+\dots+4f_{n-1}+f_{n})+E_{n}(f),$

where $n$ is even and

3.5.8 $E_{n}(f)=-\frac{b-a}{180}h^{4}f^{(4)}(\xi),$ $a<\xi<b$.
Simpson’s rule can be regarded as a combination of two trapezoidal rules, one with step size $h$ and one with step size $h/2$ to refine the error term.

# §3.5(iii) Romberg Integration

Further refinements are achieved by Romberg integration. If $f\in C^{2m+2}[a,b]$, then the remainder $E_{n}(f)$ in (3.5.2) can be expanded in the form

3.5.9 $E_{n}(f)=c_{1}h^{2}+c_{2}h^{4}+\dots+c_{m}h^{2m}+O\left(h^{2m+2}\right),$

where $h=(b-a)/n$. As in Simpson’s rule, by combining the rule for $h$ with that for $h/2$, the first error term $c_{1}h^{2}$ in (3.5.9) can be eliminated. With the Romberg scheme successive terms $c_{1}h^{2},c_{2}h^{4},\dots$, in (3.5.9) are eliminated, according to the formula

3.5.10 $G_{k}(\tfrac{1}{2}h)=G_{k-1}(\tfrac{1}{2}h)+\frac{G_{k-1}(\frac{1}{2}h)-G_{k-1}(h)}{4^{k}-1},$ $k\geq 1$,

beginning with

3.5.11 $G_{0}(h)=h(\tfrac{1}{2}f_{0}+f_{1}+\dots+f_{n-1}+\tfrac{1}{2}f_{n}),$

although we may also start with the elementary rule with $G_{0}(h)=\frac{1}{2}h(f(a)+f(b))$ and $h=b-a$. To generate $G_{k}(h)$ the quantities $G_{0}(h),G_{0}(h/2),\dots,G_{0}(h/2^{k})$ are needed.
These can be found by means of the recursion

3.5.12 $G_{0}(\tfrac{1}{2}h)=\tfrac{1}{2}G_{0}(h)+\tfrac{1}{2}h\sum_{k=0}^{n-1}f\left(x_{0}+(k+\tfrac{1}{2})h\right),$

which depends on function values computed previously. If $f\in C^{2k+2}(a,b)$, then for $j,k=0,1,\dots$,

3.5.13 $\int_{a}^{b}f(x)\,dx-G_{k}\left(\frac{b-a}{2^{j}}\right)=-\frac{(b-a)^{2k+3}}{2^{k(k+1)}}\frac{4^{-j(k+1)}}{(2k+2)!}\left|B_{2k+2}\right|f^{(2k+2)}(\xi),$

for some $\xi\in(a,b)$. For the Bernoulli numbers $B_{m}$ see §24.2(i).

When $f\in C^{\infty}$, the Romberg method affords a means of obtaining high accuracy in many cases with a relatively simple adaptive algorithm. However, as illustrated by the next example, other methods may be more efficient.

# Example

With $J_{0}(t)$ denoting the Bessel function (§10.2(ii)) the integral

3.5.14 $\int_{0}^{\infty}e^{-pt}J_{0}(t)\,dt=\frac{1}{\sqrt{p^{2}+1}}$

is computed with $p=1$ on the interval $[0,30]$. Using (3.5.10) with $h=30/4=7.5$ we obtain $G_{7}(h)$ with 14 correct digits. About $2^{9}=512$ function evaluations are needed. (With the 20-point Gauss–Laguerre formula (§3.5(v)) the same precision can be achieved with 15 function evaluations.) With $j=2$ and $k=7$, the coefficient of the derivative $f^{(16)}(\xi)$ in (3.5.13) is found to be $(0.14\dots)\times 10^{-13}$.

See Davis and Rabinowitz (1984, pp. 440–441) for modifications of the Romberg method when the function $f$ is singular.

# §3.5(iv) Interpolatory Quadrature Rules

An interpolatory quadrature rule

3.5.15 $\int_{a}^{b}f(x)w(x)\,dx=\sum_{k=1}^{n}w_{k}f(x_{k})+E_{n}(f),$

with weight function $w(x)$, is one for which $E_{n}(f)=0$ whenever $f$ is a polynomial of degree $\leq n-1$.
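The Romberg scheme (3.5.10), with $G_{0}$ refined via the recursion (3.5.12), can be sketched in a few lines. This is an illustrative Python transcription with a fixed number of halvings, not the adaptive algorithm mentioned above:

```python
import math

def romberg(f, a, b, kmax):
    # start from the elementary trapezoidal rule G_0(h) with h = b - a,
    # halve the step via (3.5.12), and extrapolate via (3.5.10)
    h = b - a
    row = [0.5 * h * (f(a) + f(b))]                 # [G_0(h)]
    for _ in range(kmax):
        n = round((b - a) / h)                      # current panel count
        mid = sum(f(a + (k + 0.5) * h) for k in range(n))
        h *= 0.5
        new = [0.5 * row[0] + h * mid]              # G_0(h/2) by (3.5.12)
        for k in range(1, len(row) + 1):            # extrapolation (3.5.10)
            new.append(new[k - 1] + (new[k - 1] - row[k - 1]) / (4 ** k - 1))
        row = new
    return row[-1]                                  # highest-order estimate

print(romberg(math.exp, 0.0, 1.0, 5))  # e - 1 = 1.7182818284...
```

With only $2^5+1 = 33$ function values this already reaches near machine precision for smooth integrands, consistent with the $O(h^{2k+2})$ error in (3.5.13).
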
The nodes $x_{1},x_{2},\dots,x_{n}$ are prescribed, and the weights $w_{k}$ and error term $E_{n}(f)$ are found by integrating the product of the Lagrange interpolation polynomial of degree $n-1$ and $w(x)$.

If the extreme members of the set of nodes $x_{1},x_{2},\dots,x_{n}$ are the endpoints $a$ and $b$, then the quadrature rule is said to be closed. If the set $x_{1},x_{2},\dots,x_{n}$ lies in the open interval $(a,b)$, then the quadrature rule is said to be open.

Rules of closed type include the Newton–Cotes formulas such as the trapezoidal rules and Simpson’s rule. Examples of open rules are the Gauss formulas (§3.5(v)), the midpoint rule, and Fejér’s quadrature rule. For the latter $a=-1$, $b=1$, and the nodes $x_{k}$ are the extrema of the Chebyshev polynomial $T_{n}(x)$ (§3.11(ii) and §18.3). If we add $-1$ and $1$ to this set of $x_{k}$, then the resulting closed formula is the frequently-used Clenshaw–Curtis formula, whose weights are positive and given by

3.5.16 $w_{k}=\frac{g_{k}}{n}\left(1-\sum_{j=1}^{\left\lfloor n/2\right\rfloor}\frac{b_{j}}{4j^{2}-1}\cos\left(2jk\pi/n\right)\right),$

where $x_{k}=\cos\left(k\pi/n\right)$, $k=0,1,\ldots,n$, and

3.5.17 $g_{k}=\begin{cases}1,&k=0,n,\\ 2,&\text{otherwise},\end{cases}\quad b_{j}=\begin{cases}1,&j=\frac{1}{2}n,\\ 2,&\text{otherwise}.\end{cases}$

For further information, see Mason and Handscomb (2003, Chapter 8), Davis and Rabinowitz (1984, pp. 74–92), and Clenshaw and Curtis (1960). For detailed comparisons of the Clenshaw–Curtis formula with Gauss quadrature (§3.5(v)), see Trefethen (2008, 2011).
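The weight formula (3.5.16)–(3.5.17) can be transcribed directly. This illustrative Python sketch is the naive $O(n^{2})$ evaluation, not the FFT-based implementation used in practice; note that for $n=2$ it reproduces Simpson's rule on $[-1,1]$:

```python
import math

def clenshaw_curtis(n):
    # nodes x_k = cos(k*pi/n) and weights from (3.5.16)-(3.5.17)
    nodes, weights = [], []
    for k in range(n + 1):
        g = 1 if k in (0, n) else 2
        s = sum((1 if 2 * j == n else 2) / (4 * j * j - 1)
                * math.cos(2 * j * k * math.pi / n)
                for j in range(1, n // 2 + 1))
        nodes.append(math.cos(k * math.pi / n))
        weights.append(g / n * (1 - s))
    return nodes, weights

x, w = clenshaw_curtis(2)
print(w)   # [1/3, 4/3, 1/3] -> Simpson's rule on [-1, 1]

x, w = clenshaw_curtis(16)
approx = sum(wk * math.exp(xk) for xk, wk in zip(x, w))
print(approx)  # integral of e^x over [-1, 1] = e - 1/e = 2.3504023872...
```
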
# §3.5(v) Gauss Quadrature

Let $\{p_{n}\}$ denote the set of monic polynomials $p_{n}$ of degree $n$ (coefficient of $x^{n}$ equal to $1$) that are orthogonal with respect to a positive weight function $w$ on a finite or infinite interval $(a,b)$; compare §18.2(i). In Gauss quadrature (also known as Gauss–Christoffel quadrature) we use (3.5.15) with nodes $x_{k}$ the zeros of $p_{n}$, and weights $w_{k}$ given by

3.5.18 $w_{k}=\int_{a}^{b}\frac{p_{n}(x)}{(x-x_{k})p'_{n}(x_{k})}\,w(x)\,dx.$

The $w_{k}$ are also known as Christoffel coefficients or Christoffel numbers and they are all positive. The remainder is given by

3.5.19 $E_{n}(f)=\gamma_{n}f^{(2n)}(\xi)/(2n)!,$

where

3.5.20 $\gamma_{n}=\int_{a}^{b}p_{n}^{2}(x)w(x)\,dx,$

and $\xi$ is some point in $(a,b)$. As a consequence, the rule is exact for polynomials of degree $\leq 2n-1$.

In practical applications the weight function $w(x)$ is chosen to simulate the asymptotic behavior of the integrand as the endpoints are approached. For $C^{\infty}$ functions Gauss quadrature can be very efficient. In adaptive algorithms the evaluation of the nodes and weights may cause difficulties, unless exact values are known.

For the derivation of Gauss quadrature formulas see Gautschi (2004, pp. 22–32), Gil et al. (2007a, §5.3), and Davis and Rabinowitz (1984, §§2.7 and 3.6). Stroud and Secrest (1966) includes computational methods and extensive tables. For further extensions, applications, and computation of orthogonal polynomials and Gauss-type formulas, see Gautschi (1994, 1996, 2004). For effective testing of Gaussian quadrature rules see Gautschi (1983).

For the classical orthogonal polynomials related to the following Gauss rules, see §18.3.
The given quantities $\gamma_{n}$ follow from (18.2.5), (18.2.7), Table 18.3.1, and the relation $\gamma_{n}=h_{n}/k_{n}^{2}$.

# Gauss–Legendre Formula

3.5.21 $[a,b]=[-1,1],\quad w(x)=1,\quad \gamma_{n}=\frac{2^{2n+1}}{2n+1}\,\frac{(n!)^{4}}{((2n)!)^{2}}.$

The nodes $x_{k}$ and weights $w_{k}$ for $n=5$, $10$ are shown in Tables 3.5.1 and 3.5.2. The $p_{n}(x)$ are the monic Legendre polynomials, that is, the polynomials $P_{n}(x)$ (§18.3) scaled so that the coefficient of the highest power of $x$ in their explicit forms is unity.

# Gauss–Chebyshev Formula

3.5.22 $[a,b]=[-1,1],\quad w(x)=(1-x^{2})^{-1/2},\quad \gamma_{n}=\frac{\pi}{2^{2n-1}}.$

The nodes $x_{k}$ and weights $w_{k}$ are known explicitly:

3.5.23 $x_{k}=\cos\left(\frac{2k-1}{2n}\pi\right),\quad w_{k}=\frac{\pi}{n},$ $k=1,2,\dots,n$.
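Because the Gauss–Chebyshev nodes and weights (3.5.23) are explicit, the rule takes only a few lines. An illustrative Python check: with $n=5$ the rule is exact for polynomials of degree $\leq 2n-1 = 9$, so it reproduces $\int_{-1}^{1} x^{2}(1-x^{2})^{-1/2}\,dx = \pi/2$ exactly (up to rounding):

```python
import math

n = 5
# nodes and weights from (3.5.23); every weight equals pi/n
nodes = [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
w = math.pi / n

# integral of x^2 / sqrt(1 - x^2) over [-1, 1] is pi/2
approx = w * sum(x * x for x in nodes)
print(approx)  # pi/2 = 1.5707963...
```
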
In the case of Chebyshev weight functions $w(x)=(1-x)^{\alpha}(1+x)^{\beta}$ on $[-1,1]$, with $\left|\alpha\right|=\left|\beta\right|=\frac{1}{2}$, the nodes $x_{k}$, weights $w_{k}$, and error constant $\gamma_{n}$ are as follows:

3.5.24 $x_{k}=\cos\left(\frac{k}{n+1}\pi\right),\quad w_{k}=\frac{\pi}{n+1}\sin^{2}\left(\frac{k}{n+1}\pi\right),\quad \gamma_{n}=\frac{\pi}{2^{2n+1}},$ $\alpha=\beta=\tfrac{1}{2}$,

and

3.5.25 $x_{k}=\pm\cos\left(\frac{2k}{2n+1}\pi\right),\quad w_{k}=\frac{4\pi}{2n+1}\sin^{2}\left(\frac{k}{2n+1}\pi\right),\quad \gamma_{n}=\frac{\pi}{2^{2n}},$ $\alpha=-\beta=\pm\tfrac{1}{2}$,

where $k=1,2,\dots,n$.
# Gauss–Jacobi Formula

3.5.26 $[a,b]=[-1,1],\quad w(x)=(1-x)^{\alpha}(1+x)^{\beta},\quad \gamma_{n}=\frac{\Gamma(n+\alpha+1)\,\Gamma(n+\beta+1)\,\Gamma(n+\alpha+\beta+1)}{(2n+\alpha+\beta+1)(\Gamma(2n+\alpha+\beta+1))^{2}}\,2^{2n+\alpha+\beta+1}\,n!,$ $\alpha>-1$, $\beta>-1$.

The $p_{n}(x)$ are the monic Jacobi polynomials $P^{(\alpha,\beta)}_{n}(x)$ (§18.3).

# Gauss–Laguerre Formula

3.5.27 $[a,b)=[0,\infty),\quad w(x)=x^{\alpha}e^{-x},\quad \gamma_{n}=n!\,\Gamma(n+\alpha+1),$ $\alpha>-1$.

If $\alpha\neq 0$ this is called the generalized Gauss–Laguerre formula. The nodes $x_{k}$ and weights $w_{k}$ for $\alpha=0$ and $n=5$, $10$ are shown in Tables 3.5.6 and 3.5.7. The $p_{n}(x)$ are the monic Laguerre polynomials $L_{n}(x)$ (§18.3).

# Gauss–Hermite Formula

3.5.28 $(a,b)=(-\infty,\infty),\quad w(x)=e^{-x^{2}},\quad \gamma_{n}=\sqrt{\pi}\,\frac{n!}{2^{n}}.$

The nodes $x_{k}$ and weights $w_{k}$ for $n=5,10$ are shown in Tables 3.5.10 and 3.5.11. The $p_{n}(x)$ are the monic Hermite polynomials $H_{n}(x)$ (§18.3).
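As a small illustration of the exactness property (the rule is exact for degree $\leq 2n-1$), the sketch below uses the classical 3-point Gauss–Hermite rule. The nodes $0,\pm\sqrt{3/2}$ and weights $2\sqrt{\pi}/3$, $\sqrt{\pi}/6$ are standard tabulated values (they are not given in the text here, so treat them as assumed inputs); the rule reproduces the moments of $e^{-x^{2}}$ up to degree 5:

```python
import math

# classical 3-point Gauss-Hermite rule (standard values; cf. Tables 3.5.10-3.5.11)
nodes = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]
weights = [math.sqrt(math.pi) / 6, 2 * math.sqrt(math.pi) / 3,
           math.sqrt(math.pi) / 6]

def gh(f):
    # approximate the integral of f(x) * exp(-x^2) over the real line
    return sum(w * f(x) for x, w in zip(nodes, weights))

sp = math.sqrt(math.pi)
print(gh(lambda x: 1.0) / sp)      # 1    (moment integral = sqrt(pi))
print(gh(lambda x: x * x) / sp)    # 0.5  (moment integral = sqrt(pi)/2)
print(gh(lambda x: x ** 4) / sp)   # 0.75 (moment integral = 3*sqrt(pi)/4)
```
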
# Gauss Formula for a Logarithmic Weight Function

3.5.29 $[a,b]=[0,1],\quad w(x)=\ln\left(1/x\right).$

The nodes $x_{k}$ and weights $w_{k}$ for $n=5,10$ are shown in Tables 3.5.14 and 3.5.15.

# §3.5(vi) Eigenvalue/Eigenvector Characterization of Gauss Quadrature Formulas

All the monic orthogonal polynomials $\{p_{n}\}$ used with Gauss quadrature satisfy a three-term recurrence relation (§18.2(iv)):

3.5.30 $p_{k+1}(x)=(x-\alpha_{k})p_{k}(x)-\beta_{k}p_{k-1}(x),$ $k=0,1,\dots$,

with $\beta_{k}>0$, $p_{-1}(x)=0$, and $p_{0}(x)=1$. The Gauss nodes $x_{k}$ (the zeros of $p_{n}$) are the eigenvalues of the (symmetric tridiagonal) Jacobi matrix of order $n\times n$:

3.5.31 $\mathbf{J}_{n}=\begin{bmatrix}\alpha_{0}&\sqrt{\beta_{1}}&&&0\\ \sqrt{\beta_{1}}&\alpha_{1}&\sqrt{\beta_{2}}&&\\ &\ddots&\ddots&\ddots&\\ &&\sqrt{\beta_{n-2}}&\alpha_{n-2}&\sqrt{\beta_{n-1}}\\ 0&&&\sqrt{\beta_{n-1}}&\alpha_{n-1}\end{bmatrix}.$

Let $\mathbf{v}_{k}$ denote the normalized eigenvector of $\mathbf{J}_{n}$ corresponding to the eigenvalue $x_{k}$. Then the weights are given by

3.5.32 $w_{k}=\beta_{0}v_{k,1}^{2},$ $k=1,2,\dots,n$,

where $\beta_{0}=\int_{a}^{b}w(x)\,dx$ and $v_{k,1}$ is the first element of $\mathbf{v}_{k}$.
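This eigenvalue characterization (the Golub–Welsch approach) is straightforward to sketch for the Legendre case, assuming the standard recurrence coefficients for the monic Legendre polynomials, $\alpha_{k}=0$ and $\beta_{k}=k^{2}/(4k^{2}-1)$ with $\beta_{0}=2$ (these come from §18.9(i), not from the text above). Illustrative Python using NumPy's symmetric eigensolver:

```python
import numpy as np

def gauss_legendre(n):
    # Jacobi matrix (3.5.31) for monic Legendre polynomials:
    # alpha_k = 0, beta_k = k^2 / (4k^2 - 1)  (assumed standard values)
    k = np.arange(1, n)
    b = np.sqrt(k * k / (4.0 * k * k - 1.0))
    J = np.diag(b, 1) + np.diag(b, -1)
    x, v = np.linalg.eigh(J)          # eigenvalues = nodes
    w = 2.0 * v[0, :] ** 2            # weights (3.5.32), beta_0 = 2
    return x, w

x, w = gauss_legendre(5)
print(w.sum())              # 2.0: the rule integrates f = 1 exactly
print((w * x ** 8).sum())   # 2/9: exact, since deg 8 <= 2n - 1 = 9
```
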
Also, the error constant (3.5.20) is given by

3.5.33 $\gamma_{n}=\beta_{0}\beta_{1}\cdots\beta_{n}.$

Tables 3.5.1–3.5.13 can be verified by application of the results given in the present subsection. In these cases the coefficients $\alpha_{k}$ and $\beta_{k}$ are obtainable explicitly from results given in §18.9(i).

# §3.5(vii) Oscillatory Integrals

Integrals of the form

3.5.34 $\int_{a}^{b}f(x)\cos\left(\omega x\right)dx,\quad \int_{a}^{b}f(x)\sin\left(\omega x\right)dx,$

can be computed by Filon’s rule. See Davis and Rabinowitz (1984, pp. 146–168). Oscillatory integral transforms are treated in Wong (1982) by a method based on Gaussian quadrature. A comparison of several methods, including an extension of the Clenshaw–Curtis formula (§3.5(iv)), is given in Evans and Webster (1999).

For computing infinite oscillatory integrals, Longman’s method may be used. The integral is written as an alternating series of positive and negative subintegrals that are computed individually; see Longman (1956). Convergence acceleration schemes, for example Levin’s transformation (§3.9(v)), can be used when evaluating the series. Further methods are given in Clendenin (1966) and Lyness (1985).

For a comprehensive survey of quadrature of highly oscillatory integrals, including multidimensional integrals, see Iserles et al. (2006).

# §3.5(viii) Complex Gauss Quadrature

For the Bromwich integral

3.5.35 $I(f)=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}e^{\zeta}\zeta^{-s}f(\zeta)\,d\zeta,$ $s>0$, $c>c_{0}>0$,

a complex Gauss quadrature formula is available.
Here $f(\zeta)$ is assumed analytic in the half-plane $\Re\zeta>c_{0}$ and bounded as $\zeta\to\infty$ in $\left|\operatorname{ph}\zeta\right|\leq\frac{1}{2}\pi$. The quadrature rule for (3.5.35) is

3.5.36 $I(f)=\sum_{k=1}^{n}w_{k}f(\zeta_{k})+E_{n}(f),$

where $E_{n}(f)=0$ if $f(\zeta)$ is a polynomial of degree $\leq 2n-1$ in $1/\zeta$. Complex orthogonal polynomials $p_{n}(1/\zeta)$ of degree $n=0,1,2,\dots$, in $1/\zeta$ that satisfy the orthogonality condition

3.5.37 $\int_{c-i\infty}^{c+i\infty}e^{\zeta}\zeta^{-s}p_{k}(1/\zeta)p_{\ell}(1/\zeta)\,d\zeta=0,$ $k\neq\ell$,

are related to Bessel polynomials (§§10.49(ii) and 18.34). The complex Gauss nodes $\zeta_{k}$ have positive real part for all $s>0$. The nodes and weights of the 5-point complex Gauss quadrature formula (3.5.36) for $s=1$ are shown in Table 3.5.18. Extensive tables of quadrature nodes and weights can be found in Krylov and Skoblya (1985).

# Example. Laplace Transform Inversion

From §1.14(iii)

3.5.38 $G(p)=\int_{0}^{\infty}e^{-pt}g(t)\,dt,$

3.5.39 $g(t)=\frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}e^{tp}G(p)\,dp,$

with appropriate conditions. The pair

3.5.40 $g(t)=J_{0}(t),\quad G(p)=\frac{1}{\sqrt{p^{2}+1}},$

where $J_{0}(t)$ is the Bessel function (§10.2(ii)), satisfy these conditions, provided that $\sigma>0$. The integral (3.5.39) has the form (3.5.35) if we set $\zeta=tp$, $c=t\sigma$, and $f(\zeta)=t^{-1}\zeta^{s}G(\zeta/t)$. We choose $s=1$ so that $f(\zeta)=O(1)$ at infinity.
Equation (3.5.36), without the error term, becomes

3.5.41 $g(t)=\sum_{k=1}^{n}\frac{w_{k}\zeta_{k}}{\sqrt{\zeta_{k}^{2}+t^{2}}},$

approximately. Using Table 3.5.18 we compute $g(t)$ for $n=5$. The results are given in the middle column of Table 3.5.19, accompanied by the actual 10D values in the last column. Agreement is very good for small values of $t$, but not for larger values. For these cases the integration path may need to be deformed; see §3.5(ix).

# §3.5(ix) Other Contour Integrals

A frequent problem with contour integrals is heavy cancellation, which occurs especially when the value of the integral is exponentially small compared with the maximum absolute value of the integrand. To avoid cancellation we try to deform the path to pass through a saddle point in such a way that the maximum contribution of the integrand is derived from the neighborhood of the saddle point. For example, steepest descent paths can be used; see §2.4(iv).

# Example

In (3.5.35) take $s=1$ and $f(\zeta)=e^{-2\lambda\sqrt{\zeta}}$, with $\lambda>0$. When $\lambda$ is large the integral becomes exponentially small, and application of the quadrature rule of §3.5(viii) is useless. In fact from (7.14.4) and the inversion formula for the Laplace transform (§1.14(iii)) we have

3.5.42 $\operatorname{erfc}\lambda=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}e^{\zeta-2\lambda\sqrt{\zeta}}\frac{d\zeta}{\zeta},$ $c>0$,

where $\operatorname{erfc}z$ is the complementary error function, and from (7.12.1) it follows that

3.5.43 $\operatorname{erfc}\lambda\sim\frac{e^{-\lambda^{2}}}{\sqrt{\pi}\,\lambda},$ $\lambda\to\infty$.
With the transformation $\zeta=\lambda^{2}t$, (3.5.42) becomes

3.5.44 $\operatorname{erfc}\lambda=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}e^{\lambda^{2}(t-2\sqrt{t})}\,\frac{dt}{t},$ $c>0$,

with saddle point at $t=1$, and when $c=1$ the vertical path intersects the real axis at the saddle point. The steepest descent path is given by $\Im(t-2\sqrt{t})=0$, or in polar coordinates $t=re^{i\theta}$ we have $r=\sec^{2}\left(\tfrac{1}{2}\theta\right)$. Thus

3.5.45 $\operatorname{erfc}\lambda=\frac{e^{-\lambda^{2}}}{2\pi}\int_{-\pi}^{\pi}e^{-\lambda^{2}\tan^{2}\left(\frac{1}{2}\theta\right)}\,d\theta.$

The integrand can be extended as a periodic $C^{\infty}$ function on $\mathbb{R}$ with period $2\pi$, and as noted in §3.5(i), the trapezoidal rule is exceptionally efficient in this case. Table 3.5.20 gives the results of applying the composite trapezoidal rule (3.5.2) with step size $h$; $n$ indicates the number of function values in the rule that are larger than $10^{-15}$ (we exploit the fact that the integrand is even). All digits shown in the approximation in the final row are correct.

A second example is provided in Gil et al. (2001), where the method of contour integration is used to evaluate Scorer functions of complex argument (§9.12). See also Gil et al. (2003b).

If $f$ is meromorphic, with poles near the saddle point, then the foregoing method can be modified. A special case is the rule for Hilbert transforms (§1.14(v)):

3.5.46 $\mathcal{H}(f;x)=\frac{1}{\pi}\,\mathrm{pv}\!\int_{-\infty}^{\infty}\frac{f(t)}{t-x}\,dt,$ $x\in\mathbb{R}$,

where the integral is the Cauchy principal value. See Kress and Martensen (1970). Other contour integrals occur in standard integral transforms or their inverses, for example, Hankel transforms (§10.22(v)), Kontorovich–Lebedev transforms (§10.43(v)), and Mellin transforms (§1.14(iv)).
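The efficiency of the trapezoidal rule on (3.5.45) is easy to check numerically. The sketch below (Python; not part of the DLMF text, and the function name is mine) applies the composite trapezoidal rule to the periodic integrand and compares against the library `erfc`:

```python
import math

def erfc_trapezoid(lam, h=0.05):
    """Approximate erfc(lam) via (3.5.45):
    erfc(lam) = exp(-lam^2)/(2*pi) * int_{-pi}^{pi} exp(-lam^2 tan(theta/2)^2) dtheta,
    using the composite trapezoidal rule with step size roughly h."""
    n = int(round(2 * math.pi / h))   # number of subintervals on [-pi, pi]
    h = 2 * math.pi / n               # adjust h so the nodes hit both endpoints
    total = 0.0
    for k in range(n + 1):
        theta = -math.pi + k * h
        t = math.tan(0.5 * theta)
        # endpoint weight 1/2; near theta = +-pi the integrand underflows to 0
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.exp(-lam * lam * t * t)
    return math.exp(-lam * lam) / (2 * math.pi) * h * total

# The periodic C-infinity integrand makes the trapezoidal rule converge
# faster than any power of h, as noted in Sec. 3.5(i).
print(erfc_trapezoid(2.0) - math.erfc(2.0))  # difference is tiny
```

With a step near $h=0.05$ (about 126 function values) the result already agrees with `math.erfc` to many digits, in line with the behavior reported for Table 3.5.20.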
# §3.5(x) Cubature Formulas Table 3.5.21 supplies cubature rules, including weights $w_{j}$, for the disk $D$, given by $x^{2}+y^{2}\leq h^{2}$: 3.5.47 $\frac{1}{\pi h^{2}}\iint_{D}f(x,y)dxdy=\sum_{j=1}^{n}w_{j}f(x_{j},y_{j})+R,$ and the square $S$, given by $\left|x\right|\leq h$, $\left|y\right|\leq h$: 3.5.48 $\frac{1}{4h^{2}}\iint_{S}f(x,y)dxdy=\sum_{j=1}^{n}w_{j}f(x_{j},y_{j})+R.$ For these results and further information on cubature formulas see Cools (2003). For integrals in higher dimensions, Monte Carlo methods are another—often the only—alternative. The standard Monte Carlo method samples points uniformly from the integration region to estimate the integral and its error. In more advanced methods points are sampled from a probability distribution, so that they are concentrated in regions that make the largest contribution to the integral. With $N$ function values, the Monte Carlo method aims at an error of order $1/\sqrt{N}$, independently of the dimension of the domain of integration. See Davis and Rabinowitz (1984, pp. 384–417) and Schürer (2004).
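The $1/\sqrt{N}$ scaling of the standard Monte Carlo method described above can be illustrated on the normalized disk integral (3.5.47). This sketch (Python; the function name and the rejection-sampling choice are mine, not from the text) samples points uniformly from the disk:

```python
import random

def mc_disk_mean(f, h=1.0, n=100_000, seed=1):
    """Monte Carlo estimate of (1/(pi*h^2)) * iint_D f(x,y) dx dy,
    sampling uniformly from the disk x^2 + y^2 <= h^2 by rejection."""
    rng = random.Random(seed)
    total = 0.0
    accepted = 0
    while accepted < n:
        x = rng.uniform(-h, h)
        y = rng.uniform(-h, h)
        if x * x + y * y <= h * h:   # keep only points inside the disk
            total += f(x, y)
            accepted += 1
    return total / n

# For f(x,y) = x^2 + y^2 on the unit disk the exact normalized integral is 1/2.
est = mc_disk_mean(lambda x, y: x * x + y * y)
print(est)  # close to 0.5, with statistical error of order 1/sqrt(n)
```

With $N = 10^5$ samples the statistical error is of order $10^{-3}$, and it shrinks like $1/\sqrt{N}$ regardless of the dimension of the domain.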
https://mathhelpboards.com/threads/convergence-of-methods.8057/
# Convergence of methods

#### evinda ##### Well-known member MHB Site Helper Hello! Could you tell me why both the Gauss-Seidel and the Jacobi method converge when we apply them to the tridiagonal matrix with the number 4 on the main diagonal and the number 1 on the first diagonal above the main and also the number 1 on the first diagonal under the main diagonal, but do not converge when we apply them to the Hilbert matrix and also to the tridiagonal matrix with the number 2 on the main diagonal and the number -1 on the 2 other diagonals?

#### evinda ##### Well-known member MHB Site Helper Does the fact that the methods do not converge for the Hilbert matrix have to do with the fact that this matrix is ill-conditioned? And what about the other two matrices?

#### Klaas van Aarsen ##### MHB Seeker Staff member Hello! Could you tell me why both the Gauss-Seidel and the Jacobi method converge when we apply them to the tridiagonal matrix with the number 4 on the main diagonal and the number 1 on the first diagonal above the main and also the number 1 on the first diagonal under the main diagonal, but do not converge when we apply them to the Hilbert matrix and also to the tridiagonal matrix with the number 2 on the main diagonal and the number -1 on the 2 other diagonals? Does the fact that the methods do not converge for the Hilbert matrix have to do with the fact that this matrix is ill-conditioned? And what about the other two matrices? Hi! From the Gauss-Seidel method on wiki we can see that Gauss-Seidel is known to converge if either:
• A is symmetric positive-definite,
• A is strictly or irreducibly diagonally dominant.
The Gauss–Seidel method sometimes converges even if these conditions are not satisfied. Note that it will still only converge if the rounding errors made by the computer are within small enough bounds. Your 4-1 band matrix is easily strictly diagonally dominant, so it converges.
Your 2-(-1) band matrix is not strictly diagonally dominant, so it is an edge case. Combined with large enough matrices and rounding errors, it will probably diverge. I checked that with a 4x4 matrix A and some choices for b and the initial guess, Gauss-Seidel was actually convergent. Your Hilbert matrix is symmetric positive-definite, so it will converge if the matrix is small enough. Thanks to Opalg we already know that large Hilbert matrices have a determinant of near-zero, making it ill-conditioned, meaning that the rounding errors will become too large if the matrix is large enough. Another check with a 4x4 Hilbert matrix and some choices for b and the initial guess showed that Gauss-Seidel was convergent. #### evinda ##### Well-known member MHB Site Helper Hi! From the Gauss-Seidel method on wiki we can see that Gauss-Seidel is known to converge if either: • A is symmetric positive-definite, • A is strictly or irreducibly diagonally dominant. The Gauss–Seidel method sometimes converges even if these conditions are not satisfied. Note that it will still only converge if the rounding errors made by the computer are within small enough bounds. Your 4-1 band matrix is easily strictly diagonally dominant, so it converges. Your 2-(-1) band matrix is not strictly diagonally dominant, so it is an edge case. Combined with large enough matrices and rounding errors, it will probably diverge. I checked that with a 4x4 matrix A and some choices for b and the initial guess, Gauss-Seidel was actually convergent. Your Hilbert matrix is symmetric positive-definite, so it will converge if the matrix is small enough. Thanks to Opalg we already know that large Hilbert matrices have a determinant of near-zero, making it ill-conditioned, meaning that the rounding errors will become too large if the matrix is large enough. Another check with a 4x4 Hilbert matrix and some choices for b and the initial guess showed that Gauss-Seidel was convergent. 
Is the 4-1 band matrix also symmetric positive-definite or not?? Also, do these two methods converge/diverge for the same reasons or for different ones?

#### Klaas van Aarsen ##### MHB Seeker Staff member Is the 4-1 band matrix also symmetric positive-definite or not?? Also, do these two methods converge/diverge for the same reasons or for different ones? Yes, the 4-1 band matrix is positive definite, since it is symmetric, strictly diagonally dominant, and has positive real entries on its diagonal (see wiki). I'd say that the other 2 matrices diverge for different reasons. The Hilbert matrix is supposed to converge, but big matrices won't due to being ill-conditioned. The 2-(-1) band matrix is an edge case for Gauss-Seidel to begin with. It's conditioned well enough, but certainly with rounding errors it may not converge.

#### evinda ##### Well-known member MHB Site Helper Yes, the 4-1 band matrix is positive definite, since it is symmetric, strictly diagonally dominant, and has positive real entries on its diagonal (see wiki). I'd say that the other 2 matrices diverge for different reasons. The Hilbert matrix is supposed to converge, but big matrices won't due to being ill-conditioned. The 2-(-1) band matrix is an edge case for Gauss-Seidel to begin with. It's conditioned well enough, but certainly with rounding errors it may not converge. So, the Jacobi method won't converge for the 2-(-1) band matrix because it is not strictly dominant?? And why could the Gauss-Seidel method converge for it?

#### evinda ##### Well-known member MHB Site Helper Also, is the Hilbert matrix strictly dominant, so that the Jacobi method would converge, but it doesn't because of the fact that the matrix is ill-conditioned?? Or isn't it strictly dominant?

#### Klaas van Aarsen ##### MHB Seeker Staff member So, the Jacobi method won't converge for the 2-(-1) band matrix because it is not strictly dominant?? And why could the Gauss-Seidel method converge for it?
It is more accurate to say that the 2-(-1) band matrix might converge for both the Jacobi and the Gauss-Seidel method, but based on the strictly diagonally dominant criterion, it is not known to. Furthermore, the 2-(-1) band matrix is not symmetric positive-definite, so Gauss-Seidel is not known to be convergent based on that criterion either. However, I believe the iterative matrix of the 2-(-1) band matrix ($D^{-1}R$) has a spectral radius smaller than 1, making the Jacobi method convergent.

#### evinda ##### Well-known member MHB Site Helper It is more accurate to say that the 2-(-1) band matrix might converge for both the Jacobi and the Gauss-Seidel method, but based on the strictly diagonally dominant criterion, it is not known to. Furthermore, the 2-(-1) band matrix is not symmetric positive-definite, so Gauss-Seidel is not known to be convergent based on that criterion either. However, I believe the iterative matrix of the 2-(-1) band matrix ($D^{-1}R$) has a spectral radius smaller than 1, making the Jacobi method convergent. Yes, I also noticed that the iterative matrix of the 2-(-1) band matrix ($D^{-1}R$) has a spectral radius smaller than 1 for all the dimensions I checked, but I get as a result that the methods do not converge... Why does this happen??

#### Klaas van Aarsen ##### MHB Seeker Staff member Also, is the Hilbert matrix strictly dominant, so that the Jacobi method would converge, but it doesn't because of the fact that the matrix is ill-conditioned?? Or isn't it strictly dominant? The Hilbert matrix is not strictly diagonally dominant. You may want to look up the definition if you want to use it in your work. Yes, I also noticed that the iterative matrix of the 2-(-1) band matrix ($D^{-1}R$) has a spectral radius smaller than 1 for all the dimensions I checked, but I get as a result that the methods do not converge... Why does this happen?? Is it perhaps convergent for matrices, say, up to 10x10?
#### evinda ##### Well-known member MHB Site Helper The Hilbert matrix is not strictly diagonally dominant. You may want to look up the definition if you want to use it in your work. So, doesn't the Jacobi method converge because of the fact that the matrix is not strictly dominant and because it is also ill-conditioned... Or for another reason? Is it perhaps convergent for matrices, say, up to 10x10? No, I have checked it for 50<=dimension<=1000... and I found the spectral radius p: 0.9962<=p<=1... But the methods do not converge

#### Klaas van Aarsen ##### MHB Seeker Staff member No, I have checked it for 50<=dimension<=1000... and I found the spectral radius p: 0.9962<=p<=1... But the methods do not converge Note that the spectral radius approaches 1 when the matrix becomes large enough. At that point the Jacobi method will converge very slowly. Then even small rounding errors will throw the method off to diverge. Perhaps you want to try it for dimensions up to 10? You might want to do the same thing for the Hilbert matrix.

#### evinda ##### Well-known member MHB Site Helper Note that the spectral radius approaches 1 when the matrix becomes large enough. At that point the Jacobi method will converge very slowly. Then even small rounding errors will throw the method off to diverge. Perhaps you want to try it for dimensions up to 10? You might want to do the same thing for the Hilbert matrix. I think I am not able to do this, because I haven't been taught how to do this in Matlab. So, if I am asked to tell why the methods do not converge, is the right answer that they would converge because the spectral radius is smaller than or equal to one, but they don't because of the rounding errors?? Or because of the precision of the digits??

#### Klaas van Aarsen ##### MHB Seeker Staff member I think I am not able to do this, because I haven't been taught how to do this in Matlab. Huh? I do not understand. If you can do it for 50 and 1000, it seems to me you should also be able to do it for 10.
So, if I am asked to tell why the methods do not converge, is the right answer that they would converge because the spectral radius is smaller than or equal to one, but they don't because of the rounding errors?? Or because of the precision of the digits?? Yes. But note that for the Jacobi method, the Hilbert matrix diverges because the spectral radius of the iteration matrix is larger than 1. Btw, it is the lack of precision of the digits that is the reason for the rounding errors. If we had infinitely many significant digits, there wouldn't be any rounding errors.

#### Klaas van Aarsen ##### MHB Seeker Staff member Here's my result for n=5 for the Hilbert matrix.

#### evinda ##### Well-known member MHB Site Helper Huh? I do not understand. If you can do it for 50 and 1000, it seems to me you should also be able to do it for 10. Yes. But note that for the Jacobi method, the Hilbert matrix diverges because the spectral radius of the iteration matrix is larger than 1. Btw, it is the lack of precision of the digits that is the reason for the rounding errors. If we had infinitely many significant digits, there wouldn't be any rounding errors. Oh, sorry!!! I thought you meant that I should change the precision of the digits... So, the Hilbert matrix diverges because the spectral radius of the iteration matrix is larger than 1, and the 2-(-1) band matrix because of the rounding errors?? - - - Updated - - - Here's my result for n=5 for the Hilbert matrix. View attachment 1748 I found the max eigenvalue 3.4441 using the Jacobi method, and 1.0000 using the Gauss-Seidel method...

#### Klaas van Aarsen ##### MHB Seeker Staff member Oh, sorry!!! I thought you meant that I should change the precision of the digits... That explains it. So, the Hilbert matrix diverges because the spectral radius of the iteration matrix is larger than 1, and the 2-(-1) band matrix because of the rounding errors?? That would be for the Jacobi method, yes.
Now that I have made the calculations for the Hilbert matrix with the Gauss-Seidel method, I can see that the iteration matrix for the Hilbert matrix has a spectral radius approaching 1, meaning it also diverges due to rounding errors. I found the max eigenvalue 3.4441 using the Jacobi method, and 1.0000 using the Gauss-Seidel method... Looks like the same result then!

#### evinda ##### Well-known member MHB Site Helper That explains it. That would be for the Jacobi method, yes. Now that I have made the calculations for the Hilbert matrix with the Gauss-Seidel method, I can see that the iteration matrix for the Hilbert matrix has a spectral radius approaching 1, meaning it also diverges due to rounding errors. Looks like the same result then! I also found that the spectral radius is approaching 1 with Gauss-Seidel, and >= 42.6769 with the Jacobi method... Have you found the same result?

#### Klaas van Aarsen ##### MHB Seeker Staff member I also found that the spectral radius is approaching 1 with Gauss-Seidel, and >= 42.6769 with the Jacobi method... Have you found the same result? That's what I have for a dimension of 50 for the Hilbert matrix. With dimension 500, I get 1 and 435.661, respectively.

#### Klaas van Aarsen ##### MHB Seeker Staff member Btw, it still only means that the methods might diverge. It still depends on the actual b vector and the initial guess what will happen.
The best we can say is that they are not guaranteed to converge. Only the 4-1 band matrix is guaranteed to converge. For the 4-1 band matrix I get a spectral radius of 0.249 for Gauss-Seidel and 0.499 for Jacobi. I understand!!!!Thank you very much!!!!!!!!!
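The spectral-radius checks discussed in this thread can be reproduced with a short script. This is a sketch (NumPy assumed; the function name and matrix constructions are mine), computing the spectral radii of the Jacobi and Gauss-Seidel iteration matrices for the three matrices under discussion:

```python
import numpy as np

def jacobi_gs_spectral_radii(A):
    """Spectral radii of the Jacobi iteration matrix D^{-1}(L+U) and the
    Gauss-Seidel iteration matrix (D-L)^{-1}U, where A = D - L - U
    (D diagonal, L strictly lower triangular, U strictly upper triangular)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    rho = lambda M: max(abs(np.linalg.eigvals(M)))
    return rho(np.linalg.solve(D, L + U)), rho(np.linalg.solve(D - L, U))

n = 50
ones = np.ones(n - 1)
band41 = 4 * np.eye(n) + np.diag(ones, 1) + np.diag(ones, -1)    # 4-1 band matrix
band2m1 = 2 * np.eye(n) - np.diag(ones, 1) - np.diag(ones, -1)   # 2-(-1) band matrix
hilbert = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)

for name, A in [("4-1", band41), ("2-(-1)", band2m1), ("Hilbert", hilbert)]:
    rj, rg = jacobi_gs_spectral_radii(A)
    print(name, rj, rg)
```

For n = 50 this should approximately reproduce the values quoted in the thread: both radii well below 1 for the 4-1 matrix (near 0.499 and 0.249), a Jacobi radius just under 1 for the 2-(-1) matrix, and for the Hilbert matrix a Jacobi radius far above 1 while the Gauss-Seidel radius hugs 1.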
https://chemistry.stackexchange.com/questions/110670/can-we-combine-two-reactions-and-then-calculate-the-equilibrium-concentrations?noredirect=1
# Can we combine two reactions and then calculate the equilibrium concentrations?

At a certain temperature, $$\ce{N2O5}$$ dissociates as: \begin{align} \ce{N2O5 (g) &<=> N2O3 (g) + O2 (g)} & K_1 &= 4.5 \end{align} At the same time, $$\ce{N2O3}$$ also dissociates as: \begin{align} \ce{N2O3 (g) &<=> N2O (g) + O2 (g)} & K_2 &= \mathrm{?} \end{align} When 4 moles of $$\ce{N2O5}$$ are heated in a flask of $$\pu{1 L}$$ volume, it has been observed that the concentration of $$\ce{O2}$$ is $$\pu{4.5 M}$$ at equilibrium. What are the equilibrium concentrations of the other products?

The combined reaction: $$\ce{ N2O5(g) <=> N2O(g) + 2O2(g)}$$ $$K_\text{combined} = K_1 \times K_2 = 4.5 K_2$$ There are 4 moles of $$\ce{N2O5}$$ initially. If $$x$$ moles of $$\ce{N2O5}$$ dissociate to give $$x$$ moles of $$\ce{N2O}$$ and $$2x$$ moles of $$\ce{O2}$$, then plugging the values into the $$K_c$$ expression for this reaction we get $$4.5 K_2 = \frac{4x^3}{4-x},$$ where $$x = \frac{4.5}{2}\,\pu{mol}.$$ Solving for $$K_2$$, I get $$K_2=\frac{81}{14}$$, which is incorrect. Is it allowed to first combine the reactions and then use the combined reaction?

• I think there is sufficient information about where the OP's attempt to solve this went awry. It might help to see the "combined reaction", though. And maybe which species are in the flask after equilibrium is established. – Karsten Theis Mar 9 at 15:25
• The misconception is the same as in this question. If there are two coupled equilibria, e.g. A reacts to B reacts to C, B might be non-zero, so you can't just combine the reactions to A reacts to C and neglect that there might be some B at equilibrium. I think this question merits an answer. – Karsten Theis Mar 15 at 15:48
• Does this mean that to apply stoichiometry to an equilibrium problem it is necessary for the reactants to directly convert to products (elementary reaction)? – Sanom Dane Mar 15 at 15:59
• You can, but you have to account for it.
You can't say "x moles of N2O5 dissociate to give x moles of N2O", you have to say "x moles of N2O5 dissociate to give x moles of N2O3 or N2O", and use additional information to figure out how much is N2O3 and how much is N2O. – Karsten Theis Mar 15 at 16:46 Different from Zhe's answer, I will use the approach of a combined ICE table using the model discussed in How can you use ICE tables to solve multiple coupled equilibria?. This results in a system with two unknowns that, in my opinion, gives a bit more insight into the problem as we are solving it. $$x$$ are changes due to dissociation of $$\ce{N2O5}$$ (first reaction), and $$y$$ are changes due to dissociation of $$\ce{N2O3}$$ (second reaction). $$\begin{array}{|c|c|c|c|c|} \hline &[\ce{N2O5}] & [\ce{N2O3}] & [\ce{N2O}]&[\ce{O2}] \\ \hline I & 4 & 0 & 0 & 0 \\ \hline C & -x & +x-y & +y & +x+y \\ \hline E & 4-x & +x-y & +y & 4.5 \\ \hline \end{array}$$ I have two unknowns, $$x$$ and $$y$$. Unless one of them is much bigger than the other (in which case I can first neglect the smaller one and come back to it later), I have to solve a system of two equations for $$x$$ and $$y$$ simultaneously. 
The first equation is already buried in the ICE table (subscript $$eq$$ is for equilibrium state): $$x + y = \ce{[O2]}_{eq} = 4.5 \ \ \ \ \text{or solved for y: }\ \ \ \ y = 4.5 - x$$ The second equation is via the equilibrium constant $$K_1$$: $$K_1 = 4.5 = \frac{[\ce{N2O3}]_{eq} [\ce{O2}]_{eq}}{[\ce{N2O5}]_{eq}} = \frac{[\ce{N2O3}]_{eq} \times 4.5}{[\ce{N2O5}]_{eq}}$$ Canceling 4.5 and rearranging gives $$[\ce{N2O5}]_{eq} = [\ce{N2O3}]_{eq}$$, and substituting from the ICE table gives a second equation for x and y: $$4-x = x - y\ \ \ \ \ \ \ \ \text{(or simplified } 2x = 4 + y)$$ Substituting the first equation into the second, we get: $$2x = 4 + y = 4 + 4.5 - x$$ $$x = 8.5/3 = 17/6$$ Substituting the value for $$x$$ back into the first equation: $$y = 4.5 - x = 4.5 - 17/6 = 10/6 = 5/3$$ Now we can tabulate the equilibrium concentrations and (to compare with Zhe's answer) the equilibrium constant $$K_2$$: $$c(\ce{N2O5})_{eq} = 7/6 M = 1.167 M$$ $$c(\ce{N2O3})_{eq} = 7/6 M = 1.167 M$$ $$c(\ce{N2O})_{eq} = 5/3 M = 1.667 M$$ $$K_2 = \frac{[\ce{N2O}]_{eq} [\ce{O2}]_{eq}}{[\ce{N2O3}]_{eq}} = \frac{5/3 \times 4.5}{7/6} = \frac{45}{7}$$ We can check that the equilibrium concentrations add up to the initial concentration of 4 M, and that we obtain the correct value for $$K_1$$ when substituting our answers into the equilibrium expression. Is it allowed to first combine the reactions and then use the combined reaction? Yes, but we have to consider all species simultaneously because none of them are minor species. When calculating, for example, hydronium and hydroxide concentrations in a solution of a weak acid, we can set aside the autodissociation of water because hydroxide is a minor species, and adjusting the hydroxide concentration after calculating the weak acid equilibrium is fine because it does not change the hydronium concentration much, hydroxide being the minor species. 
Here, we can't do that and have to take the more comprehensive approach of considering the two equilibria simultaneously. • One suggestion is actually to have 2 rows in the ICE table for the changes. This way, you can write separately the changes for each equilibrium to draw a better analogy to the single equilibrium problem. – Zhe Mar 18 at 12:23 • @Zhe That is a great suggestion, but we need a better table type-setting tool on StackExchange so it's easier to make subdivisions in tables. – Karsten Theis Mar 18 at 13:05 • Tables are kind of gross with every tool. :/ – Zhe Mar 18 at 14:01 • @KarstenTheis@KarstenTheis:I take each reaction as independent equilibrium,I calculate the amount of $\ce{N2O3}~\text{and the amount of} ~\ce{O2}$ produced in the first reaction where the concentration of each equal$\pu{2.55234M}$,then I found the amount of the species in the second equilibrium where $[\ce{N2O}]=[\ce{O2}]=4.5-2.55234=\pu{1.94766M}$ and $[\ce{N2O3}]= 2.5534-1.94766=\pu{0.60468M}$ to calculate $K_2 = 6.3$. correct the solution. – Adnan AL-Amleh Mar 18 at 21:05 Karsten's insight is correct. Your mistake is in assuming that you have either $$\ce{N2O5}$$ or $$\ce{N2O}$$ and $$\ce{O2}$$. There are four concentrations of interest: $$\ce{[N2O5]}$$, $$\ce{[N2O3]}$$, $$\ce{[N2O]}$$, and $$\ce{[O2]}$$. How can you relate these variables? Well, you can apply conservation of matter in terms of oxygen and nitrogen. 
For nitrogen: $$2\ce{[N2O5]} + 2\ce{[N2O3]} + 2\ce{[N2O]} = 2\times(4\ \mathrm{M})\tag{1}$$ For oxygen: $$5\ce{[N2O5]} + 3\ce{[N2O3]} + \ce{[N2O]} + 2 \ce{[O2]} = 5\times(4\ \mathrm{M})\tag{2}$$ Then, there are the equilibrium expressions: $$K_{1}=4.5=\frac{\ce{[N2O3][O2]}}{\ce{[N2O5]}}\tag{3}$$ $$K_{2}=\frac{\ce{[N2O][O2]}}{\ce{[N2O3]}}\tag{4}$$ If you substitute $$\ce{[O2]} = 4.5\ \mathrm{M}$$ into (2), (3), and (4), you end up with: $$2\ce{[N2O5]} + 2\ce{[N2O3]} + 2\ce{[N2O]} = 2\times(4\ \mathrm{M})\tag{1}$$ $$5\ce{[N2O5]} + 3\ce{[N2O3]} + \ce{[N2O]} + 2\times(4.5\ \mathrm{M}) = 5\times(4\ \mathrm{M})\tag{2'}$$ $$4.5=\frac{\ce{[N2O3]\times(4.5\ \mathrm{M})}}{\ce{[N2O5]}}\tag{3'}$$ $$K_{2}=\frac{\ce{[N2O]\times(4.5\ \mathrm{M})}}{\ce{[N2O3]}}\tag{4'}$$ That's 4 equations and 4 unknowns, and it's all algebra from there. Assuming I did that correctly, I got $$K_{2} = \frac{45}{7}$$
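Both answers can be cross-checked with exact rational arithmetic. This sketch (Python's `fractions` module; not part of either answer) plugs the extents $x = 17/6$ and $y = 5/3$ from the ICE-table solution back into the mass balance and both equilibrium expressions:

```python
from fractions import Fraction as F

x = F(17, 6)  # extent of N2O5 -> N2O3 + O2, in mol/L
y = F(5, 3)   # extent of N2O3 -> N2O  + O2, in mol/L

n2o5 = 4 - x
n2o3 = x - y
n2o = y
o2 = x + y

# All three nitrogen-containing species must add back up to the initial 4 M,
# and the observed O2 concentration of 4.5 M must be recovered.
assert n2o5 + n2o3 + n2o == 4
assert o2 == F(9, 2)

K1 = n2o3 * o2 / n2o5
K2 = n2o * o2 / n2o3
print(K1, K2)  # 9/2 45/7, i.e. K1 = 4.5 and K2 = 45/7
```

The exact fractions confirm the equilibrium concentrations 7/6 M, 7/6 M, and 5/3 M and the value $K_2 = 45/7$ obtained in both answers.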
https://www.physicsforums.com/threads/probability-theory-work-check.954291/
Probability Theory, work check

Homework Statement Hi all, could someone give my working a quick skim to see if it checks out? Many thanks in advance. Suppose that 5 cards are dealt from a 52-card deck. What is the probability of drawing at least two kings given that there is at least one king?

The Attempt at a Solution Let ##B## denote the event that at least 2 kings are drawn, and ##A## the event that at least 1 king is drawn. Because ##B## is a strict subset of ##A##, $$P(B|A) = P(A \cap B)/P(A) = P(B)/P(A)$$ Compute ##P(A)##; ##P(A^c)## denotes the probability of not drawing a single king. $$P(A) = 1 - P(A^c) = 1 - \frac{\binom{48}{5}}{\binom{52}{5}} \approx 1 - 0.6588$$ Compute ##P(B)##; ##P(B^c)## denotes the probability of not drawing at least 2 kings, which is the sum of the probability of drawing exactly 1 king ##P(1)## and the probability of not drawing a single king ##P(A^c)##. $$P(B) = 1 - P(B^c) = 1 - (P(1) + P(A^c))$$ $$P(1) = \frac{5 \times \binom{4}{1} \times 48 \times 47 \times 46 \times 45 }{52 \times 51 \times 50 \times 49 \times 48} \approx 0.299$$ where the numerator is the number of ways one can have a hand of 5 containing a single king. $$P(B) \approx 1 - (0.299 + 0.6588) \approx 0.0422$$ finally, $$P(B|A) = P(B)/P(A) = 0.0422 / (1-0.6588) \approx 0.1237$$ I'm pretty sure that there is a quicker way to do all of this even if my work checks out, I'd appreciate if someone could demonstrate a more efficient calculation!

verty Homework Helper I agree with how you have calculated it. I get 0.1222.

PeroK PeroK Homework Helper Gold Member 2020 Award I get the same answer as @verty. The way I looked at it was: ## p = \frac{1 - p_0 - p_1}{1-p_0} = 1 - \frac{p_1}{1-p_0}## where ##p_n## is the probability of drawing exactly ##n## kings. WWCY

Ray Vickson Homework Helper Dearly Missed Homework Statement Hi all, could someone give my working a quick skim to see if it checks out? Many thanks in advance. Suppose that 5 cards are dealt from a 52-card deck.
What is the probability of drawing at least two kings given that there is at least one king? The Attempt at a Solution Let ##B## denote the event that at least 2 kings are drawn, and ##A## the event that at least 1 king is drawn. Because ##B## is a strict subset of ##A##, $$P(B|A) = P(A \cap B)/P(A) = P(B)/P(A)$$ Compute ##P(A)##; ##P(A^c)## denotes the probability of not drawing a single king. $$P(A) = 1 - P(A^c) = 1 - \frac{\binom{48}{5}}{\binom{52}{5}} \approx 1 - 0.6588$$ Compute ##P(B)##; ##P(B^c)## denotes the probability of not drawing at least 2 kings, which is the sum of the probability of drawing exactly 1 king ##P(1)## and the probability of not drawing a single king ##P(A^c)##. $$P(B) = 1 - P(B^c) = 1 - (P(1) + P(A^c))$$ $$P(1) = \frac{5 \times \binom{4}{1} \times 48 \times 47 \times 46 \times 45 }{52 \times 51 \times 50 \times 49 \times 48} \approx 0.299$$ where the numerator is the number of ways one can have a hand of 5 containing a single king. $$P(B) \approx 1 - (0.299 + 0.6588) \approx 0.0422$$ finally, $$P(B|A) = P(B)/P(A) = 0.0422 / (1-0.6588) \approx 0.1237$$ I'm pretty sure that there is a quicker way to do all of this even if my work checks out, I'd appreciate if someone could demonstrate a more efficient calculation!

Your method is OK. The method of PeroK in #3 is faster. However, I have one quibble: you ought to keep more significant figures when doing calculations that involve subtractions, so as to avoid "subtractive error magnification". In your case you do not do too badly, getting 0.1237 instead of 2257/18472 ≈ 0.12218, but the general principle still holds. (In some problems subtractive error magnification can lead to huge errors, perhaps even incorrect final signs, etc.)

Your method is OK. The method of PeroK in #3 is faster. However, I have one quibble: you ought to keep more significant figures when doing calculations that involve subtractions, so as to avoid "subtractive error magnification".
In your case you do not do too badly, getting 0.1237 instead of 2257/18472 ≈ 0.12218, but the general principle still holds. (In some problems subtractive error magnification can lead to huge errors, perhaps even incorrect final signs, etc.) Ah okay, I'll keep that in mind the next time. Thanks!
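The exact conditional probability quoted in the thread, 2257/18472 ≈ 0.12218, can be confirmed directly with binomial coefficients. A sketch (Python; the function name is mine), using PeroK's form of the answer:

```python
from fractions import Fraction
from math import comb

def p_two_plus_given_one_plus(deck=52, kings=4, hand=5):
    """P(at least two kings | at least one king) for a random hand of cards."""
    total = comb(deck, hand)
    p0 = Fraction(comb(deck - kings, hand), total)                       # no kings
    p1 = Fraction(comb(kings, 1) * comb(deck - kings, hand - 1), total)  # exactly one
    # PeroK's simplification: p = 1 - p1 / (1 - p0)
    return 1 - p1 / (1 - p0)

p = p_two_plus_given_one_plus()
print(p, float(p))  # 2257/18472, about 0.12218
```

Working in exact fractions sidesteps the subtractive error magnification discussed above entirely.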
https://math.codidact.com/posts/284280
Q&A

# Is the Pythagorean theorem really valid in higher-dimensional space?

+3 −0

I saw that someone was writing the Pythagorean theorem in 3-dimensional space. The equation was: $$c=\sqrt{x^2+y^2+z^2}$$ If that is correct, then the Pythagorean theorem should work in higher-dimensional space as well, so I can write $$c=\sqrt{\sum_{n=1}^N x_n^2}$$ At first I was wondering what a right triangle should look like in 3-dimensional space. From my imagination, the triangle will look like a pyramid, but all faces of that pyramid will look like right triangles; that's how the Pythagorean theorem works in 3 dimensions. It's impossible to imagine the shape in 4 or higher dimensions. Is the Pythagorean theorem really valid in higher-dimensional space?

+3 −0

One generalization of the Pythagorean theorem to three dimensions is de Gua's theorem: if a tetrahedron has a right-angle corner (like the corner of a cube), then the square of the area of the face opposite the right-angle corner is the sum of the squares of the areas of the other three faces. So the faces that are not opposite the right-angle corner act analogously to the non-hypotenuse sides of a right triangle, and the face that is opposite the right-angle corner is analogous to the hypotenuse. Instead of summing the squares of lengths of sides, you sum the squares of areas of faces. This generalizes to higher dimensions: an $n$-dimensional simplex (a shape made of $n + 1$ vertices in $n$-dimensional space, all the edges that connect them, all the triangular faces that those edges create, all the tetrahedra that those triangles create, and so on up to $(n - 1)$-dimensional hyperfaces) with a vertex all of whose edges are pairwise perpendicular to each other will satisfy the property that the square of the measure (length, area, volume, etc.)
of the hyperface that doesn't contain that vertex is equal to the sum of the squares of the measures of the other $n$ hyperfaces. Another generalization of the Pythagorean theorem to three dimensions is simply the three-dimensional Euclidean metric: the square of the distance between two points in three-dimensional space is equal to the sum of the squares of the separation between those points along each of the coordinate axes, assuming some Cartesian coordinate system. In two dimensions, those axial separations form the sides of a right triangle, with the hypotenuse connecting the two points; but you could equivalently say that those axial separations form a rectangle, and the two points are connected by a diagonal of the rectangle. That latter visualization is more useful in higher dimensions: in three dimensions, you can't make a single triangle out of the segments anymore, but you can picture a rectangular prism formed by the two points on opposite corners with sides parallel to the coordinate axes, and the lengths being squared are three sides of the prism and its diagonal. This too generalizes to $n$-prisms in $n$-dimensional space. (And both of these generalizations are themselves special cases of this!) Why does this post require moderator attention? +2 −0 Consider a vector, $\mathbf v=(a,b,c)$, in $3$-space. We can project this onto the $xy$-plane, say, producing the vector $(a, b, 0)$ which we can identify with the $2$-vector, $(a, b)$. This two vector corresponds to the hypotenuse of a right triangle whose side lengths are $a$ and $b$. Therefore the squared length of this vector is $|(a,b)|^2 = a^2 + b^2$ according to the usual Pythagorean formula for a right triangle. But now we can consider the plane spanned by $(a,b,0)$ and $\mathbf v$, and in that plane view $v$ as the hypotenuse of a right triangle with side lengths $|(a,b)|$ and $c$. The squared length of the vector $\mathbf v$ is $|(a,b)|^2 + c^2 = a^2 + b^2 + c^2$. 
This logic can be extended to any dimension via an inductive argument showing that $|(a_1,\dots,a_n)|^2 = |(a_1,\dots a_{n-1})|^2 + a_n^2$. That said, there are many other (probably better) ways of thinking about this. For example, any two vectors (in any dimension) gives rise to a (potentially degenerate) triangle in the plane spanned by those vectors (which will be ambiguous if they are linearly dependent). Namely, given vectors $\mathbf u$ and $\mathbf v$, we can consider the triangle $ABC$ such that the $A + \mathbf u = B$, $B + \mathbf v = C$ and thus $A + (\mathbf u + \mathbf v) = C$. The squared length of the "hypotenuse" $\overline{AC}$ is $$(\mathbf u + \mathbf v) \cdot (\mathbf u + \mathbf v) = \mathbf u \cdot \mathbf u + 2\mathbf u \cdot \mathbf v + \mathbf v \cdot \mathbf v$$ where $\cdot$ represents the dot or inner product and $\mathbf u^2 = |\mathbf u|^2 = \mathbf u \cdot \mathbf u$. The triangle will be a right triangle when $\mathbf u$ and $\mathbf v$ are orthogonal, i.e. at $\pm 90^\circ$ to each other in the plane they span. This is represented by $\mathbf u \cdot \mathbf v = 0 = |\mathbf u||\mathbf v|\cos(\pm 90^\circ)$. We now get a more abstract rendition of the Pythagorean theorem; now in any dimension and for any inner product space. Why does this post require moderator attention? This community is part of the Codidact network. We have other communities too — take a look! Want to advertise this community? Use our templates! Like what we're doing? Support us!
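Both generalizations can be checked numerically. The sketch below is my own addition, not part of the thread: it folds the two-dimensional Pythagorean theorem over the coordinates to recover the $n$-dimensional norm, and verifies de Gua's theorem for a tetrahedron with a right-angle corner at the origin.

```python
import math

def norm_by_iterated_pythagoras(v):
    # Fold the 2-D Pythagorean theorem over the coordinates: at each step the
    # running hypotenuse and the next coordinate are the legs of a new right
    # triangle -- exactly the inductive argument in the projection answer.
    h = 0.0
    for x in v:
        h = math.hypot(h, x)
    return h

assert abs(norm_by_iterated_pythagoras([3.0, 4.0, 12.0]) - 13.0) < 1e-12

# de Gua's theorem: tetrahedron with a right-angle corner at the origin and
# legs a, b, c along the axes.  The three "leg" faces have areas ab/2, bc/2,
# ca/2; the face opposite the corner should satisfy
#   area_opposite^2 = (ab/2)^2 + (bc/2)^2 + (ca/2)^2.
a, b, c = 2.0, 3.0, 6.0
legs_sq = (a * b / 2) ** 2 + (b * c / 2) ** 2 + (c * a / 2) ** 2

# Area of the face with vertices (a,0,0), (0,b,0), (0,0,c), via a cross product.
ux, uy, uz = -a, b, 0.0  # (0,b,0) - (a,0,0)
wx, wy, wz = -a, 0.0, c  # (0,0,c) - (a,0,0)
cx, cy, cz = uy * wz - uz * wy, uz * wx - ux * wz, ux * wy - uy * wx
area_opp = 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)
assert abs(area_opp ** 2 - legs_sq) < 1e-9
```

The `math.hypot` call is exactly the two-dimensional Pythagorean step, so the loop is the inductive argument from the second answer made executable.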
http://math.stackexchange.com/questions/169550/existence-of-a-prime-ideal-in-an-integral-domain-of-finite-type-over-a-field-wit
# Existence of a prime ideal in an integral domain of finite type over a field without Axiom of Choice Let $A$ be an integral domain which is finitely generated over a field $k$. Let $f \neq 0$ be a non-invertible element of $A$. Can one prove that there exists a prime ideal of $A$ containing $f$ without Axiom of Choice? I came up with this question to solve this problem. This is a related question. - Dear Makoto, I see an impressive number of questions on this site , from many users, wondering whether this or that result can be proved without the axiom of choice. So I'm curious: why do you (and they) want to know that? Would a negative answer prevent you from using such a result? Is it a philosophical concern? Do you keep in mind the list of theorems requiring that axiom? Let me emphasize that I'm not criticizing you in the least. Since you seem to be a serious and dedicated person, I thought you might give me an articulate answer. But I welcome answers from other participants too. –  Georges Elencwajg Jul 11 '12 at 17:45 @Georges: I don't know about other users, but I'm interested in these questions because 1) I want to know what parts of commutative algebra do and do not require choice so I know what can be made explicit in principle for doing actual computations and 2) I may at some point want to do commutative algebra in categories where the axiom of choice may be false. –  Qiaochu Yuan Jul 11 '12 at 17:50 @GeorgesElencwajg I have asked some questions like this too. I did it purely out of curiosity. –  user23211 Jul 11 '12 at 17:50 @Qiaochu, this is exactly the type of explanation I was hoping for. Thank you. –  Georges Elencwajg Jul 11 '12 at 18:14 Please let me know the reason for the downvotes. Unless you make it clear, it's hard to improve my question. 
–  Makoto Kato Jul 28 '12 at 5:23 This answer builds on Qiaochu's and uses the same definition as Qiaochu, to wit: A ring $R$ is noetherian if, for any nonempty collection of ideals $\mathcal{I}$, there is some $I \in \mathcal{I}$ which is not properly contained in any $J \in \mathcal{I}$. Theorem: If $R$ is noetherian, then $R[x]$ is noetherian. This proof is basically the standard proof, rephrased to use Qiaochu's definition and to be careful about choice. I'm going to try to systematically use the following conventions: ideals in $R[x]$ get capital letters; ideals in $R$ get overlined capital letters. Sets of ideals in $R[x]$ get calligraphic letters. I found that I could manage to write this without ever assigning a name to a set of ideals in $R$. For any ideal $I \subseteq R[x]$, and any integer $j \geq 0$, define $$s_j(I) := \{ r \in R : \text{there is an element of } I \text{ of the form } r x^j + r_{j-1} x^{j-1} + \cdots +r_0 \}$$ Observe that $s_j(I)$ is an ideal and $s_j(I) \subseteq s_{j+1}(I)$. Lemma: If $R$ is noetherian, and $I \subseteq R[x]$ is an ideal, then there is an index $j$ such that $s_k(I) = s_j(I)$ for $k \geq j$. Proof: Since $R$ is noetherian, the collection $\{ s_i(I) \}_{i \geq 0}$ has a maximal element $s_j(I)$. Let $k \geq j$. Then we observed above that $s_k(I) \supseteq s_j(I)$. But, by the definition of a maximal element, we do not have $s_k(I) \supsetneq s_j(I)$, so $s_k(I) = s_j(I)$. $\square$ We will denote the ideal $s_j(I)$ defined in the above lemma as $s_{\infty}(I)$. For $\mathcal{I}$ a collection of ideals in $R[x]$, we will write $s_j(\mathcal{I})$ or $s_{\infty}(\mathcal{I})$ for the result of applying $s_j$ or $s_{\infty}$ to each element of $\mathcal{I}$. So $s_j(\mathcal{I})$ is a set of ideals in $R$. Let $\mathcal{I}$ be a nonempty collection of ideals in $R[x]$. Since $R$ is noetherian, there is a maximal element $\bar{J}$ in $s_{\infty}(\mathcal{I})$. Let $\mathcal{J}$ be the set of all $I \in \mathcal{I}$ with $s_{\infty}(I) = \bar{J}$.
Note that no element of $\mathcal{I} \setminus \mathcal{J}$ contains an element of $\mathcal{J}$, by the maximality of $\bar{J}$, so it is enough to show that $\mathcal{J}$ has a maximal element. Choose an ideal $K \in \mathcal{J}$. (Making one choice does not use AC.) Let $m$ be an index such that $s_m(K) = \bar{J}$. Let $\mathcal{K}$ be the collection of ideals $I \in \mathcal{J}$ for which $s_m(I) = \bar{J}$. Note that no element of $\mathcal{J} \setminus \mathcal{K}$ can properly contain an element of $\mathcal{K}$, so it is enough to show that $\mathcal{K}$ has a maximal element. We now make finitely many dependent choices. Choose a maximal element $\bar{J}^{m-1}$ in $s_{m-1}(\mathcal{K})$; let $\mathcal{K}_{m-1}$ be the set of $I \in \mathcal{K}$ with $s_{m-1}(I)=\bar{J}^{m-1}$; it is enough to show that $\mathcal{K}_{m-1}$ has a maximal element. Choose a maximal element $\bar{J}^{m-2}$ in $s_{m-2}(\mathcal{K}_{m-1})$; let $\mathcal{K}_{m-2}$ be the set of $I \in \mathcal{K}_{m-1}$ with $s_{m-2}(I)=\bar{J}^{m-2}$; it is enough to show that $\mathcal{K}_{m-2}$ has a maximal element. Continue in this manner to construct $\mathcal{K}_{m-3}$, $\mathcal{K}_{m-4}$, ..., $\mathcal{K}_0$. Since we are only making finitely many choices, we don't need AC; see my answer here. At the end, we have a nonempty collection $\mathcal{K}_0$ of ideals such that, for any $I$ and $J \in \mathcal{K}_0$, and any $j \geq 0$, we have $s_j(I)= s_j(J)$. I claim that any element of $\mathcal{K}_0$ is maximal. Let $I$ and $J \in \mathcal{K}_0$ and suppose that $I \supseteq J$. I will prove that $I=J$. This shows that every element of $\mathcal{K}_0$ is maximal. Let $I_{\leq d}$ be the set of polynomials in $I$ of degree $\leq d$. I will show by induction on $d$ that $I_{\leq d} = J_{\leq d}$. The base case is $d=-1$, where both sides are $\{ 0 \}$. Since $I \supseteq J$, I just need to show that $I_{\leq d} \subseteq J_{\leq d}$.
Let $f \in I_{\leq d}$ and let the leading term of $f$ be $r x^d$. Then $r \in s_d(I) = s_d(J)$, so there is some $g \in J_{\leq d}$ with leading term $r$. Since $I \supseteq J$, we have $g \in I$ and hence $f-g \in I$. Since $\deg(f-g) < d$, by the induction hypothesis, we have $f-g \in J$. So $f = (f-g)+g \in J$. QED - Great! I was about to start doing this but you seem to have beaten me to it. I'm pretty satisfied now. –  Qiaochu Yuan Jul 11 '12 at 19:03 @DavidSpeyer Perhaps I'm slow. May I ask why $\mathcal{L}^{m-1}$ is non-empty? –  Makoto Kato Jul 16 '12 at 21:24 $\mathcal{L}^{m-1}$ is the set of values of $I^{m-1}$ for $I \in \mathcal{K}$, so it is nonempty because $\mathcal{K}$ is nonempty. Did you mean to ask why $\mathcal{K}_{m-1}$ is nonempty? Because it is constructed to contain that element of $\mathcal{K}$ which gave rise to the element $J^{m-1}$ in $\mathcal{L}^{m-1}$. I might need better notation here. –  David Speyer Jul 16 '12 at 21:37 @DavidSpeyer I thought $I^{m-1}$ might be empty for $I \in \mathcal{K}$. No? –  Makoto Kato Jul 16 '12 at 21:50 $I^{m-1}$ is an ideal. It certainly isn't empty, because it contains $0$. But we also don't care whether the ideal is empty, we care whether $\mathcal{L}^{m-1}$, which is a set of ideals, is empty. You are convincing me I need better notation, though... –  David Speyer Jul 16 '12 at 22:14 This problem reduces to proving Hilbert's basis theorem without choice, which I don't currently know how to do, but which I believe can be done. Let me explain the reduction. Below "ring" means "commutative unital ring." Definition: A ring $R$ is Noetherian if any non-empty collection of ideals of $R$ has a maximal element. (This definition is equivalent to the usual definitions in the presence of dependent choice, but in the absence of dependent choice I believe it is known that this definition is stronger. In any case it implies the other definitions.) Fields are obviously Noetherian.
Note that Noetherian rings contain maximal ideals by definition. Proposition: Let $R$ be a Noetherian ring and $f : R \to S$ a surjective ring homomorphism. Then $S$ is Noetherian. Proof. Let $I_i$ be a non-empty collection of ideals in $S$. Then $f^{-1}(I_i)$ is a non-empty collection of ideals in $R$, which by assumption has a maximal element $f^{-1}(I_j)$. Since $f$ is surjective, $f(f^{-1}(I_j)) = I_j$, so $I_j$ is maximal in the original collection. Assumption: If $R$ is Noetherian, then $R[x]$ is Noetherian. From this assumption it follows that any finitely-generated ring over a Noetherian ring is Noetherian, hence any proper ideal in such a ring is contained in a maximal ideal (in particular a prime ideal). Let me explain the problem with the standard proof. This proof (as found in e.g. Atiyah-MacDonald) shows that if every ideal of $R$ is finitely-generated, then every ideal of $R[x]$ is finitely-generated (and I am not sure this works without dependent choice either). As far as I know, this condition is not equivalent to Noetherian (as defined above) in ZF. What we can prove in ZF is the following. Proposition: If $R$ is Noetherian, then every ideal of $R$ is finitely-generated. Proof. Let $I$ be an ideal of $R$. Then the collection of finitely-generated ideals contained in $I$ has a maximal element $J = (r_1, ... r_n)$. If $J$ is not all of $I$, then there exists $r_{n+1} \in I$ which is not in $J$, but then $(r_1, ... r_{n+1})$ is a finitely-generated ideal contained in $I$ and properly containing $J$, which contradicts the maximality of $J$. However, as far as I can tell, we cannot prove the converse without dependent choice. - Beat me to it! I think the proof of the Hilbert basis theorem which I know, arguing about ideals of leading coefficients, does not use choice (explicitly...). –  Andrew Jul 11 '12 at 18:03 @Andrew: see my edit.
–  Qiaochu Yuan Jul 11 '12 at 18:13 Once the basic tools have been established, constructing a counterexample to the equivalence between the definitions of a Noetherian ring is not a hard task (what is harder is the proof that DC is really needed and not something weaker like countable choice). Your belief that the definitions are not equivalent is indeed true. –  Asaf Karagila Jul 11 '12 at 18:16 @QiaochuYuan If you directly look at the standard proof without using the word finitely generated anywhere, I think you can directly show $R$ noetherian implies $R[x]$ noetherian; see my answer. –  David Speyer Jul 11 '12 at 18:58 For future reference, Wilfrid Hodges wrote an excellent paper called Six impossible rings [J. Algebra 31 (1974)] where he examines the three Noetherian conditions and the three Artinian conditions. Using six pathological rings, he concludes that no implications between these six conditions other than the obvious ones are provable in ZF. –  François G. Dorais Jul 17 '12 at 0:34 To simplify the notation, we assume the characteristic of $k$ is $0$. The positive-characteristic case can be proved similarly. By the Noether normalization lemma (this can be proved without AC), there exist algebraically independent elements $x_1, ..., x_n$ in $A$ such that $A$ is a finitely generated module over the polynomial ring $A' = k[x_1,..., x_n]$. Let $K$ and $K'$ be the fields of fractions of $A$ and $A'$ respectively. Let $L$ be the smallest Galois extension of $K'$ containing $K$. Let $G$ be the Galois group of $L/K'$. Let $B$ be the integral closure of $A$ in $L$. It is well known that $B$ is a finite $A'$-module. Let $g = \prod_{\sigma \in G} \sigma(f)$. Since $A'$ is integrally closed, $g \in A'$. Since $g$ is non-invertible, there exists an irreducible polynomial $h$ which divides $g$. Then $P = hA'$ is a prime ideal of $A'$ containing $g$. By this, there exists a prime ideal $Q$ of $B$ lying over $P$.
Since $g \in Q$ and $Q$ is prime, there exists $\sigma \in G$ such that $\sigma(f) \in Q$. Then $f \in \sigma^{-1}(Q)$. Hence $f \in A \cap \sigma^{-1}(Q)$, and $A \cap \sigma^{-1}(Q)$ is a prime ideal of $A$ containing $f$. QED -
https://math.stackexchange.com/questions/1194072/where-do-summation-formulas-come-from
# Where do summation formulas come from? It's a classic problem in an introductory proof course to prove that $\sum_{ i \mathop =1}^ni = \frac{n(n+1)}{2}$ by induction. The problem with induction is that you can't prove what the sum is unless you already have an idea of what it should be. I would like to know what the process is for getting the idea. Wikipedia has plenty of summation formulas listed, and there are surely lots more, but I think I should be able to simplify summations without referring to a table. I don't suppose there's a universal technique for deriving all of them, but it would be good to know at least a few things to try. This question was motivated by an answer involving summation, and while I have no doubt that it's true, I wouldn't know how to get the answer to the particular summation without being told beforehand. • Generally, I imagine, they come from people figuring out a pattern for some small number of iterations and then proving it continues with induction. In particular, you might want to look up the story of Gauss as a little kid supposedly figuring out exactly the sum you wrote here in the case of $n=100$. – user137731 Mar 17 '15 at 16:04 • The book "Concrete Mathematics" by Graham, Knuth & Patashnik covers lots of techniques for computing sums. – Hans Lundmark Mar 17 '15 at 16:06 • For such sums of the form $$S_n = \sum_{k=1}^n p(k)$$ With $\deg(p) = m$, the closed form, if it exists, will be a polynomial of degree $m+1$. This and FTA will allow you to find it with a (small) finite number of evaluations ($m+2$). – AlexR Mar 17 '15 at 16:06 • Finite calculus is one way of computing sums cs.purdue.edu/homes/dgleich/publications/… – Jean-Sébastien Mar 17 '15 at 16:09 • As you surmised, there is no reason to think there is a universal method. But there are universal methods for certain families of summations. For example, there are such methods for the family of sums $\sum_1^n k^d$, where $d$ ranges over the positive integers. 
– André Nicolas Mar 17 '15 at 16:11 There do in fact exist universal techniques for deriving exact expressions for summations. Donald Knuth asked in a problem in one of his books to develop computer programs for simplifying sums that involve binomial coefficients. The answer to that problem was given by Marko Petkovsek, Herbert Wilf and Doron Zeilberger in their book "A = B", which can be downloaded free of charge here. • – Henry Mar 18 '15 at 8:48 This approach calculates $\sum_{i=1}^n i^k$ in terms of $\sum_{i=1}^n i^j$ for $j=0,1,\dots,k-1$. This lets us actually calculate $\sum_{i=1}^n i^k$ since certainly $\sum_{i=1}^n i^0 = n$. I've posted this a few times, and at this point have it saved for re-use. We start out with this equation: $$n^k = \sum_{i=1}^n \left( i^k-(i-1)^k \right).$$ This holds for $k \geq 1$. This follows by telescoping; the $n^k$ term survives, while the $0^k$ term is zero and all the other terms cancel. Using the binomial theorem: \begin{align*} n^k & = \sum_{i=1}^n \left( i^k-\sum_{j=0}^k {k \choose j} (-1)^j i^{k-j} \right) \\ & = \sum_{i=1}^n \sum_{j=1}^{k} {k \choose j} (-1)^{j+1} i^{k-j} \\ & = \sum_{j=1}^{k} {k \choose j} (-1)^{j+1} \sum_{i=1}^n i^{k-j} \end{align*} So now we can solve this equation for $\sum_{i=1}^n i^{k-1}$: $$\sum_{i=1}^n i^{k-1} = \frac{n^k + \sum_{j=2}^k {k \choose j} (-1)^j \sum_{i=1}^n i^{k-j}}{k}.$$ More clearly, let $S(n,k)=\sum_{i=1}^n i^k$; then $$S(n,k)=\frac{n^{k+1} + \sum_{j=2}^{k+1} {k+1 \choose j} (-1)^j S(n,k+1-j)}{k+1}$$ for $k \geq 1$. Now start the recursion with $S(n,0)=n$. An easy induction argument from this expression shows that $S(n,k)$ is a polynomial in $n$ of degree $k+1$. This means that if you want to get $S(n,k)$ for some large $k$, you can safely assume a polynomial form and then calculate the interpolating polynomial, rather than actually stepping up the recursion from the bottom. Maybe it will be clearer with an example.
Take for example $$1+2+...+50$$ As you can see, you have that $$51=1+50=2+49=3+48=4+47=...$$ and thus, in $$1+...+n,$$ if $n$ is even, you sum $$\underbrace{(n+1)+...+(n+1)}_{\frac{n}{2}\ times}=\frac{n(n+1)}{2}.$$ If $n$ is odd, you sum $$\underbrace{(n+1)+...+(n+1)}_{\frac{(n-1)}{2}\ times}+\frac{n+1}{2}=\frac{(n+1)(n-1)+n+1}{2}=\frac{(n+1)(n-1+1)}{2}=\frac{n(n+1)}{2}$$ • I should have picked a harder example in my question! This sum works out nicely when you add opposite ends of the sequence. What if you're summing $i^2$ instead of $i$? I don't think it's so nice then. – NoName Mar 17 '15 at 20:07 Like Gauß did: Write the first $n$ numbers in order in the first row. Write the same numbers in reverse order in the second row. Then add the numbers column by column. You get $n$ copies of the number $n+1$. This product $n \cdot (n + 1)$ is twice the sum of the first $n$ numbers. Divide by two, and you get the sum of the first $n$ numbers. $$\begin{array}{l} \begin{array}{*{20}{c}} 1&2&.&.&.&.&.&.&{n - 1}&n\\ n&{n - 1}&.&.&.&.&.&.&2&1\\ {n + 1}&{n + 1}&.&.&.&.&.&.&{n + 1}&{n + 1} \end{array}\\ \sum\limits_{k = 1}^n k = \frac{n}{2}(n + 1) \end{array}$$ • This answer would be better if you explained what you were doing. – Teepeemm Mar 17 '15 at 20:51 • @Teepeemm: We are all mathematicians. We follow the rules. Thanks for the advice. – Frieder Mar 17 '15 at 21:35
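The recursion $S(n,k)$ derived in the earlier answer is easy to run mechanically. Here is a short Python transcription (my own sketch, not from the thread); the division by $k+1$ is always exact because $S(n,k)$ is an integer:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def S(n, k):
    # S(n, k) = 1^k + 2^k + ... + n^k via the recursion
    #   S(n, k) = (n^(k+1) + sum_{j=2}^{k+1} C(k+1, j) (-1)^j S(n, k+1-j)) / (k+1)
    # with base case S(n, 0) = n.
    if k == 0:
        return n
    total = n ** (k + 1)
    for j in range(2, k + 2):
        total += comb(k + 1, j) * (-1) ** j * S(n, k + 1 - j)
    assert total % (k + 1) == 0  # the division is always exact
    return total // (k + 1)

# Cross-check against brute force for small cases.
for k in range(5):
    for n in range(1, 15):
        assert S(n, k) == sum(i ** k for i in range(1, n + 1))

assert S(100, 1) == 5050  # Gauss's schoolboy sum
assert S(10, 2) == 385
```

For $k=1$ the recursion collapses to $S(n,1) = (n^2 + S(n,0))/2 = n(n+1)/2$, which is exactly the pairing argument in the Gauss answers above.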
https://math.stackexchange.com/questions/1266245/what-does-sum-limits-n-1n-1-frac1n-sum-n-3n1-frac1n
# What does $\sum\limits_{n=1}^{N-1} \frac{1}{n} - \sum_{n=3}^{N+1} \frac{1}{n}$ simplify to? A solution to one of the exercises in my text states: $$\sum\limits_{n=1}^{N-1} \frac{1}{n} - \sum_{n=3}^{N+1} \frac{1}{n} = \frac{1}{1} + \frac{1}{2} - \frac{1}{N} - \frac{1}{N+1}$$ I have no idea how to get the right hand side of the above equation. I do realize that the left hand side involves partial sums of the harmonic series, which, as far as I know, have no closed form. So I am really puzzled as to the manipulations to be done to arrive at the right hand side. Please tell me how to get the right hand side. Judging by how my solution states it, it seems like this is a standard result. So please also let me know what properties of summation are being used, so that I can solve similar questions myself in the future. Note that $$\sum_{n=1}^{N-1}\frac 1n=\frac 11+\frac 12+\color{red}{\frac{1}{3}+\frac14+\cdots+\frac{1}{N-1}}$$ and that $$\sum_{n=3}^{N+1}\frac 1n=\color{red}{\frac{1}{3}+\frac{1}{4}+\cdots+\frac{1}{N-1}}+\frac{1}{N}+\frac{1}{N+1}$$ • I wanted to write an answer, but this just sums up too perfectly what I wanted to say. – 5xum May 4 '15 at 11:24 • Thank you so much for this answer. It is so obvious now. Could you explain how you saw this relationship? Looking at the left hand side, it was not obvious to me that there were terms going to be cancelled. – mauna May 4 '15 at 11:32 • @mauna: Well, since $1$ is near to $3$ and $N-1$ is near to $N+1$, you can see that many terms can be cancelled.
– mathlove May 4 '15 at 11:37 $\sum_{n=1}^{N-1}\frac{1}{n}-\sum_{n=3}^{N+1}\frac{1}{n}$ $=\frac{1}{1}+\frac{1}{2}+\sum_{n=3}^{N-1}\frac{1}{n}-\sum_{n=3}^{N-1}\frac{1}{n}-\frac{1}{N}-\frac{1}{N+1}$ $=\frac{1}{1}+\frac{1}{2}-\frac{1}{N}-\frac{1}{N+1}$ As for the harmonic series, this isn't a harmonic series per se, rather it is a difference of finite harmonic series with just different boundaries of summation, which should clear up why such a closed form is possible. First thing to notice: $$\sum_{n=1}^{N-1}\frac 1n=\frac 11+\frac 12+\cdots+\frac{1}{N-1}$$ and that $$\sum_{n=3}^{N+1}\frac 1n=\frac{1}{3}+\cdots+\frac{1}{N}+\frac{1}{N+1}$$ So what we get: $$\left(\sum_{n=1}^{N-1}\frac 1n\right)-\left(\sum_{n=3}^{N+1}\frac 1n\right)=\frac{3}{2}-\frac{2N+1}{N^2+N}$$
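The cancellation is easy to confirm with exact rational arithmetic. The following check is my own addition, not from the thread; it verifies the identity for a range of $N$:

```python
from fractions import Fraction

def H(a, b):
    # Partial harmonic sum 1/a + 1/(a+1) + ... + 1/b, as an exact rational.
    return sum(Fraction(1, n) for n in range(a, b + 1))

for N in range(3, 40):
    lhs = H(1, N - 1) - H(3, N + 1)
    rhs = Fraction(3, 2) - Fraction(1, N) - Fraction(1, N + 1)
    assert lhs == rhs
```

Using `Fraction` avoids floating-point error, so the equality is tested exactly, matching the telescoping argument term for term.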
http://math.stackexchange.com/questions/30778/is-there-a-formula-for-solving-the-congruence-equation-ax2-bx-c-0
# Is there a formula for solving the congruence equation $ax^2 + bx + c=0$? Using the quadratic formula, we have either 0, 1, or 2 solutions. I wonder if we could use it this formula for congruence? Or is there a formula to solve quadratic equation for congruence? Edit Assume that $ax^2 + bx + c \equiv 0 \pmod{p}$, where $p$ is prime with $(a, p) = 1$, then is there a formula for this equation? Thanks, - Yes: if $p\gt 2$, then you can just use the quadratic formula, suitably interpreted: "dividing" by $2a$ means multiplying by the modular inverse of $2a$ modulo $p$; and $\sqrt{b^2-4ac}$ means any modular class whose square is congruent to $b^2-4ac$ modulo $p$, if such exists. If there is no such modular class, then the quadratic is irreducible modulo $p$. If $p=2$, then you get no solutions if all of $a$, $b$, and $c$ are odd; you get a single double solution $x=1$ if $a$ and $c$ are odd, $b$ even; both $x=0,1$ if $b$ odd, $c$ even; and a single double $x=0$ solution if $b,c$ even. – Arturo Magidin Apr 4 '11 at 2:44 @Arturo Magidin: Thank you. I got it. – Chan Apr 4 '11 at 2:55 Regarding the modulus $\rm\:p=2\:$ see the Parity Root Test. – Bill Dubuque Apr 4 '11 at 3:42 If $n = p$ is prime, the situation is straightforward. When $p = 2$ there are a small number of cases, and when $p > 2$ the quadratic formula holds. (Note that the quadratic formula fails when $p = 2$ because you can't divide by $2$. This is because you can't complete the square $\bmod 2$.) If $n$ is composite, the situation is more complicated. $x$ is a solution if and only if $x$ is a solution $\bmod p^k$ for every prime power factor of $n$ by the Chinese Remainder Theorem, so in particular if, say, $n$ is a product of $k$ distinct primes there can be as many as $2^k$ solutions obtained by combining roots modulo the prime factors of $n$. After the above step the problem reduces to the prime power case $n = p^k$. 
In this case the question of what solutions look like is completely answered by Hensel's lemma. Again the case $p = 2$ is special. - @Quiaochu Yuan: Thank you. How about if $(a, p) = 1$, is it a special case? – Chan Apr 4 '11 at 2:35 @Chan: if $p > 2$ and $n = p^k$ then each solution $\bmod p$ can be uniquely extended to a solution $\bmod n$ by Hensel's lemma. If $p = 2$ then I think one needs to look at solutions $\bmod 8$. – Qiaochu Yuan Apr 4 '11 at 2:38 @Chan: Did you mean $(a,p)=p$? When $(a,p)=1$, you are in the "easy" case, because $a$ is invertible modulo $p$ so you can divide by $2a$. – Arturo Magidin Apr 4 '11 at 2:38 @Quiaochu Yuan: Thank you. – Chan Apr 4 '11 at 2:53 @Arturo Magidin: I meant $(a, p) = 1$. I remember now, the "invertible part". Many thanks ;) – Chan Apr 4 '11 at 2:54 The quadratic formula works just as well modulo n as long as $(2a,n) = 1$ and $b^2-4ac$ is a quadratic residue mod n. If either of those conditions do not hold, then there are no solutions. Edit: as pointed out in the comments, this is not a complete answer; see Qiaochu Yuan's for a much better one. - @Harry Stern: Thank you. That was exactly what I thought initially. – Chan Apr 4 '11 at 1:54 @Chan I'm glad I could be helpful! – Harry Stern Apr 4 '11 at 2:00 @Harry: That's false, e.g. $\rm\ m\ n\ x^2 + x\$ has root $\rm x = 0\ (mod\ n)\$ and $\rm\ (2a,n) = (2\ m\ n,\ n) = n > 1\:$ for $\rm\:n>1\:.$ Also there can be arbitrarily many roots for composite $\rm\:n\:.$ – Bill Dubuque Apr 4 '11 at 2:02 @Harry Stern: It is absolutely helpful. Great thanks ;) – Chan Apr 4 '11 at 2:02 This answer is incomplete when $n$ is composite. As Bill Dubuque says, there can be arbitrarily many roots for composite $n$ because we can combine pairs of roots modulo the prime factors of $n$ by CRT. – Qiaochu Yuan Apr 4 '11 at 2:12 Here (link) is a thorough discussion of the steps in reducing general moduli quadratic equation problems to those of prime moduli, including the case $p=2$. -
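For the odd-prime case with $p \nmid a$, the "suitably interpreted" quadratic formula from the comments can be sketched directly. This is my own illustration, not code from the thread; square roots of the discriminant are found by brute force here, where Tonelli-Shanks would be the efficient choice for large $p$:

```python
def solve_quadratic_mod_p(a, b, c, p):
    # Roots of a x^2 + b x + c = 0 (mod p) for an odd prime p with p not
    # dividing a, via x = (-b +/- sqrt(b^2 - 4ac)) * (2a)^(-1) mod p.
    d = (b * b - 4 * a * c) % p
    sqrts = [r for r in range(p) if r * r % p == d]  # brute-force square roots
    if not sqrts:
        return []  # discriminant is a non-residue: the quadratic is irreducible mod p
    inv_2a = pow(2 * a, -1, p)  # modular inverse; needs p odd and p not dividing a
    return sorted({(-b + r) * inv_2a % p for r in sqrts})

# x^2 + x + 1 mod 7: discriminant -3 = 4 (mod 7), square roots +/-2, roots 2 and 4.
assert solve_quadratic_mod_p(1, 1, 1, 7) == [2, 4]

# Exhaustive cross-check against brute force for several small odd primes.
for p in (3, 5, 7, 11):
    for a in range(1, p):
        for b in range(p):
            for c in range(p):
                brute = [x for x in range(p) if (a * x * x + b * x + c) % p == 0]
                assert solve_quadratic_mod_p(a, b, c, p) == brute
```

The `pow(2 * a, -1, p)` call is Python's built-in modular inverse (available since Python 3.8); it is exactly the "multiply by the modular inverse of $2a$" step from Arturo's comment.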
https://math.stackexchange.com/questions/2679379/arrangements-of-people-in-a-circle
# Arrangements of people in a circle

Suppose there are $20$ kids: $10$ boys and $10$ girls. What is the probability that the kids will arrange themselves in this way: the circle will include exactly $5$ separate pairs of girls, and between every two pairs of girls there will be at least one boy? What I tried to do is to separate the kids into $10$ groups. The first $5$ groups will be groups of $3$ people: $2$ girls and a boy, where the boy sits on the right and the girls in the middle and on the left. The other $5$ groups will each include only one boy. Then we have $(10-1)!$ permutations of the arrangements of the groups in the circle. Then I include the internal permutations of each group. For the first group I have a choice of ${10 \choose 2}$ for the girls and ${10 \choose 1}$ for the boy, and I also multiply by the number of different arrangements inside the group, $2$; thus we have ${10 \choose 2}{10 \choose 1}2$ for the first group, ${8 \choose 2}{9 \choose 1}2$ for the second group, and so on until the last group (${2 \choose 2}{6 \choose 1}2$). And for the single boys we multiply by ${5 \choose 1}{4 \choose 1}{3 \choose 1}{2 \choose 1}{1 \choose 1}$. Since we have $(20-1)!$ permutations of all the kids in the circle, the desired probability should be $$\frac{9!{10 \choose {2,2,2,2,2}}{2^5}{10!}}{19!}$$ What is the problem in my logic?
Let $x_k$ be the number of boys in the $k$th group to Anne's left as we proceed clockwise around the table. There must be five such groups, one to the left of each pair of girls. Thus, $$x_1 + x_2 + x_3 + x_4 + x_5 = 10 \tag{1}$$ Since there must be at least one boy between each pair of girls, equation 1 is an equation in the positive integers. A particular solution of equation 1 corresponds to the placement of four addition signs in the nine spaces between successive ones in a row of $10$ ones. $$1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1$$ For instance, the choice $$1 1 + 1 1 1 + 1 + 1 + 1 1 1$$ corresponds to the solution $x_1 = 2$, $x_2 = 3$, $x_3 = x_4 = 1$, and $x_5 = 3$. The number of solutions of equation 1 in the positive integers is the number of ways we can select four of the nine spaces between successive ones in a row of ten ones, which is $$\binom{9}{4}$$ Once the seats for the boys have been selected, they can be arranged in those $10$ seats in $10!$ ways as we proceed clockwise around the circle from Anne. The remaining girls can be seated in $9!$ ways as we proceed clockwise around the circle from Anne. Hence, the number of favorable cases is $$2\binom{9}{4}10!9!$$ giving a probability of $$\frac{2\binom{9}{4}10!9!}{19!}$$ • @ChristianBlatter Thank you for catching the error, which I have now corrected. – N. F. Taussig Mar 7 '18 at 13:57 I think you're overcounting because of calculating all the permutations in the circle as well as multiplying afterwards ${5 \choose 1}{4 \choose 1}{3 \choose 1}{2 \choose 1}{1 \choose 1}$ for each of the boys. To see why this is the case, let's label our groups as $G_1, G_2, B_1$, followed by $G_3, G_4, B_2$, and so on... up to $G_9, G_{10}, B_5$ and having the remaining groups be $B_6$ and $B_7$ up to $B_{10}$. One of the $9!$ permutations you list has the following sequence of people in it: $B_6, B_7, G_1, G_2, B_1$ and the remaining 15 kids are in some order. 
Consider another permutation which has $B_7, B_6, G_1, G_2, B_1$, and the remaining 15 kids in the same order as the previous permutation. Now note that when you're considering the ${5 \choose 1}{4 \choose 1}{3 \choose 1}{2 \choose 1}{1 \choose 1}$ for assigning each of the single-group boys to $B_7, B_6, B_8, B_9, B_{10}$, in one possible assignment, you'll have Boy 6 be Alex and Boy 7 be Joe and in another assignment, you can have Boy 6 be Joe and Boy 7 be Alex. In the case that Boy 6 is Alex and Boy 7 is Joe, consider the first permutation presented, which was: $B_6, B_7, G_1, G_2, B_1$. This will end up being Alex, Joe, $G_1, G_2, B_1$ followed by the 15 other kids in some set order. Now consider the case that Boy 6 is Joe and Boy 7 is Alex, and consider the second permutation presented, which was: $B_7, B_6, G_1, G_2, B_1$. This will also end up being Alex, Joe, $G_1, G_2, B_1$ followed by the 15 other kids in the same set order, which shows that you're overcounting. I'm not yet sure if that's the only flaw in your logic, and I'll continue working on this to see if I find any others.
http://www.gradesaver.com/textbooks/math/other-math/discrete-mathematics-with-applications-4th-edition/chapter-1-speaking-mathematically-exercise-set-1-3-page-22/19
## Discrete Mathematics with Applications 4th Edition $g(x)=\frac{2x^{3}+2x}{x^{2}+1}=\frac{(2x)(x^{2}+1)}{x^{2}+1}=2x=f(x)$ Yes, the functions $f$ and $g$ are equivalent. Note that $x^{2}+1\ne0$; therefore, $g$ has the same domain as $f$.
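The cancellation can also be machine-checked. The sketch below (Python, exact rational arithmetic; the function names are mine) verifies that $f$ and $g$ agree at several rational points, and since the numerator of $g-f$ would be a polynomial of degree at most $3$, agreement at five points already forces equality:

```python
from fractions import Fraction

def f(x):
    return 2 * x

def g(x):
    # x**2 + 1 is never zero, so g is defined for every real x, just like f.
    return (2 * x**3 + 2 * x) / (x**2 + 1)

# Exact spot checks at five rational points (more than the degree bound needs).
for q in (Fraction(-3), Fraction(-1, 2), Fraction(0), Fraction(2, 7), Fraction(5)):
    assert g(q) == f(q)

print("f and g agree at all sampled points")
```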
https://math.stackexchange.com/questions/1981396/is-the-limit-of-sequence-enough-of-a-proof-for-convergence
# Is the limit of a sequence enough of a proof for convergence? I have a sequence $a_{n}=\frac{(n+1)(n^{2})}{(2n+1)(3n^{2}+1)}$, and its limit as $n$ goes to infinity is $\frac{1}{6}$. Because the limit is some number, is that enough of a proof that this sequence is convergent, or should I do something more? • It is enough. Your sequence CONVERGES to $\frac{1}{6}$. – hamam_Abdallah Oct 23 '16 at 14:10 • This is the definition of convergence, so what you say is ok. – zar Oct 23 '16 at 14:10 • You could also argue that $a_n$ can be written as the quotient of two convergent sequences whose denominator has nonzero limit, hence $a_n$ is convergent. – lzralbu Oct 23 '16 at 14:18 It is enough to write, for $n>1$, $$a_{n}=\frac{(n+1)(n^{2})}{(2n+1)(3n^{2}+1)}=\frac16\cdot\frac{1+\frac1n}{\left(1+\frac1{2n}\right)\left(1+\frac1{3n^2}\right)}$$ giving, as $n \to \infty$, $$a_n \to \frac16\cdot\frac{1+0}{\left(1+0\right)\left(1+0\right)}=\frac16.$$ • Thank you, I needed confirmation because I'm confused about when I need to use that $\epsilon$ proof and when determining the limit is enough. – Tars Nolan Oct 23 '16 at 14:14 • @TarsNolan You are welcome. – Olivier Oloa Oct 23 '16 at 14:15 • But isn't there a theorem that says that if $u(n)$ tends to 0 as $n$ tends to infinity the series can be convergent but not necessarily, and that's why we use the ratio test and root test. But if $u(n)$ doesn't tend to 0 as $n$ tends to infinity then the series is certainly divergent. Am I wrong? Because even I had this doubt. – Shashaank Oct 23 '16 at 14:21 • @Shashaank I think you are making a confusion between convergence of a sequence $\left\{u_n\right\}_{n\ge 0}$ and convergence of a series $\sum_{n\ge 0} u_n$. – Olivier Oloa Oct 23 '16 at 14:39 • @OlivierOloa Ok, so is it that what I am saying is about the convergence of a series, while the question and the answers address the convergence of a sequence?
– Shashaank Oct 23 '16 at 14:47 Theorems on limits lift up the burden of doing $\varepsilon$-$\delta$ proofs. If you know that $\lim_{n\to\infty}a_n=l$ and $\lim_{n\to\infty}b_n=m$ (real $l$ and $m$), then you can also say \begin{gather} \lim_{n\to\infty}(a_n+b_n)=l+m \tag{Theorem 1}\\[6px] \lim_{n\to\infty}(a_nb_n)=lm \tag{Theorem 2}\\[6px] \lim_{n\to\infty}\frac{a_n}{b_n}=\frac{l}{m} \tag{Theorem 3} \end{gather} Theorem 3 requires the hypothesis $m\ne0$ (so also $b_n\ne0$ from some point on). Such theorems can also be extended to the cases when one or both the limits are infinity, but this would take too far. In your case, the sequence $a_n$ is not, as written, in one of the above forms, but you can rewrite it as $$a_n=\frac{n+1}{2n+1}\frac{n^2}{3n^2+1}$$ Now let's examine $$b_n=\frac{n+1}{2n+1}$$ We can't apply the third theorem above, but as soon as we rewrite $$b_n=\frac{1+\frac{1}{n}}{2+\frac{1}{n}}$$ we see that at numerator and denominator we have sequences to which we can apply the first theorem above, because we know that $\lim_{n\to\infty}\frac{1}{n}=0$. Thus, combining theorems 1 and 3, we get $$\lim_{n\to\infty}b_n=\frac{1}{2}$$ Similarly $$\lim_{n\to\infty}\frac{n^2}{3n^2+1}= \lim_{n\to\infty}\frac{1}{3+\frac{1}{n^2}}=\frac{1}{3}$$ Now apply theorem 2 and the requested limit is $\frac{1}{6}$. More simply, you can directly do as in Olivier Oloa's answer. Mentioning the application of the above theorems is usually omitted (like Olivier did). You don't need to check the limit with an $\varepsilon$-$\delta$ proof, because you're applying theorems that have been proved correct and the known fact that $\lim_{n\to\infty}\frac{1}{n}=0$. I will just elaborate on Olivier Oloa's answer. In the comments you mention you are unsure whether you should use "that $\epsilon$ proof". You don't have to (but sort of are anyway) and here's why. 
The definition of $\lim_{n\to \infty} a_n = a$ is something like this $$\forall \epsilon>0 \;\exists n_0\in \mathbb N,\; \forall n \geq n_0: |a_n - a| < \epsilon$$ Now, if you had just that and nothing else, you would indeed need to prove that your sequences converges to $1/6$ using the definition, i.e. "that $\epsilon$ proof" Luckily, you've probably been shown (or even have proved) some basic results, namely things such as $$\lim_{n\to \infty} \frac 1n = 0$$ and (for $\lim a_n = a$ and $\lim b_n = b$) $$\lim_{n \to \infty} a_n + b_n = a+b$$ and so on. So altogether what you're doing in the answer you've given is repeatedly using all these already proven theorems and rules to prove what your limit goes to (I recommend very carefully going through your argument to see what exact rules you've used). The theorems themselves are proven by the whole $\epsilon$ machinery, giving you the power to prove the convergence of the sequence without using a single $\epsilon$ Example: Prove that $$\lim_{n\to\infty} \frac 2n = 0$$ Now, you could do this (and quite easily) through the definition, i.e. doing the $\epsilon$ proof. But, using the two facts I wrote down earlier, you could argue $$\lim_{n\to\infty} \frac 2n = \lim_{n\to\infty} \frac 1n + \frac 1n = \lim_{n\to\infty} \frac 1n + \lim_{n\to\infty} \frac 1n = 0 + 0 = 0$$ While this might be a quite convoluted usage (what rule could you use to prove this perhaps more directly?), it illustrates how one escapes the need to prove things from definition by using already proven facts/theorems.
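To complement the theorem-based arguments above, here is a quick numerical illustration (Python; a sanity check only, since no finite computation proves convergence):

```python
def a(n):
    """The sequence from the question: a_n = (n+1) n^2 / ((2n+1)(3n^2+1))."""
    return (n + 1) * n**2 / ((2 * n + 1) * (3 * n**2 + 1))

# The values drift toward 1/6 = 0.1666... as n grows.
for n in (10, 1_000, 100_000):
    print(n, a(n))
```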
https://math.stackexchange.com/questions/2189714/polynomial-form-of-detaxb/2189721
# Polynomial form of $\det(A+xB)$ Let $A$ and $B$ be two $2 \times 2$ matrices with integer entries. Prove that $\det(A+xB)$ is an integer polynomial of the form $$P(x) = \det(A+xB) = \det(B)x^2+mx+\det(A).$$ I tried expanding $\det(A+xB)$ for two arbitrary matrices, but it got computational. Is there another way? • Why not the expansion? It is not so much work and also gives the value of $m$. – Emilio Novati Mar 16 '17 at 17:37 • @Widawensen I also wanted to generalize and not just for $2 \times 2$. – user19405892 Mar 16 '17 at 17:38 • @user19405892 Ok. I see. – Widawensen Mar 16 '17 at 17:39 The constant term $\mathrm{det}(A)$ comes from setting $x = 0.$ The coefficient $\mathrm{det}(B)$ is the constant term of $x^2 P(1/x) = \mathrm{det}(xA + B),$ again setting $x = 0.$ • Can you explain why $x^2P(1/x) = \det(xA+B)$? – user19405892 Mar 16 '17 at 17:32 • By homogeneity $\mathrm{det}(xA+B) = x^2 \mathrm{det}(A + B/x)$ – user399601 Mar 16 '17 at 17:45 $$\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}+x\begin{pmatrix}b_{11}&b_{12}\\b_{21}&b_{22}\end{pmatrix}=\begin{pmatrix}a_{11}+xb_{11} &a_{12}+xb_{12}\\a_{21}+xb_{21}&a_{22}+xb_{22}\end{pmatrix}=C$$ $$\det(C)= a_{11}a_{22}+a_{11}xb_{22}+a_{22}xb_{11}+x^2b_{11}b_{22}- a_{21}a_{12}-a_{21}xb_{12}-a_{12}xb_{21}-x^2b_{21}b_{12}$$ $$=\det(A)+x\left[\det\begin{pmatrix}a_{11}&a_{12}\\b_{21}&b_{22}\end{pmatrix}+\det\begin{pmatrix}b_{11}&b_{12}\\a_{21}&a_{22}\end{pmatrix}\right]+x^2\det(B)$$ Another approach. All matrices are complex valued. This argument can be extended to $n\times n$ matrices: see the end of the post. (EDIT sorry, I have found another error, the general case is incomplete). First step: Assume that $A=I$ and let $\lambda_1, \lambda_2$ denote the eigenvalues of $B$. (They might not be distinct, this does not affect the argument).
Then $$\tag{1}\det(I+xB)=(1+x\lambda_1)(1+x\lambda_2)=1 + x\,\mathrm{trace} (B) + x^2 \det B.$$ Here we use the fact that the eigenvalues of $I+xB$ are $1+x\lambda_1, 1+x\lambda_2$ and that the determinant equals the product of the eigenvalues. Second step: Now assume that $A$ is invertible. Write $$\tag{2}\det(A+xB)=\det(A)\det(I+xA^{-1}B) = \det(A) + x\,\mathrm{trace}(\det(A)A^{-1}B) + x^2\det(B).$$ Here we used the first step together with Binet's theorem (i.e., $\det(MN)=\det(M)\det(N)$). Final step Now we remove the invertibility assumption on $A$ with a continuity argument. Note that, if $A=\begin{bmatrix} a & b \\ c & d\end{bmatrix}$ is invertible, one has $$\det(A)A^{-1}=\begin{bmatrix} d & -b \\ -c & a\end{bmatrix}.$$ If one rewrites formula (2) as follows $$\tag{3} \det(A+xB)=\det(A) + x\, \mathrm{trace}\left( \begin{bmatrix} d & -b \\ -c & a\end{bmatrix}B\right) + x^2 \det(B),$$ one sees at once that it holds true for all matrices, not only for invertible ones. More precisely, the formula (3) is true for all invertible $A$ and invertible matrices are a dense subset of the space of all matrices, so the formula extends by continuity. When all matrices are integer valued, the term $$m=\mathrm{trace}\left( \begin{bmatrix} d & -b \\ -c & a\end{bmatrix}B\right)$$ is integer and this terminates the proof. The $n\times n$ case. In the general case, one obtains in the $A=I$ case the formula $$\begin{split}\det(I+xB)&=1+\mathrm{trace}(B)x + \sum_{i<j} \lambda_i\lambda_j x^2+\ldots +\sum_{i_1<\ldots<i_{n-1}} \lambda_{i_1}\ldots\lambda_{i_{n-1}}x^{n-1} + \det(B)x^n \\ &= 1+\mathrm{trace}(B)x+p_2(B)x^2+\ldots + p_{n-1}(B)x^{n-1}+\det B x^n.\end{split}$$ The coefficients $p_2(B)\ldots p_{n-1}(B)$ are the sums of all principal minors of $B$ of order $2,\ldots n-1$ respectively. 
Therefore, using the same argument as before, we obtain in the case of invertible $A$ the formula $$\det(A+xB)=\det(A) + \mathrm{trace}(\det A A^{-1}B) x+ \det A p_2( A^{-1}B)x^2+\ldots+ \det A p_{n-1}( A^{-1}B)x^{n-1} + \det (B) x^n.$$ To remove the invertibility assumption on $A$, the formula one should use is the following: $$\det(A)A^{-1}=\mathrm{adj}(A),$$ where $\mathrm{adj}(A)$ denotes the adjugate matrix of $A$. The final result is (warning: incomplete) $$\tag{4} \det(A+xB)=\det(A) + \mathrm{trace}(\mathrm{adj}(A)B) x+ \det A p_2(A^{-1}B)x^2+\ldots +\det A p_{n-1}(A^{-1}B)x^{n-1} + \det (B) x^n.$$ TO DO Find formulas for $\det A\cdot p_{2}(A^{-1}B), \ldots, \det A\cdot p_{n-1}(A^{-1}B)$ which do not rely on invertibility of $A$. • So the polynomial is always a quadratic for any $n$? Shouldn't it be $n$? – user19405892 Mar 16 '17 at 20:27 • @user19405892: I have added the $n\times n$ case, I hope it is correct and that it can be useful to you. – Giuseppe Negro Mar 19 '17 at 16:31 • @user19405892: I am sorry, there was an error in the general formula. The coefficients of $x$ and $x^n$ are correct. The others are correct for an invertible $A$, I need to find a way to express them for general $A$. – Giuseppe Negro Mar 20 '17 at 11:47 • Your "TO DO" implies finding a general formula for $\det(A+B)$ for two matrices $A$ and $B$. (You have an $x$ there as well, but obviously you can let it get swallowed by the $B$.) I know of only two such formulas. One is what you get by expanding the Leibniz expansion and then combining terms (spelled out in all the painful details in Theorem 5.157 of my Notes on the combinatorial fundamentals of algebra, version of 15 February 2017. It is sometimes useful, but completely impractical in general. ... – darij grinberg Mar 23 '17 at 19:50
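For the $2\times 2$ case, the identity together with $m=\mathrm{trace}(\mathrm{adj}(A)B)$ is easy to spot-check in exact integer arithmetic. A sketch in Python (the matrices are arbitrary choices of mine for illustration):

```python
def det2(M):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def add(A, B, x):
    """Entrywise A + x*B."""
    return [[A[i][j] + x * B[i][j] for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 4]]    # arbitrary integer matrices
B = [[5, -1], [0, 2]]

# det(A + xB) is a quadratic in x; recover its coefficients exactly from
# the values at x = 0, 1, -1.
p0, p1, pm1 = det2(add(A, B, 0)), det2(add(A, B, 1)), det2(add(A, B, -1))
c0 = p0                        # constant term, should equal det(A)
c2 = (p1 + pm1) // 2 - p0      # leading term, should equal det(B)
c1 = (p1 - pm1) // 2           # the middle coefficient m

# m should also equal trace(adj(A) B), where adj(A) = det(A) A^{-1}.
adjA = [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]
m = (adjA[0][0] * B[0][0] + adjA[0][1] * B[1][0]    # (adj(A) B)_{00}
     + adjA[1][0] * B[0][1] + adjA[1][1] * B[1][1]) # (adj(A) B)_{11}

print(c2, c1, c0)   # 10 25 -2, i.e. det(B), m, det(A)
```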
https://mathematica.stackexchange.com/questions/88623/symbolic-area-calculation-for-a-parametric-self-intersecting-closed-curve/88637
# Symbolic area calculation for a parametric self-intersecting closed curve The parametric equation of the curve is: $$\begin{cases} x &= -9 \sin (2 t)-5 \sin (3 t) \\[6pt] y & = 9 \cos (2 t)-5 \cos (3 t) \end{cases}\quad t\in[0,2\pi]$$ which can be visualized as a star-shaped curve with five-fold symmetry (plot omitted). The implicit form: $$\begin{array}{rl} F(x,y)=&625(x^2+y^2)^3-36450y(5x^4-10x^2y^2+y^4)\\ &+585816(x^2+y^2)^2-41620992(x^2+y^2)+550731776=0 \end{array}$$ It is relatively easy to obtain a numerical value for the enclosed area. Is it possible to find the symbolic one? The key seems to be how to find the symbolic coordinates of the self-intersection points of the curve. • I think How to get intersection values from a parametric graph is useful for you. – xyz Jul 20 '15 at 1:21 • For my own part, I would be inclined to try it out, if I could copy-paste the formulas into Mathematica, but then I'm just a little bit lazy that way. – Michael E2 Jul 20 '15 at 14:38 • Closely related: A: Finding the centroid of the area between two curves – Jens Jul 20 '15 at 17:02 • @MichaelE2 Try `ContourPlot[625x^6+1875x^4y^2+1875x^2y^4+625y^6-182250x^4y+364500x^2y^3-36450y^5+585816x^4+1171632x^2y^2+585816y^4-41620992x^2-41620992y^2+550731776==0,{x,-15,15},{y,-15,15}]` – LCFactorization Jul 20 '15 at 22:48 Although Belisarius' creative solution is entirely satisfactory, a solution symbolic at every step may be useful. To begin, define x[t_] := -9 Sin[2 t] - 5 Sin[3 t] y[t_] := 9 Cos[2 t] - 5 Cos[3 t] and note that t = π corresponds to the uppermost point in the star in the question, {0, 14}. From there, the point {0, -5} can be reached by increasing or decreasing t by 2 π/5 + t0, where t0 is the change in t from the uppermost point to the nearest points at which two curve segments intersect. This quantity is obtained by Solve[x[π + t] - x[π - t] == 0, t] /.
C[1] -> 0; t0 = %[[4, 1, 2]] - 2 π/5 (* (3 π)/5 + ArcTan[(2 Sqrt[15 (5 - 3/10 (9 - Sqrt[181]))])/(9 - Sqrt[181])] *) (This simple derivation is based on the solution by Michael E2 to question 33947, as highlighted by Shutao Tang in a comment above.) Then, following Belisarius, we apply Green's Theorem. 5/2 Integrate[(y[t] D[x[t], t] - x[t] D[y[t], t]), {t, Pi + t0, Pi - t0}] // TrigExpand // FullSimplify (* -(252/625) Sqrt[3 (-68561 + 5154 Sqrt[181])] + 261 π - 435 ArcCot[Sqrt[1/33 (-79 + 6 Sqrt[181])]] *) The numerical value of this answer is 214.853, as expected. • +1 clear, concise and provides the analytic expression :) – ubpdqn Jul 22 '15 at 3:12 • Such a result is what was really expected. – LCFactorization Jul 22 '15 at 4:55 • Sorry, just to understand your sayings: How comes my answer isn't "a solution symbolic at every step" ? – Dr. belisarius Jul 23 '15 at 21:50 • @Belisarius Certainly, I meant no criticism. Your solution is creative and produces a quite satisfactory result. I merely meant to convey that the approach I used did not require using Root functions at an intermediate point. Would you like me to revise my first sentence? Best wishes. – bbgodfrey Jul 24 '15 at 1:35 • Ah,ok. Kind of a language barrier. Agree! – Dr. belisarius Jul 24 '15 at 2:16 The plan is first get the "external" contour and then use Green's theorem to find its area. r[t_] := {-9 Sin[2 t] - 5 Sin[3 t], 9 Cos[2 t] - 5 Cos[3 t], 0} (*find the intersections*) tr = Quiet@ToRules@Reduce[{r@t1 == r@t2, 0 < t1 < t2 < 2 Pi}, {t1, t2}]; pt = {t1, t2} /. {tr} // Flatten; pts = SortBy[pt, N@# &]; pps = Partition[pts, 2]; Now we can use Green's theorem to calculate the area by evaluating a line integral of a piecewise smooth function between those points. First we verify that the curve is oriented as expected: Show[ParametricPlot[Most@r@t, {t, #[[1]], #[[2]]}, PlotRange -> {{-14, 14}, {-14, 14}}] & /@ pps] /. 
Line :> Arrow So: (*Green's theorem, "vectorial" form*) k[{t2_, t1_}] = Integrate[Last@Cross[r@t, r'@t], {t, t1, t2}]/2; arean = k /@ pps; area = Total@arean; arean // N area // N (* {42.9706, 42.9706, 42.9706, 42.9706, 42.9706} *) (* 214.853 *) So the area is composed of five numerically equivalent integrals. Let's get a symbolic form by using the first of them (the easier one to work with). You can verify that if we define: ff[n_] := ArcTan[Sqrt@Root[1 - 2026 #1 + 67761 #1^2 - 2123042 #1^3 + 33867982 #1^4 - 251359006 #1^5 + 1020220287 #1^6 - 2365497302 #1^7 + 3139485186 #1^8 - 2365497302 #1^9 + 1020220287 #1^10 - 251359006 #1^11 + 33867982 #1^12 - 2123042 #1^13 + 67761 #1^14 - 2026 #1^15 + #1^16 &, n]]; then arean[[1]] == -((252*Sqrt[3*(-68561 + 5154*Sqrt[181])])/3125) - 174*(ff[1] - ff[2]) which is one fifth of the symbolic result you're after. We may try to quickly verify if the result is reasonable by doing some image processing. We'll use a somewhat large image size (3000 px width) to get some accuracy: lims = {-1, 1} + # &/@ N@Outer[#2[#1, t] &, Most@r@t, {MinValue, MaxValue}]; img = Show[ParametricPlot[a Most@r@t, {t, #[[1]], #[[2]]}, {a, 0, 1}, PlotRange -> lims, Axes -> False, Mesh -> False, Frame -> False, ImageSize -> 3000] & /@ Partition[pts, 2]]; And now we count colored and white pixels counts = Last /@ (ImageData[Binarize@img] // Flatten // Tally) (* {6228712, 2366288} *) And comparing the area quotients calculated by both methods: Times @@ (Subtract @@@ lims)/area // N (* 3.64587 *) Tr@counts/counts[[2]] // N (* 3.63227 *) we see that the results agree within reasonable limits. • Exploiting the symmetry of the curve just might result in something simpler… – J. M.'s technical difficulties Jul 20 '15 at 4:36 • @J. M. I was doing that. Not very simple, though – Dr. belisarius Jul 20 '15 at 5:27
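The closed form can also be double-checked outside Mathematica. The sketch below (Python, standard library only; my own check, not part of the thread) evaluates bbgodfrey's expression, using $\operatorname{arccot} z=\arctan(1/z)$ for $z>0$, and independently re-derives the area with a trapezoid-rule version of the same Green's theorem line integral, with `t0` taken from that answer:

```python
import math

s = math.sqrt(181)

# Closed form from bbgodfrey's answer (ArcCot[z] rewritten as arctan(1/z)).
area = (-(252 / 625) * math.sqrt(3 * (-68561 + 5154 * s))
        + 261 * math.pi
        - 435 * math.atan(1 / math.sqrt((-79 + 6 * s) / 33)))

# t0 from the same answer: 3 pi/5 + ArcTan[2 Sqrt[15 (5 - 3/10 (9 - Sqrt[181]))]/(9 - Sqrt[181])]
t0 = 3 * math.pi / 5 + math.atan(2 * math.sqrt(15 * (5 - 0.3 * (9 - s))) / (9 - s))

def integrand(t):
    # y x' - x y' for the parametric curve.
    x = -9 * math.sin(2 * t) - 5 * math.sin(3 * t)
    y = 9 * math.cos(2 * t) - 5 * math.cos(3 * t)
    dx = -18 * math.cos(2 * t) - 15 * math.cos(3 * t)
    dy = -18 * math.sin(2 * t) + 15 * math.sin(3 * t)
    return y * dx - x * dy

# 5/2 Integrate[..., {t, pi + t0, pi - t0}] via the trapezoid rule.
a, b, n = math.pi + t0, math.pi - t0, 100_000
h = (b - a) / n
total = (integrand(a) + integrand(b)) / 2 + sum(integrand(a + i * h) for i in range(1, n))
numeric = 5 / 2 * h * total

print(area, numeric)   # both approximately 214.853
```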
https://math.stackexchange.com/questions/3196392/computing-the-order-of-9-31-in-mathbbz-31-mathbbz/3196421
# Computing the order of $[9]_{31}$ in $(\mathbb{Z}/31\mathbb{Z})^*$ A part of Aluffi's "Algebra: Chapter 0" exercise II.4.12 suggests computing the order of $$[9]_{31}$$ in $$(\mathbb{Z}/31\mathbb{Z})^*$$. Sure, I could just multiply $$9$$ a few times until I get $$1$$ as a remainder (and thus derive that the order in question is 15), but is there a better way? A few thoughts of mine: • Firstly, $$[9] = [3]^2$$, so it'd be sufficient to prove that $$[3]$$ is a generator (and indeed it is). But I was unable to do this efficiently. • Another attack direction is that, since $$31$$ is prime, one might note that $$(\mathbb{Z}/31\mathbb{Z})^*$$ is cyclic and, having $$30$$ elements, is isomorphic to $$\mathbb{Z}/30\mathbb{Z}$$. Maybe we could derive something meaningful by inspecting some isomorphism $$\varphi$$ between the two? I tried deriving what should it do to the elements of $$(\mathbb{Z}/31\mathbb{Z})^*$$, and I was able to figure out how it behaves on the powers of $$[2]$$, but it didn't bring me closer to understanding what it does to $$[3]$$ or $$[9]$$. So shall I just accept my fate and consider this to be an exercise in multiplication and division with remainder? • It might help to consider $$\operatorname{Aut}(\Bbb Z_{31})\cong (\Bbb Z/31\Bbb Z)^*.$$ Just as another perspective . . . – Shaun Apr 21 at 22:10 • Could you please expand on that? I tried myself and it wasn't particularly fruitful for me, but anything involving morphisms looks promising and elegant! – 0xd34df00d Apr 22 at 0:04 • The idea wasn't fully fledged, @0xd34df00d, I'm afraid; it would just have been the first place I'd look. Perhaps if you examine a proof of the isomorphism, something'd pop out. Sorry :) – Shaun Apr 22 at 5:39 By lil' Fermat and Lagrange's theorem, all non-zero elements in $$\mathbf Z/31\mathbf Z$$ have order a divisor of $$30$$. So the order of $$9$$ is among $$\;\{2, 3,5,6,10,15,30\}$$. 
It is not very long to check that, $$\bmod 31$$, $$\begin{gather}9^2\equiv -12, \quad 9^3\equiv -12\cdot 9=-108\equiv 16,\quad 9^5\equiv -12\cdot 16=-192\equiv -6,\\ 9^6\equiv-6\cdot 9=-54\equiv8, \quad 9^{10}\equiv 36\equiv 5,\quad 9^{15}\equiv 5\cdot -6=-30\equiv 1, \end{gather}$$ so $$9$$ has order $$15$$. • Since $9=3^2$ and since $3$ must have one of the orders you listed, $9$ must have order $3$ or $5$ or $15$. So you only need to do half of the checking in this answer. – Andreas Blass Apr 21 at 23:08 • Sure, but anyway, most of these computations are useful steps to compute the other powers. – Bernard Apr 21 at 23:18 • Agreed, but there are other ways to do the computations. Here's how I did them (I'm not claiming it's better, just different). Modulo 31, we have $3^3=27\equiv-4$, so $9^3\equiv(-4)^2=16$. And $3^5=243\equiv-5$ so $9^5\equiv(-5)^2=25\equiv-6$. In other words, I just kept taking advantage of the facts that $9=3^2$ and that I know some powers of $3$. (And I got lucky in that $3^5=243$ is very close to an obvious multiple $248$ of $31$.) – Andreas Blass Apr 21 at 23:29 This is a variant on Bernard's answer, mainly to show one way to reduce the amount of computation (which isn't a lot to begin with) needed to identify the order. As noted in Bernard's answer, the only possible orders for the nonzero elements of $$\mathbb{Z}/31\mathbb{Z}$$ are the divisors of $$30$$, i.e., $$1$$, $$2$$, $$3$$, $$5$$, $$6$$, $$10$$, $$15$$, and $$30$$. Since $$31\equiv3$$ mod $$4$$, $$-1$$ is not a square mod $$31$$. Therefore, since $$9$$ is a square, its order cannot be even. (If $$9^{2n}\equiv1$$ mod $$31$$, then $$9^n\equiv\pm1$$ mod $$31$$.) So in order to conclude that its order is $$15$$, it suffices to rule out $$3$$ and $$5$$ as orders. (The only element of order $$1$$ is $$1$$.) Note that $$9\cdot7=63\equiv1$$ mod $$31$$. Now $$9^2=81\equiv-12$$ mod $$31$$, so $$9^4\equiv144\equiv20$$. 
Since $$20$$ is neither $$9$$ nor $$7$$ mod $$31$$, we can conclude that neither $$9^3$$ nor $$9^5$$ is $$1$$ mod $$31$$. • @OP "...it suffices to rule out $3,5$" is a special case of the Order Test. – Bill Dubuque Apr 22 at 2:23
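Accepting one's fate is at least cheap in practice: the brute-force check is a few lines in most languages. A sketch in Python (the three-argument form of the built-in `pow` performs modular exponentiation):

```python
def order(a, p):
    """Multiplicative order of a modulo p, assuming gcd(a, p) = 1."""
    for k in range(1, p):
        if pow(a, k, p) == 1:
            return k

print(order(9, 31))   # 15
print(order(3, 31))   # 30, so 3 generates (Z/31Z)* and 9 = 3^2 has order 30/gcd(2, 30) = 15
```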
https://math.stackexchange.com/questions/310508/probability-of-choosing-two-equal-bits-from-three-random-bits
# Probability of choosing two equal bits from three random bits Given three random bits, pick two (without replacement). What is the probability that the two you pick are equal? I would like to know if the following analysis is correct and/or if there is a better way to think about it. $$\Pr[\text{choose two equal bits}] = \Pr[\text{2nd bit} = 0 \mid \text{1st bit} = 0] + \Pr[\text{2nd bit} = 1 \mid \text{1st bit} = 1]$$ Given three random bits, once you remove the first bit the other two bits can be: 00, 01, 11, each of which occurring with probability $\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}$. Thus, $$\Pr[\text{2nd bit} = 0] = 1\cdot\frac{1}{4} + \frac{1}{2}\cdot\frac{1}{4} + 0\cdot\frac{1}{4} = \frac{3}{8}$$ And $\Pr[\text{2nd bit} = 1] = \Pr[\text{2nd bit} = 0]$ by the same analysis. Therefore, $$\Pr[\text{2nd bit}=0 \mid \text{1st bit} = 0] = \frac{\Pr[\text{1st and 2nd bits are 0}]}{\Pr[\text{1st bit}=0]} = \frac{1/2\cdot3/8}{1/2} = \frac{3}{8}$$ and by the same analysis, $\Pr[\text{2nd bit} = 1 \mid \text{1st bit} = 1] = \frac{3}{8}$. Thus, $$\Pr[\text{choose two equal bits}] = 2\cdot\frac{3}{8} = \frac{3}{4}$$ • Not correct. Whatever the first bit picked, the probability the second bit matches it is $1/2$. – André Nicolas Feb 21 '13 at 20:22 • Why "$\Pr[\text{2nd bit} = 0] = 1\cdot\frac{1}{4} + \frac{1}{2}\cdot\frac{1}{4} + 0\cdot\frac{1}{4} = \frac{3}{8}$"? And "$\Pr[\text{choose two equal bits}] = \Pr[\text{2nd bit} = 0 \mid \text{1st bit} = 0] + \Pr[\text{2nd bit} = 1 \mid \text{1st bit} = 1]$" should be "$\Pr[\text{choose two equal bits}] = \Pr[\text{2nd bit} = 0, \text{1st bit} = 0] + \Pr[\text{2nd bit} = 1, \text{1st bit} = 1]$" – TMM Feb 21 '13 at 20:26 In your listing of possibilities you missed 10. 
That would get $\Pr[\text{2nd bit} = 0] =\frac 12$. Then when you say $\Pr[\text{choose two equal bits}] = \Pr[\text{2nd bit} = 0 \mid \text{1st bit} = 0] + \Pr[\text{2nd bit} = 1 \mid \text{1st bit} = 1]$ you need to multiply the first term by $\Pr[\text{1st bit} = 0]=\frac 12$ and the second by $\Pr[\text{1st bit} = 1]=\frac 12$. This will give a final answer of $\frac 12$. Really, starting with three bits is a red herring. The possibilities for the two bits are $00,10,01,11$, all equally likely. $2$ of the $4$ have the bits the same. Clearly the intuition tells us that the answer should be $\frac{1}{2}$, because choosing two of the random bits without replacement is just like simply choosing two fresh random bits, which are equal with probability $\frac{1}{2}$. There are 3 ways to choose two bits out of 3, i.e. choosing $B_1$ and $B_2$ or $B_2$ and $B_3$ or $B_1$ and $B_3$, which are equally likely, thus: $$\Pr\left(\text{two same bits}\right) = \frac{1}{3} \Pr\left(B_1 = B_2\right) + \frac{1}{3} \Pr\left(B_2 = B_3\right) + \frac{1}{3} \Pr\left(B_1 = B_3\right)$$ since each bit is independent and equally likely to be either 0 or 1, $\Pr\left(B_1 = B_2\right) = \Pr\left(B_2 = B_3\right) = \Pr\left(B_1 = B_3\right) = \frac{1}{2}$. Whatever the first bit picked, the probability the second bit matches it is $1/2$. Remark: We are assuming what was not explicitly stated, that $0$'s and $1$'s are equally likely. One can very well have "random" bits where the probability of $0$ is not the same as the probability of $1$. I hope I'm not repeating anyone else's answer here. Since there are 3 bits, there are $2^3$ possible combinations. We can break them down into 3 categories: Category 1: all bits are the same, i.e. $\{000,111 \}$. Category 2: two bits are 0, one is 1. Category 3: two bits are 1, one is 0. Clearly the last category is the same as 2.
Hence your law of total probability should be something like $$P(\text{sample two same bits})=P(\text{sample 2 same bits}\mid C_1)P(C_1) +2P(\text{sample 2 same bits}\mid C_2)P(C_2)\\ =1 \cdot \frac{2}{2^3} + 2 \cdot \frac{2}{3} \cdot \frac{1}{2} \cdot \binom{3}{1}\cdot \frac{1}{2^3}=\frac{1}{2}$$ Explanation: $\binom{3}{1} \cdot \frac{1}{2^3}$ is the probability of the outcome 'two bits are the same 1, one is different' (since you have 3 slots). $\frac{2}{3} \cdot \frac{1}{2}$ is the probability of sampling two of those equal bits without replacement. The factor 2 is of course due to the 2 options (0 or 1).
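Because the sample space is tiny, the answer can also be confirmed by exhaustive enumeration; a sketch in Python:

```python
from itertools import combinations, product

# All 8 equally likely bit triples, and for each triple all 3 equally likely
# unordered pairs of positions to draw from.
favorable = total = 0
for bits in product((0, 1), repeat=3):
    for i, j in combinations(range(3), 2):
        total += 1
        favorable += (bits[i] == bits[j])

print(favorable, total)   # 12 24, i.e. probability 1/2
```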
http://math.stackexchange.com/questions/718727/does-nth-roots-with-non-natural-indexes-exists
# Do nth roots with non-natural indexes exist? My high school math teacher stated that roots with non-natural indexes are meaningless, just like $\frac{\infty}{\infty}$ or $0^0$, because "mathematicians decided so and so it is unless we change axioms". It doesn't make sense to me. Isn't $\sqrt[\frac 1a]{x}$ just $x^{a}$ for every $a\in\mathbb{R}$ (or even $\mathbb{C}$), or am I missing something? - If $\alpha\ne 0$, the $\alpha$-th root of a non-negative real certainly exists. One is more likely to use the notation $x^{1/\alpha}$ for it than $\sqrt[\alpha]{x}$. – André Nicolas Mar 19 '14 at 19:25 Is there a way I can show my teacher that she's wrong? Even just by citing a theorem that bears on this. – mattecapu Mar 19 '14 at 19:55 Your teacher is undoubtedly aware that $x^\beta$ is defined for positive $x$ and non-zero $\beta$, in particular for $\beta=1/\alpha$. So it would be a question of finding a paper or other publication that used the notation $\sqrt[\alpha]{x}$. I have no immediate example. – André Nicolas Mar 19 '14 at 20:01 Yeah, she knows it. But she denies that $\sqrt[\frac13]{x}=x^3$ while she accepts that $\sqrt[2]{x}=x^{\frac12}$... – mattecapu Mar 20 '14 at 17:08 We can define exponentiation for every real exponent as follows: Let $\alpha\in\mathbb{R}$ and let $x\in \mathbb{R}_+$ be a non-negative real number.
We define $x^{\alpha}$ as $$x^{\alpha}:=\text{exp}(\alpha\ln(x))$$ With this definition one can prove all the 'exponent laws', for example $$x^{\alpha}x^{\beta}=x^{\alpha+\beta}$$ Another approach is to start with exponentiation of natural numbers, defined recursively as follows: for $n\in\mathbb{N}$ and $x\in\mathbb{R}_+$, $$x^n=x^{n-1}\cdot x$$ $$x^1=x$$ Then we can extend this definition, allowing the exponent to be an integer, by putting $$x^{-1}=\frac{1}{x}$$ To allow rational exponents, we define the $n$-th root as the (unique positive) solution to $y^n=x$ and we denote it as $$x^{\frac{1}{n}}=y$$ The final step needs the supremum (least upper bound) axiom: let $\alpha\in\mathbb{R}$ and $x\in\mathbb{R}_+$, $$x^{\alpha}:=\text{sup}\lbrace x^r \mid r\in\mathbb{Q} \,\,\text{ and }\,\, r\leq\alpha\rbrace$$ One can prove that the definition given at the beginning of this post and this last one are equivalent. - Thank you! What is the purpose of the last statement? – mattecapu Mar 20 '14 at 17:11 @mattecapu Just to be reassured that both approaches are the same, i.e. they give the same answer. – Chazz Mar 20 '14 at 22:17 Otherwise $x^\alpha$ wouldn't be an increasing function? – mattecapu Mar 21 '14 at 17:19 @mattecapu You can prove any property of exponentiation with any of the definitions given above. When one is defining exponentiation it is more natural to proceed by first defining it for $\mathbb{N}$, then for $\mathbb{Z}$, then for $\mathbb{Q}$ and finally for $\mathbb{R}$ as we outline above. But this is a little bit complicated to work with, so one proves $x^{\alpha} =\text{sup} \lbrace x^{r}\mid r\in\mathbb{Q}\,\text{ and }\,r\leq\alpha \rbrace=\text{exp}(\alpha\ln(x))$, as the latter is easier to work with. – Chazz Mar 21 '14 at 17:54 Get it, thanks! – mattecapu Mar 21 '14 at 22:23 You are correct. Natural powers are easy to define (repeated multiplication). If negative powers are to work nicely with positive integer powers, they must represent reciprocals.
If rational powers are to work nicely with integer powers, they must represent natural roots. The way to define irrational powers is trickier, but perfectly well-defined with a bit of calculus (continuous extension of a function defined on a dense subset). As an easy counter-example to "no non-natural roots", $\sqrt[\frac{1}{2}]{2} = 2^\frac{1}{\frac{1}{2}} = 2^2 = 4$. However, even irrational roots make sense. If you have a sequence of rational numbers ($(q_n)\in\mathbb{Q}^\mathbb{N}$) that converges to $\pi$, then $\sqrt[\pi]{a} = \lim_{n\rightarrow\infty} \sqrt[q_n]{a} = \lim_{n\rightarrow\infty} a^{\frac{1}{q_n}}$. Admittedly, proving that statement makes sense is complicated, but I think it makes intuitive sense. From Wolfram|Alpha: http://www.wolframalpha.com/input/?i=2%5E%5B1%2FPi%5D . -
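The definitions quoted above are easy to probe numerically. Below is a minimal Python sketch (the helper names `rpow` and `root` are mine, not from the answers) of the rule $x^{\alpha}:=\exp(\alpha\ln x)$, used to confirm the counter-example $\sqrt[1/2]{2}=2^2=4$ and to evaluate an irrational index:

```python
import math

def rpow(x, alpha):
    """x**alpha for positive real x and arbitrary real alpha,
    via the definition x^alpha := exp(alpha * ln(x))."""
    if x <= 0:
        raise ValueError("base must be positive")
    return math.exp(alpha * math.log(x))

def root(x, alpha):
    """The alpha-th root of x, i.e. x^(1/alpha), for alpha != 0."""
    return rpow(x, 1.0 / alpha)

# The (1/2)-th root of 2 is 2^(1/(1/2)) = 2^2 = 4.
print(root(2, 0.5))

# Consistency with natural roots: cubing the cube root recovers x.
print(rpow(root(10, 3), 3))

# Even an irrational index such as pi makes sense: 2^(1/pi).
print(root(2, math.pi))
```

Up to floating-point rounding, the first two prints give 4 and 10, in line with the counter-example in the answer.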
http://math.stackexchange.com/questions/293171/conditional-probability
# Conditional Probability? I have a homework question that states that Bowl A has four red and two white chips and that Bowl B has three red and two white chips. A chip is drawn at random from bowl A and put into bowl B. After the chip is put into bowl B, what is the probability that I draw a red chip from bowl B? I have tried this: $$\frac{2}{6}\times\frac{3}{5}+\frac{4}{6}\times\frac{4}{5}$$ But got the answer wrong so I figure I'm on the right track but still did something the wrong way. - From which bowl are you drawing the chip? – David Mitra Feb 2 '13 at 22:41 I am drawing from Bowl A – Brian Hauger Feb 2 '13 at 22:45 There is a $\color{maroon}{4\over6}$ chance that bowl $A$ winds up with three red chips and two white chips, and a $\color{darkgreen}{2\over6}$ chance that it winds up with four red chips and one white chip. The probability that you chose red is $\color{maroon}{4\over 6}\cdot {3\over5}+\color{darkgreen}{2\over6}\cdot{4\over5}$. – David Mitra Feb 2 '13 at 22:49 Oh wait I just got what you meant by your first comment. It is bowl B that I draw the final chip from. – Brian Hauger Feb 2 '13 at 22:53 Can you see where you went wrong now? (Bowl $B$ has a $4/6$ chance of winding up with four red chips and two white chips, and a $2/6$ chance of winding up with three red chips and three white chips.) – David Mitra Feb 2 '13 at 22:58 It seems you miscounted the total number of chips in bowl $B$ after the chip from bowl $A$ was put in. After the chip taken from bowl $A$ is put into bowl $B$: • The probability that bowl $B$ has four red chips and two white chips is $4/6$ (this is just the probability that a red chip was initially selected from bowl $A$). • The probability that bowl $B$ has three red chips and three white chips is $2/6$ (this is just the probability that a white chip was initially selected from bowl $A$). Let $R$ be the event that you chose a red chip from bowl $B$ after the chip from bowl $A$ was put in.
$P(R)$ can be found by conditioning on what type of chip was initially chosen from bowl $A$: $\ \ \$Let $A_r$ be the event that the chip chosen from bowl $A$ and put into bowl $B$ was red. $\ \ \$Let $A_w$ be the event that the chip chosen from bowl $A$ and put into bowl $B$ was white. Then \eqalign{ P(R)&=P(R\cap A_r)+P(R\cap A_w)\cr &=P(A_r)P(R\,|\,A_r)+P(A_w)P(R\,|\,A_w)\cr &=\textstyle{4\over 6}\cdot {4\over6} +{2\over 6}\cdot {3\over6}\cr &=\textstyle{22\over36}={11\over18}.} (All of this assumes, of course, equally likely outcomes with regard to which chip is chosen from bowl $A$, and with regard to which chip you chose from bowl $B$ afterwards.) -
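The decomposition above is easy to reproduce with exact rational arithmetic. Here is a short Python sketch (variable names are my own) of the law of total probability used in the answer:

```python
from fractions import Fraction

# Bowl A holds 4 red and 2 white chips -> colour of the transferred chip.
p_Ar = Fraction(4, 6)   # P(A_r): a red chip is moved to bowl B
p_Aw = Fraction(2, 6)   # P(A_w): a white chip is moved to bowl B

# Bowl B (3 red, 2 white) then holds 6 chips in either case.
p_R_given_Ar = Fraction(4, 6)   # B became 4 red / 2 white
p_R_given_Aw = Fraction(3, 6)   # B became 3 red / 3 white

p_R = p_Ar * p_R_given_Ar + p_Aw * p_R_given_Aw
print(p_R)  # 11/18
```

`Fraction` normalizes automatically, so the result prints in lowest terms as 11/18.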
https://math.stackexchange.com/questions/1850784/statement-about-divisors-of-polynomials-and-their-roots/1850934
# Statement about divisors of polynomials and their roots I am a 10th grade student and there is a statement in my math book If $a$ is a root of the polynomial $f(x)$ then $(x-a)$ is a divisor of $f(x)$ Why is $(x-a)$ a divisor of $f(x)$? Can you please tell me? • Since $a$ is a root, $f(a)=0$. Therefore your polynomial includes the factor $(x-a)$. You might observe an example. $x^2-2x+1=0$ has the solutions $x_1=x_2=1$. You can factor it as $(x-1)^2=0$ with the binomial formula. – MrTopology Jul 6 '16 at 10:44 • It's an immediate result of the factor theorem – Zack Ni Jul 6 '16 at 10:45 • Have you learned about synthetic division? If $f(x),g(x)$ are arbitrary polynomials, we can "divide $f$ by $g$" in the sense that we can write $f(x)=g(x)q(x) +r(x)$ where $q(x), r(x)$ are also polynomials and $deg(r(x))<deg(g(x))$. Can you see that this suffices? – lulu Jul 6 '16 at 10:46 • @TheGreatDuck Why are you saying this??!!! a root of a polynomial is simply a number you plug in that makes it zero, the fact that $(x-\text{root})$ divides the polynomial is a very different story. For instance, one talks about roots of functions (not necessarily polynomials) where this theorem actually does not hold. – Daniel Jul 6 '16 at 23:22 • @SolidSnake i forgot that the dividing point is zero. – user64742 Jul 7 '16 at 0:46 It's great that you are curious about the reasons for the statements that are taught to you! For this, you have to know a little bit about long division of polynomials. Just like integers, we can divide polynomials, obtaining a quotient and a remainder. More precisely: Given any polynomials $f$ and $g$, there exist polynomials $q$ (the quotient) and $r$ (remainder) such that $$f = q\cdot g + r$$ and the degree of $r$ is strictly smaller than the degree of $g$. Now, try to prove your theorem. At first, assume that $a$ is a root of $f(x)$, set $g(x) = x-a$ and apply long division (I'm sure you can do it).
The procedure is below, but try to do it by yourself at first. If we apply long division, we get $q$ and $r$ such that $f = q\cdot (x-a) + r$ and $r$ has degree $0$ (why?), so $r$ is a constant. Since $f(a)=0$, we get $0=f(a)=q(a)\cdot (a-a) + r = 0 + r = r$, so $r=0$ and therefore $f = q\cdot (x-a)$. The other direction is even easier: if $f(x) = q(x)\cdot(x-a)$, can you see why $f(a)=0$? • This is a verification. But isn't there a proof? I already know about polynomial division and this way of verifying. – MartianCactus Jul 6 '16 at 17:46 • @Adi this is a proof. It's also an 'if and only if' statement or iff. As in $f(a)=0 \text{ iff } (x-a)|f(x)$ – snulty Jul 6 '16 at 19:22 • @Adi Why do you say this is a verification? usually, when I think of "verification", I think of particular cases, but we're being very general here. – Daniel Jul 6 '16 at 23:19 • I guess that he means that the existence of the remainder and quotient is not proved here. – YoTengoUnLCD Jul 7 '16 at 1:34 Let $$f (x)=a_n x^n+... +a_1 x+a_0$$ Suppose $f (r)=0$. Hence $$a_n r^n +... + a_1 r +a_0 =0$$ Then $$f (x)=a_n x^n + ... + a_1 x + a_0 - ( a_n r^n +... + a_1 r +a_0)$$ since the expression between parentheses is zero. After reordering, $$f (x) = a_n (x^n - r^n) + ... + a_1 ( x-r)$$ Note that $$b^n - t^n= (b-t)(b^{n-1} + b^{n-2} t+... + b t^{n-2}+ t^{n-1})$$ (can you check it?) Hence \begin{align} f (x)&= a_n (x-r)(x^{n-1}+...+r^{n-1})+...+a_1 (x-r)\\&= (x-r)(a_n (x^{n-1}+...+r^{n-1})+...+a_1) \end{align} For example, suppose $$f (x)= a_2 x^2+a_1 x + a_0$$ and $f (r)=0$. Hence \begin{align} f (x) &= a_2 x^2 + a_1 x + a_0 - ( a_2 r^2 + a_1 r + a_0)\\&= a_2 (x-r)(x+r)+ a_1 (x-r)\\&= (x-r)(a_2 (x+r)+ a_1) \end{align} • I like this answer because it works from first principles and doesn't require the reader to know the factor theorem or remainder theorem or how to do polynomial division. (Though Adi, if you're reading, those things are well worth knowing.)
– Gareth McCaughan Jul 6 '16 at 16:47 This is essentially the Factor Theorem, which is a consequence of the Remainder Theorem. If you let the polynomial $f(x)$ be represented as $f(x) = (x-a)Q(x) + R$, then you will note that the remainder $R = 0$ if and only if $f(a) = 0$ (i.e. $a$ is a root of $f(x)$). In this circumstance, the polynomial may be represented by $f(x) = (x-a)Q(x)$ and therefore $f(x)$ is divisible by $(x-a)$. Recall the remainder theorem for polynomials: $$f(x) = (x-a) q(x) + r$$ where $r = f(a)$.
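The synthetic division mentioned in the comments makes both theorems concrete. The sketch below (my own helper, not from the answers) divides $f$ by $(x-a)$ and returns the quotient and the remainder; the remainder always equals $f(a)$, and it vanishes exactly when $a$ is a root:

```python
def divide_by_linear(coeffs, a):
    """Synthetic division of the polynomial with coefficients `coeffs`
    (highest degree first) by (x - a).
    Returns (quotient coefficients, remainder); the remainder is f(a)."""
    acc = coeffs[0]
    quotient = [acc]
    for c in coeffs[1:]:
        acc = c + a * acc
        quotient.append(acc)
    return quotient[:-1], quotient[-1]

# f(x) = x^2 - 2x + 1 = (x - 1)^2: a = 1 is a root, so the remainder is 0
# and the quotient is x - 1.
q, r = divide_by_linear([1, -2, 1], 1)
print(q, r)  # [1, -1] 0

# For a non-root such as a = 3, the remainder equals f(3) = 9 - 6 + 1 = 4.
q, r = divide_by_linear([1, -2, 1], 3)
print(q, r)  # [1, 1] 4
```

This is just lulu's division identity $f(x)=g(x)q(x)+r(x)$ specialized to $g(x)=x-a$, where the remainder is forced to be a constant.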
https://math.stackexchange.com/questions/1242905/prove-that-the-sequence-b-n-converges
# Prove that the sequence $(b_n)$ converges Prove that if $(a_n)$ converges and $|a_n - nb_n| < 2$ for all $n \in \mathbb N^+$ then $(b_n)$ converges. Is the following proof valid? Proof Since $(a_n)$ converges, $(a_n)$ must be bounded, i.e. $\exists M \in \mathbb R^+$ such that for each $n \in \mathbb N^+$, we have $|a_n| < M$. Now, by the triangle inequality, $|nb_n|$ = $|nb_n - a_n + a_n| \le |nb_n - a_n| + |a_n| < 2 + M$. Hence, $|b_n - 0| < \frac{2 + M}{n}$. Let $\epsilon > 0$ be given, and by the Archimedean Property of $\mathbb R$, we can choose $K \in \mathbb N^+$ such that $K > \frac {2+M}{\epsilon}$. Then, $n \ge K \implies n > \frac {2 + M}{\epsilon} \implies |b_n - 0| < \epsilon$. Therefore $(b_n)$ converges, and its limit is $0$. • Yes. It is correct – HK Lee Apr 20 '15 at 4:35 Your proof is correct. In fact, once you have $$|b_n|<\frac{M+2}{n}$$ it is clear that since the numerator is bounded one can make the right hand side $<\epsilon$ for arbitrarily small $\epsilon >0.$
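The heart of the proof is the bound $|b_n| < \frac{2+M}{n}$. A quick numerical sanity check in Python (the particular sequences below are my own illustrative choices, not part of the problem):

```python
# A concrete pair satisfying the hypothesis: a_n -> 2 (so it is bounded,
# say by M = 4), and n*b_n stays within distance 2 of a_n.
def a(n):
    return 2 + 1 / n

def b(n):
    return (a(n) + 1) / n      # then |a_n - n*b_n| = 1 < 2

M = 4
for n in (1, 10, 100, 10_000):
    assert abs(a(n)) < M                 # (a_n) is bounded
    assert abs(a(n) - n * b(n)) < 2      # the hypothesis of the problem
    assert abs(b(n)) < (2 + M) / n       # the bound derived in the proof

print(b(10_000))  # already very close to the limit 0
```

Since $(2+M)/n \to 0$, the squeeze forces $b_n \to 0$, exactly as the proof concludes.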
https://math.stackexchange.com/questions/775439/calculate-lim-n-rightarrow-infty-frac1n-left-prod-k-1n-leftn3k-1
# Calculate $\lim_{n\rightarrow\infty}\frac{1}{n}\left(\prod_{k=1}^{n}\left(n+3k-1\right)\right)^{\frac{1}{n}}$ I'm in need of some assistance regarding a homework question: "calculate the following: $\lim_{n\rightarrow\infty}\frac{1}{n}\left(\prod_{k=1}^{n}\left(n+3k-1\right)\right)^{\frac{1}{n}}$" Alright so since this question is in the chapter for definite integrals (and because it is similar to other questions I have answered) I assumed that I should play a little with the expression inside the limit and change the product to some Riemann sum of a known function. OK so I've tried that but with no major breakthroughs... Any hints and help are appreciated, thanks! • Have you tried logarithms? – 5xum Apr 30 '14 at 9:02 • Yes and I wasn't able to come up with something useful – user475680 Apr 30 '14 at 9:31 The product $P_n$ may be expressed as follows: $$P_n = \left [ \prod_{k=1}^n \left (1+\frac{3 k-1}{n}\right ) \right ]^{1/n}$$ so that $$\log{P_n} = \frac1{n} \sum_{k=1}^n \log{\left (1+\frac{3 k-1}{n}\right )}$$ as $n \to \infty$, $P_n \to P$ and we have $$\log{P} = \lim_{n \to \infty} \frac1{n} \sum_{k=1}^n \log{\left (1+\frac{3 k-1}{n}\right )} = \lim_{n \to \infty} \frac1{n} \sum_{k=1}^n \log{\left (1+\frac{3 k}{n}\right )}$$ which is a Riemann sum for the integral $$\log{P} = \int_0^1 dx \, \log{(1+3 x)} = \frac13 \int_1^4 du \, \log{u} = \frac13 [u \log{u}-u]_1^4 = \frac{8}{3} \log{2}-1$$ Therefore, $$P = \frac{2^{8/3}}{e}$$ • Ah I see now, really cool solution, but I don't understand this line: $\lim_{n \to \infty} \frac1{n} \sum_{k=1}^n \log{\left (1+\frac{3 k-1}{n}\right )} = \lim_{n \to \infty} \frac1{n} \sum_{k=1}^n \log{\left (1+\frac{3 k}{n}\right )}$. How do you get rid of the (-1) in the numerator? – user475680 Apr 30 '14 at 9:50 • @user475680: because the log is continuous and you may bring the limit operator inside the log, so you essentially are taking the limit of $1/n$, which obviously vanishes. – Ron Gordon Apr 30 '14 at 9:53
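The asymptotics are easy to confirm numerically. Working with logarithms, exactly as the comments suggest, avoids overflow of the huge product; this Python sketch compares $r(n)=\frac1n\bigl(\prod_{k=1}^n(n+3k-1)\bigr)^{1/n}$ with the claimed limit $2^{8/3}/e\approx 2.336$:

```python
import math

def r(n):
    """(1/n) * (prod_{k=1}^n (n + 3k - 1))^(1/n), computed via logs:
    this equals exp((1/n) * sum_k log(1 + (3k-1)/n))."""
    s = sum(math.log(1 + (3 * k - 1) / n) for k in range(1, n + 1))
    return math.exp(s / n)

limit = 2 ** (8 / 3) / math.e

for n in (10, 1000, 100_000):
    print(n, r(n), limit)  # the gap shrinks roughly like 1/n
```

The discrepancy behaves like the usual $O(1/n)$ error of a Riemann sum, so already moderate values of $n$ land within a few thousandths of the limit.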
https://math.stackexchange.com/questions/2326685/uncountable-sets-in-countable-models-of-zfc/2527876#2527876
# Uncountable sets in countable models of ZFC If we assume ZFC to be consistent we have, by the Löwenheim-Skolem theorem, the existence of a countable model $$\mathcal{U}_0$$ of ZFC. In $$\mathcal{U}_0$$ there is an infinite ordinal, that is, a non-empty limit ordinal. Call the smallest one $$\omega$$. We can also construct the cardinal $$2^\omega := \mathrm{card}(\wp (\omega))$$, since the existence of the power set is given by the axioms. However, the latter is uncountable, but it is a subset of $$\mathcal{U}_0$$, which is countable; this seems to be a contradiction. I suspect that this "contradiction" can be resolved by distinguishing notions of infinity between models of ZFC, but I don't know how to do that. So my question is: How can I resolve this? Thanks! • Nothing of substance to add to the answer below, but for reference your question is usually referred to as Skolem's Paradox (even though it's not a paradox) Jun 17 '17 at 22:08 • Using compactness, one can show there would also be a model of ZFC, N, such that the collection of objects that N thinks are finite ordinals is actually uncountable. (This is the "reverse Skolem paradox", as I like to call it.) Jun 17 '17 at 22:34 • @spaceisdarkgreen: So you are saying the result is not counterintuitive? Because that's all "paradox" means. Jun 18 '17 at 10:28 • @celtschk Just parroting a number of passages I've read where the author felt the need to emphasize that there's no real contradiction here. For instance in Cohen's discovery of forcing talk, he puts scare quotes around "paradox", so I guess I agree that's all ""paradox"" means, but it's one particular use case of "paradox". Jun 18 '17 at 16:43 The contradiction is inside the definition of "countable". A set is countable if there exists a surjection from $\mathbb{N}$ to our set. The function that would make our inside-the-model set countable doesn't exist inside of the model, so inside of the model, the set is uncountable.
• You mean countably infinite – user81883 Jun 17 '17 at 22:05 • So the set $2^\omega \subset \mathcal{U}_0$ would still be countable if we'd compare it outside the context of $\mathcal{U}_0$? Jun 17 '17 at 22:05 • @Steven Exactly. Countability isn't about the set in-and-of itself: ultimately, it boils down to "there exists a function." Changing where that function is allowed to exist can change whether a set "is" countable. Counterintuitive, but that's how it works. – user231101 Jun 17 '17 at 22:24 • @Steven Also note that the "$2^\omega$" that lives in $\mathcal{U}_0$ is a different set than the "real-world" $2^\omega$: there are subsets of $\omega$ that aren't in $\mathcal{U}_0$. – user231101 Jun 17 '17 at 22:25 • @user81883 Thanks, corrected. Jun 19 '17 at 17:36 The issue is not that $2^\omega$ is uncountable. It is that the set $\{A\mid\mathcal U_0\models A\subseteq\omega\}$ is countable. The fact that $\mathcal U_0$ is a model of $\sf ZFC$ means that in $\mathcal U_0$ there is an object which represents this set; but also that there is no bijection between the object $\mathcal U_0$ "thinks" is $\omega$, and the object representing the set above. So $\mathcal U_0$ "thinks" there is no bijection between some object and $\omega$, which is exactly the definition for $\mathcal U_0$ "thinks" that some object is uncountable. The reverse thing is also possible: if there is a model of $\sf ZFC$, then there is one $\mathcal U_1$ such that the set $\{A\mid\mathcal U_1\models A\text{ is a finite ordinal}\}$ is uncountable. So $\omega$, or the set that $\mathcal U_1$ "thinks" is $\omega$—the epitome of countability—is in fact uncountable! • Sort of related, wouldn't it be possible for the countable model $M$ to have an uncountable set (in $V$) as an element (clearly $M$ is not transitive)? I can't think of a simple argument to rule this out. – user185596 Jun 18 '17 at 1:12 • @dav11 Yes, this will of course happen. For example, if $M\preccurlyeq V$ then e.g.
$\omega_1\in M$, since $\omega_1$ is definable; now take a countable elementary substructure. Countable elementary (of course not transitive) substructures are incredibly important in set theory, and one particular context is in forcing - proper forcing is all about countable elementary submodels, and their combinatorics is essential. E.g. a basic useful property is: if $M\preccurlyeq V$ then $M\cap\omega_1\in \omega_1$, that is, the countable ordinals in $M$ are closed downwards. Jun 18 '17 at 4:29 • @NoahSchweber: Thank you. – user185596 Jun 18 '17 at 17:04 • When you say "this set" in the second line, are you referring to $2^{\omega}$? And similarly "the set above" in the fourth line. And (yet again) presumably alluding to it with "some object" in the second paragraph. Thanks, Asaf. With regards, – user12802 Jul 8 '18 at 15:49 • @Andrew: Both refer to the set defined in the first line, which is $2^\omega$ from the point of view of the model. Jul 8 '18 at 16:11
http://sunshine2k.de/articles/coding/permutations/permutations.html
Calculating permutations

# 1. Introduction

This article presents different algorithms to generate all permutations of a set. Oh no, just another tutorial about this topic, there are already millions of them, you think? Maybe... but still I post it for two reasons:

• At first, it's more or less for myself. After never having dealt with this topic before, I noticed that it's far from trivial. This will help me get back up to speed quickly in the future.
• Second, even if there are many code snippets on the net about generating all permutations, most of them lack a description or proper examples of how they actually work. None of them is a new approach or invented by me. They are all taken from the referenced books / articles at the bottom of this site.

# 2. Permutations (without repetition)

A permutation is one possible ordering of the elements of a set. Let's say we have a set s of four characters {'A', 'B', 'C', 'D'}. Then a possible permutation is e.g. {'B', 'D', 'C', 'A'}. This chapter addresses the following question: how many different permutations exist for a given set? The following chapters handle the problem of how to generate all these possible permutations.

At first, let's see how many different permutations exist:
- For a given set s with n different elements, there are n possibilities to choose the first item of the permutation.
- For the second item, there are only n-1 elements left to choose from, because one item was already taken away.
- For the third item, then only n-2 items remain.
This is repeated for all elements of the set until only 1 item is left for the last item of the permutation. Summarizing, there are n * (n-1) * (n-2) * ... * 1 = n! different permutations possible!

$n!=n*\left(n-1\right)*\left(n-2\right)*\cdots *1$

To generate all these permutations, let's dive into a few different algorithms.

# 3.
Algorithms

At first, let's define some prerequisites shared by all upcoming implementations:

• The set for which all permutations are generated is an array called items.
• The presented example implementations have char values as elements of the set. This is just for simplicity and not a constraint - the algorithms can be easily modified to use elements of other types or even generics.
• The array is initialized with 'A', 'B', 'C' and so on...
• The function VisitPermutation() is called whenever items contains a new permutation of the set.
• The function Swap() exchanges two elements in the items array.

## 3.1 Backtracking (taken from [2])

The main idea is to exchange each element to the end of the set and then recursively permute the rest of the set (that is, all elements except the last element). Before examining the algorithm step by step, here is the listing:

public void Generate(int n)
{
    if (n == 1)
    {
        VisitPermutation();
    }
    else
    {
        for (int c = 0; c < n - 1; c++)
        {
            Swap(ref items[c], ref items[n - 1]);
            Generate(n - 1);
            Swap(ref items[c], ref items[n - 1]);
        }
        Generate(n - 1);
    }
}

The function is invoked with the number of elements in the set, e.g. with Generate(3) if items contains the three elements 'A', 'B' and 'C'. Obviously, the algorithm works for a one-element set. It is started with the call of Generate(1), so the first if-condition is true, the permutation is visited and the function returns. More interesting is the case with two elements, so let's check this case step by step:

Example of the backtracking algorithm for a set with two elements 'A' and 'B'
Call Generate(2) with items = ('A' 'B'):
  Enter loop with c = 0:
    Swap(items[0], items[1]) = Swap('A', 'B') -> items = ('B' 'A')
    Generate(1): Visit permutation: ('B' 'A')
    Swap(items[0], items[1]) = Swap('B', 'A') -> items = ('A' 'B')
  Loop left.
  Generate(1): Visit permutation: ('A' 'B')

However, in my opinion the set should contain at least 3 elements to properly follow the procedure of this algorithm.
Example of the backtracking algorithm for a set with three elements 'A', 'B' and 'C'
Call Generate(3) with items = ('A' 'B' 'C'):
  Enter loop with c = 0:
    Swap(items[0], items[2]) = Swap('A', 'C') -> items = ('C' 'B' 'A')
    (A is exchanged to the end, then permute the first two elements)
    Generate(2):
      Swap(items[0], items[1]) = Swap('C', 'B') -> items = ('B' 'C' 'A')
      Generate(1): Visit permutation: ('B' 'C' 'A')
      Swap(items[0], items[1]) = Swap('B', 'C') -> items = ('C' 'B' 'A')
      Generate(1): Visit permutation: ('C' 'B' 'A')
    Swap(items[0], items[2]) = Swap('C', 'A') -> items = ('A' 'B' 'C')
  Enter loop with c = 1:
    Swap(items[1], items[2]) = Swap('B', 'C') -> items = ('A' 'C' 'B')
    (B is exchanged to the end, then permute the first two elements)
    Generate(2):
      Swap(items[0], items[1]) = Swap('A', 'C') -> items = ('C' 'A' 'B')
      Generate(1): Visit permutation: ('C' 'A' 'B')
      Swap(items[0], items[1]) = Swap('C', 'A') -> items = ('A' 'C' 'B')
      Generate(1): Visit permutation: ('A' 'C' 'B')
    Swap(items[1], items[2]) = Swap('C', 'B') -> items = ('A' 'B' 'C')
  Loop left.
  (C is at the end, permute the first two elements)
  Generate(2):
    Swap(items[0], items[1]) = Swap('A', 'B') -> items = ('B' 'A' 'C')
    Generate(1): Visit permutation: ('B' 'A' 'C')
    Swap(items[0], items[1]) = Swap('B', 'A') -> items = ('A' 'B' 'C')
    Generate(1): Visit permutation: ('A' 'B' 'C')

The algorithm is easy to understand and to implement. Also there is no stack problem, as the maximum call depth is equal to the number of elements in the set. However, the number of swaps is significant: there is a swap of two elements, then the function is called for the other n-1 elements, then the two elements are swapped back. This means for each permutation there are two swaps, except for the last one because the last recursive call is outside the loop. Summarized, for a set with n elements, there are 2n! - 2 swaps required.
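For experimentation, here is a Python port of the backtracking listing above (the C# original is unchanged); it also counts the swaps so the 2n! - 2 formula can be checked directly:

```python
from math import factorial

def backtracking_permutations(items):
    """Port of the recursive backtracking listing; returns the visited
    permutations and the number of Swap() calls (expected: 2*n! - 2)."""
    visited = []
    swaps = 0

    def generate(n):
        nonlocal swaps
        if n == 1:
            visited.append(tuple(items))
            return
        for c in range(n - 1):
            items[c], items[n - 1] = items[n - 1], items[c]
            swaps += 1
            generate(n - 1)
            items[c], items[n - 1] = items[n - 1], items[c]
            swaps += 1
        generate(n - 1)

    generate(len(items))
    return visited, swaps

perms, swaps = backtracking_permutations(['A', 'B', 'C'])
print(len(set(perms)), swaps)  # 6 distinct permutations, 2*3! - 2 = 10 swaps

for n in range(2, 7):
    perms, swaps = backtracking_permutations(list(range(n)))
    assert len(set(perms)) == factorial(n)   # all n! permutations visited
    assert swaps == 2 * factorial(n) - 2     # matches the count derived above
```

The swap recurrence S(n) = n·S(n-1) + 2(n-1) with S(1) = 0 solves to exactly 2n! - 2, in agreement with the table in section 4.2.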
## 3.2 Lexicographic permutation algorithm (Algorithm L) (taken from [1])

In addition to the above prerequisites, to find all permutations with this algorithm the items in the initial set must be increasing, so:

• items[0] <= items[1] <= ... <= items[n-1]

The main idea of this algorithm is to 'increase' the current permutation in a way that it is changed by the smallest possible amount. This procedure is comparable to counting up natural numbers: we check if the last digit can be increased. If it cannot be increased, we move one position to the left and check if that one can be increased. So e.g. if we have 13, then we increase the rightmost digit, resulting in 14. For the number 19 we cannot increase the rightmost digit (9 is the largest available digit). So we move one to the left and check if this one can be increased: in this case it can be increased, so 1 is changed to 2, and the 9 is changed to the smallest possible value 0.

Transferred to the permutation algorithm, it works the following way:

• Initialize the set in increasing order: items[0] <= ... <= items[n-1]
• Find the largest element index j (that's the rightmost one) so that items[j] can be increased, that is, items[j] < items[j+1]. In other words: find the longest suffix that is not increasing. The element at index j is then the element to the left of this suffix.
• Increase items[j] with the smallest possible change. In other words: find the rightmost element in the suffix that is greater than the element at index j, and swap the two.
• Note that at this point the suffix is decreasing: items[j+1] >= ... >= items[n-1]
• Reverse items[j+1] ... items[n-1]. This results in the new permutation items[0] ... items[j] items[n-1] ... items[j+1].

Example of retrieving the next permutation of ('1' '0' '3' '2')
The largest non-increasing suffix is ('3' '2'), thus j = 1 (items[j] is '0'). The rightmost element greater than '0' is '2', so exchange '0' and '2'. This gives ('1' '2' '3' '0'). Note that the suffix ('3' '0') is decreasing.
Reversing it finally gives ('1' '2' '0' '3'), which is our next permutation. So here is the listing of a possible implementation:

public override void Generate()
{
    /* trivial case */
    if (ItemCount == 1)
    {
        VisitPermutation();
        return;
    }

    while (true)
    {
        VisitPermutation();

        /* find the largest index j so that items[j] can be increased
           (longest suffix that is not increasing) */
        int j = this.ItemCount - 2;
        while (items[j] >= items[j + 1])
        {
            if (j == 0) return;
            j--;
        }

        /* increase items[j]: find the rightmost element in the suffix
           that is greater than the element at index j */
        int l = this.ItemCount - 1;
        while (items[j] >= items[l])
        {
            l--;
        }

        /* swap both items */
        Swap(ref items[j], ref items[l]);

        /* reverse items[j+1] ... items[n-1]: */
        l = this.ItemCount - 1;
        int k = j + 1;
        while (l > k)
        {
            Swap(ref items[k], ref items[l]);
            k++;
            l--;
        }
    }
}

So let's perform this algorithm by hand for a set with 3 elements to properly follow the procedure:

Example of the lexicographic permutation algorithm for a set with three elements 'A', 'B' and 'C'
Loop 1: Visit permutation: ('A' 'B' 'C')
  Find index j: j = 1 ('B', longest suffix is 'C')
  Find index l: l = 2 ('C', that's the rightmost element larger than 'B')
  Swap elements at indices j = 1 ('B') and l = 2 ('C'): -> ('A' 'C' 'B')
  Reverse items at indices 2 to 2 -> nothing to do -> ('A' 'C' 'B')
Loop 2: Visit permutation: ('A' 'C' 'B')
  Find index j: j = 0 ('A', longest suffix is 'C' 'B')
  Find index l: l = 2 ('B', that's the rightmost element larger than 'A')
  Swap elements at indices j = 0 ('A') and l = 2 ('B'): -> ('B' 'C' 'A')
  Reverse items at indices 1 to 2 -> ('B' 'A' 'C')
Loop 3: Visit permutation: ('B' 'A' 'C')
  Find index j: j = 1 ('A', longest suffix is 'C')
  Find index l: l = 2 ('C', that's the rightmost element larger than 'A')
  Swap elements at indices j = 1 ('A') and l = 2 ('C'): -> ('B' 'C' 'A')
  Reverse items at indices 2 to 2 -> nothing to do -> ('B' 'C' 'A')
Loop 4: Visit permutation: ('B' 'C' 'A')
  Find index j: j = 0 ('B', longest
suffix is 'C' 'A')
  Find index l: l = 1 ('C', that's the rightmost element larger than 'B')
  Swap elements at indices j = 0 ('B') and l = 1 ('C'): -> ('C' 'B' 'A')
  Reverse items at indices 1 to 2 -> ('C' 'A' 'B')
Loop 5: Visit permutation: ('C' 'A' 'B')
  Find index j: j = 1 ('A', longest suffix is 'B')
  Find index l: l = 2 ('B', that's the rightmost element larger than 'A')
  Swap elements at indices j = 1 ('A') and l = 2 ('B'): -> ('C' 'B' 'A')
  Reverse items at indices 2 to 2 -> nothing to do -> ('C' 'B' 'A')
Loop 6: Visit permutation: ('C' 'B' 'A')
  Find index j: j = 0 (whole set is not increasing) -> terminate

## 3.3 Heap's Algorithm ([3])

Heap's Algorithm is a bit similar to backtracking, as it also works in a recursive way. The last element remains fixed while the remaining n-1 elements are permuted. However, the clever choice of which elements are swapped differs. Here are the main steps of the algorithm:

• Perform the following steps n times, ranging from i = 0 to n - 1:
  • Fix the last element.
  • Generate the permutations of the first n-1 elements. These permutations all have the same last element.
  • If n is even, swap the element at index i with the last element.
  • If n is odd, swap the element at index 0 with the last element.

The key point of Heap's algorithm is the clever exchange of elements. Not only is it correct, meaning that it visits each permutation once, it also has minimal movement of elements: for each permutation one pair is swapped while the positions of the other n - 2 elements remain unchanged.
Here is the listing of the recursive implementation of Heap's algorithm:

public void Generate(int n)
{
    if (n == 1)
    {
        VisitPermutation();
    }
    else
    {
        for (int i = 0; i < n - 1; i++)
        {
            Generate(n - 1);
            if ((n % 2) == 0)
            {
                Swap(ref items[i], ref items[n - 1]);
            }
            else
            {
                Swap(ref items[0], ref items[n - 1]);
            }
        }
        Generate(n - 1);
    }
}

Again, let's perform the algorithm manually for a three-item set to see how it works:

Example of Heap's recursive algorithm for a set with three elements 'A', 'B' and 'C'
Generate(3):
  Loop i = 0:
    Generate(2):
      Loop i = 0:
        Generate(1): Visit permutation: ('A' 'B' 'C')
        n = 2 is even: Swap elements at indices 0 and 1 -> ('B' 'A' 'C')
      Generate(1): Visit permutation: ('B' 'A' 'C')
    n = 3 is odd: Swap elements at indices 0 and 2 -> ('C' 'A' 'B')
  Loop i = 1:
    Generate(2):
      Loop i = 0:
        Generate(1): Visit permutation: ('C' 'A' 'B')
        n = 2 is even -> Swap elements at indices 0 and 1 -> ('A' 'C' 'B')
      Generate(1): Visit permutation: ('A' 'C' 'B')
    n = 3 is odd -> Swap elements at indices 0 and 2 -> ('B' 'C' 'A')
  Generate(2):
    Loop i = 0:
      Generate(1): Visit permutation: ('B' 'C' 'A')
      n = 2 is even -> Swap elements at indices 0 and 1 -> ('C' 'B' 'A')
    Generate(1): Visit permutation: ('C' 'B' 'A')

# 4. Miscellaneous

## 4.1 Additional implemented algorithms

Besides the three algorithms described above, the zip archive also contains implementations of some other permutation generation algorithms. In total, the following eight algorithms are implemented:

• Backtracking: Described in chapter 3.1.
• Backtracking reversed: Similar to the previous recursive backtracking algorithm, except that each element is exchanged to the first position instead of to the last position.
• Heap's algorithm (recursive): Described in chapter 3.3.
• Heap's algorithm (iterative): An iterative version of Heap's algorithm.
• Algorithm L: Described in chapter 3.2.
• Algorithm P: 'Plain changes permutation algorithm' as described in [1].
• Algorithm T: 'Plain change algorithm' as described in [1].
Uses a precomputed lookup table of size n! containing the information about all transitions.
• Recursive algorithm ("K-Level"): A very interesting, short recursive algorithm taken from [4].

## 4.2 Number of item swaps

Just as information, instead of run-time measurements, here are the numbers of item swaps for some input values:

| Number of swaps | n=1 | n=2 | n=3 | n=4 | n=5 | n=6 | n=7 | n=8 | n=9 | n=10 | n=11 | n=12 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Backtracking | 0 | 2 | 10 | 46 | 238 | 1438 | 10078 | 80638 | 725758 | 7257598 | 79833598 | 958003198 |
| Algorithm L | 0 | 1 | 7 | 34 | 182 | 1107 | 7773 | 62212 | 559948 | 5599525 | 61594835 | 739138086 |
| Heap (recursive) | 0 | 1 | 5 | 23 | 119 | 719 | 5039 | 40319 | 362879 | 3628800 | 39916800 | 479001599 |
| Heap (iterative) | 0 | 1 | 5 | 23 | 119 | 719 | 5039 | 40319 | 362879 | 3628800 | 39916800 | 479001599 |
| Algorithm P | 0 | 1 | 5 | 23 | 119 | 719 | 5039 | 40319 | 362879 | 3628799 | 39916799 | 479001599 |
| Algorithm T | 0 | 1 | 5 | 23 | 119 | 719 | 5039 | 40319 | 362879 | 3628799 | 39916799 | N/A |
| Backtracking (reversed) | 0 | 2 | 10 | 46 | 238 | 1438 | 10078 | 80638 | 725758 | 7257598 | 79833598 | 958003198 |
| K-Level recursive | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |

# 5. Conclusion & References

I hope you found this article interesting and could learn something! Drop me a line for corrections, hints, criticism, praise ;)

Sunshine, January 2016

### References

[1] Donald E. Knuth - The Art of Computer Programming, Pre-Fascicle 2B. A Draft of Section 7.2.1.2: Generating All Permutations
[2] Robert Sedgewick - Permutation Generation Methods
[3] Heap's algorithm @ Wikipedia
[4] Recursive algorithm ("K-Level") by Alexander Bogomolny
$\kappa$-homogeneous topological spaces

Let $\kappa>0$ be a cardinal and let $(X,\tau)$ be a topological space. We say that $X$ is $\kappa$-homogeneous if

1. $|X| \geq \kappa$, and
2. whenever $A,B\subseteq X$ are subsets with $|A|=|B|=\kappa$ and $\psi:A\to B$ is a bijective map, then there is a homeomorphism $\varphi: X\to X$ such that $\varphi|_A = \psi$.

Questions: Is it true that for $0<\alpha < \beta$ there is a space $X$ such that $|X|\geq \beta$, and $X$ is $\alpha$-homogeneous, but not $\beta$-homogeneous? Is there even such a space that is $T_2$? Also it would be nice to see an example for $\alpha=1, \beta=2$. And I was wondering whether there is a standard name for $\kappa$-homogeneous spaces. (Not all of these questions have to be answered for acceptance of an answer.)

• Shouldn't you be asking for a space that is $\alpha$-homogeneous, but not $\beta$-homogeneous? It seems that $\alpha$ and $\beta$ are mixed up in your question. Jul 28 '16 at 14:08
• Set theorists would be inclined to make the definition for sets of size less than $\kappa$, rather than equal to $\kappa$, since this works better with limit cardinals. On that way of making the definition, your concept would be called $\kappa^+$-homogeneous. Jul 28 '16 at 14:10
• Do you have any restrictions on the cardinality of $X$? If $\kappa$ is infinite and you allow $|X| = \kappa$, then the discrete topology on $\kappa$ is $\alpha$-homogeneous for all $\alpha < \kappa$ but not $\kappa$-homogeneous, as witnessed by $A = X$ and any $B \subset X$, $|B| = \kappa$, $B \neq X$. Jul 28 '16 at 21:13
• For $n < \omega$, what you call $n$-homogeneity seems to be called strong $n$-homogeneity in the literature. (The "not strong" version just requires that there is an auto-homeomorphism $\varphi$ such that $\varphi [A] = B$, i.e., the two sets can be mapped onto each other by a homeomorphism, but you don't specify which point goes where.)
Jul 30 '16 at 12:14
• It's a trivial modification of Joel and Yair's example, but finite discrete and indiscrete spaces of size $\alpha$ are $\alpha$-homogeneous but cannot be $\beta$-homogeneous for $\beta>\alpha$ because of your first condition. Sep 23 '16 at 19:13

This is a great question!

The disjoint union of two circles is $1$-homogeneous, but not $2$-homogeneous. It is $1$-homogeneous, since you can swap any two points and extend this to a homeomorphism (basically, "all points look alike"). But it is not $2$-homogeneous, since you can let $A$ be two points from one circle, and let $B$ be two points from different circles; there is no way to extend a bijection of $A$ with $B$ to a homeomorphism of $X$ (not all pairs look alike).

The real line $\mathbb{R}$ is $2$-homogeneous, but not $3$-homogeneous. For $2$-homogeneity, given any two pairs of reals $a,b$ and $x,y$, no matter how you map these bijectively, you can extend to a homeomorphism of the line by an affine transformation (all pairs look alike). But the line is not $3$-homogeneous, since we can biject the triple $0,1,2$ to $0,2,1$, respectively; this does not extend to a homeomorphism, since it doesn't respect between-ness (not all triples look alike).

The unit circle is $3$-homogeneous, but not $4$-homogeneous (thanks to Andreas Blass in the comments). For $3$-homogeneity, given two triples of points, we can match up the first in each by rotating the circle, and then match up the other two by stretching, or by flipping and stretching, depending on whether the orientation was preserved or not (all triples look alike). It is not $4$-homogeneous, since we can have four points in clockwise rotation, and then try to fix the first two and swap the other two; this cannot extend to a homeomorphism, since fixing the first two fixes the orientation, which is not respected by swapping the other two (not all quadruples look alike).
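As an aside, the affine extension behind the line's $2$-homogeneity can be written down explicitly. The Python sketch below (illustrative only; the function name is mine, not from the thread) constructs the map $t \mapsto x + \frac{y-x}{b-a}(t-a)$, which sends $a \mapsto x$, $b \mapsto y$ and, being affine with an affine inverse, is a homeomorphism of $\mathbb{R}$:

```python
def affine_extension(a, b, x, y):
    """Homeomorphism phi of R with phi(a) = x and phi(b) = y.

    Requires a != b and x != y; phi is affine, hence continuous
    with a continuous (affine) inverse.
    """
    slope = (y - x) / (b - a)
    return lambda t: x + slope * (t - a)

phi = affine_extension(0.0, 1.0, 5.0, 2.0)
# phi(0) = 5 and phi(1) = 2; phi is strictly monotone (here decreasing),
# so it is a self-homeomorphism of the line extending the given bijection.
```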
I don't know examples yet that are $4$-homogeneous but not $5$-homogeneous, or $n$-homogeneous but not $(n+1)$-homogeneous, for $n\geq 4$.

The real plane $\mathbb{R}^2$ appears to be $n$-homogeneous for every finite $n$, but not $\omega$-homogeneous (thanks again to Andreas). It is $n$-homogeneous, because given any two sets of $n$ points, we can imagine the plane made of stretchable latex and simply pull the points each to their desired targets, with the rest of the plane getting stretched as it will. One can see this inductively, handling one additional point at a time: having moved any finitely many points, nail them down through the latex; now any additional point can be stretched to any desired target, before also nailing it down, and so on (so all $n$-tuples look alike). It is not $\omega$-homogeneous, since a countable dense set can be bijected with a countable bounded set, and this will not extend to a homeomorphism.

Meanwhile, the infinite case is settled. For any infinite cardinal $\beta$, the discrete space of size $\beta$ is $\alpha$-homogeneous for every $\alpha<\beta$ but not $\beta$-homogeneous, in the OP's terminology. Any bijection of small subsets can be extended to a permutation, since there are $\beta$ many points left over; but if $X$ has size $\beta$, then we can take $A=X$ and $B=X-\{a\}$, which are bijective, but this bijection cannot be extended to a permutation of $X$. In particular, the countable discrete space is $n$-homogeneous for every finite $n$, but not $\omega$-homogeneous, just like $\mathbb{R}^2$.

• The plane $\mathbb{R}^2$ is obviously $3$-homogeneous, but it is also $4$-homogeneous, and I think also $5$-homogeneous. For higher finite levels, I can't quite see what happens yet. Jul 28 '16 at 16:10
• I think I can "see" that the plane is $n$-homogeneous for all finite $n$, but I don't see (yet) a clear proof. The picture I have is that the plane is made of very elastic material, and handles are attached to $n$ points.
You can use the handles to drag those points wherever you want, and the rest of the plane "flows" along. (I vaguely recall a similar picture associated with an old theorem, perhaps of Alexander.) Jul 28 '16 at 18:03
• Another example along the same lines as your answer: a circle is 3-homogeneous but not 4-homogeneous. Jul 28 '16 at 18:25
• I had made a comment to that effect earlier---that the plane is $n$-homogeneous for every $n$---but then deleted it when I lost confidence in it. But now your handle manner of expressing it is very convincing, so I'm inclined to agree again. Meanwhile, of course it is not $\aleph_0$-homogeneous. And I like your circle example! Can you make such examples for every $n$? Jul 28 '16 at 18:25
• Unfortunately, I don't see how to handle larger finite $n$. Model theorists might be able to contribute something here, because the examples of the line and the circle correspond to a couple of standard examples of such homogeneity properties in model theory, namely dense linear orders and circular orders. (The latter is a ternary relation $R(x,y,z)$, intended to mean that $x$, $y$, and $z$ occur in clockwise order around the circle.) I think the model theorists have examples of fancier sorts of homogeneity; I don't know whether they can be made into topological examples. Jul 28 '16 at 18:47

The sort of space you describe is usually called strongly $\kappa$-homogeneous. If you google that phrase you will find some interesting results about these kinds of spaces (mostly concerning how this property relates to other homogeneity properties).

The earliest reference I could find to strongly $n$-homogeneous spaces (only finite values of $n$ are considered) is in a 1953 paper by C. E. Burgess (available here). Despite the fact that these kinds of spaces have been in the literature for well over half a century, it is unknown whether there is a topological space that is strongly $4$-homogeneous but not strongly $5$-homogeneous.
(This is stated explicitly in this paper by Ancel and Bellamy from last year -- see the second paragraph of the second page.) As far as I can tell (although I don't have an authoritative reference), it is also unknown whether there is a strongly $n$-homogeneous space that is not also strongly $(n+1)$-homogeneous, for any finite $n \geq 4$. Therefore Joel's answer (along with some of Andreas's comments below it) constitutes the state-of-the-art knowledge on this question.

• Of course, there are the trivial examples of finite discrete and indiscrete spaces, which give two strongly $n$-homogeneous but not $(n+1)$-homogeneous spaces for each $n$. Feb 27 '17 at 15:56
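The extension step in the discrete-space example from the accepted answer (any bijection of small subsets extends to a permutation of $X$, and in the discrete topology every permutation is a homeomorphism) can be made concrete for finite $X$. A Python sketch, purely for illustration (the function name is mine):

```python
def extend_to_permutation(X, psi):
    """Extend a bijection psi: A -> B (A, B subsets of a finite set X)
    to a full permutation of X.

    Since |A| = |B|, the unmapped points and the unhit values are
    equinumerous, so any pairing of them completes psi to a bijection
    X -> X.
    """
    A, B = set(psi), set(psi.values())
    leftover_dom = sorted(X - A)   # points not yet mapped
    leftover_ran = sorted(X - B)   # values not yet hit
    full = dict(psi)
    full.update(zip(leftover_dom, leftover_ran))
    return full

X = set(range(6))
psi = {0: 3, 1: 0}                 # a bijection between two 2-element subsets
perm = extend_to_permutation(X, psi)
# perm agrees with psi on {0, 1} and permutes all of X.
```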
# Math Help - How many triangles?

1. ## How many triangles?

How many triangles are in the picture? I counted over 140 and then got lost... Is there a way to calculate the number of triangles in the picture?

2. Originally Posted by metlx
How many triangles in the picture? I counted over 140 then got lost.. Is there a way to calculate the number of triangles in the picture?

140 seems like too many to me; what about $\binom{7}{2}+\binom{7}{2}-1+(7-2)^2$?

Well, maybe that's not exactly right: you have to check for yourself. But the basic idea that I want to convey is that you should try to use well-known counting formulas to figure it out. In my attempt you can see, I hope, that $\scriptstyle \binom{7}{2}$, $\scriptstyle \binom{7}{2}-1$, and $\scriptstyle (7-2)^2$ are meant to count all triangles of a certain type.

3. Apparently the right answer is 216 ($6^3$).

4. Originally Posted by metlx
apparently the right answer is 216. (6^3)

Ah, yes, I didn't even see some of the triangles, much less count them correctly. I seem to be almost blind at the moment...

5. The answer is $\binom{6}{2} \times 6 \times 2 + 6 \times 6$.

EDIT: The above formula is equivalent to $6^3 ~~ (n^3)$, actually.

To derive it, I find that all the triangles in the figure must be constructed from at least one line passing through $A$ and one through $B$. The other line (the third line; a triangle is constructed from three lines) can only be selected from twelve lines, try it! However, we find that every triangle has been counted twice, so let's halve it. Therefore, the number of triangles is $6 \times 6 \times 6$.

6. Originally Posted by simplependulum
The answer is $\binom{6}{2} \times 6 \times 2 + 6 \times 6$. EDIT: The above formula is equivalent to $6^3 ~~ (n^3)$, actually.

Could you please explain where you got that formula (the first one)? How can you see that in the triangle above?

7. Originally Posted by metlx
How many triangles in the picture? I counted over 140 then got lost..
Is there a way to calculate the number of triangles in the picture?

Counting by hand...

Starting at the bottom left-hand corner, using that corner as a vertex in each case, there are 6 triangles with 2 vertices right next to each other on the right-hand side. If we now choose any of the other 5 lines that emanate from the bottom right-hand vertex, then there are 6 times as many triangles starting from the bottom left-hand vertex that fit between two adjacent lines emanating from the bottom-left vertex. If we start at the bottom right-hand vertex, we double this number while subtracting the double-counted one at the base.

Next, we can count the triangles that are between 2 lines that have a single line between them (emanating from the bottom left vertex). There are 5(6) of these. Double that and subtract the overlapping one and the two that overlap at the base with triangles one space thick.

Next count the triangles with two lines in between. There are 4(6) of these. Double and subtract the overlap. Then subtract the 2 that overlap with those that are 2 spaces thick. Finally subtract the 2 that overlap with those that are one space thick.

Next count the triangles with 3 lines in between. There are 3(6) of these. Double and subtract the 1 overlap. Subtract the 2 that overlap with those that have 2 lines between. Subtract the 2 that overlap with those that have 1 line between. Subtract the 2 that overlap with those that have no line between.

Next count the triangles with 4 lines between the outer edges. There are 2(6) of these. Double and subtract the 1 overlap. Subtract 2 and 2 and 2 and 2 for the remaining overlaps.

Count the triangles with 5 lines between the edges. There is only 1.

The total is

$6\left(2(6)+2(5)+2(4)+2(3)+2(2)\right)+1-5-4(2)-3(2)-2(2)-2$
$=6\left(12+10+8+6+4\right)+1-25$
$=6(40)-6(4)=6(36)=6^3$

8. Originally Posted by metlx
How many triangles in the picture? I counted over 140 then got lost..
Is there a way to calculate the number of triangles in the picture?

The mathematics is much clearer when we avoid having to subtract double-counted triangles. Any triangle in the figure must have either X or Y as a vertex. We can independently count
(1) triangles with Y as a vertex but not X,
(2) triangles with X as a vertex but not Y,
(3) triangles with both X and Y as vertices.

With Y omitted and X as a vertex, we can form a triangle by selecting 2 of the 6 blue points on the red line. The number of ways to do this is $\binom{6}{2}=\frac{6(5)}{2}$. Doing the same with the other 5 lines above the red one leaving Y gives $\frac{6(6)5}{2}=\frac{5}{2}6^2$ triangles with X as a vertex but not Y.

There are the same number of triangles containing Y but not X. Therefore the number of triangles containing only one of those vertices is $(5)6^2$.

With both X and Y as vertices, 6 triangles can be formed on the line with the blue dots in the 2nd diagram. The same number of triangles can be formed on the 5 lines leaving Y immediately below that. That gives $6^2$ triangles that have both X and Y as vertices.

The total number of triangles is $(5)6^2+(1)6^2=(6)6^2=6^3$.
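The three derivations in this thread can be cross-checked numerically. This Python snippet (not from the thread; purely a sanity check) evaluates each poster's closed form and confirms they agree at $216 = 6^3$, along with the general identity $\binom{n}{2}\cdot n\cdot 2 + n^2 = n^2(n-1) + n^2 = n^3$ behind the last two arguments:

```python
from math import comb

n = 6

# Post 5 (simplependulum): C(6,2) * 6 * 2 + 6 * 6
formula_a = comb(n, 2) * n * 2 + n * n

# Post 7 (counting by hand):
# 6*(2(6)+2(5)+2(4)+2(3)+2(2)) + 1 - 5 - 4(2) - 3(2) - 2(2) - 2
formula_b = (6 * (2*6 + 2*5 + 2*4 + 2*3 + 2*2)
             + 1 - 5 - 4*2 - 3*2 - 2*2 - 2)

# Post 8 (split by which of X, Y is a vertex): (5)6^2 + (1)6^2
formula_c = 5 * n**2 + 1 * n**2

assert formula_a == formula_b == formula_c == 216 == n**3

# The algebraic identity underlying the n^3 pattern, for general n:
assert all(comb(m, 2) * m * 2 + m * m == m**3 for m in range(1, 20))
```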