Search is not available for this dataset
url
string
text
string
date
timestamp[s]
meta
dict
https://math.stackexchange.com/questions/3195581/are-there-any-irrational-transcendental-numbers-for-which-the-distribution-of-de
# Are there any irrational/transcendental numbers for which the distribution of decimal digits is not uniform? I conjecture that for irrational numbers, there is generally no pattern in the appearance of digits when you write out the decimal expansion to an arbitrary number of terms. So, all digits must be equally likely. I vaguely remember hearing this about $$\pi$$. Is it true for all irrational numbers? If not, what about transcendental? If true, how might I go about proving this? For my attempt, I'm not quite sure how to approach this, all I have are some experimental results to validate the conjecture. I started with $$\sqrt2$$. The occurrences of the various digits in the first 5,916 decimal terms are: 563, 581, 575, 579, 585, 608, 611, 565, 637, 612. And here are the occurrences of the decimals in the first 1993 digits of $$\pi$$: 180, 212, 206, 188, 195, 204, 200, 196, 202, 210 Same for the first 9825 digits of $$e$$: 955, 971, 993, 991, 968, 974, 1056, 990, 975, 952 It does seem that the percentage representations of each digit is very close to 10% in all cases. Edit: It's clear the conjecture is false (thanks for the answers). Still curious why all "naturally ocurring" irrational numbers (like the ones mentioned here) do appear to be normal. I know this is unproven, so feel free to provide conjectures. • Check out the Liouville number. It is transcendental but only consists of 0 and 1. Apr 21, 2019 at 7:40 • The digits of the Liouville number are highly patterned, just not patterned in a way that makes the number rational. Apr 21, 2019 at 9:33 • Try $0.1001000100001000001000000100.....$ – BAI Apr 21, 2019 at 10:12 • Take the square root of 2 and remove all 1's from it. Are you claiming the result is now e.g. rational? Apr 22, 2019 at 6:47 • @RohitPandey Or do they? Apr 22, 2019 at 17:07 What you mention is not true for all irrational numbers, but for a special subset of them called Normal numbers. From the wiki article: While a general proof can be given that almost all real numbers are normal (in the sense that the set of exceptions has Lebesgue measure zero), this proof is not constructive and only very few specific numbers have been shown to be normal. And It is widely believed that the (computable) numbers $$\sqrt{2}$$, $$\pi$$, and $$e$$ are normal, but a proof remains elusive. Note that there are infinitely many irrational numbers that are not normal. In 1909, Borel introduced the concept of a Normal number and proved (with a few gaps resolved later) the following theorem: Almost all real numbers are normal, in the sense that the set of non-normal irrational numbers has Lebesgue measure zero. 1. The number of non-normal irrational numbers is uncountable Theorem 4 of this reference. 2. There is a subset of normal numbers called Abnormal numbers and Absolutely Abnormal numbers which are uncountable. Abnormal numbers are not normal to a given base $$b$$ while Absolutely Abnormal numbers are not normal to any base $$b \ge 2$$. • "almost all real numbers are normal" should be complemented by the known fact that "the set of non-normal numbers is uncountable" . So, there are relatively few non-normal numbers, but there are (uncountably infinite) many irrationals which are not normal. Apr 21, 2019 at 17:39 • @leonbloy "there are uncountably infinite irrationals which are not normal". How? Apr 21, 2019 at 18:25 • Given the decimal expansion of a normal number $x$, you can construct a non-normal (or abnormal) number by taking any known abnormal number and replacing its decimals at positions $2^i$ with the $i$th decimal of $x$. The result follows from the uncountability of the normal numbers. Apr 21, 2019 at 20:36 • @user1952500 Proof that non-normal numbers are uncountable: see eg core.ac.uk/download/pdf/56374383.pdf (page 8). It's even true that the set of all "absolute abnormal" numbers (not normal in any base) is uncountable maa.org/sites/default/files/pdf/upload_library/22/Ford/… Apr 21, 2019 at 21:01 • @leonbloy in base 10 one can just note that the set of numbers whose decimal digits consist of only ones and zeros, is uncountable. It’s like a cantor set Apr 21, 2019 at 22:35 This definitely does not hold for all irrational or transcendental numbers. As noted in the comments by Fabian, various Liouville numbers come to mind as examples where the digits of the number are not at all uniformly distributed, yet these same numbers were constructed with the specific intention of being transcendental. The property you refer to - this "equidistribution of digits" and such - is what defines the so-called simply normal numbers in base $$10$$. If you have heard $$\pi$$ exhibits this very property, it's technically wrong because so far $$\pi$$ has not been proven to be simply normal in any base. It is suspected to be simply normal in every base, and even (absolutely) normal in every base, but it remains an open problem. Simple normality in base $$b$$ means that the frequency of each digit in the first $$n$$ digits tends to $$1/b$$ as $$n$$ tends to infinity. Normality in base $$b$$ means that for each finite digit sequence of length $$k$$, its frequency in the first $$n$$ digits tends to $$1/b^k$$ as $$n$$ tends to infinity. In fact, it seems only rather contrived numbers, such as $$0.123456789101112...$$ (Champernowne's number, obtained by concatenation of the naturals) among others in the article, are known for sure to be simply normal in base $$10$$, and in fact is normal in base $$10$$ (but not even known to be simply normal in other bases that are not powers of $$10$$). Nothing is known about a lot of the more "natural" numbers - like $$\pi,e,\sqrt 2$$. But I suppose Chaitin's constant could be considered a "natural" number, and it is normal in every base. Trivially, we do know that almost all real numbers are normal in every base, equivalently a real number drawn uniformly randomly from $$[0,1]$$ is normal with probability $$1$$. As for how to prove it for common numbers? Well, the proof for Chaitin's constant (overview in this Math Overflow post) relies on its algorithmic randomness, which is in fact just a much stronger form of normality. Roughly, normality in base $$b$$ says that each finite digit sequence of length $$k$$ has frequency in the first $$n$$ digits tending to $$b^{-k}$$ as $$n$$ tends to infinity, whereas algorithmic randomness says that there is some constant $$c$$ such that for every $$n$$ the shortest program (in some prefix-free encoding) that outputs the first $$n$$ bits has bit-length at least $$n-c$$, which intuitively means incompressible up to a constant. Note that if a number was not normal, it can be compressed using arithmetic encoding. On the other hand, Champernowne's constant is a very clear example of a highly compressible (so not algorithmically random) but normal number, since the first $$n$$ bits can obviously be output by a fixed program run on $$n$$ (which can be stored in $$O(\log n)$$ bits in prefix-free encoding). Since $$\pi$$'s digits do not follow some nice pattern like Champernowne's number, nor are they algorithmically random, it is unlikely that normality proofs for known normal numbers would give much clue for $$\pi$$. So far, at least. Of course this raises the question of "why do we conjecture them to be normal then?" Empirical evidence based on the first trillions of digits of $$\pi$$ do 'support' it, but of course that is nowhere near proof. It is just like if you toss a coin $$1000000$$ times and observe $$500469$$ heads and $$499531$$ tails, and conclude that you do not have evidence that it is not a fair memoryless coin, since the number of heads for a fair memoryless coin would be in the range $$[499500,500500]$$ with likelihood about $$1/2$$. So does your observation count as empirical evidence that it is a fair memoryless coin? Not really... Lack of evidence against is not really evidence for. Similarly, that is all we have for the question of $$\pi$$'s normality, so far. Also, such empirical evidence is notoriously hard to interpret. Again take the coin example, and suppose you observe exactly $$500000$$ heads and $$500000$$ tails. Would you think it is a memoryless coin? No! How about $$500001$$ heads and $$499999$$ tails? • "So in a twisted sort of sense, you could say "almost all" real numbers are normal." Given that the rationals are also a dense subset of the reals, but allmost all reals are irrational, this doesn't follow at all. user1952500 states that there is an -albeit constructive- proof that almost all reals are normal, but that is actually quite a remarkable result. Apr 21, 2019 at 9:16 • Wouldn't 123456789/9999999999 be both normal and rational? Apr 21, 2019 at 13:26 • @EvilSnack No. Normal numbers must contain each finite string of digits the right fraction of the time. Your example never contains $13$. Apr 21, 2019 at 15:00 • @Chieron was correct to criticize your post. The Cantor set is uncountable but has measure zero. Also, there are other errors in your answer. Chaitin's constant being normal is unrelated to its mere uncomputability; it is trivial to find uncomputable numbers that are not normal. Rather, it's because of its algorithmic randomness. Thirdly, the distributions should not become closer and closer to being equal, by the central limit theorem. Next time, please don't make claims and arguments that you can't justify. Apr 22, 2019 at 6:16 • I didn't say you deliberately got things wrong; I just said that you should refrain (next time) from posting answers with claims that you aren't able to justify. This is necessary because most who read your answer lacked the mathematical background to even realize there were multiple errors in it. Concerning your third error, if you flip a fair coin $n$ times, the number of heads and tails will not tend to become closer and closer to being equal as $n → ∞$. Likewise for the digits of normal numbers. This is a common fallacy, related to the gambler's fallacy. Apr 22, 2019 at 7:23 As user1952500 says, most (in a quite precise sense) numbers are normal and hence as you expect. However, it is very simple to create exceptions: numbers which are irrational but are not normal. The rational numbers have decimal expansions which, after a while, terminate or are periodic so just create a sequence that does not terminate or repeat but also does not have a uniform distribution of digits. A very simple way is to omit some digits. Fabian mentions the Liouville numbers which are an example of this style. There is nothing very special about the many zeros, you could swap the digits for others, e.g. $$3$$ and $$7$$. Another way would be to write an irrational number in a base less than $$10$$ and then regard it as a base $$10$$ number. It would not be normal as it would have no $$9$$s. You can't prove normality by checking calculated digits as that will always only be a finite subset. Maybe the first quadrillion digits of $$\pi$$ behave as expected and after that $$9$$ never appears. However, I would guess that irrational numbers which have not been artificially constructed are probably normal but it is very hard to prove. It has not been achieved for $$\pi$$ yet. • "very hard" is an understatement, I think. Noone has the slightest idea, how a proof of the normality of , lets say , $\pi$ could look like. It cannot even be ruled out that eventually, only digits $0$ and $1$ appear. It is very likely that such a proof is completely out of reach. Apr 21, 2019 at 8:27 • Good answer (+1) Apr 21, 2019 at 8:28 • @Peter Indeed but do we have a scale of hardness? So, you are suspecting that eventually only $0$ and $1$ appear. I was only expecting the weaker condition that $9$ ceases to appear. Apr 21, 2019 at 8:36 • @Peter I think the basic problem is that the definition of normality is not a property of a number, it is a property of a particular representation of a number. Can it even be proved that if a number is normal represented in one base it is normal in every base? (And if that conjecture is false, the whole topic is going to fall apart in a mass of special cases, and seems more like numerology than math.) Apr 21, 2019 at 15:32 • @alephzero Indeed, I don't find properties dependent on the decimal representation, or any other particular one, as interesting as properties independent of the representation. I think that it is possible for a number to be normal in one base but not another or normal in all bases but I need to check. Apr 21, 2019 at 15:50
2022-05-23T02:56:41
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3195581/are-there-any-irrational-transcendental-numbers-for-which-the-distribution-of-de", "openwebmath_score": 0.7757731080055237, "openwebmath_perplexity": 320.58768493619687, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.975946445006747, "lm_q2_score": 0.8670357718273068, "lm_q1q2_score": 0.8461804792085411 }
https://math.stackexchange.com/questions/3323706/convergence-of-sum-limits-n-1-infty-sin2-pi-n
# Convergence of $\sum\limits_{n = 1}^{\infty} \sin^2(\pi/n)$ I am trying to determine the convergence of $$\sum\limits_{n = 1}^{\infty} \sin^2(\pi/n)$$ After some time I found out that $$sin^2(\pi x) \leq (\pi x)^2$$ holds true for all $$x$$ using a graphing calculator. Which means I can substitute $$x={1\over n}$$ and get $$sin^2(\pi/n) \leq (\pi/n)^2$$. And since it is clear that $$\sum\limits_{n = 1}^{\infty}(\pi/n)^2$$ converges, $$\sum\limits_{n = 1}^{\infty} \sin^2(\pi/n)$$ converges as well by the comparison test. The problem is how can I possibly know that $$sin^2(\pi x) \leq (\pi x)^2$$ holds true when I am taking an exam and I don't have enough time to mess around with my graphing calculator? $$x+\sin\, x$$ and $$x-\sin\, x$$ are both non-decreasing functions since their derivatives are non-negative. They both vanish when $$x=0$$. Hence $$x\pm \sin \,x \geq 0$$ for all $$x \geq 0$$. This gives $$|\sin\, x| \leq x$$ and $$\sin^{2}x \leq x^{2}$$ for all $$x \geq 0$$. • Thanks. Just one thing. I didn't understand the part where $a \pm b \geq 0$ gives $|b| \leq a$ – linearAlg Aug 15 at 5:25 • Oh I think I intuitively understood that statement. Thanks – linearAlg Aug 15 at 5:26 • $a\pm b \geq 0$ gives $-b \leq a$ and $b \leq a$. Since $|b|$ is either $b$ or $-b$ it follows that $|b| \leq a$. Also, please take a look at my comment for the other answer. – Kavi Rama Murthy Aug 15 at 5:28 Remember that $$0<\sin x for $$0 (which is the case for the terms in this question). Thus, squaring both sides and replacing $$x\mapsto\pi x$$ gives $$\sin^2\pi x<(\pi x)^2$$. You are working with series, so I assume at exam time you also will know about power series. So at some point you will get to know that $$\sin x=\sum_{k=0}^\infty (-1)^m\frac{x^{2m+1}}{(2m+1)!},~~\cos x=\sum_{k=0}^\infty (-1)^m\frac{x^{2m}}{(2m)!}.$$ Directly related to series convergence is the Leibniz test for series $$\sum_{k=0}^\infty(-1)^ka_k$$, $$a_k>a_{k+1}>0$$ converging to $$0$$. One result of that test is that the value of that series is bounded by its partial sums $$s_n=\sum_{k=0}^n(-1)^ka_k$$, from below by the odd index sums $$s_{2m+1}$$ and from above by the even index sums $$s_{2n}$$. Now in combination you get that the sine power series satisfies the Leibniz test if $$x^2<2k(2k+1)$$ for all $$k\ge1$$, that is, $$|x|<\sqrt 6$$. In consequence $$x-\frac{x^3}6\le\sin x\le x ~~ \text{ for } ~~ x\ge 0,$$ with the reverse relations for $$x<0$$. This is really simple using asymptotic equivalence of functions: Near $$0$$, $$\sin x \sim x$$, so $$\;\sin^2\dfrac\pi n\sim_\infty \dfrac{\pi^2}{n^2}$$, which is a convergent power series. Now two series with equivalent general terms (and constant sign) both converge or both diverge.
2019-08-26T03:17:16
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3323706/convergence-of-sum-limits-n-1-infty-sin2-pi-n", "openwebmath_score": 0.9729518294334412, "openwebmath_perplexity": 150.46368129491114, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9759464506036181, "lm_q2_score": 0.867035763237924, "lm_q1q2_score": 0.8461804756784509 }
https://math.stackexchange.com/questions/3121773/calculate-it-n-int-infty-infty-big-frac11-jq-bign-e-j
# Calculate $I(t,n) = \int_{-\infty}^{\infty} \big( \frac{1}{1-jq} \big)^{n} e^{-jqt} dq$ I am trying to calculate integrals of the form: $$I(t, n) = \int_{-\infty}^{\infty} \Big(\frac{1}{1-jq}\big)^{n} e^{-jqt} dq$$ where $$j = \sqrt{-1}$$. In the case when $$n=1$$, I have: $$I(t, 1) = \int_{-\infty}^{\infty} \frac{1}{1-jq} e^{-jqt} dq$$ Now, my thought was to view this as a function of $$t$$, and use the Feynmann trick. Ie: $$\frac{d}{dt} I(t, 1) = \int_{-\infty}^{\infty} \frac{\partial}{\partial t} \big(\frac{1}{1-jq} e^{-jqt} \big) dq$$ $$\frac{d}{dt} I(t, 1) = \int_{-\infty}^{\infty} \frac{-jq}{1-jq} e^{-jqt} dq$$ This looks promising, but I can't get it to go anywhere. An even more promising avenue seems to be to express the exponential as a power series around zero. This gives: $$I(t, 1) = \int_{-\infty}^{\infty} \frac{1}{1-jq} \sum_{k=0}^{\infty} \frac{(-jqx)^{k}}{k!} dq$$ Since the complex exponential is entire, we can interchange the sum and integral, giving: $$I(t, 1) = \sum_{k=0}^{\infty} \frac{x^{k}}{k!} \int_{-\infty}^{\infty} \frac{(-jq)^{k}}{1-jq} dq$$ However, I get stuck here too because I can't find an antiderivative for the integrand. Any ideas? Am I missing some fundamental theoretical concept or trick or technique that makes all of this difficulty disappear? Unfortunately, I am really not that good at integration, but I want to get better! For simplicity we rename $$j\to i, q \to -z$$ to rewrite the integral as \begin{align} \int_{-\infty}^\infty\frac{e^{itz}dz}{(1+iz)^n}&=\int_\Gamma\frac{e^{itz}dz}{(1+iz)^n}=2\pi i\;\underset{z=i}{\text {Res}}\,\frac{e^{itz}}{(1+iz)^n}\\ &=\frac {2\pi}{ i^{n-1}}\left.\frac1 {(n-1)!}\frac {d^{n-1}}{dz^{n-1}} e^{itz}\right|_{z=i}=2\pi\frac{ t^{n-1}e^{-t}}{(n-1)!},\end{align} where $$\Gamma$$ is the usual counterclockwise-oriented contour consisting of the real axis and a large semicircle in the upper complex half-plane. $$t$$ is assumed to be positive real number. • This seems to solve it. Thank you very much. I need to revisit my contour integration! So what happens if t is a nonpositive real number? Also, if q = -z, then shouldnt dq = -dz ? But it doesnt matter because of the limits of integration, right? Sorry, for the basic questions, but the only way to get better is to ask when I'm unsure. – The Dude Feb 21 at 22:30 • If $t$ is negative the coutour should lie in the lower half-plane, the pole is outside of the contour and the integral is 0. The negation of the integration variable and interchange of the limits always compensate each other. – user Feb 21 at 22:45 • Thanks for your reply. I will have to go back to my notes and go over this. This was very helpful. – The Dude Feb 21 at 22:58 • Okay, I spent a few days reviewing this and now have a question -- where does the $\frac{1}{i^{n-1}}$ come from? Shouldn't that be in the numerator when you take the derivative N-1 times? Why do you divide it out beforehand? – The Dude Feb 26 at 16:07 • @TheDude $\frac1{(1+iz)^n}=\frac1{i^n (z-i)^n}$. – user Feb 26 at 16:29 If we define the Fourier transform of a function $$f$$ and its inverse as $$\mathcal F(f)(\omega)=\int_{\mathbb R}f(x)e^{-j\omega x}dx \,\,\,\text{ and }\,\,\, f(x)=\frac 1 {2\pi}\int_{\mathbb R} \mathcal F(f)(\omega)e^{jx\omega}d\omega$$ then you're looking for the Fourier transform of $$f_n(x)=f(x)^n$$ where $$f(x)=\frac{1}{1-jx}$$. Because the Fourier transform maps products to convolutions (times $$\frac 1 {2\pi}$$, given the Fourier definition we adopted), you're looking for the $$n$$-th self-convolution of $$\mathcal F(f)$$. Now, with $$H$$ denoting the Heaviside step function, we have $$f(x)=\frac{1}{1-jx}=\int_{\mathbb R}e^{-\omega}H(\omega)e^{jx\omega}d\omega$$ As a consequence, the Fourier transform of $$f$$ is $$\omega\rightarrow 2\pi e^{-\omega}H(\omega)$$. This gives us $$I(t, 1)=2\pi e^{-t}H(t)$$ and you can verify that the $$n$$-th self-convolution is given by $$I(t,n)=2\pi\frac{t^{n-1}}{(n-1)!}e^{-t}H(t)$$ • Where did you find this fourier transform pair? After looking this up, I only found the first result. I'm not sure still how the (n-1)! comes out of this ,,Fourier Matching'' as I like to call it. – The Dude Feb 22 at 14:34 • It's a known Fourier transform. Then computing the convolution is not too difficult. You can prove it by induction after trying the first few values of $n$. – Stefan Lafon Feb 22 at 14:36 • Well I guess I am just a n00b. Okay, fair enough. I gotta practice more. – The Dude Feb 22 at 14:38 • Though the factorial is still confusing... – The Dude Feb 22 at 14:38 • It comes from integrating powers of $\omega$. – Stefan Lafon Feb 22 at 14:41
2019-07-18T11:07:43
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3121773/calculate-it-n-int-infty-infty-big-frac11-jq-bign-e-j", "openwebmath_score": 0.8977739810943604, "openwebmath_perplexity": 316.5278176844698, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9643214511730025, "lm_q2_score": 0.8774767906859264, "lm_q1q2_score": 0.8461696921648816 }
https://math.stackexchange.com/questions/1411900/stating-the-induction-hypothesis?noredirect=1
# Stating the induction hypothesis I would like to ask about the best way to state the induction hypothesis in a proof by induction. Just to use a concrete example, suppose I wanted to prove that $n!\ge 2^{n-1}$ for every positive integer $n$. Assuming that I have already verified the case $n=1$, which of the following statements of the induction hypothesis would be best to use, and, more importantly, are any of them unacceptable? 1) Let $n\in\mathbb{N}$ with $n!\ge2^{n-1}$. 2) Let $n!\ge2^{n-1}$ for some $n\in\mathbb{N}$. 3) Assume that $n!\ge2^{n-1}$ for some $n\in\mathbb{N}$. 4) Let $k!\ge2^{k-1}$. (I realize that this is partly a matter of taste and style, and please note that I am not asking how to finish the inductive step.) • I'd go with (3). – Akiva Weinberger Aug 27 '15 at 20:24 • I would also take (3)... So it is clear that it is the premise of the induction step... – Stephan Kulla Aug 27 '15 at 20:28 • I would strongly avoid 3). To me it says assume there exists an $n$ such that $\dots$. Then $n$ has been quantified away and is no longer free. Something like suppose that $k$ is a natural number such that $\dots$ seems fine. – André Nicolas Aug 27 '15 at 20:30 • It is important to have the word "assume", so (3). The other ones are somewhat confusing since "let" could lead the reader to think it is sufficient to find one value of $n$ or $k$ satisfying the inequality. – DirkGently Aug 27 '15 at 20:30 • The inductive hypothesis is an assumption not a definition so (3) is the best way to state it, but you don't want to have the "for some" bit. You just want to assume $n! \ge 2^{n-1}$. – Rob Arthan Aug 27 '15 at 20:34 The Principle of Mathematical Induction says that for all "properties" $P$, $$\left(P(0)\land\forall k\in \mathbb N\left(P(k) \implies P(k+1)\right)\right)\implies \forall n\in \mathbb N(P(n)).$$ So you're basically asking how to write the $\forall k\in \mathbb N\left(P(k)\implies P(k+1)\right)$ bit. It's a universal statement. It's common to start those by "Let $k\in \mathbb N$". Then you want to prove the conditional statement $P(k)\implies P(k+1)$. It's common to prove these by starting with "suppose $P(k)$ holds" (or some variation). Wrapping it up, I'd write "Let $k\in \mathbb N$ and suppose that $P(k)$" or "Let $k\in \mathbb N$ be such that $P(k)$ holds" or some variation of this. This includes (1) and to some extent (4). I wouldn't use (2) or (3) because the word "some" strongly suggests existential quantification which isn't even present in the formulation of the Principle of Mathematical Induction used in this answer (which is the most common anyway).
2019-05-19T06:49:43
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1411900/stating-the-induction-hypothesis?noredirect=1", "openwebmath_score": 0.8458281755447388, "openwebmath_perplexity": 189.10062186852426, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9643214450208031, "lm_q2_score": 0.877476793890012, "lm_q1q2_score": 0.8461696898562378 }
https://stats.stackexchange.com/questions/352277/can-i-still-use-linear-regression-assumptions-test-on-a-linear-model-with-a-poly/352460
# Can I still use Linear Regression assumptions test on a linear model with a Polynomial variable I have a multivariate linear model (y=x1+x2) which gives me the following results when using R's plot() function: I can clearly see that the Normality and Linearity assumptions are not the best. Thus, I decided to add a 2nd degree polynomial variable to the model using poly() (which should give me a model such as y=x1^2+x1+x2) and got the following results: I clearly see an improvement in both assumptions, however, that made me think: can I even do these two tests after I included the polynomial variable? Is the model still considered linear? • I am not well versed in R's specifics w.r.t. poly(), but assuming it adds a term that is polynomial in the data, you should be fine. A linear model is linear in the parameters, not necessarily in the data. Jun 20 '18 at 8:51 • poly() is used on the model itself, e.i: y~poly(x1,2)+x2. The data stays untouched. Jun 20 '18 at 9:29 • Yes, the model is still linear (in its parameters and that's what is important here). Yes, you can use these plots. Jun 20 '18 at 10:57 • The term "linear" here has two meanings. One is that the model is a straight line. The other is that linear algebra can directly be used to solve for the coefficients in a technique named "linear regression". One is the name of a mathematical model, one is the name of a mathematical technique. Jun 20 '18 at 13:09 • A linear regression model is defined as $\mathbf{y} = X\boldsymbol\beta + \boldsymbol\varepsilon$ where $X$ is the design matrix (all your x-variables) and $\boldsymbol\beta$ is the parameter vector. In the design matrix, a column can be squared values of another column, it won't change this basic, linear model. (Note that poly by default creates orthogonal polynomials.) Jun 20 '18 at 13:22 Is the model still considered linear Given a dataset composed of a vector $\mathbf{x} = \{ x_1, x_2,...,x_n\}$ of $n$ explanatory variables and one dependent variable $y$ we assume in this model that the relationship between $\mathbf{x}$ and $y$ is linear $$y = \beta_0 1 + \beta_1 x_1 + \beta_2 x_2 + ... + \beta_n x_n + \epsilon$$ Where $\beta_0$ is an intercept term and $\epsilon$ is the error variable, an unobserved random variable that adds "noise" to the linear relationship. As @Roland points out, the linear relationship must hold in the parameters, not necessarily in the data. So nothing stops you from taking functions of the explanatory variables, and then performing the linear regression again. For example in your case, you could let: $$z_1 = x_1, \ z_2 = x_2, \ z_3 = x_1^2, \ z_4 = x_2^2$$ And then perform linear regression on $z$ as: $$y = \beta_0 1 + \beta_1 z_1 + \beta_2 z_2 + \beta_3 z_3 + \beta_4 z_4 + \epsilon$$ which is equivalent to $$y = \beta_0 1 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1^2 + \beta_4 x_2^2 + \epsilon$$ I clearly see an improvement in both assumptions If your data shows signs of an underlying polynomial relationship between the explanatory variables, then fitting a linear regression model on polynomial variables will improve your model. As always, this comes with many advantages and disadvantages, some discussed in the posts linked below. ### An example Here is a toy example of trying to fit a linear regression model on a noisy sine curve. As you may know, the sine curve can be approximated by a sum of polynomials, so intuitively we would expect a polynomial linear regression model to do well under certain conditions: ### More details See these excellent posts for more details and explanations
2022-01-20T09:21:09
{ "domain": "stackexchange.com", "url": "https://stats.stackexchange.com/questions/352277/can-i-still-use-linear-regression-assumptions-test-on-a-linear-model-with-a-poly/352460", "openwebmath_score": 0.6187834739685059, "openwebmath_perplexity": 377.39476100626325, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9643214501476359, "lm_q2_score": 0.8774767842777551, "lm_q1q2_score": 0.8461696850856091 }
http://gateoverflow.in/99639/maths-graph-theory
88 views Let G be a graph with 10 vertices and 31 edges. If G has 3 vertices of degree 10, 1 vertex of degree 8 and 2 vertices of degree 5 and the other four vertices of degree at least 3, how many vertices are of degree 3________? my solution: Σ deg(v) = 2|E| 3*10 + 1*8 + 2*5 + 4*(>=3) = 2*31 4*(>=3) = 62- (30+8+10) = 14 I think we can have 3 vertices each of degree 3 and vertex of degree 5, so my answer is 3 but given answer is 2. 3 vertex of degree 10 means it is a multigraph...in simple graph degree cannot be greater than 9...or this que may be framed wrongly multigraph also follows this property: Σ deg(v) = 2|E| what's wrong? ohk, sum of degree  = 2 * edges 3 * 10 +  1 * 8 + 2 * 5  + x1 + x2 + x3  + x4 =  2 * 31 x1 + x2 + x3 + x4 =   62 - 48 x1 + x2 + x3 + x4 = 14 each node have degree atleast three means x1 + x2 + x3 + x4 =   14 - 4 * 3 = 2 x1 + x2 + x3 + x4 = 2 Means either one node take 2 value , means  one node have degree 3 + 2 :  5 ( Not possible , because their are only 2 nodes of degree 5 ) Or two nodes have 1 , 1 value       means two nodes have 4 degree , and left two nodes have 3 degree. Hence nodes having degree  = 3  are 2. selected ( Not possible , because their are only 2 nodes of degree 5 )  i was thinking the same later because it is mentioned in the question that there 2 vertices of 5 degree. thanks! You can also do it like this: Let 'x' be the vertices of degree 3 So, (4-x) vertices are of degree >=4 (because 4 vertices have degree >=3) So, 3(10) + 1(8) + 2(5) + x(3)  + (4-x)(4) >= 62 $\therefore$  2 >= x Yes, thats why  I have taken >= 62
2017-02-24T21:40:26
{ "domain": "gateoverflow.in", "url": "http://gateoverflow.in/99639/maths-graph-theory", "openwebmath_score": 0.71247398853302, "openwebmath_perplexity": 741.1382558756962, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. Yes\n2. Yes", "lm_q1_score": 0.9643214511730026, "lm_q2_score": 0.8774767794716264, "lm_q1q2_score": 0.8461696813506916 }
https://mathematica.stackexchange.com/questions/147937/contourplot-and-3dplot-of-fx-y-z-x-z-y-z-x-y-z
ContourPlot and 3Dplot of $f(x,y,z)=x z + y z - x y z$ I have a function of 3 variables: $x$, $y$, and $z$. This is the function: $$f(x,y,z)=x z + y z - x y z$$ 1. Is there a way for me to graph this function? (3D graph) 2. Can you sketch several representative contour plots from the family of equations for various choices of c. We might place them all together in one plot? 3. Can this function of three variables be visualized as a 2D grid of 2D contours? As shown in the picture Thanks for the help. You (apparently) have a scalar function of three variables, so you cannot use a simple ContourPlot; you must use ContourPlot3D. Moreover 3DPlot (which does not exist in Mathematica but instead Plot3D) takes a function of two variables and plots the value in the third dimension. DensityPlot3D[x z + y z - x y z, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}, PlotLegends->Automatic] If you want contours: ContourPlot3D[x z + y z - x y z, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}, Contours -> 10] If you want two-dimensional slices: GraphicsGrid[ Partition[ Table[ ContourPlot[x z + y z - x y z, {x, -2, 2}, {y, -2, 2}], {z, -2, 2, .5}], 3]] • Thank you very much for the wonderful work .... Can you plote afunction of 3 variables could be visualized as a 2D grid of 2D contours as this i.stack.imgur.com/QRjO6.png Jun 8 '17 at 22:28 • @Emad: So PLEASE ANSWER: What is $c$?? Jun 8 '17 at 22:56 • c representation a constant level set ...... we assume $f(x,y,z)=c$ where c is constant Jun 8 '17 at 23:01 • @ David G. Stork Please see this question math.stackexchange.com/questions/1573755/… to understand me Jun 8 '17 at 23:06 This is for illustrative purposes. ContourPlot can be used for grid of graphics and SliceContourPlot3D for a 3D visualization. Noting the link to the provided graphic does not relate to provided function, the differences in plot are expected. f[x_, y_, z_] := x z + y z - x y z cp[c_, z0_] := ContourPlot[f[x, y, z0] == c, {x, -4, 4}, {y, -4, 4}, FrameLabel -> {Row[{"c=", c, ", z=", z0}], None}, BaseStyle -> 12] Grid[Table[cp[i, j], {j, Range[-2, 2]}, {i, Range[-2, 2]}], Frame -> All, Spacings -> {2, 0}] g[x_, y_, z_, c_] := f[x, y, z] - c scp[c_, z0_] := SliceContourPlot3D[ g[x, y, z, c], {z == z0}, {x, -4, 4}, {y, -4, 4}, {z, -4, 4}, Contours -> {0}, ContourShading -> None, ContourStyle -> Thick]
2021-10-16T01:59:12
{ "domain": "stackexchange.com", "url": "https://mathematica.stackexchange.com/questions/147937/contourplot-and-3dplot-of-fx-y-z-x-z-y-z-x-y-z", "openwebmath_score": 0.3499150276184082, "openwebmath_perplexity": 2461.126881864552, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.964321452198369, "lm_q2_score": 0.8774767746654976, "lm_q1q2_score": 0.8461696776157736 }
http://zvvs.managerencoherence.fr/newton-method-to-find-roots.html
# Newton Method To Find Roots Remark 1 The new ninth order method requires six function evaluations and has the order of convergence nine. And let's say that x is the cube root of 3. This program is not a generalised one. GRAPHICAL INTERPRETATION :Let the given equation be f(x) = 0 and the initial approximation for the root is x 0. ' The function. Newton's method is a way to find a solution to the equation to as many decimal places as you want. In mathematics, the bisection method is a root-finding method that applies to any continuous functions for which one knows two values with opposite signs. So while Newton's Method may find a root in fewer iterations than Algorithm B, if each of those iterations takes ten times as long as iterations in Algorithm B then we have a problem. For instance, if we needed to find the roots of the polynomial , we would find that the tried and true techniques just wouldn't work. The Newton-Raphson method of finding roots of nonlinear equations falls under the category of _____ methods. Binary Search is a technique found in the field of Computer Science that is used to find, or search for, an element in a sorted list/array. You might just think, why not just start with $x_0 = 0. Modify it appropriately to do the following to hand in: 1. See root-finding examples. However I, dont know how to change the function ( x^8-7x^7+14x^6-14x^5+27x^4-14x^3+14x^2-7x+1. Solution: Try another initial point. Had you started just a bit lower, say x0=1. Newton-Raphson method for locating a root in a given interval The Newton-Raphson method is another numerical method for solving equations of the form This is best illustrated by the example below which is covered in the video. Newton-Raphson Method (a. Root Finder finds all zeros (roots) of a polynomial of any degree with either real or complex coefficients using Bairstow's, Newton's, Halley's, Graeffe's, Laguerre's, Jenkins-Traub, Abert-Ehrlich, Durand-Kerner, Ostrowski or Eigenvalue method. Below is the syntax highlighted version of Newton. Newton's Method in Matlab. Newton Search for a Minimum Newton's Method The quadratic approximation method for finding a minimum of a function of one variable generated a sequence of second degree Lagrange polynomials, and used them to approximate where the minimum is located. In this way you avoid using the division operator (like in your method, c1 = -d/g , ) - small but some gain at least! Besides, no fears if the denominator becomes 0. From that initial estimate, you. In the Newton-Raphson method, two main operations are carried out in each iteration: (a) evaluate the Jacobian matrix and (b) obtain its inverse. I understand newton's method and I was able to find all the real roots of the function. I've previously discussed how to find the root of a univariate function. The most widely used method for computing a root is Newton's method, which consists of the iterations of the computation of + = − ′ (), by starting from a well-chosen value. Newton's method for finding roots. This is essentially the Gauss-Newton algorithm to be considered later. By using this information, most numerical methods for (7. We already know that for many real numbers, such as A = 2, there is no rational number x with this property. Introduction Finding the root of nonlinear equations is one of important problem in science and engineering [5]. In mathematics, Newton method is an efficient iterative solution which progressively approaches better values. Newton's method for finding roots of functions. Newton method root finding: School project help. A method similar to this was designed in 1600 by Francois Vieta a full 43 years before Newton's birth. Start at x = 2+3i and use your polynew routine to find a root of the polynomial p(x) = x^2 - 6 * x + 10 Deflation. x i+1 x i x f(x) tangent. Quasi-Newton methods: approximating the Hessian on the fly ¶ BFGS : BFGS (Broyden-Fletcher-Goldfarb-Shanno algorithm) refines at each step an approximation of the Hessian. Bisection method is one of the many root finding methods. Newton's method, also known as Newton-Raphson, is an approach for finding the roots of nonlinear equations and is one of the most common root-finding algorithms due to its relative simplicity and speed. Newton's Method Formula In numerical analysis, Newton's method is named after Isaac Newton and Joseph Raphson. Comparative Study Of Bisection, Newton-Raphson And Secant Methods Of Root- Finding Problems International organization of Scientific Research 3 | P a g e III. We calculate the tangent line at and find. Newton Raphson method, also called the Newtons method, is the fastest and simplest approach of all methods to find the real. Newton's Method is an application of derivatives will allow us to approximate solutions to an equation. Store it in some variable say a, b and c. I'd like to write a program that uses the Newton Raphson method to calculate a root of a polynomial (determined by the user) given an initial guess. Just decide how much of the complex plane to draw, and for each pixel in the image, iterate Newton's method on the corresponding complex number and see what happens. There are various methods available for finding the roots of given equation such as Bisection method, False position method, Newton-Raphson method, etc. we will have some estimate of the root that is being sought. with initial approximation. Consider the problem of finding the square root of a number. Find a zero of the function func given a nearby starting point x0. So, it is basically used to find roots of a real-valued function. Cut and paste the above code into the Matlab editor. It is closely related to the secant method, but has the advantage that it requires only a single initial guess. A brief overview of the Newton-Raphson method can be found in 8. We will find root by this method in mathematica here. Atul Roy 4,273. The find_zerofunction provides the primary interface. It is based on the simple idea of linear approximation. 3 Newton's method Newton's method is an algorithm to find numeric solutions to the equation f(x) = 0. This program graphs the equation X 3 / 3 - 2 * X + 5. The equation to be solved is X 3 + a ⁢ X 2 + b ⁢ X + c = 0. Newton's method is used as the default method for FindRoot. Problem: Write a Scilab program for Newton Raphson Method. 4 Newton-Raphson and Secant Methods. Today I am going to explain Bisection method for finding the roots of given equation. The most widely used method for computing a root is Newton's method, which consists of the iterations of the computation of + = − ′ (), by starting from a well-chosen value. Needing help with using newton raphson method to Learn more about newtonraphson, method, roots, help. the bisection method or secant method, Newton’s method does not physi-cally take an interval, but it computes a better guess as to where the root may be, and that better guess will converge to a root. The first term on the right hand side is zero since is a root. In this post, only focus four basic algorithm on root finding, and covers bisection method, fixed point method, Newton-Raphson method, and secant method. The method as taught in basic calculus, is a root-finding algorithm that uses the first few terms of the Taylor series of a function f(x)\,\! in the vicinity of a. with initial approximation. The Newton iteration is then implemented interactively from first principles, and the calculations are repeated by an application of the underlying theory. It is based on the simple idea of linear approximation. (Remember from algebra that a zero of function f is the same as a solution or root of the equation f(x) = 0 or an x intercept of the graph of f. This tutorial explores a numerical method for finding the root of an equation: Newton's method. Newton's Method In this section we will explore a method for estimating the solutions of an equation f(x) = 0 by a sequence of approximations that approach the solution. Quasi-Newton methods: approximating the Hessian on the fly ¶ BFGS : BFGS (Broyden-Fletcher-Goldfarb-Shanno algorithm) refines at each step an approximation of the Hessian. Given that Maxima can evaluate expr or f over [a, b] and that expr or f is continuous, find_root is guaranteed to find the root, or one of the roots if there is more than one. For a radicand α, beginning from some initial value x 0 and using (1) repeatedly with successive values of k, one obtains after a few steps a sufficiently accurate value of α n if x 0 was not very far from the searched root. Always converge. Newton’s method works like this: Let a be the initial guess, and let b be the better guess. To remedy this, let's look at some Quasi-Newtonian methods. None of these Ans - B Using Newton-Raphson method, find a root correct to three decimal places of the equation x3 - 3x - 5 = 0 A. Take an initial guess root of the function, say x 1. /***** * Compilation: javac Newton. You can use a root deflation scheme, so as you find a root, you modify the function, so the root you just found is no longer a root. However, we will see that calculus gives us a way of finding approximate solutions. ) •Secant Method Part 2. The goal of our research was to understand the dynamics of Newton's method on cubic polynomials with real coefficients. Dana Mackey (DIT) Numerical Methods 17. It includes solvers for nonlinear problems (with support for both local and global optimization algorithms), linear programing, constrained and nonlinear least-squares, root finding and curve fitting. To find the roots of a function using graphing and Newton's method. Also, this method is not 100% in finding roots. Newton's Method in Matlab. We start with this case, where we already have the quadratic formula, so we can check it works. I want generate R code to determine the real root of the polynomial x^3-2*x^2+3*x-5. Using an initial guess of 1 with Newton's method. derive the Newton-Raphson method formula, 2. Newton's method for finding roots. Newton's method (also known as the Newton-Raphson method or the Newton-Fourier method) is an efficient algorithm for finding approximations to the zeros (or roots) of a real-valued function f(x). Newton’s method involves choosing an initial guess x 0, and then, through an iterative process, nding a sequence of numbers x 0, x 1, x 2, x 3, 1 that converge to a solution. The Newton-Raphson method (or Newton’s method) is one of the most efficient and simple numerical methods that can be used to find the solution of the equation f(x) = 0. Newton's Method Sometimes we are presented with a problem which cannot be solved by simple algebraic means. Toggle Main Navigation. Thus, we can create a function (using your f[x_, sq_] = x^2 - sq ) that gives us the next x value when looking for the square root of sq. These include: Bisection-like algorithms. I want generate R code to determine the real root of the polynomial x^3-2*x^2+3*x-5. Examples : Newton‐Raphson method does not work when the. Get an answer for '1/x = 1 + x^3 Use Newton's method to find all roots of the equation correct to six decimal places. 5 lies between 01 and 0. 0 This variant uses the first and second derivative of the function, which is not very efficient. Newton's Method for Solving Equations. Newton's method is also called Newton-Raphson method. Secant method avoids calculating the first derivatives by estimating the derivative values using the slope of a secant line. Use Newton's method to find the absolute maximum value of the function f(x) = 2x sin x, 0 ≤ x ≤ π correct to six decimal places. 1–3) • introducing the problem • bisection method • Newton-Raphson method • secant method • fixed-point iteration method x 2 x 1 x 0. This method uses the derivative of f(x) at x to estimate a new value of the root. This can get tricky too, if you are not careful.$ Newton's Method works best if the starting value is close to the root you seeking. Online calculator. Newton’s Method for Finding Roots A laboratory exercise|Part III Newton’s method is a very good \root nder. Following is the syntax for sqrt() method − import math math. Create initial guess x(n). ) •Simple One-Point Iteration •Newton-Raphson Method (Needs the derivative of the function. Guess the initial value of xo, here the gu. One Dimensional Root Finding Newton’s Method Bisection is a slow but sure method. We can model this as a vector-valued function of a vector variable. Java Examples: Math Examples - Square Root Newtons Method. Use Newton's method to find all roots of the equation correct to six decimal places. In a nutshell, the former is slow but robust and the latter is fast but not robust. (b) Use Newton's method to approximate the root correct to six decimal places. We set an approximate value for the root (x0). Newton's method for root-finding for a vector-valued function of a vector variable We have multiple real-valued functions, each of multiple variables. (Enter your answers as a comma-separated list. For simplicity, we have assumed that derivative of function is also provided as input. The worst thing about Newton's method is that it may fail to converge. The Method Newton’s method is a numerical method for finding the root(s) x of the the equation f. Newton's Method, in particular, uses an iterative method. Like so much of the differential calculus, it is based on the simple idea of linear approximation. Newton's method calculator or Newton-Raphson Method calculator is an essential free online tool to calculate the root for any given function for the desired number of decimal places. Kite is a free autocomplete for Python developers. It's also called a zero of f. The Newton-Raphson method assumes the analytical expressions of all partial derivatives can be made available based on the functions , so that the Jacobian matrix can be computed. In symbol form we’re looking for:. Newton's method calculates the roots of equations. Di erent methods converge to the root at di erent rates. As I have used circular references like this to solve some of the problems that I face, I have found that computation time can be a concern. Let's try to solve x = tanx for x. ) Elena complains that the recursive newton function in Project 2 includes an extra argument for the estimate. Once you have saved this program, for example as newton. One great example of that is Kepler’s equation I’m not going to go into this equation in this post, but small e is a constant and large E and M both are variables. The angle the line tangent to the function f(x) makes at x= 3 with the x -axis is 57 0. Theory and Proof. The roots of a quadratic equation are the values of 'x', which should satisfy the given equation. In other words, it finds the values of x for which F(x) = 0. Note that for a quadratic equation ax2+bx+c = 0, we can solve for the solutions using the quadratic formula. You have a function which performs a single step and a predicate which tells you when you're done. Get an answer for '1/x = 1 + x^3 Use Newton's method to find all roots of the equation correct to six decimal places. This formula defines Newton's method. When we find the line tangent to a curve at a given point, the line is also called the best linear approximation of the curve at that point. This guess is based on the reasoning that a value of 2 will be too high since the cube of. Newton-Raphson Method is a root finding iterative algorithm for computing equations numerically. The Attempt at a Solution First I attempted to write the fifth root of 36 in exponential form as show below: Let the 5th root of 36 = x Let f(x) = x^1/5 - 36 So, f'(x) = 1/5x^-4/5 Is this right so far?. Newton's method also requires computing values of the derivative of the function in question. So, we need a function whose root is the cube root we're trying to calculate. I don't know enough about the topic to explain in more detail, however. Before you dig too deeply into the code, though, you should familiarize yourself with what Newton's method. The Newton method consists of finding approximations for the function's roots by first setting an initial guess an then iteratively improve the guess precision. Comparative Study Of Bisection, Newton-Raphson And Secant Methods Of Root- Finding Problems International organization of Scientific Research 3 | P a g e III. Follow the first three. This program allows the user to enter integer value, and then finds square root of that number using math function Math. Among all these methods, factorization is a very easy method. I found some old code that I had written a few years ago when illustrating the difference between convergence properties of various root-finding algorithms, and this example shows a …. Adjust the Julia/SymPy function so it works with initial values with nonzero imaginary parts. Newton's Method (also called the Newton-Raphson method) is a recursive algorithm for approximating the root of a differentiable function. This first one is about Newton's method, which is an old numerical approximation technique that could be used to find the roots of complex polynomials and any differentiable function. In other words, it finds the values of X for which F(X) = 0. The only tricky part about using Newton's method is picking a. Take for example the 6th degree polynomial shown below. Worksheet 25: Newton’s Method Russell Buehler b. Newton Raphson method: it is an algorithm that is used for finding the root of an equation. It includes solvers for nonlinear problems (with support for both local and global optimization algorithms), linear programing, constrained and nonlinear least-squares, root finding and curve fitting. Newton's method is used to find a sequence of approximations a 1, a 2, a 3, to the root that approaches the root (ie, a n is closer to the root than a n –1 is). If the function is y = f(x) and x0 is close to a root, then we usually expect the formula below to give x1 as a better approximation. Root finding functions for Julia. 5)\) can be found with,. Solution: Since f(0) = −1 < 0 and f(1) = 0. 4 Newton-Raphson and Secant Methods. The secant method can be thought of as a finite-difference approximation of Newton's method. I have been trying to write a Newton's Method program for the square root of a number and have been failing. This method is named after Isaac Newton and Joseph Raphson and is used to find a minimum or maximum of a function. It is an open bracket method and requires only one initial guess. Finding nth root of a real number using newton raphson method. This means that there is a basic mechanism for taking an approximation to. of equation / T0 can not be find with the Newton‐Raphson method. Calculate Square Root without a Square Root Calculator. m, typing the filename, newton, at the prompt in the Command window will run the program. Spreadsheet Calculus: Newton's Method: Sometimes you need to find the roots of a function, also known as the zeroes. For g : Rn! Rn and x = g(x) The algorithm is simply: Step 1. Two widely-quoted matrix square root iterations obtained by rewriting this Newton iteration are shown to have excellent. Newton's method is a tool you can use to estimate the root of a function, which is the point at which the function crosses the x-axis. The iteration goes on in this way:. To remedy this, let's look at some Quasi-Newtonian methods. Beginning with the classical Newton method, several methods for finding roots of equations have been proposed each of which has its own advantages and limitations. GRAPHICAL INTERPRETATION :Let the given equation be f(x) = 0 and the initial approximation for the root is x 0. Newton-Raphson Method is a root finding iterative algorithm for computing equations numerically. In Newton's method, the approximating function is the line tangent to the residual function, F, at some point, , where is close to the location of a root. Newtons method was designed to find roots, but it can also be applied to solving certain equations, where there are no closed form solutions. (a) Draw the tangent lines that are used to find x 2 and x 3, and estimate the numerical. Householder's Methods are used to find roots for functions of one real variable with continuous derivatives up to some order. where, g is the root found out using Newton Raphson. The classical way to compute that is by successive approximations using the method of Isaac Newton. Root Finding Methods: Newton-Raphson Method Syful Akash Shahjalal University of Science & Technology, Bangladesh Department of Physics 23 March, 2018 Abstract In this study report I try to represent a brief description of root finding methods which is an important topic in Computational Physics course. Had you started just a bit lower, say x0=1. Newton's method is used as the default method for FindRoot. The box labeled \x n" will update to show the next approximation to the root using Newton’s Method (so after the rst iteration you get \x 2"). Viewed 109 times. The Newton Method, when properly used, usually comes out with a root with great efficiency. multiplicity 2 # [int] The multiplicity of the root when using the modified newton method Exercise: In the Newton's root finding algorithm, it is important to choose a reasonable initial search value. To get started with Newton's Method you need to select an initial value \$x_0. Note that for a quadratic equation ax2+bx+c = 0, we can solve for the solutions using the quadratic formula. Some functions may have several roots. 2787 is wrong. 5 x 2 - 3 x + 0. The Newton-Raphson Method. The point is, you cannot simply just modify Newton's method to find multiple roots. basic gauss elimination method, gauss elimination with pivoting, gauss jacobi method, gauss seidel method Program to construct Newton's Divided Difference Interpolation Formula from the given distinct data points and estimate the value of the function. Definition: This describes a "long hand" or manual method of calculating or extracting cube roots. Calculates the root of the equation f(x)=0 from the given function f(x) and its derivative f'(x) using Newton method. However, sometimes this method is called the Raphson method, since Raphson invented the same algorithm a few years after Newton, but his article was published much earlier. The process involves making a guess at the true solution and then applying a formula to get a better guess and so on until we arrive at an acceptable approximation for the solution. None of these Ans - B Using Newton-Raphson method, find a root correct to three decimal places of the equation x3 - 3x - 5 = 0 A. So we would have to enter that manually in our code. The most widely used method for computing a root is Newton's method, which consists of the iterations of the computation of + = − ′ (), by starting from a well-chosen value. However, we will see that calculus gives us a way of finding approximate solutions. Adjust the Julia/SymPy function so it works with initial values with nonzero imaginary parts. We want to solve the equation f(x) = 0. Exercise 2: Find a root of f(x) =ex −3x. Newton's Method is an application of derivatives will allow us to approximate solutions to an equation. However, there are more interesting things that can happen. sqrt( x ) Note − This function is not accessible directly, so we need to import math module and then we need to call this function using math static object. Users are responsible to pick a good one. I have been trying to write a Newton's Method program for the square root of a number and have been failing. The classical way to compute that is by successive approximations using the method of Isaac Newton. Newton-Raphson Method may not always converge, so it is advisable to ask the user to enter the maximum. Examples : Newton‐Raphson method does not work when the. ex = 5 - 4x? Give exact and approximate solutions to three decimal places: x^2-12x+36=81?. Newton's method, also known as Newton-Raphson, is an approach for finding the roots of nonlinear equations and is one of the most common root-finding algorithms due to its relative simplicity and speed. Newton’s Method for Finding Roots A laboratory exercise|Part III Newton’s method is a very good \root nder. For example, if y = f(x) , it helps you find a value of x that y = 0. Finding roots of polynomials is a venerable problem of mathematics, and even the dynamics of Newton’s method as applied to polynomials has a long history. So we would have to enter that manually in our code. , convergence is not achieved after any reasonable number of iterations) means either that has no roots or that the Newton-Raphson steps were too long in some iterations (as mentioned above, each step of the Newton-Raphson method goes in the descent direction of the function , having its minimum , if a root exists). This is an iterative method invented by Isaac Newton around 1664. For example, if y = f(x), it helps you find a value of x that y = 0. Newton-Rapson’s Method Norges teknisk-naturvitenskapelige universitet Professor Jon Kleppe Institutt for petroleumsteknologi og anvendt geofysikk 1 Finding roots of equations using the Newton-Raphson method Introduction Finding roots of equations is one of the oldest applications of mathematics, and is required for. First, recall Newton's Method is for finding roots (or zeros) of functions. In numerical analysis, Newton's method (also known as the Newton-Raphson method), named after Isaac Newton and Joseph Raphson, is a method for finding successively better approximations to the roots (or zeroes) of a real-valued function. It is possible to modify Newton’s method to make it converge regardless of the root’s multiplicity: >>> findroot ( f , - 10 , solver = 'mnewton' ) 1. Newton's method can be used to find approximate roots of any function. Find a set of values that converge to a root of a function using Newton's method. Examples with detailed solutions on how to use Newton's method are presented. We already know that for many real numbers, such as A = 2, there is no rational number x with this property. 5 lies between 01 and 0. Like so much of the di erential calculus, it is based on the simple idea of linear approximation. From that initial estimate, you. Finding Roots. Finding Roots. Binary Search & Newton-Raphson Root Finding The Two Methods and Their Uses. The Newton-Raphson method is a powerful technique for solving equations numerically. This program graphs the equation X^3/3 - 2*X + 5. On the negative side, it requires a formula for the derivative as well as the function, and it can easily fail. Toggle Main Navigation. The Algorithm The bisection method is an algorithm, and we will explain it in terms of its steps. Note that for a quadratic equation ax2+bx+c = 0, we can solve for the solutions using the quadratic formula. The Newton Method, properly used, usually homes in on a root with devastating e ciency. Just decide how much of the complex plane to draw, and for each pixel in the image, iterate Newton's method on the corresponding complex number and see what happens. include: Bisection and Newton-Rhapson methods etc. Find x in[a,b]. Use Newton's method to find a solution to x2 − 17 = 0. Please,I need a program in visual basic to solve the question below:-By applying Newton Raphson method,find the root of 3x-2tanx=0 given that there is a root between pie/6 and pie/3. This program graphs the equation X 3 / 3 - 2 * X + 5. 5 or so, it should have converged to 0 as a root. That is what it is, but it may also be interpreted as a method of optimization. Some will. You can use a root deflation scheme, so as you find a root, you modify the function, so the root you just found is no longer a root. Usually iterations will converge quickly to the root. We set an approximate value for the root (x0). (3) we would have $$p=2$$, but it converges so quickly that it can be difficult to see the convergence (there are not enough terms in the sequence). The initial estimate of the root is x 0 =3 , and f(3)=5. Like so much of the di erential calculus, it is based on the simple idea of linear approximation. Homework Equations xn+1 = xn - f(xn)/f'(xn) 3. I'd like to write a program that uses the Newton Raphson method to calculate a root of a polynomial (determined by the user) given an initial guess. Although this method is a bit harder to apply than the Bisection Algorithm, it often finds roots that the Bisection Algorithm. ) •Secant Method Part 2. In numerical analysis, Newton's method, also known as the Newton-Raphson method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function. It's also called a zero of f. It's a solution or root of the equation f (x) = 0, ie, a point where the graph of f intersects the x-axis. f(x) = 0 Themethodconsistsofthe following steps: Pick a point x 0 close to a root. Newton's method is extremely fast, much faster than most iterative methods we can design. Newton-Raphson method is also one of the iterative methods which are used to find the roots of given expression. To remedy this, let's look at some Quasi-Newtonian methods. Finding roots of polynomials is a venerable problem of mathematics, and even the dynamics of Newton's method as applied to polynomials has a long history. Such equations occur in vibration analysis. ) •Bisection Method •False-Position Method •Open Methods (Need one or two initial estimates. These methods are called iteration methods. ' and find homework help for other Math questions at eNotes. Finding roots of polynomials is a venerable problem of mathematics, and even the dynamics of Newton's method as applied to polynomials has a long history. Let f(x) be a real-valued function on the real line that has two continuous derivatives. Finding Square Roots But enough about how it can go wrong; let's see how it can go right! We can use Newton's method to find the square root of a given number x by solving for the root of the quadratic q(y)=x-y 2. ≈ means "approximately equal to". Off On A Tangent. Newton's Method for approximating the roots of a curve by successive interations after an initial guess Despite being by far his best known contribution to mathematics, calculus was by no means Newton’s only contribution. Find the first derivative f’ (x) of the given function f (x). Newton Raphson method: it is an algorithm that is used for finding the root of an equation. In a nutshell, the former is slow but robust and the latter is fast but not robust. In 1976, my Cornell colleague John Hubbard began looking at the dynamics of Newton’s method, a powerful algorithm for finding roots of equations in the complex plane. We see that the function graph crosses the x-axis somewhere between -0. It arises in a wide variety of practical applications in physics, chemistry, biosciences, engineering, etc. Newton's method is a tool you can use to estimate the root of a function, which is the point at which the function crosses the x-axis. (a) Derive Newton's method for finding the root of an arbitrary matrix-valued function $$\displaystyle f =f(X)$$, where by "root" we mean that X is a root of f if $$\displaystyle f(X)= \mathbf{0}$$, where 0 is the matrix of all zeroes. Newton's method is an iterative method. The method consists of repeatedly bisecting the interval defined by these values and then selecting the subinterval in which the function changes sign, and therefore must contain a root. √2 is a solution of x = √2 or x² = 2. Users are responsible to pick a good one. The Newton method used in finite element analysis is identical to that taught in basic calculus courses. Exercise: Newton's method is flexible in ways that bisection is not. Newton-Raphson Method with MATLAB code: If point x0 is close to the root a, then a tangent line to the graph of f(x) at x0 is a good approximation the f(x) near a. without converging to a root. ' and find homework help for other Math questions at eNotes. In a nutshell, the former is slow but robust and the latter is fast but not robust. So we have reduced the problem to finding a square root of a number between 1 and 2. In other words, we solve f(x) = 0 where f(x) = x−tanx. Newton's method calculates the roots of equations. Please input the function and its derivative, then specify the options below. Off On A Tangent. I have another form to the function f(x) ,but I don't know if it's suitable to be solved by Newton's method in matlab,the other form is:. Explore complex roots or the step‐by‐step symbolic details of the calculation. Note: In Maple 2018, context-sensitive menus were incorporated into. So while Newton’s Method may find a root in fewer iterations than Algorithm B, if each of those iterations takes ten times as long as iterations in Algorithm B then we have a problem. N Bodies Computational Physics and Computer Science. It then runs a round of Newton's approximation method to further refine the estimate and tada, we've got something near the inverse square root.
2019-11-20T16:28:34
{ "domain": "managerencoherence.fr", "url": "http://zvvs.managerencoherence.fr/newton-method-to-find-roots.html", "openwebmath_score": 0.8345716595649719, "openwebmath_perplexity": 297.25593681290644, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9886682458008671, "lm_q2_score": 0.8558511543206819, "lm_q1q2_score": 0.8461528594088757 }
https://math.stackexchange.com/questions/2718697/using-darboux-integrals-to-prove-even-integration-on-a-symmetric-interval
# Using Darboux integrals to prove even integration on a symmetric interval Recall a function $f:[-a,a] \rightarrow R$ is said to be even if $f(x)=f(-x)$. Let $f$ be an integrable, even function. Prove that: $\int_{-a}^a f\,$ = $2 \int_0^a f\,$. . I'm trying to prove this using Darboux/Riemann Integrals, as stated in the title. I'm unsure how to interpret $f(x)=f(-x)$. My best guess was to say that $U(f,P)=U(f,-P)$ after I define a partition but, even if that's right, I can't see how that works. My proof so far is as follows: As $f$ is integrable, $\exists$ a sequence of partitions $P_n$ on $[-a,a]$ : $\lim\limits_{n \to \infty} (U(f,P_n)-L(f,P_n))=0$ and $\lim\limits_{n \to \infty} (U(f,P_n))=$ $\int_{-a}^a f\,$. Now, let ${Q_n}$ be a refinement of $P_n$ such that $Q_n=P_n \cup (-P_n)$. Then, $Q_n$ is symmetric about the origin and: (i.) $\lim\limits_{n \to \infty} (U(f,Q_n)-L(f,Q_n)=0$ (see ***). (ii.) $L(f,P_n) \le L(f,Q_n) \le U(f,Q_n) \le U(f,P_n)$, by refinement. (iii.) $\lim\limits_{n \to \infty} (U(f,Q_n))=$ $\int_{-a}^a f\,$, by (i.). ***$\lim\limits_{n \to \infty} (U(f,Q_n)-L(f,Q_n))=$ $\lim\limits_{n \to \infty} ([U(f,P_n)+U(f,-P_n)]-[L(f,P_n)+L(f,-P_n)])=0$ Here is where I get stuck... As $f$ is even, $U(f,P_n)=U(f,-P_n)$ and $L(f,P_n)=L(f,-P_n)$. And that's it.. I believe the final lines look like this: Thus, $U(f,Q_n)=U(f,P_n \cup (-P_n))=U(f,P_n)+U(f,-P_n)=U(f,-P_n)+U(f,-P_n)=2U(f,-P_n)$. And, as $\int_{-a}^a f\,$ = $\int_{-a}^0 f\,$ $+$ $\int_0^a f\,$, $\lim\limits_{n \to \infty} (2U(f,-P_n)=$ $\lim\limits_{n \to \infty} (U(f,Q_n)$ Hence, $\int_{-a}^a f\,$ $=$ $\int_{-a}^0 f\,$ $+$ $\int_0^a f\,$ $=$ $\int_0^a f\,$ $+$ $\int_0^a f\,$ $=$ $2\int_0^a f\,$. End Proof Any help would be extremely helpful. As you can see, I get pretty lost at the end there. Thank you. • I edited the last line "Hence..." because I forgot a negative sign on one of the limits. Also, I may have to include the set {0} in the union, but I am not sure. So it may have to look like this: $Q_n = P_n \cup (-P_n) \cup$ {$0$}. – Marcus Apr 2 '18 at 19:22 Since $f$ is Riemann integrable on $[-a,a]$, by the Riemann criterion, for any $\epsilon > 0$ there exists a partition $P'$ of $[-a,a]$ such that $U(f,P') - L(f,P') < \epsilon.$ If the point $x=0$ is not included in $P'$, we can add it to obtain a partition $P$ which refines $P'$. Otherwise take $P = P'$ when it includes $x = 0$. Since $P' \subset P$ we have $$L(f,P') \leqslant L(f,P) \leqslant U(f,P) \leqslant U(f,P')$$ and $$U(f,P) - L(f,P) \leqslant U(f,P') - L(f,P') < \epsilon.$$ Let us denote the points in $P$ as $$-a = y_0 < y_1 < \ldots < y_n = 0 = x_0 < x_1< \ldots x_m = a,$$ where we take into consideration the fact that there may be a different number of points in the partition below and above $0$. Note that $P^- = (y_0,y_1, \ldots,y_n)$ is a partition of $[-a,0]$ and $P^+ = (x_0,x_1, \ldots,x_n)$ is a partition of $[0,a]$. We can write $$U(f,P) = U(f,P^-) + U(f,P^+), \,\,\, L(f,P) = L(f,P^-) + L(f,P^+),$$ and it follows that $$U(f,P^-) - L(f,P^-) + U(f,P^+) - U(f,P^+) = U(f,P) - L(f,P) < \epsilon.$$ Hence, since the upper sum minus the lower sum is positive, we have $$U(f,P^+) - L(f,P^+) < \epsilon,$$ and the integral of $f$ over $[0,a]$ exists and is squeezed between the lower and upper sums as $$\tag{1}L(f,P^+) \leqslant \int_0^a f \leqslant U(f,P^+).$$ Since $f(x) = f(-x)$, we have by the definition of upper and lower sums $$\tag{2}U(f,P^-) = \sum_{j=1}^n \sup_{x \in [y_{j-1},y_j]}f(x) \,(y_j- y_{j-1}) = \sum_{j=1}^n \sup_{-x \in [-y_j,-y_{j-1}]}f(-x) \,(-y_{j-1}- (-y_j)), \\ L(f,P^-) = \sum_{j=1}^n \inf_{x \in [y_{j-1},y_j]}f(x) \,(y_j- y_{j-1}) = \sum_{j=1}^n \inf_{-x \in [-y_j,-y_{j-1}]}f(-x) \,(-y_{j-1}- (-y_j))$$ Notice that the sums appearing on the right-hand sides of the two equations in (2) are themselves upper and lower sums with respect to a partition of $[0,a]$ since $-y_j \in [0,a]$. Consequently, $$\tag{3}L(f,P^-) \leqslant \int_0^a f \leqslant U(f,P^-).$$ Adding (1) and (3) we get $$\tag{4}L(f,P) \leqslant 2\int_0^a f \leqslant U(P,f).$$ But $f$ is integrable over $[-a,a]$ and we must also have $$\tag{4}L(f,P) \leqslant \int_{-a}^a f \leqslant U(P,f).$$ Thus, for any $\epsilon > 0$, $$\left|\int_{-a}^af - 2\int_0^a f \right| < U(f,P) - L(f,P) < \epsilon,$$ and it follows that $$\int_{-a}^af = 2\int_0^a f.$$ • Wow, that makes so much sense. Thank you so much! – Marcus Apr 4 '18 at 23:59 • @Marcus: You're welcome. Glad to help. – RRL Apr 5 '18 at 3:00
2019-05-22T06:48:06
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2718697/using-darboux-integrals-to-prove-even-integration-on-a-symmetric-interval", "openwebmath_score": 0.9618600606918335, "openwebmath_perplexity": 123.63743273834933, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9886682474702956, "lm_q2_score": 0.8558511524823263, "lm_q1q2_score": 0.8461528590201343 }
https://forum.math.toronto.edu/index.php?PHPSESSID=p3keptbghdddifthvj5q7mucc5&topic=1191.msg4314
### Author Topic: FE-P5  (Read 6327 times) #### Victor Ivrii • Administrator • Elder Member • Posts: 2563 • Karma: 0 ##### FE-P5 « on: April 11, 2018, 08:47:26 PM » For the system of ODEs \begin{equation*} \left\{\begin{aligned} &x'_t = x (x -y+1)\, , \\ &y'_t = y (x - 2)\,. \end{aligned}\right. \end{equation*} a.  Describe the locations of all critical points. b. Classify their types (including whatever relevant: stability, orientation, etc.). c. Sketch the phase portraits near the critical points. d.  Sketch the full phase portrait of this system of ODEs. #### Tim Mengzhe Geng • Full Member • Posts: 21 • Karma: 6 ##### Re: FE-P5 « Reply #1 on: April 12, 2018, 12:34:35 AM » For part(a) Let x(x-y+1)=0 and y(x-2)=0 We will have all the three critical points (x,y)=(0,0),(2,3) or (-1,0) For part(b) F=x(x-y+1) G=y(x-2) Therefore, the Matrix $J$ J={ \left[\begin{array}{ccc} 2x-y+1 & -x \\ y & x-2 \end{array} \right ]}, At point(0,0), we have J[0,0]={ \left[\begin{array}{ccc} 1 & 0 \\ 0 & -2 \end{array} \right ]}, Eigenvalues are \lambda_1=-2 \lambda_2=1 Therefore, (0,0) is a saddle point and thus unstable. At point(2,3), we have J[2,3]={ \left[\begin{array}{ccc} 2 & -2 \\ 3 & 0 \end{array} \right ]}, \lambda_3=1+\sqrt{5}i \lambda_4=1-\sqrt{5}i Therefore, (2,3) is a spiral point and is unstable. The orientation is counterclockwise. J[-1,0]={ \left[\begin{array}{ccc} -1 & 1 \\ 0 & -3 \end{array} \right ]}, \lambda_5=-1 \lambda_6=-3 Therefore, (-1,0) is a node and is asymptotically stable. #### Nikola Elez • Jr. Member • Posts: 10 • Karma: 2 ##### Re: FE-P5 « Reply #2 on: April 12, 2018, 12:38:13 AM » For part c/d #### Victor Ivrii • Administrator • Elder Member • Posts: 2563 • Karma: 0 ##### Re: FE-P5--solution « Reply #3 on: April 18, 2018, 06:48:47 AM » a.  Solving $x(x-y+1)=0$, $y(x-2)=0$ we get cases \begin{align*} &x=y=0  &&\implies A_1=(0,0),\\ &x=x-2=0  &&\implies \text{impossible}\\ &y=x-y+1=0 &&\implies A_2=(-1,0),\\ &x-y+1=x-2=0 &&\implies A_3=(2,3). \end{align*} b. Linearizations at these points have matrices \begin{align*} & \begin{pmatrix} 1 &\ \ 0\\ 0 &-2 \end{pmatrix} && \begin{pmatrix} -1 &\ \ 1\\ 0 &-3 \end{pmatrix} && \begin{pmatrix} 2 &-2\\ 3 &0 \end{pmatrix} \\[5pt] \text{with eigenvalues    }&\{1,-2\} &&\{-1,-3\} && \{1-\sqrt{5}i,1+\sqrt{5}i \} \end{align*} and therefore * $A_1$ is a saddle, * $A_2$ is a stable node, and * $A_3$ is unstable focal point and since left bottom number is $3>0$ it is counterclockwise oriented. c. Axis are: in $A_1$:  $\mathbf{e}_1=(1,0)^T$ unstable ($\lambda_1=1$), $\mathbf{e}_2=(0,1)^T$ stable ($\lambda_2=-2$). in $A_2$: $\mathbf{f}_1=(1,0)^T$ ($\lambda_1=-1$), $\mathbf{f}_2=(1,-2)^T$ ($\lambda_1=-3$). Since $\lambda_1 >\lambda_2$, all trajectories have an entry directions $\pm \mathbf{f}_1$ (except two, which have entry directions $\pm \mathbf{f}_2$). Then we draw trajectories near critical points (See attachment  P5-loc.png). d. One should observe that either $x=0$ in every point of the trajectory, or in no point; and that $y=0$ in every point of the trajectory, or in no point. It allows us to make a "skeleton'' of the phase portrait (see attachment), impose local pictures on it and finally draw a global portrait « Last Edit: April 18, 2018, 11:01:01 AM by Victor Ivrii » #### Victor Ivrii • Administrator • Elder Member • Posts: 2563 • Karma: 0 ##### FE-P5 Comments « Reply #4 on: April 19, 2018, 01:16:12 PM » a. Some students missed some stationary points and/or reported wrong points. All further analysis in the wrong points was ignored as irrelevant. For all three correct points found I gave 10pts, with a reduction of 3 pts for each missed points, and 1pts for extra points (only for those who found 3 correct points). b. Linearization was easy, but some borked it. Finding eigenvalues was supposed to be a breeze but ... One does not need to solve any equations to find eigenvalues of the diagonal or triangular matrices (some students wrote wrong equations and found wrong eigenvalues). Also eigenvectors of the diagonal matrices are obvious, and of the triangular are easy. Not everyone found correctly eigenvalues of the matrix at $(2,3)$. T recall, that they are $1\pm i\sqrt{5}$. One does not need to look for eigenvectors, however one should look at the sign of bottom left element of the matrix and conclude what is the direction of rotation (I have not subtracted points for missing justification that it is counter-clockwise) c. Drawing of the local pictures. At (0,0) mant=y draw incoming and outgoing lines like X instead of +, others indicated the wrong directions. Point $(-1,0)$ was more difficult. And in $(2,3)$ some draw "hairy monsters" d. Even when all local pictures were drawn correctly, some students draw intersecting lines (trajectories do not intersect!) and not everyone observed that $x=0$ and $y=0$ consist of trajectories (see "skeleton" in my post above) In some papers with no calculations or with calculations, leading to wrong conclusions) there are "miraculously" correct pictures. Those were discarded because "only solutions (not just answers) are evaluated". « Last Edit: April 20, 2018, 05:55:56 AM by Victor Ivrii »
2021-08-04T11:50:06
{ "domain": "toronto.edu", "url": "https://forum.math.toronto.edu/index.php?PHPSESSID=p3keptbghdddifthvj5q7mucc5&topic=1191.msg4314", "openwebmath_score": 0.9188506603240967, "openwebmath_perplexity": 6228.097188315884, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9886682474702956, "lm_q2_score": 0.8558511524823263, "lm_q1q2_score": 0.8461528590201343 }
https://math.stackexchange.com/questions/1140868/how-many-binary-sequences-of-length-7-have-at-least-two-1s
# How many binary sequences of length 7 have at least two 1's? How many binary sequences of length 7 have at least two 1's? Can someone please explain the procedure in detail please. I tried solving it using the "count what you do not want" procedure, but I got nowhere. Thank you in advance • What do you get if you count those that have exactly zero or exactly one 1? – fuglede Feb 9 '15 at 17:16 The number of $7$-digit sequences is $2^7=128$ The number of $7$-digit sequences with $0$ occurrences of "one" is $\binom70=1$ The number of $7$-digit sequences with $1$ occurrence of "one" is $\binom71=7$ The number of $7$-digit sequences with $2$ or more occurrences of "one" is $128-(1+7)=120$
2020-07-05T10:46:20
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1140868/how-many-binary-sequences-of-length-7-have-at-least-two-1s", "openwebmath_score": 0.628487229347229, "openwebmath_perplexity": 76.3270143099161, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9886682447992099, "lm_q2_score": 0.8558511451289037, "lm_q1q2_score": 0.8461528494639871 }
https://math.stackexchange.com/questions/2284480/finding-eigenvalues-and-corresponding-eigenvectors
# Finding eigenvalues and corresponding eigenvectors I have the matrix: $$A= \begin{bmatrix} 7 & -2 \\ 15 & -4 \\ \end{bmatrix}$$ and I am asked to find the eigenvalues and eigenvectors. I found the eigenvalues to be $\lambda = 1,2$. Now I need to find the eigenvectors: $$(A-\lambda I)\mathbf u=\mathbf 0$$ $$\begin{bmatrix} 5 & -2 \\ 15 & -6 \\ \end{bmatrix} \mathbf u=\mathbf 0$$ I created the augmented matrix and row reduced to get: $$\begin{bmatrix} 5 & -2 & 0 \\ 0 & 0 & 0\\ \end{bmatrix}$$ I set $u_2=s$ and $u_1=\frac{2}{5}s$, thus $$\mathbf u=s\begin{bmatrix} \frac{2}{5} \\ 1 \end{bmatrix}$$ However the answers say that the vector is $(2,5)$. I don't know where I went wrong. • You didn't go wrong. They just multiplied by 5. Since you have a constant s that won't matter. – B.A May 17 '17 at 4:09 • Every non-zero scalar multiple of an eigenvector is also an eigenvector with the same eigenvalue. That’s why it’s a mistake to speak of the eigenvector of $2$. – amd May 17 '17 at 6:51 You mention at the end of your work that you set $u_2=s$ and $u_1=\frac{2}{5}s$. You could have just as easily set $u_2=5s$ and $u_1=2s$ and it would still have solved the equation, as setting $s$ constant with anything for which the equation $u_2=\frac{2}{5}u_1$ holds will give you a solution. So you didn't do anything wrong in your analysis, you just chose a different pair of numbers to satisfy the equation than did the answer, and you solved it in a more general sense (as a function of a constant $s$) than did the answer. Your result is correct and is essentially the same as the answer. The eigenvalue equation is $$\mathbf{A}u = \lambda u. \tag{1}$$ Matrices are the embodiment of linear systems. So if the eigenvalue is scaled by $\alpha \lambda$, the scaled eigenvector becomes $\alpha u$. That is, $(1)$ implies $$\alpha \mathbf{A}u = \alpha \lambda u, \qquad \alpha \in \mathbb{C}.$$
2021-04-15T02:25:20
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2284480/finding-eigenvalues-and-corresponding-eigenvectors", "openwebmath_score": 0.9128859043121338, "openwebmath_perplexity": 120.40905586504337, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9886682461347528, "lm_q2_score": 0.855851143290548, "lm_q1q2_score": 0.8461528487894892 }
https://math.stackexchange.com/questions/4106602/arranging-numbers-1-to-1000-such-that-the-difference-of-two-adjacent-numbers-is
# Arranging numbers 1 to 1000 such that the difference of two adjacent numbers is not a square nor a prime number I've been working on the following problem for a while: Prove that it's possible to arrange numbers 1 to 1000 an order such that each number appears once and |$$x_j - x_{j+1}$$| is not a perfect square nor a prime number. The idea is just to prove that such an ordering exists, not to explicitly construct it (thankfully). My first thought was to try to construct an explicit ordering of 1 to 10 that satisfies the given constraints and then see if I could extrapolate a pattern. Unfortunately, I wasn't able to do this (5 minus any other number in that sequence gives either a prime or a perfect square, I believe...) I found online that there are 168 primes and 31 perfect squares between 1 and 1000, and this seems like potentially useful information. However, I'm still not able to connect the dots and figure out how to think about this problem ... Any help would be much appreciated. Consider the graph with 1000 vertices where vertex $$i$$ is adjacent to $$j$$ iff $$|i-j|$$ is neither prime nor square. There are at most $$2*(168+31)=398$$ vertices which are not adjacent to any point, so each vertex has degree at least $$999-398=601>1000/2$$. Since each vertex has degree greater than half the size of the graph, it's a well-known theorem that there is a Hamiltonian circuit. Use that ordering. Thus, you can arrange the numbers from 1 to 1000 so that no two consecutive ones have a difference which is a prime or a square. • Oh yeah, I hadn't even thought about Hamiltonian circuits. Thanks so much for this clear explanation. – nero Apr 18 at 5:00 You can also construct it: Start with $$1$$ and keep adding $$6$$ i.e $$1,7,13$$ until you hit $$997$$ then go back to $$3$$ and keep adding $$6$$ until you get to $$999$$ and go back to $$5$$ repeat until $$995$$ then go back to $$2$$ repeat until $$998$$ and go back to $$4$$ repeat until $$1000$$ and go back to $$6$$ repeat until $$996$$ and you're finished. The difference between consecutive terms is either $$6,994,993$$ - those clearly aren't primes, and $$993$$ and $$994$$ aren't perfect squares since they are $$>31^2$$ (and $$32^2>1000$$). • Slightly less obvious but simpler to describe the construction: just add $91$ each time (or subtract $909$ on wraparound). – Erick Wong Apr 18 at 9:07 • @ErickWang Yeah that's easier to describe , $77$ as well (though it's harder to prove that $923$ is composite), the wraparound is constant because $91\mid 1001$ and $77\mid 1001$ (you probably know this but might be useful for the OP). – kingW3 Apr 18 at 9:35
2021-05-09T17:44:11
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/4106602/arranging-numbers-1-to-1000-such-that-the-difference-of-two-adjacent-numbers-is", "openwebmath_score": 0.6699258089065552, "openwebmath_perplexity": 158.9916378874661, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.988668248138067, "lm_q2_score": 0.8558511396138366, "lm_q1q2_score": 0.8461528468689801 }
https://www.cut-the-knot.org/triangle/ThreeBrokenSticks.shtml
# A Triangle out of Three Broken Sticks ### Solution, Part 1 Assume the sticks are of length $1$ and consider the cube with vertices $A(0,0,0),$ $B(1,0,0),$ $D(0,1,0),$ $E(0,0,1),$ $C(1,1,0),$ $F(1,0,1),$ $H(0,1,1),$ and $G(1,1,1).$ The left pieces of the sticks are defined by their right points and I shall describe their lengths with the real $x,y,z\in(0,1).$ Thus the cube constitutes the sample space for all possible combinations of the breaking points. For $x,y,z$ to form a triangle, we need three inequalities: \begin{align} z&\lt x+y\\ y&\lt x+z\\ x&\lt y+z. \end{align} Note that, say, line $AF$ has the equation $z=x$ and line $AH$ the equation $z=y,$ so that the plane through $A,F,H$ is described by $z=x+y.$ The plane divides the space into two half-spaces. In the one that contains $G,$ $x+y\gt z.$ Similarly, the plane $AFC$ is described by $x=y+z$ and the plane $ACH$ by $y=x+z.$ The set of points that satisfies the three inequalities belongs to the intersection of the three half-spaces and the cube: that's the figure $V=ACFHG.$ It consists of two pyramids with the base $CFH$ and apices at $A$ and $G.$ $V$ is obtained from the cube by cutting corner pyramids $ACFB,$ $ACHD,$ and $AFHE.$ Each of these has volume $\displaystyle \frac{1}{3}\cdot\frac{1}{2}\cdot 1=\frac{1}{6}.$ It follows that the volume of $V$ is $\displaystyle 1-3\cdot\frac{1}{6}=\frac{1}{2}.$ ### Solution, Part 2 If now $x,y,z$ stand for the lengths of the longest pieces than $\displaystyle \frac{1}{2}\lt x,y,z\lt 1.$ Then for example, $\displaystyle x+y\gt\frac{1}{2}+\frac{1}{2}=1\gt z,$ and, similarly, for the other two inequalities. We conclude that the probability in this case is $1.$ ### Solution, Part 3 If now $x,y,z$ stand for the lengths of shortest pieces then $0\lt x,y,z\lt\frac{1}{2}$ and each of the three is drawn uniformly randomly from the interval $\displaystyle\left(0,\frac{1}{2}\right).$ The situation is similar to Part 1 so that the probability in this case is $\displaystyle \frac{1}{2}.$ ### Solution, Part 4 First I should mention that quite a few people observed that in Part 1 whether we choose left or right pieces because both side lengths of a broken stick have the same probability distribution. This implies that when a random choice is made between the left and right pieces for each stick, the probability of getting a triangle is no different than in Part 1, i.e., $\displaystyle \frac{1}{2}.$ It occurred to me that there might be a difference with the case where the choice is made between the long and the short pieces, rather than between the left and the right ones. Many people insisted their should not be any difference. I made a mistake of thinking that since $\min(x,1-x)$ is distributed on $\displaystyle \left[0,\frac{1}{2}\right]$ while $\max(x,1-x)$ is distributed on $\displaystyle \left[\frac{1}{2},1\right].$ Tangentially, support also came from the difference in probabilities in Parts 2 and 3. That was explained and corrected by Timon Klugge. where the three numbers $x,y,z$ satisfied $0\le x,y,z\le\frac{1}{2}$ and formed a triangle. Let $\Delta(x,y,z)=1,$ if $x,y,z$ form a triangle, and $0,$ otherwise. We could denote the probability we found as $\displaystyle P\left(\Delta(x,y,z)=1\,|\,0\le x,y,z\le\frac{1}{2}\right)=\frac{1}{2}.$ Now, if $\Delta(x,y,z)=1$ then also $\Delta(2x,2y,2z)=1,$ such that we were able to conclude that in the domain of definition, $[0,1]^3,$ $\Delta(x,y,z)=1$ half the time. Now, there is a harder road to arrive at the same conclusion. Assume $\displaystyle 0\le x,y,z\le\frac{1}{2},$ what is the probability of $P(\Delta(x,y,z)=1)?$ In other words, what is $\displaystyle P\left(\Delta(x,y,z)=1\text{ and }0\le x,y,z\le\frac{1}{2}\right)?$ The formula for conditional probability gives \displaystyle\begin{align}&P\left(\Delta(x,y,z)=1\text{ and }0\le x,y,z\le\frac{1}{2}\right)\\ &\qquad= P\left(\Delta(x,y,z)=1)\,|\,0\le x,y,z\le\frac{1}{2}\right)\cdot P\left(0\le x,y,z\le\frac{1}{2}\right)\\ &\qquad=\frac{1}{2}\cdot\frac{1}{8}=\frac{1}{16}. \end{align} Similarly, with the result of Part 2, we conclude that \displaystyle\begin{align}&P\left(\Delta(x,y,z)=1\text{ and }\frac{1}{2}\le x,y,z\le 1\right)\\ &\qquad= P\left(\Delta(x,y,z)=1)\,|\,\frac{1}{2}\le x,y,z\le 1\right)\cdot P\left(\frac{1}{2}\le x,y,z\le 1\right)\\ &\qquad=1\cdot\frac{1}{8}=\frac{1}{8}. \end{align} Now we have to consider six additional case that come in two groups of three. If $S,L$ stand for "short" and long, respectively and define $S_x=\min(x,1-x)$ and $L_x=\max(x,1-x),$ and similarly for $y$ and $z$ then there are eight combinations $SSS,$ $SSL,$ ..., $LLL.$ The first, $SSS,$ and the last, $LLL,$ have been just treated. We need to find $P(\Delta(x,y,z)=1\,|\,SSL)$ and then also $P(\Delta(x,y,z)=1\,|\,SLL).$ ### SSL We have three variables $x,y,z$ that satisfy \displaystyle \begin{align} x&\lt\frac{1}{2}\\ y&\lt\frac{1}{2}\\ z&\gt\frac{1}{2} \end{align} that, in addition, need to satisfy, $x+y\gt z.$ Points $(x,y,z)$ that satisfy the four conditions are located inside the corner pyramid in a small cube: The volume of this pyramid, relative to the larger cube is $\displaystyle \frac{1}{8}\cdot\frac{1}{6}=\frac{1}{48}.$ Cycling through the three variables gives $\displaystyle 3\cdot\frac{1}{48}=\frac{1}{16}.$ ### SLL In this case we need to satisfy the following inequalities: \displaystyle \begin{align} x&\lt\frac{1}{2}\\ y&\gt\frac{1}{2}\\ z&\gt\frac{1}{2}\\ x+y&\gt z\\ x+z&\gt y. \end{align} The points $(x,y,z)$ that satisfy all five inequalities lie outside two pyramids in the small cube: The relative volume of that space is $\displaystyle \frac{1}{8}\left(1-2\cdot\frac{1}{6}\right)=\frac{1}{12}.$ Cycling through the three variables gives $\displaystyle 3\cdot\frac{1}{12}=\frac{1}{4}.$ Putting everything together, denote $\mathcal{S}=\{SSS, SSL,\ldots, SSS\},$ then, the formula of total probability, \displaystyle\begin{align}P(\Delta(x,y,z)=1)&=\sum_{\omega\in\mathcal{S}}P(\Delta)x,y,z)=1\,|\,\omega)\\ &=\frac{1}{16}+\frac{1}{16}+\frac{1}{4}+\frac{1}{8}\\ &=\frac{1}{2}. \end{align} As Long Huynh Huu has observed that was clear to start with. ### Solution, Part 5 The expectation that $x,y,z\in(0,1)$ form side lengths of a triangle is $\displaystyle E_{x,y,z}=0\cdot\frac{1}{2}+1\cdot\frac{1}{2}.$ The same is true if we replace $x$ with $1-x,$ $y$ with $1-y,$ or $z$ with $1-z.$ There are eight such combinations. The eight events are not mutually exclusive (explaining the difficulty with Part 4), however the individual expectations can be summed up, giving the total expectation of $\displaystyle 8\cdot \frac{1}{2}=4.$ ### Solution 2 WLOG, let the length of the sticks be unity. Let the length of the three pieces (sorted) be $x\geq y\geq z$. Let the regions in the $X,Y,Z$ space (and the projections of the space) where the sorting holds and where forming a triangle is possible be termed "feasible region" and "positive region", respectively. Left Pieces The figure below shows the feasible region in the $YZ$ plane in white with the conditions on $x$. The conditions in black and blue are for the positive and feasible regions, respectively. The probability of triangulation is given by \displaystyle \begin{align} P&=\frac{\int_{z=0}^{1/2}\int_{y=z}^{1-z}\int_{x=y}^{y+z}dxdydz + \int_{y=1/2}^{1}\int_{z=1-y}^{y}\int_{x=y}^{1} dxdzdy} {\int_{z=0}^{1/2}\int_{y=z}^{1-z}\int_{x=y}^{1}dxdydz + \int_{y=1/2}^{1}\int_{z=1-y}^{y}\int_{x=y}^{1} dxdzdy} \\ &=\frac{1/24+1/24}{1/8+1/24}=1/2. \end{align} Longer Pieces The figure below corresponds to this case. Clearly, the probability is $1$ in this case. Shorter Pieces This case is same as the "left pieces" except that $x$, $y$, and $z$ are drawn from $(0,0.5]$ instead of $(0,1.0)$. The probability does not change by this scaling and the probability is $1/2$. Random Chioce For the case of "left pieces", the length of the stick has a uniform distribution of $(0,1)$. Choosing one side or the other does not change the distribution for the chosen stick as both sides are identically distributed. Thus, the probability for this case remaines $1/2$. Expected Number of Triangles There are $8$ combinations depending on which side is chosen for each stick. For each combination, the chosen side for a stick can be arbitrarily labelled "left". This approach maps every combination to the "left pieces" case and the probability of forming a triangle is $1/2$. Thus, the expected number of triangles is $4$. ### Acknowledgment This is an extension of problem 96 (in my Russian edition) from Challenging Mathematical Problems With Elementary Solutions by A. M. Yaglom and I. M. Yaglom.
2019-03-23T13:17:03
{ "domain": "cut-the-knot.org", "url": "https://www.cut-the-knot.org/triangle/ThreeBrokenSticks.shtml", "openwebmath_score": 0.9177099466323853, "openwebmath_perplexity": 429.45008707165425, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9908743647666859, "lm_q2_score": 0.8539127548105611, "lm_q1q2_score": 0.8461202584890855 }
https://math.stackexchange.com/questions/122296/how-to-evaluate-this-integral-relating-to-binomial
# How to evaluate this integral? (relating to binomial) I saw some result that some article used, (without proving) that stated:$$\int_0^1 p^k (1-p)^{n-k} \mathrm{d}p = \frac{k!(n-k)!}{(n+1)!}$$ But I was wondering, how would you integrate it? How did this integral come about? Is it something to do with the binomial distribution? You can also proof it "by story". Let "random number" mean a number picked from $[0,1]$ with uniform probability. Then the formula below can be interpreted as follows. $$\int_0^1 p^k (1-p)^{n-k} \mathrm{d}p = \frac{k!(n-k)!}{(n+1)!}$$ The left-hand side is the probability of taking random $p$ and then drawing a sequence of $n$ numbers from which some $k$ numbers are smaller than $p$ and some $n-k$ are larger. To understand the right-hand side, consider $n+1$ random numbers sorted, so that first $k$ are the smallest and last $n-k$ are the largest (with $(k+1)$-th being our $p$ from left-hand side interpretation); however, there are $(n+1)!$ permutations total, with $k!(n-k)!$ having the desired property (in a sorted sequence we disregard the order of first $k$ and last $n-k$), thus the right-hand side fraction denotes the same probability. • Hmm interesting way, cheers! – Heijden Mar 20 '12 at 16:11 • $p^k(1-p)^{n-k}$ is the probability you described. But what about the integral? Integral makes us talk not about single $p$ but about whole bunch of them. – Yola Jan 22 '18 at 13:55 • @Yola $p^k(1-p)^{n-k}$ is the probability of drawing a sequence of $n$ numbers, from which some $k$ numbers are smaller than $p$ and some $n−k$ are larger, given a particular, fixed value of $p$. And then, we sum all these probabilities for any possible value of $p$ using the integral. Does that explain your question? – dtldarek Jan 22 '18 at 14:52 • That's exactly that i thought, but i just can't coneive this part with integration. Probably i should just get used to it, and this will become clearer for me later. Thanks! +1 – Yola Jan 22 '18 at 15:14 • @Yola Then I suggest this: imagine the same problem, but say that $p \in \{0, 1/2\}$ (each happening with probability $1/2$). Work out the formula on the left. Then consider $p \in \{0/4, 1/4, 2/4, 3/4\}$ (all with probabilities $1/4$). Then consider $p \in \{0/8, 1/8, \ldots, 7/8\}$, and so on. When you spot the common theme, do $p \in \{0/2^m, 1/2^m, \ldots, (2^m-1)/2^m\}$. Going with $m$ to infinity is exactly the step that makes that sum an integral (here such a simplified integration is possible, because we are integrating a polynomial, which is a very well-behaved function on [0,1]). – dtldarek Jan 22 '18 at 16:49 This can be proven using repeated integration by parts: $$\begin{eqnarray} \int_0^1 p^k(1-p)^{n-k} &=& \frac{1^{k+1}(1-1)^{n-k}}{k+1}-\frac{0^{k+1}(1-0)^{n-k}}{k+1}+\frac{n-k}{k+1}\int_0^1 p^{k+1}(1-p)^{n-k-1}\\ &=& \frac{n-k}{k+1}\int_0^1 p^{k+1}(1-p)^{n-k-1}\\ &=& \frac{(n-k)(n-k-1)}{(k+1)(k+2)}\int_0^1 p^{k+2}(1-p)^{n-k-2}\\ &\vdots&\\ &=& \frac{k!(n-k)!}{n!}\int_0^1 p^{n}=\frac{k!(n-k)!}{n!}\frac{1}{n+1}=\frac{k!(n-k)!}{(n+1)!}\\ \end{eqnarray}$$ Not a natural derivation, but there is slightly different approach toward it. Let's consider the quantity $$I(n,k) = \int_{0}^{1} \binom{n}{k} p^k (1-p)^{n-k} \; dp.$$ Then by integration by parts, as in two former answers, we have $$I(n, k+1) = I(n, k).$$ Let $I$ denote this common value. Thus $$1 = \int_{0}^{1} 1 \; dp = \int_{0}^{1} \sum_{k=0}^{n} \binom{n}{k} p^k (1-p)^{n-k} \; dp = \sum_{k=0}^{n} I = (n+1)I$$ and the result follows. • Thanks for your help sos440, nice to see a diff. way also! – Heijden Mar 20 '12 at 16:11 There are probably several ways. An easy one is by induction on $k$. If $k=0$, then $$\int_0^1(1-p)^n\,dp=\left.-\frac{(1-p)^{n+1}}{n+1}\right|_0^1=\frac1{n+1}=\frac{0!(n-0)!}{(n+1)!}.$$ Now assume that the formula holds for some $k$. Then, integrating by parts, $$\begin{eqnarray} \int_0^1p^{k+1}(1-p)^{n-(k+1)}\,dp&=&\int_0^1p^{k+1}(1-p)^{(n-1)-k}\,dp\\ &=&\left.-\frac{p^{k+1}(1-p)^{n-k}}{n-k}\right|_0^1+\int_0^1\frac{(k+1)p^k(1-p)^{n-k}}{n-k}\,dp\\ &=&\frac{(k+1)}{n-k} \frac{k!(n-k)!}{(n+1)!}\\ &=&\frac{(k+1)!(n-(k+1))!}{(n+1)!}. \end{eqnarray}$$ The induction principle then guarantees that the formula holds for all $k$. • Thanks martin, didnt realise I could do it by induction! – Heijden Mar 20 '12 at 16:12
2021-04-23T15:16:19
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/122296/how-to-evaluate-this-integral-relating-to-binomial", "openwebmath_score": 0.9708001017570496, "openwebmath_perplexity": 392.85319031570486, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9799765563713599, "lm_q2_score": 0.8633916222765627, "lm_q1q2_score": 0.8461035487984679 }
http://mathhelpforum.com/discrete-math/181410-counting.html
# Math Help - counting 1. ## counting arrange 12 people into 4 teams of 3 people would u choose the team first then the players: 4C1 x 12C3 first team 3C1 x 9C3 second team and carry on for third and fourth then multiply all of ur results together? 2. Originally Posted by qwerty10 arrange 12 people into 4 teams of 3 people This is known in the trade as an unordered partition. Suppose that we have N distinct objects and $N=j\cdot k$ then we can group them into k cells with j in each cell. The number of ways is $\frac{N!}{(k!)^j(j!)}$ 3. But how would u then extend that to allow for the restriction that 2 particular people are not allowed in the same team? 4. Originally Posted by qwerty10 But how would u then extend that to allow for the restriction that 2 particular people are not allowed in the same team? Why did you change this question after four days? For this new question, the answer is $\binom{10}{2}\binom{8}{2}\frac{6!}{(3!)^2(2!)}$ 5. Hello, qwerty10! Plato is correct . . . Here is an explanation. Arrange 12 people into 4 teams of 3 people. Suppose we label the teams A, B, C and D. Choose 3 of the 12 people for team A. . . There are: . $_{12}C_3$ ways. Choose 3 of the remaining 9 people for team B. . . There are: . $_9C_3$ ways. Choose 3 of the remaining 6 people for team C. . . There are: . $_6C_3$ ways. Choose 3 of the remaining 3 people for team D. . . There are: . $_3C_3$ ways. To arrange 12 people into 4 distinguishable teams of 3 players each, . . there are: . $\left(_{12}C_3\right)\left(_9C_3\right)\left(_6C_3 \right)\left(_3C_3\right)$ . . $=\; \frac{12!}{3!\,9!}\cdot\frac{9!}{3!\,6!}\cdot\frac {6!}{3!\,3!}\cdot\frac{3!}{3!\,0!} \;=\;\frac{12!}{(3!)^4}$ ways. Since the four teams are not distinguishable, we must divide by $4!$ . . $\text{There are: }\:\frac{12!}{(3!)^44!} \:=\:15,\!400$ ways. ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ $\text{Suppose two players, }X\text{ and }Y\text{, are }not\text{ allowed on the same team.}$ $\text{Count the ways that }X\text{ and }Y\;are\text{ on the same team.}$ $\text{Place }X\text{ and }Y\text{ on team A.}$ $\text{There are 10 choices for the third member of the team.}$ $\text{Then the other 9 people can be partitioned in }\,\frac{9!}{(3!)^3}\text{ ways.}$ Since the teams are not distinguishable we divide by 4! . . $\text{There are: }\:\frac{10\cdot 9!}{(3!)^34!} \:=\;700\text{ ways.}$ Therefore, there are: . $15,\!400 - 700 \:=\:14,\!700\text{ ways}$ . . $\text{in which }X\text{ and }Y\text{ are }not\text{ teammates.}$ Darn . . . This doesn't agree with Plato's second answer. Is one of us wrong? . . . maybe both? . 6. Originally Posted by Soroban Darn . . . This doesn't agree with Plato's second answer. Is one of us wrong? . . . maybe both? . Your 15400 is correct. We both agree there. But disagree on the 700. There are ten ways to put A & B on a team together with one other person. That leaves nine to make up the other three teams. That can be done in $\frac{9!}{(3!)^3(3!)}=280$. So there are 2800 ways that A & B on a team together. So the difference is 12600. Here is a second front-door approach. $\binom{10}{2}\binom{8}{2}\binom{5}{2}=12600$. First pick A's team mates, then pick B's team mates then select the first person in alphabetical order from those not selected then pick his team mates, and we has a team of three left over. 7. Hello, Thank you for the help. I see where I have gone wrong-ive just been doing the standard distinguishable approach and not taking into account that the teams are not distinguishable. For part 1 for the total number of teams, I don't understand why in the distinguishable case( say we have teams 1,2,3) why we dont choose the team first then choose the members of the team? 8. Originally Posted by qwerty10 I see where I have gone wrong-ive just been doing the standard distinguishable approach and not taking into account that the teams are not distinguishable. For part 1 for the total number of teams, I don't understand why in the distinguishable case( say we have teams 1,2,3) why we dont choose the team first then choose the members of the team? There is no need to select teams that are already given. Say we have a red team, blue team, and green team, distinguishable teams. We have a roster of the twelve people is alphabetical order. If we rearrange the string "BBBBGGGGRRRR" in any order and put is next to that roster have one possible division into teams. That can be done in $\frac{12!}{(4!)^3}$ ways. BUT also notice that $\frac{12!}{(4!)^3}=\underbrace {\binom{12}{4}}_{blue}\underbrace {\binom{8}{4}}_{green}\underbrace {\binom{4}{4}}_{red}.$ In other words, we could pick the blue team, then the green and finally the red. 9. Thanks. So then when we proceed to part 2 where 2 people A and B can't be on the same team and we want number of ways teams can be selected, do we then have to choose the team first then the members because its possible that couple could go into either of the three teams, assuming their distinguishable teams again to keep it simple
2014-03-13T11:19:16
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/discrete-math/181410-counting.html", "openwebmath_score": 0.7065538763999939, "openwebmath_perplexity": 713.4225970487414, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9799765581257486, "lm_q2_score": 0.8633916134888614, "lm_q1q2_score": 0.8461035417014511 }
https://math.stackexchange.com/questions/2081396/can-every-true-theorem-that-has-a-proof-be-proven-by-contradiction/2081513
Can every true theorem that has a proof be proven by contradiction? After reading and being inspired by, Can every proof by contradiction also be shown without contradiction? and after some thought, I still don't have an answer to this. Does every theorem with a true proof have a proof by contradiction? • Let $P$ be a proof of the theorem. Assume the theorem is false. However we can exhibit $P$, which contradicts the assumption. – Stahl Jan 3 '17 at 4:11 • I think @Stahl is correct – MPW Jan 3 '17 at 4:22 • @Masacroso note that I did not post it as an answer, but rather as a comment ;) I'm neither a logician nor particularly interested in the subtleties of logic, and I assume someone can and will give a better explanation than I can. However, I have a hunch that one can formalize what I've said. – Stahl Jan 3 '17 at 4:26 • I think @Stahl is correct. Many so-called proofs by contradiction amount to assuming the result is false, proceeding to prove the result, then saying that the proven result then contradicts the hypothesis that it is false. It's a classic overkill used by many students when they could just prove the result directly. – MPW Jan 3 '17 at 4:28 • Ah, sorry, I missread the question as the opposite direction. Anyway it is not clear to me if this is possible or not, I mean that we can construct a proof by contradiction derived from any other proof, but then the status of this proof to be "by contradiction" is not very clear (because the statement is already proved before to "prove it but this derived proof"). – Masacroso Jan 3 '17 at 4:57 In classical logic, the answer is yes. Take any theorem $$T$$ and any proof $$P$$ for $$T$$. Now write the following proof: If $$\neg T$$: [Write $$P$$ here.] Thus $$T$$. Therefore $$\neg \neg T$$, by negation introduction. Thus $$T$$, by double negation elimination. One may object that this proof is essentially the same as $$P$$, and is just wrapped up. That is true, but it is a perfectly legitimate proof of $$T$$, even if it is longer than $$P$$, and it is indeed of the form of a proof by contradiction. A natural question that arises is whether the shortest proof of $$T$$ is a proof by contradiction. That is a much harder question to answer in general, but there are some easy examples, at least for any reasonable natural deduction system. For instance, the shortest proof of "$$A \to A$$" for any given statement $$A$$ is definitely not a proof by contradiction but rather just: If $$A$$: $$A$$. Therefore $$A \to A$$, by implication introduction. On the other hand, the shortest proof of "$$\neg ( A \land \neg A )$$" for any given statement $$A$$ is definitely a proof by contradiction: If $$A \land \neg A$$: $$A$$, by conjunction elimination. $$\neg A$$, by conjunction elimination. Therefore $$\neg( A \land \neg A )$$. The first part of this post shows that the shortest proof by contradiction is at most a few lines longer than the shortest proof, but nothing much else interesting can be said about the shortest proof unless... Well what if we do not allow the use of double negation elimination? If you have only the other usual rules (the first-order logic rules here but excluding ¬¬elim and including ex falso), then the resulting logic is intuitionistic logic, which is strictly weaker than classical logic, and cannot even prove the law of excluded middle, namely "$$A \lor \neg A$$" for any statement $$A$$. So if you instead ask the more interesting question of whether every true theorem can be proven in intuitionistic logic, then the answer is no. Note that intuitionistic logic plus the rule "$$\neg A \to \bot \vdash A$$" gives back classical logic, and one could say that this rule embodies the 'true principle' of proof by contradiction, in which case one can say that some true theorems require the use of a proof by contradiction somewhere. If you can prove a statement $A$ directly, you can prove it by contradiction. Just assume $\lnot A$, perform your proof of $A$, note the contradiction, and derive $A$. I find the common thought here unsettling, even if seemingly widely held. Perhaps others share my view, perhaps not (and note, it is just an intuitive view, one which--as explained to me in the comments--isn't in line with the notion of proof!) The proof in these purported proofs by contradiction must necessarily rely on the content of the direct proof. Yes, we have ourselves a contradiction, but in a proof by contradiction what convinces us (i.e., what we count as the proof) is not merely that we have a contradiction, but that we have derived a contradiction in a particular manner--from particular assumption(s)--which leads us to conclude something about these assumptions....that they must not be true. Here, all we do is prove a statement directly from the premises of the argument, and make a note that our conclusion contradicts the conclusion's_negation--leading us to believe that the negation of the conclusion's_negation (i.e., our conclusion) must be true. We then say to ourselves "aha! Indeed this is the case...since we have already (directly) proven our conclusion to be true!" It is not simply that our conclusion need not be derived via contradiction (and that it can be done directly and then "wrapped up" in a so called proof by contradiction), but rather that our conclusion has not been derived in this manner. A contradiction arising in a proof does not necessarily warrant that the method of proof being employed is what we call "proof by contradiction." It is the derivation of this contradiction which warrants the name "proof by contradiction." It is how the contradiction arises. And here, our contradiction doesn't really arise so much as the situation is one in which we are pointing out that a directly derived conclusion is contradictory to its negation. • Your intuition makes sense, but raises the challenge of how to precisely define what a "proof by contradiction" actually is then. – Eric Wofsey Jan 3 '17 at 23:10 • The conclusion that gives you your result in a straightforward proof becomes the conclusion that gives you a contradiction in the new proof. In other words, we can rephrase any proof as a proof by contradiction. – Morgan Rodgers Jan 3 '17 at 23:13 • I'm not reading the other answers as suggesting to insert a reference to another proof, but rather to repeat the arguments. – Morgan Rodgers Jan 4 '17 at 2:28 • I meant that it is of a specific syntactic form; it begins with a subproof of a contradiction under an assumption of the form $\neg A$, and then deduces $\neg \neg A$ and then applies DNE to get $A$. So when I said "form" in my answer I do not refer to the notion of "method" that most non-logicians have in mind. Is what I say clearer now? – user21820 Jan 4 '17 at 5:15 • The specific details of the syntactic form will depend on the specific formal system chosen, so you can consider the proof sketch in my answer akin to pseudo-code. If you use the rules in my linked post, then "form of a proof by contradiction" in my answer corresponds to a formal proof whose last 3 lines are of the form: $$¬A→⊥. \quad [→intro] \\ ¬¬A. \quad \quad \quad [¬intro] \\ A. \quad \quad \quad \quad [¬¬elim]$$. By the way, your answer is related to mathoverflow.net/questions/3776/…. – user21820 Jan 4 '17 at 5:38
2019-05-21T00:28:04
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2081396/can-every-true-theorem-that-has-a-proof-be-proven-by-contradiction/2081513", "openwebmath_score": 0.8424243927001953, "openwebmath_perplexity": 251.62871913055332, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9799765552017674, "lm_q2_score": 0.8633916152464016, "lm_q1q2_score": 0.8461035408992584 }
https://math.stackexchange.com/questions/1396595/reduced-row-echelon-form-with-a-variable
# Reduced Row Echelon Form with a Variable For what value of k does the system of equations not have a unique solution? $$\left\{ \begin{array}{c} x-2y+2z=0 \\ 2x+ky-z=3 \\ x-y+3z=-5 \end{array} \right.$$ I know that this means I have to find the value(s) of k where the system of equations has either no solutions or an infinite number of them. I converted the above system into a matrix and tried to simplify it into Row Reduced Echelon Form (rref), however, had no luck because of the one variable. Here's what I did. $$=\left[\begin{array}{ccc|c}1&-2&2&0\\2&k&-1&3\\1&-1&3&-5\end{array}\right]$$ $$R_1 - R_3$$ $$=\left[\begin{array}{ccc|c}1&-2&2&0\\2&k&-1&3\\0&-1&-1&5\end{array}\right]$$ $$2*R_1$$ $$=\left[\begin{array}{ccc|c}2&-4&4&0\\2&k&-1&3\\0&-1&-1&5\end{array}\right]$$ $$R_1 - R_2$$ $$=\left[\begin{array}{ccc|c}1&-2&2&0\\0&-4-k&5&-3\\0&-1&-1&5\end{array}\right]$$ $$(-1)*R_3$$ $$=\left[\begin{array}{ccc|c}1&-2&2&0\\0&-4-k&5&-3\\0&1&1&-5\end{array}\right]$$ $$4*R_3$$ $$=\left[\begin{array}{ccc|c}1&-2&2&0\\0&-4-k&5&-3\\0&4&4&-20\end{array}\right]$$ $$R_2 + R_3$$ $$=\left[\begin{array}{ccc|c}1&-2&2&0\\0&-k&9&-23\\0&4&4&-20\end{array}\right]$$ $$4*R_2$$ $$=\left[\begin{array}{ccc|c}1&-2&2&0\\0&-4k&36&-92\\0&4&4&-20\end{array}\right]$$ $$k*R_3$$ $$=\left[\begin{array}{ccc|c}1&-2&2&0\\0&-4k&36&-92\\0&4k&4k&-20k\end{array}\right]$$ $$R_2 + R_3$$ $$=\left[\begin{array}{ccc|c}1&-2&2&0\\0&0&4k+36&-92-20k\\0&4k&4k&-20k\end{array}\right]$$ I've tried numerous other methods relying entirely on guess and check, but have not had any success. I want to know whether I am on the right track, and if so how I should continue to get to the final answer. I would also like to know whether or not there is a more systematic way of simplifying matrices into RREF form. Any help will be greatly appreciated, thanks in advance. • Keep going: do $4R_2+kR_3$ – Santiago Canez Aug 14 '15 at 2:27 • Now what? I'm sorry if it's a silly question - am a tenth grader who's trying to self-study :P – StopReadingThisUsername Aug 14 '15 at 2:33 • It would have been better to put the result into the third row rather than the second as you did. Or, you can now simply switch the second and third rows. From the resulting form you should be able to determining when there will be a unique solution. You can read about "Gaussian elimination" to learn more about this type of procedure. – Santiago Canez Aug 14 '15 at 2:36 "For what value of $k$ does the system not have a unique solution" This would include both the case of infinitely many solutions and also the case of no solution. A much easier approach than performing row operations is the following: Let us express this as a matrix equation $Ax=b$ instead of as a system of equations. There is a theorem: $Ax=b$ has a unique solution if and only if $A$ is invertible. There is another theorem: $A$ is invertible if and only if $\det(A)\neq 0$ So, the value(s) for $k$ such that the system does not have a unique solution are those such that $\det(A)=0$. $\det(A)=1(k\cdot 3 - (-1)\cdot(-1)) - (-2)(2\cdot 3 - (-1)\cdot 1) + 2(2\cdot (-1) - k\cdot 1)$ $=(3k-1)+(14)+(-4-2k)$ $= k+9$ So, if $\det(A)=0$, you would have $k+9=0$, which implies that $k=-9$ As for your method and current work, you are nearly there. You left off at: $=\left[\begin{array}{ccc|c}1&-2&2&0\\0&0&4k+36&-92-20k\\0&4k&4k&-20k\end{array}\right]$ Now, rowswap to get $R_2\leftrightarrow R_3$ $=\left[\begin{array}{ccc|c}1&-2&2&0\\0&4k&4k&-20k\\0&0&4k+36&-92-20k\end{array}\right]$ It will be that there are no solutions when the final row looks like $[0~0~0~|~n]$ with $n\neq 0$, or infinitely many solutions if the final row looks like $[0~0~0~|~0]$ (assuming that there are no other $[0~0~0~|~n]$ rows elsewhere after row reduction is complete) To get the third entry of the last row equal to zero, that would happen when $4k+36=0$ which occurs when $k=-9$. There is a slight error in your work which is difficult to notice, and that is the step where you multiplied a row by $k$. In the case that you allow $k=0$, you have in effect destroyed the information that that row could have given you. Remember that you are only allowed to multiply rows by nonzero numbers. When multiplying by unknown quantities, you should be careful after reaching a final answer that doing so didn't cause a problem (like it did this time). Your work would imply that by setting $k=0$ the second row becomes all zeroes, implying infinitely many solutions, but that is not the case (as shown by my first method). Instead, of doing $4R_2\mapsto R_2$, followed by $kR_3\mapsto R_3$, followed by $R_2+R_3\mapsto R_2$., you would have avoided this error by simply doing $4R_2+kR_3\mapsto R_3$. Doing so wouldn't have caused any loss of information. • After the step $4*R_2$, make two cases. Case-1 should be as you and @JMoravitz talked about (considering $k \neq 0$). Other case, consider $k=0$ and proceed further. – Rajat Aug 14 '15 at 3:50
2019-12-14T08:23:11
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1396595/reduced-row-echelon-form-with-a-variable", "openwebmath_score": 0.8050471544265747, "openwebmath_perplexity": 181.99147765064646, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9799765563713599, "lm_q2_score": 0.8633916134888613, "lm_q1q2_score": 0.8461035401867265 }
https://powerofproofs.wordpress.com/2011/01/22/the-locker-problem/
## The Locker Problem There is a hallway of lockers with lockers numbered 1 through 100. There are also 100 students. Student 1 opens every locker, then student 2 closes every other locker, then student 3 opens or closes every 3rd locker (if it’s open then she closes it and if it’s closed then she opens it), and so on, where the nth student opens or closes every locker which is a multiple of n. (Does it matter in which order the students go down the hallway in?) The problem: Which lockers are open after all the students walk down the hallway? Think about it for a bit, and I’ll post a solution later. ### 16 responses to this post. 1. Posted by Shobhit Gupta on January 24, 2011 at 1:30 am I started out with 10 numbers, and I noticed a pattern of the number of flips applied to each number. And it turns out that its totally related to the factors (or divisors) of the number. For example, the 4th door will be flipped 3 times: By Student no. 1 By Student no. 2 By Student no. 4 Its the factors… So I checked out: http://www.wolframalpha.com/input/?i=Divisors+1+to+100 And then: http://www.wolframalpha.com/input/?i=number+of+Divisors+1+to+100+|+divisible+by+2 So I see a bigger picture now: 1 False 2 True 1 False 4 True 1 False 6 True 1 False until 18 True 1 False Thanks for such a thoughtful exercise. 2. Posted by Shobhit Gupta on January 24, 2011 at 1:35 am My last link (|+divisible+by+2) doesn’t open properly by clicking on it. Make sure you copy the link and paste it manually. 3. Yes you’re right–it’s related to the divisors of the number 🙂 I don’t understand what you mean by “1 False, 2 True”, etc. though. What kinds of numbers have lockers which end up open at the end? There’s one characterization and then an even deeper characterization. (I don’t mind if you post the answer if you have it, maybe put a disclaimer.) 4. Posted by Shobhit Gupta on January 25, 2011 at 5:40 am Oh yeah, I should have put a disclaimer in my 1st comment as well. Next time I will be careful about it. Well let me do that here: ————— ————— My True and False meant: True = ‘door is closed’ False = ‘door is open’ But anyway, in better words, all those doors that have odd number of factors are open. So the final answer would be: 1,4,9,16,25,36,49,64,81,100 Also, this is how my previous True False stuff relates… 1 = 1 4 = 1+2+1 9 = 1+2+1+4+1 16 = 1+2+1+4+1+6+1 25 = 1+2+1+4+1+6+1+8+1 and so on… 5. Posted by Shobhit Gupta on January 25, 2011 at 5:49 am Oh! Just gave more though about ‘deeper characterization’. I just noticed that the answer in above comment is actually a series of n^2. Now I realize that only the perfect squares can have odd number of factors. Nice! 6. Yes, great job! The sequence you wrote out for the squares is also equivalent to saying that $n^2$ is equal to the sum of the first $n$ odd numbers (add the preceding 1 to each number and there’s a 1 at the end). In general, when you take differences of a quadratically behaving sequence (f(x) is a quadratic equation and you look at f(1), f(2), …) you get a linear sequence, and the pattern continues: take differences of a cubic and you get a quadratic, etc. Pretty neat huh? Experimentation is often a great way to notice patterns, and then mathematicians try to prove their assertions once they come up with them. (I see you thought like a computer programmer pulling out Wolfram Alpha 🙂 ) Can you prove why perfect squares are the only numbers with an odd number of factors? (You need to know how to find the number of factors of a number given its prime factorization.) 7. Posted by Jenny on January 25, 2011 at 10:28 pm I saw a variation on this problem – where only odd-numbered students come to flip doors. Is there a nice way count the open doors in that case? 8. Posted by Shobhit Gupta on January 26, 2011 at 12:38 am >> I saw a variation on this problem – where only odd-numbered students come to flip doors. Is there a nice way count the open doors in that case? I think the answer would remain same if only the odd-numbered students come to flip doors. I have my own simplistic reasoning for that. Will explain once someone validates my answer. >> Can you prove why perfect squares are the only numbers with an odd number of factors? Nope. I guess I will have to turn to Google for the answer 🙂 9. Okay, so the solution to the initial problem is that the only numbers with an odd number of factors are perfect squares. This is because for numbers which aren’t perfect squares factors of the form k and 2/k (so if you have 12, your pairs are {1,12}, {2,6}, {3,4}) but with squares the square root of the number isn’t part of a pair. In this problem, we only care about numbers with an odd number of *odd* factors. To count the number of odd factors of a number, we can just look at the number of factors of the greatest odd factor, because that includes all the other odd factors. Try solving it from here 🙂 I don’t think the answer is 17. 10. Posted by Jenny on February 1, 2011 at 10:42 pm Yes, that’s what I tried doing, but it got rather complicated…I mean, it’s all the numbers up to 100. Never mind, then. I was just wondering if there was a quick solution, because this was a timed test. And Mr. Cocoros says it’s 17 🙂 • Well what I’ve gathered is that for a number to have an odd number of odd factors, it has to be of the form a power of 2 times an odd power. Which isn’t too bad to count, because for one thing 4 times an odd square is a square itself so your number has to be of the form k^2 or 2k^2. And oh you’re right, it is 17 🙂 Because its just 10 perfect square up to 100 + 7 squares less than 50 so that 2k^2 is less than 100. 11. Posted by Shobhit Gupta on February 2, 2011 at 3:12 am I tried it the naive way. And I couldn’t match up 17. Maybe I did something wrong.
2017-06-24T01:44:29
{ "domain": "wordpress.com", "url": "https://powerofproofs.wordpress.com/2011/01/22/the-locker-problem/", "openwebmath_score": 0.6259084939956665, "openwebmath_perplexity": 652.3033012348848, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. Yes\n2. Yes", "lm_q1_score": 0.979976554032175, "lm_q2_score": 0.8633916152464017, "lm_q1q2_score": 0.8461035398894422 }
https://math.stackexchange.com/questions/2432428/indeterminate-form-1-infty-vs-0-infty
# Indeterminate form $1^\infty$ vs. $0^\infty$ Why is $1^\infty$ an indeterminate form while $0^\infty = 0$? If $0\cdot0\cdot0\cdots = 0$ shouldn't $1\cdot1\cdot1\cdots = 1$? • Check out these two links: math.stackexchange.com/questions/520795/… math.stackexchange.com/questions/10490/… – Brenton Sep 16 '17 at 23:35 • $0^{+\infty}=0$, but $|0^{-\infty}|=+\infty$. Here on Wikipedia it is mentioned that $0^{\infty}$ is not an indeterminate form because $0^{+\infty}$ is $0$ and $0^{-\infty}$ is $1/0$, which is not $0$, but still we know that $(0^+)^{-\infty}=+\infty$ and $(0^{-})^{-\infty}=-\infty$, or $|0^{-\infty}|=+\infty$. – user236182 Sep 16 '17 at 23:36 • A very small number raised to a large power is even smaller. – Simply Beautiful Art Sep 16 '17 at 23:36 • @stevengregory $$\left(\frac1n\right)^n=\frac1{n^n}\stackrel{n\to\infty}\longrightarrow\frac1\infty=0$$ – Simply Beautiful Art Sep 16 '17 at 23:51 • @stevengregory Yes... and why are you mentioning this? – Simply Beautiful Art Sep 16 '17 at 23:55 To say that $1^\infty$ is an indeterminate form means that there is more than one object that can be $\lim\limits_{x\,\to\,\text{something}} f(x)^{g(x)}$ where $f(x)\to1$ and $g(x)\to\infty,$ so that the limit depends on which functions $f$ and $g$ are. Thus \left. \begin{align} & \lim_{x\to\infty} \left(1+\frac 1 x\right) = 1 \quad\text{and} \quad \lim_{x\to\infty} \left( 1 + \frac 1 x \right)^x = e \\[10pt] & \qquad \text{and} \\[10pt] & \lim_{x\to\infty} \left( 1 - \frac 1 x\right) = 1 \quad \text{and} \quad \lim_{x\to\infty} \left( 1 - \frac 1 x\right)^x = \frac 1 e. \end{align} \right\} \longleftarrow \text{two different numbers} • Sir, I'm not sure if you can count, but that's three different numbers ;-) – Simply Beautiful Art Sep 17 '17 at 1:21 • @SimplyBeautifulArt : That depends on which numbers are referred to. However, there are of course three kinds of people in the world: those who can count and those who can't. – Michael Hardy Oct 5 '17 at 0:37 • LMAO $\vphantom{....}$ – Simply Beautiful Art Oct 5 '17 at 1:01 I want to address two questions here: 1. What does we mean when we say $1^\infty$ is indeterminate? First of all, we should understand what $1^\infty$ means. It is not the product $1 \cdot 1 \cdot 1 \cdots$. Instead, what it represents is that if you have two limits $\lim_{n \to \infty} x_n = 1$ and $\lim_{n \to \infty} y_n \to \infty$ then we cannot determine the value of $\lim_{n \to \infty} (x_n)^{y_n}$. What does it mean not to be able to determine the value of a limit? It means that the value of the limit depends on the sequences we choose. That is, we could have two pairs of sequences $(x_n, y_n)$ and $(x_n', y_n')$ where $x_n, x_n' \to 1$ and $y_n, y_n' \to \infty$ but $$\lim_{n \to \infty} (x_n)^{y_n} \ne \lim_{n \to \infty} (x_n')^{y_n'}.$$ This is different than a determined form. For instance if $x_n \to 1$ and $y_n \to 2$ then we will always have $$\lim_{n \to \infty} x_n + y_n = 3, \qquad \lim_{n \to \infty} x_n y_n = 2 \quad\text{ and }\quad \lim_{n \to \infty} (x_n)^{y_n} = 1$$ regardless of what the sequences $x_n$ and $y_n$ are. 2. What makes $1^\infty$ different than $0^\infty$? For the form $0^\infty$ we are trying to find the limit of $(x_n)^{y_n}$ where $x_n \to 0$ and $y_n \to \infty$. By definition, $x_n \to 0$ means that for every $\varepsilon > 0$, $|x_n| < \varepsilon$ eventually (for all $n \ge$ some $N = N(\varepsilon)$). In particular, we can take $\varepsilon = 1/2$. Then if $n$ is large enough, $$0 \le \left| (x_n)^{y_n} \right| \le \left| (1/2)^{y_n} \right| \to 0$$ as $n \to \infty$. It follows that $\lim_{n \to \infty} (x_n)^{y_n} = 0$. What you'll notice is that $1/2$ could have been any number between $0$ and $1$. That is, if $x_n$ is "close to $0$" in the sense that $|x_n| < r$ and $r < 1$ then we can conclude that $\lim_{n \to \infty} (x_n)^{y_n} = 0$. Note that we cannot conclude this for $1^\infty$. That is, when we try to approximate $x_n$ by $1 +$ some error then it matters what the error is. For $0^\infty$ as long as the error is $< 1$ then we can conclude that the limit is $0$. But for $1^\infty$ if the error is • positive, then $(1 + \text{error})^{y_n} \to \infty$ • zero, then $(1 + \text{error})^{y_n} \to 1$ • negative, then $(1 + \text{error})^{y_n} \to 0$ This gives us no information about the value of $\lim_{n \to \infty} (x_n)^{y_n}$ and indeed, we can find sequences $x_n$ and $y_n$ where $(x_n)^{y_n}$ tends to $\infty$ or $1$ or $0$. In fact, we can make $(x_n)^{y_n}$ converges to any given real number $\ge 0$ or infinity. Appendix: a sketch of a construction of sequences $x_n$ and $y_n$ such that $x_n \to 1$, $y_n \to \infty$ and $(x_n)^{y_n} \to r$ where $r \in (0, \infty)$. (Try finding sequences for $r = 0, \infty$ as an exercise.) Let $p_n/q_n$ be a sequence of rational numbers converging to $\log(r)$ where the denominators, $q_n \to \infty$. For instance if $\log(r) = 2$ we could take the sequence $2/1, 20/10, 200/100, 2000/1000$ and so on. Consider the limit $$\lim_{n \to \infty} \left( 1 + \frac{1}{q_n} \right)^{p_n}$$ which we can take logarithms and use a Taylor series for $\log(1 + x)$to get \begin{align} \log\left( \lim_{n \to \infty} \left( 1 + \frac{1}{q_n} \right)^{p_n} \right) &= \lim_{n \to \infty} p_n \log\left( 1 + \frac{1}{q_n} \right) \\ &= \lim_{n \to \infty} p_n \left( \frac{1}{q_n} - \frac{1}{2q_n^2} + \frac1{3q_n^3} - \frac{1}{4q_n^4} + \cdots \right) \\ &= \lim_{n \to \infty} \frac{p_n}{q_n}\left( 1 - \frac{1}{2q_n} + \frac1{3q_n^2} - \frac{1}{4q_n^3} + \cdots \right) \\ &= \lim_{n \to \infty} \frac{p_n}{q_n} \left( \lim_{n \to \infty} 1 - \lim_{n \to \infty} \frac{1}{2q_n} + \lim_{n \to \infty}\frac1{3q_n^2} - \cdots \right) \\ &= \log(r)(1 - 0 + 0 - \cdots) = r. \end{align} Therefore $\left( 1 + \frac{1}{q_n} \right)^{p_n} \to r$. (I am sweeping some details about swapping limits with summations under the rug.) • Very nice answer. +1 the idea is represented via sequences and can be easily transformed into the limit of functions of a real variable. So the use of sequences makes no essential difference. – Paramanand Singh Sep 17 '17 at 3:31 • I like this answer because it makes a point at the very beginning that calling something an "indeterminate form" has a very particular meaning. The phrasing of the question suggests that OP has forgotten (or has never received) this meaning, so this seems to go at the heart of the matter. – David K Sep 17 '17 at 9:31 It is far easier to think about these indeterminate forms by taking logarithms. $$\ln\left( f(x)^{g(x)} \right) = g(x) \ln f(x)$$ If $f \rightarrow 1$ and $g \rightarrow \infty$, you have an $\infty \cdot 0$ indeterminate form. If $f \rightarrow 0$ and $g \rightarrow \infty$ you have an $\infty \cdot - \infty$ form, which is not indeterminate at all. In fact $$\exp \ln(f(x)^{g(x)} ) \xrightarrow{f(x) \rightarrow 0, g(x) \rightarrow \infty} \exp(- \infty) \rightarrow 0 \text{.}$$ Without logarithms... Here are three different $1^\infty$s: • $\lim_{n \rightarrow \infty} (2^{1/\ln n})^n = \infty$ • $\lim_{n \rightarrow \infty} (2^{1/n})^n = 2$ • $\lim_{n \rightarrow \infty} (2^{1/n^2})^n = 1$ Only the last one is doing what you seem to expect. This is because $1/n^2$ is going to $0$ faster than $n$ can overpower. When that doesn't happen we can arrange for other limits. Now any indeterminate form $0^\infty$ is $f(x)^{g(x)}$ with $\lim_{x \rightarrow L} f(x) = 0$ and $\lim_{x \rightarrow L} g(x) = \infty$, where $L$ is either a real number or, if it is either of $\pm \infty$, adjust the following as appropriate. Since $\lim_{x \rightarrow L} f(x) = 0$, there is a neighborhood of $L$, $(L-d, L+d)$, on which $|f(x)| < 1/2$. So let's look at what happens to $(1/2)^{g(x)}$ as $x$ gets close to $L$. For $\lim_{x \rightarrow L} g(x) = \infty$, there is an $e_1$ such that $g(x) > 1$ on $(L-e_1, L+e_1)$. Similarly, $g(x) > 2$ on some $(L-e_2, L+e_2)$, and so on for some sequence of nested open sets collapsing toward $L$. Eventually, there is some $N$ such that $(L-e_N, L+e_N) \subseteq (L-d, L+d)$, so talk about choices of $n > N$. On $(L-e_n, L+e_n)$, $(1/2)^{g(x)} < (1/2)^n = 2^{-n}$. That is, $(1/2)^{g(x)}$ shrinks to $0$ as $x$ approaches $L$. If, instead of $1/2$, we use the actual, smaller magnitude values of $f$, $f(x)^{g(x)}$ approaches $0$ also. There's no indeterminacy here.
2019-11-18T16:37:53
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2432428/indeterminate-form-1-infty-vs-0-infty", "openwebmath_score": 0.9976679086685181, "openwebmath_perplexity": 163.54040756723853, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9799765552017675, "lm_q2_score": 0.8633916134888614, "lm_q1q2_score": 0.8461035391769103 }
http://web2.0calc.com/questions/help-plz_22
+0 # HELP PLZ +3 76 11 +304 In how many ways can you spell the word COOL in the grid below? You can start on any letter, then on each step, you can step one letter in any direction (up, down, left, right, or diagonal). $$\begin{array}{ccccc} C&C&C&C&C\\ L&O&O&O&L\\ L&O&O&O&L\\ L&O&O&O&L\\ C&C&C&C&C\\ \end{array}$$ I have gotten many answers, but they are all wrong so far. Wrong answers: 84, 64 and 4 Mr.Owl  Oct 24, 2017 Sort: #1 +22 0 I kind of see this as a tricky english question because if you look into the wording of the problem it's asking How many ways can you SPELL the word cool. (not how many times can you FIND the word) There's only one way to spell the word "cool" So i believe the answer is 1 MicahSmith07  Oct 24, 2017 #3 +443 0 This is a math forum, and the poster is a serious user.  seriously doubt he would post a joke like that unless it was somehow indicated to be a joke thread. helperid1839321  Oct 24, 2017 #4 +304 0 The answer 1 was incorrect, I just can't figure this one out! Mr.Owl  Oct 24, 2017 #7 0 lol wow it was supposed to be a joke. Guest Oct 24, 2017 #2 +443 0 There are 4 that have 1 entry point from the C, and you can't use the middle C, because if you get there, there is nothing to go to: 4(1*1*3) makes 12 (you have to move diagonally away, then vertical, then you are at the middle and can go diagonally or horizontal, making 3 possibilities.) Starting from the middle outside: 4(2 + 10 = 12) = 48. Staring from the top and bottom center: 2(2 + 3 + 3 + 2 = 10) = 20. I'm getting 80, but I feel like I did the last one wrong. helperid1839321  Oct 24, 2017 #5 +304 0 WHY?!?! I have tried 64, 4, 84, 1, 80 and 2!! HOW HARD IS THIS?? Mr.Owl  Oct 24, 2017 #6 +443 0 The C's are mirrored on either side, so we know that the solution has to be even. That's all I can think of. helperid1839321  Oct 24, 2017 #8 +1375 +4 I have been trying and trying and trying, and I am counting 96. A "C" in the corner has 3 ways to make the word "cool." Since there are 4 of them, 3*4=12 A "C" adjacent to a C in the corner has 13 ways to make the word "cool." Since there are 4 of them, 4*13=52 A "C" in the center column has 16 ways to make the word "cool." Since there are 2 of them, 16*2=32 Now, add these combinations together. 12+52+32 = 96 ways to make the word "cool." TheXSquaredFactor  Oct 25, 2017 #9 +304 0 Dude, u must be legend... it worked. Mr.Owl  Oct 25, 2017 #10 +1375 +3 I am ??!!??!?!?!?!?!!? Well, thank you! I wish I could explain how I got that answer. I will give my best attempt, I guess. Column 1 Column 2 Column 3 Column 4 Column 5 Row 1 L L Row 2 L O O L Row 3 L O O O L Row 4 C C C C C This is just one side of the table. I have color coded the O's such that if you land on an O as your second-to-last letter, then you will have a certain number of possibilities. A red O means there are 3 possibilities. A Blue O means there are 2 possibilities. Hopefully, you can see why this is the case. Let's start with the easiest: The "C" in the corner. Before I start, I will use my own style of Cartesian coordinates where it represents the intersection of a column and a row. For example, $$(C1,R1)$$ is (Column 1, Row 1), for short. In this case, this letter happens to be L. There is only one case to consider now for the corner "C." It is the following: 1) $$(C5,R4)\Rightarrow(C4,R3)\Rightarrow(C4,R2)$$ Look at that! I have landed on a red O, which signifies 3 possibilities. This means that the corner "C" has 3 possibilities. There are 4 instances of the corner "C," so 4*3=12 Time to consider the next case: The "C" adjacent to the corner "C" Now, let's consider how many cases there are for the sequence $$(C4,R4)\Rightarrow(C3,R3)$$. Well, the number of possibilities is equal to the sum of the number of possibilities its neighbors are immediately adjacent to. $$(C3,R3)$$  is adjacent to 2 red and 2 blue O's. Because there are 3 possibilities for a red one and 2 possibilities for a blue one, the number of possibilities is $$3+3+2+2=10$$. However, this is only one sequence. Let's consider the next sequence of $$(C4,R4)\Rightarrow(C4,R3)\Rightarrow(C4,R2)$$. Oh look! This is a red "O," which has 3 possibilities, so let's add the number of possibilities together. $$10+3=13$$ There are 4 of these in the diagram, so $$13*4=52$$ Now, let's consider the center "C." Well, we can use the same logic as before to know that $$(C3,R4)\Rightarrow(C3,R3)$$ has 10 possibilities. We know that there are 2 avenues to red O's, which equals 6 additional paths. In total, that equates to $$10+6=16$$ ways. There are two instances of there, so $$2*16=32$$ The last step is to add the numbers together. $$12+52+32=96$$ ways. TheXSquaredFactor  Oct 25, 2017 #11 +5261 +2 Wow! That was good thinking, X2 !!! hectictar  Oct 25, 2017 ### 7 Online Users We use cookies to personalise content and ads, to provide social media features and to analyse our traffic. We also share information about your use of our site with our social media, advertising and analytics partners.  See details
2017-11-25T01:50:30
{ "domain": "0calc.com", "url": "http://web2.0calc.com/questions/help-plz_22", "openwebmath_score": 0.7676718235015869, "openwebmath_perplexity": 930.8283641929564, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9799765563713599, "lm_q2_score": 0.8633916099737806, "lm_q1q2_score": 0.8461035367420299 }
https://math.stackexchange.com/questions/2825180/bound-on-the-derivative-of-a-holomorphic-function-from-right-half-plane-to-unit
# Bound on the derivative of a holomorphic function from right half plane to unit disc Let $D=\{z\in\mathbb{C}:|z|<1\}$ and let $V=\{z\in\mathbb{C}:\Re(z)>0\}$. Let $f:V\to D$ be a holomorphic function. Prove that $$\forall z\in V:|f'(z)|\leq\frac{1-|f(z)|^2}{2\Re(z)}.$$ By taking $B_z=\{\xi\in V:|\xi-z|<\Re(z)\}$ and using Cauchy's formula, I only managed to obtain $$|f'(z)|\leq\frac{\sup\limits_{\partial B_z}|f|}{\Re(z)}\leq\frac{1}{\Re(z)}.$$ Edit: It seems to be very much related to Schwarz lemma. Maybe it is possible to use some mapping from $D$ to $V$, say $\phi:D\to V$ and then look at$f\circ \phi:D\to D$. • Yes, taking a biholomorphic $\phi \colon D \to V$ and using the (differential form of the) Schwarz-Pick lemma [no 't' in the Schwarz of the Schwarz (with or without Pick) lemma] gives the desired bound. – Daniel Fischer Jun 19 '18 at 19:24 $$\psi \colon z \mapsto \frac{z-1}{z+1}$$ maps $V$ biholomorphically to $D$. With $$\phi = \psi^{-1} \colon w \mapsto \frac{1+w}{1-w}\,,$$ the differential version of the Schwarz-Pick lemma applied to $g = f\circ \phi \colon D \to D$ tells us that $$\frac{\lvert g'(w)\rvert}{1 - \lvert g(w)\rvert^2} \leqslant \frac{1}{1 - \lvert w\rvert^2}\tag{1}$$ for all $w \in D$. Writing $z = \phi(w)$ (and consequently $w = \psi(z)$), we have $g(w) = f(z)$ and $g'(w) = f'(z)\cdot \phi'(w)$. Since $\phi'(w) = \frac{2}{(1-w)^2}$, plugging into $(1)$ and rearranging yields \begin{align} \lvert f'(z)\rvert &\leqslant \frac{1 - \lvert f(z)\rvert^2}{(1 - \lvert w\rvert^2)\cdot \lvert\phi'(w)\rvert} \\ &= \bigl(1 - \lvert f(z)\rvert^2\bigr)\cdot \frac{\lvert 1-w\rvert^2}{2(1 - \lvert w\rvert^2)} \\ &= \frac{1 - \lvert f(z)\rvert^2}{2}\cdot \frac{\lvert 1 - \psi(z)\rvert^2}{1 - \lvert \psi(z)\rvert^2} \\ &= \frac{1 - \lvert f(z)\rvert^2}{2}\cdot \frac{\bigl\lvert\frac{2}{z+1}\bigr\rvert^2}{1 - \bigl\lvert \frac{z-1}{z+1}\bigr\rvert^2} \\ &= \frac{1 - \lvert f(z)\rvert^2}{2}\cdot \frac{4}{\lvert z+1\rvert^2 - \lvert z-1\rvert^2} \\ &= \frac{1 - \lvert f(z)\rvert^2}{2}\cdot \frac{4}{4 \Re (z)} \\ &= \frac{1 - \lvert f(z)\rvert^2}{2\Re (z)}\,. \end{align}
2019-09-18T14:08:17
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2825180/bound-on-the-derivative-of-a-holomorphic-function-from-right-half-plane-to-unit", "openwebmath_score": 1.0000100135803223, "openwebmath_perplexity": 238.39348361474228, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. Yes\n2. Yes", "lm_q1_score": 0.9799765552017675, "lm_q2_score": 0.8633916099737806, "lm_q1q2_score": 0.8461035357322135 }
https://math.stackexchange.com/questions/1007249/regularity-of-the-heat-kernel
Regularity of the heat kernel Let $(M,g)$ be a compact Riemannian manifold. Let $H:M\times M\times\mathbb{R}_{>0}\to\mathbb{R}$ be the heat kernel. i.e. $H\in C^0(M\times M\times\mathbb{R}_{>0})$ is the unique continuous function such that for all $y\in M$, (A) $H^y\in C^{2,1}(M\times\mathbb{R}_{>0})$ (B) $\left(\Delta^g-\dfrac{\partial}{\partial t}\right)H^y=0$ (C) $\displaystyle\lim_{t\to0}H^y_t=\delta_y$ , where $H^y(x,t):=H(x,y,t)$, $H^y_t(x):=H^y(x,t)$, $C^{2,1}(M\times\mathbb{R}_{>0}):=\{\varphi:M\times\mathbb{R}_{>0}\to\mathbb{R}|\text{ For each chart }(U;x^1,\cdots,x^m)\subset M, \dfrac{\partial\varphi}{\partial t}, \dfrac{\partial\varphi}{\partial x^i},\text{ and }\dfrac{\partial^2\varphi}{\partial x^i\partial x^j}:U\times\mathbb{R}_{>0}\to\mathbb{R} \text{ are well defined and continuous.}\}$. ${\bf [Question 1]}$ From (A) and (B) above it is derived that $H^y\in C^\infty(M\times\mathbb{R}_{>0})$ for all $y\in M$. How about the regularity of H as a function on $M\times M\times\mathbb{R}_{>0}$? Does it hold that $H\in C^\infty(M\times M\times \mathbb{R}_{>0})$? If not, aren't there any regularity result of $H:M\times M\times\mathbb{R}_{>0}\to\mathbb{R}$ which is useful to exchange integrals and differentiation? ${\bf [Question 2]}$ Suppose that $F:M\times[0,T]\to \mathbb{R}$ is a continuous function. Then is it true that the function \begin{eqnarray} u(x,t):=-\int_0^t\int_M H(x,y,t-\tau)F(y,\tau)\mu_g(dy)d\tau \end{eqnarray} belongs to $C^{1,0}(M\times[0,T])\cap C^{2,1}(M\times(0,T))$? ${\bf [Question 3]}$ Suppose that $f:M\to\mathbb{R}$ be a $C^1$ function. Does the function \begin{eqnarray} v(x,t):=\int_M H(x,y,t)f(y)\mu_g dy \end{eqnarray} belong to $C^{1,0}(M\times[0,\infty))\cap C^{2,1}(M\times\mathbb{R}_{>0})$? Please tell me also references. Thank you. • Possible reference for you: Chavel's book Eigenvalues in Riemannian Geometry has a chapter on the heat kernel. – Neal Nov 7 '14 at 18:50 Smoothness in all the questions has a local character. And in local coordinates it's a parabolic equation $Lu=F$ with (smooth) variable coefficients. So the local theory in $\mathbb R^n$ will do. On Question2 the answer is no. If function $F$ is continuous it does not follow that $u$ is locally from $C^{2,1}$. For the heat equation see Nonclassical solution to u_t-\Delta u=f in one space dimension? question here. On Question3 the answer is yes. As it was mentioned it's a question of local regularity. And for $\mathbb R^n$ one can differentiate the equation $Lu=0$ with respect to the space variable $x_i$, transfer all the terms with $u$ to the rhs and obtain a Cauchy problem for $\partial_iu$ with a continuous initial condition $\partial_i f$ and a continuous rhs. It is included in the definition of a fundamental solution that $v(x,t)\to f(x)$ as $t\to0\!+$ for continuous $f$, see, for example, A. Friedman, Partial Differential Equations of Parabolic Type. For question 1, see Theorem 5.2.1 of E. B. Davies's book Heat Kernels and Spectral Theory, which asserts that indeed the heat kernel is a $C^\infty$ function on $M \times M \times (0,\infty)$. (It also applies to Riemannian manifolds $M$ which are complete but not compact.) For question 2, it suffices to assume $F$ is bounded and measurable. Differentiation under the integral sign will show that $u \in C^\infty(M \times (0,T))$. Indeed, for this it suffices to assume $F$ is bounded and measurable. To get continuity up to $t=T$, extend $F$ to $\tilde{F} : M \times [0,T+\epsilon]$ by $\tilde{F}(x,t) = F(x,t)$ for $t \le T$ and $\tilde{F}(x,t) = 0$ for $t > T$. Then the corresponding function $\tilde{u}$ is continuous (even smooth) on $M \times (0, T+\epsilon)$ and $\tilde{u}(x,t) = u(x,t)$ for $t \le T$. To get continuity at $t=0$, simply note that $$|u(x,t)| \le t \cdot\left(\sup_{M \times [0,t]} |F|\right)\left( \sup_{\tau \in [0,t]} \int_M H(x,y,t-\tau) \mu_g(dy)\right)$$ But $\sup_{M \times [0,t]} |F|$ is finite if $F$ is bounded, and $\int_M H(x,y,t-\tau) \mu_g(dy) = 1$ for any $x, t, \tau$. I guess you still want to show that the spatial derivatives of $u$ are continuous up to $t=0$. That shouldn't be hard but maybe takes a little more thought. For question 3, as before, if $f$ is merely bounded and measurable then $u \in C^\infty(M \times (0,\infty))$ by differentiating under the integral sign. Still working on continuity at 0. It's really just going to come from the continuity of $f$, the fact that $H(x,y,t) \to \delta_x$ and the triangle inequality. • $M$ is compact. – stb2084 Nov 7 '14 at 16:56 • @stb2084: Thanks, I had overlooked that. – Nate Eldredge Nov 7 '14 at 17:44 • Thanks a lot. But are the spacial derivatives of $v$ are continuous at $t=0$ in the 3rd question? – stb2084 Nov 8 '14 at 9:42 • In my 2nd question, in my understanding, $u$ satisfies $\left(\Delta-\dfrac{\partial}{\partial t}\right)u=F$ in $M\times (0,T)$. So if $u\in C^\infty(M\times(0,T))$, $F$ must be smooth. – stb2084 Nov 8 '14 at 12:07
2019-05-20T13:35:48
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1007249/regularity-of-the-heat-kernel", "openwebmath_score": 0.9485917687416077, "openwebmath_perplexity": 182.6339676313142, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9799765587105448, "lm_q2_score": 0.8633916047011595, "lm_q1q2_score": 0.8461035335946173 }
https://math.stackexchange.com/questions/623922/ellipse-bounding-rectangle
# Ellipse bounding rectangle I'm trying to find the ellipse that bounds a rectangle in a way that the "distance" between the rectangle and the ellipse is the same vertically and horizontally. Here is an image to illustrate what I mean: It's not perfectly drawn but the two "x" dimension need to be the same. What I tried so far is: The ellipse equation is $$\left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 = 1$$ • The "ellipse bounding the rectangle" constraint gives us $$\left(\frac{w}{a}\right)^2 + \left(\frac{h}{b}\right)^2 = 1$$ • The "x dimension must be equal" constraint gives us $$a-w = b-h$$ So it's basically a $$2$$ equations system with $$2$$ unknowns. But I got stuck at this point. If I do a substitution I have • $$a = b-h+w$$ • $$\left(\frac{w}{b-h+w}\right)^2 + \left(\frac{y}{b}\right)^2 = 1$$ and I don't know how to solve it. • What do not you center everything (ellipse + rectangle) at (0,0) ? – Claude Leibovici Jan 1 '14 at 12:39 • I think I did, didn't I? The illustration might be false though (in a way "a", "b", "w" and "h" might actually be "2a", "2b", "2w" and "2h"). But it doesn't change the problem much. – user1534422 Jan 1 '14 at 12:45 • Are you looking for the smallest allipse or for any ellipse containing the rectangle ? Michael Albanese and I did not understand the same thing. Please clarify. – Claude Leibovici Jan 1 '14 at 13:12 • I looking for the ellipse that goes through the rectangle edges. – user1534422 Jan 1 '14 at 13:40 • Have you be able to continue with what I gave you as an answer ? – Claude Leibovici Jan 1 '14 at 13:45 If you centre the rectangle on the origin, then it has vertices at $(-\frac{w}{2}, -\frac{h}{2})$, $(\frac{w}{2}, -\frac{h}{2})$, $(\frac{w}{2}, \frac{h}{2}),$ and $(-\frac{w}{2}, \frac{h}{2})$. An ellipse centred at the origin has equation $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$ where $a$ is the 'horizontal' radius, and $b$ is the 'vertical' radius (here I am taking $a$, $b$ positive). Then the ellipse you are looking for must satisfy $$a-\frac{w}{2} = b - \frac{h}{2}$$ or equivalently $$2a - w = 2b - h.$$ If $w$ and $h$ are given, this is one equation between the two unknowns $a$ and $b$. This corresponds to the fact that there is an infinite family of ellipses which satisfies the distance condition. You need one more piece of information to uniquely determine the ellipse. For example, if you would like the ellipse to go through the vertices of the rectangle, then we must have $$\frac{\left(\dfrac{w}{2}\right)^2}{a^2} + \frac{\left(\dfrac{h}{2}\right)^2}{b^2} = \frac{w^2}{(2a)^2} + \frac{h^2}{(2b)^2} = 1.$$ Rearranging $b$ for $a$ in the previous equation, we obtain $b = a + \frac{1}{2}(h-w)$. Substituting into the latter equation, we see that $a$ must satisfy $$\frac{w^2}{(2a)^2} + \frac{h^2}{(2a + h - w)^2} = 1$$ which, when rearranged, is a quartic in $a$. Note, if $h = w$ (i.e. the rectangle is a square), then $a = b = \frac{w}{2}\sqrt{2}$ so the ellipse is a circle with radius equal to the distance between the origin and a vertex of the square (as one would expect). Example: If $h = 1$ and $w = 2$ we obtain $$\frac{4}{(2a)^2} + \frac{1}{(2a-1)^2} = 1$$ which becomes $$4a^4-4a^3-4a^2+4a-1=0.$$ This only has two real roots, but the only positive one is $a = 1.2908$. From the first equation, we then see that $b = 0.7908$. The result can be seen below (made using WolframAlpha). • I thought that the OP was looking for the largest recatngle or the smallest ellipse. If this is not the case, you are right and .... I am wrong. Happy New Year !! – Claude Leibovici Jan 1 '14 at 13:09 • So yeah... it's a quartic as I suspected :( Which means the solution is a monster :D I'm going to take a look to the heuristic since I going to use that in an IT project. – user1534422 Jan 1 '14 at 13:11 • @ClaudeLeibovici: My interpretation was that the rectangle is given, and the ellipse is sought after. Hopefully the OP will be able to provide some clarification. – Michael Albanese Jan 1 '14 at 13:11 • @user1534422: Unless you want a general formula, using a calculator to solve the resulting quartic should be sufficient. – Michael Albanese Jan 1 '14 at 13:12 • If my approach is correct, there is no quartic but only quadratic. – Claude Leibovici Jan 1 '14 at 13:14 HINT Let us put everything centered at (0,0) and since everything is symmetrical, we shall just look at the first quadrant. So you look for a value of "x", smaller than "a", such that (a - x) be equal to (b - y). But you know that the upright corner of the rectangle is also along the ellipse. This means that y = b Sqrt[1 - (x/a)^2]. Can you continue from here ? Happy New Year
2019-09-16T01:58:25
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/623922/ellipse-bounding-rectangle", "openwebmath_score": 0.8826993107795715, "openwebmath_perplexity": 434.35790123724104, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9799765587105448, "lm_q2_score": 0.8633916047011595, "lm_q1q2_score": 0.8461035335946173 }
http://xahomeworkhkxy.sgoods4.me/writing-piecewise-functions.html
Writing piecewise functions Rated 5/5 based on 28 review # Writing piecewise functions Yes, piecewise functions isn’t particularly exciting but it can, at least, be enjoyable we dare you to prove us wrong. Worksheet piecewise functions name: algebra 2 part i carefully graph each of the following identify whether or not he graph is a function. Worksheet piecewise functions name: write equations for the piecewise functions whose graphs are shown below assume that the units are 1 for every tic marc. Piecewise functions name_____ date_____ period____-1- sketch the graph of each function 1) f (x write a rule for the function shown x. A fun foldable to review or teach what a piecewise function is, how to evaluate a piecewise function, how to write a piecewise function. How to write a piecewise function from a given graph. And this is how we write it: piecewise functions let us make functions that do anything we want the floor function is a very special piecewise function. How to write piecewise functions in - 28 images - objectives write and graph piecewise functions ppt, homework help and piecewise function, writing equations of. Write a piecewise function for the absolute value of a quadratic function - duration: 3:40 jeremy klassen 3,251 views 3:40 expressing a quadratic. ## Writing piecewise functions Match the formula of a piecewise function to its graph. Write the piecewise functions for the graph shown solution: step 1: locate the break point here it is at x = 2 step 2: find the equation of the graph to the left. Writing equations for piecewise functions and word problems mr swartz. How we can define piecewise function in matlab learn more about piecewise, function, symbolic symbolic math toolbox. Write the piecewise function $f(t) = \begin how can you rewrite piecewise functions in terms of the unit write this piecewise function in terms of the unit. Demonstrates the process of creating a function definition of a piecewise function given its graph. Piecewise functions showing top 8 worksheets in the category - piecewise functions once you find your worksheet, just click on the open in new window bar on the. Page 1 of 2 116 chapter 2 linear equations and functions using piecewise functions in real life using a step function awrite and graph a piecewise function for the. A piecewise defined function is a function defined by at least two equations (pieces), each of which applies to a different part of the domain. How to write piecewise functions - 28 images - piecewise functions, 2 7 piecewise functions, algebra 2 graphing a piecewise function, piecewise functions she math. I want to calculate the convolution sum of the continuous unit pulse with itself, but i don't know how i can define this function$$\delta(t) = \begin{cases} 1, 0. Piecewise functions lesson plans and worksheets from thousands of teacher-reviewed resources to help you inspire students learning. Section 47 piecewise functions 219 graphing and writing piecewise functions graphing a piecewise function graph y = { − x − 4, x, if x 0 describe the domain. Obtaining equations from piecewise function graphs you may be asked to write a piecewise function, given a graph now that we know what piecewise functions are all. How to write this piecewise function using latex i tried$ \begin{array}{cc} \{ & \begin {array}{cc how to write a function (piecewise) with bracket outside. Help with piecewise function can't use learn more about piecewise, symbolic, calculus symbolic math toolbox. Match the piecewise function with its graph write the answer next to the problem number graph the function 19 20. Represents a piecewise function with values val i in the regions defined by the conditions cond i piecewise [{{val 1, cond 1}. Steps to writing piecewise functions to find the equation of the line 1 find two points on the line 2 find slope 3 use point and slope in point slope form and. Lesson/unit&plan&name:&(graphing(piecewise(functions& & rationale/lesson&abstract:&&students(will(graph ( write a scenario represented by this function.
2018-01-24T11:45:23
{ "domain": "sgoods4.me", "url": "http://xahomeworkhkxy.sgoods4.me/writing-piecewise-functions.html", "openwebmath_score": 0.5579310655593872, "openwebmath_perplexity": 1258.4709538980323, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. Yes\n2. Yes\n\n", "lm_q1_score": 0.9799765552017675, "lm_q2_score": 0.863391602943619, "lm_q1q2_score": 0.84610352884282 }
https://math.stackexchange.com/questions/1534120/largest-rectangle-not-touching-any-rock-in-a-square-field/1598054
# Largest rectangle not touching any rock in a square field You want to build a rectangular house with a maximal area. You are offered a square field of area 1, on which you plan to build the house. The problem is, there are $n$ rocks scattered in unknown locations throughout the field. The rocks are unmovable, and you cannot build on rocks. What is the largest area of a rectangle that you can build, in the worst case? Formally: let $S_n$ be a set of $n$ points in the unit square. Define $\textrm{MaxArea}(S_n)$ as the maximum area of an axis-parallel rectangle in the unit square that does not contain, in its interior, any point in $S$. Define: $$\textrm{MinMaxArea}(n) = \inf_{S_n} (\textrm{MaxArea}(S_n))$$ where the infimum is on all possible sets $S_n$ of $n$ points. What are good bounds on $\textrm{MinMaxArea}(n)$? EXAMPLE: In the picture below, the unit square is scaled to a 100-by-100 square. There are $n=100$ rocks. Apparently, the largest possible rectangle that does not contain any rocks in its interior is a rectangle such as ABCD, whose area is $.06\times .58$, which is approximately $\frac{1}{4\sqrt{n}}$, so: $$\textrm{MinMaxArea}(n) \leq \frac{1}{4\sqrt{n}}$$ Is there another arrangement of rocks in which the largest rectangle is smaller? • Just out of curiosity, where does this problem come from? (+1) – A.P. Nov 17 '15 at 21:32 • @A.P. It comes from my Ph.D. research about fair division of land. See my profile :) – Erel Segal-Halevi Nov 17 '15 at 21:37 • Is it necessary that the house be aligned with the grid (e.g. to align with a road or the cardinal directions)? – Marconius Nov 17 '15 at 21:40 • @Marconius actually, the two variants of this problem are interesting, and the bounds are probably different. For example, in the figure I added, the area of the largest axis-parallel rectangle is approximately $1/(4 \sqrt{n})$, but the area of the largest rotated rectangle is approximately $1/\sqrt{n}$. – Erel Segal-Halevi Nov 17 '15 at 21:46 • When $n=2$, the points are $(1/\phi, 1/\phi)$ and $(1-1/\phi, 1-1/\phi)$ with area $1-1/\phi$ where $\phi$ is the golden ratio. – Ben Longo Dec 19 '15 at 20:38 I dealt with this problem long ago. Here are my results. They are still unpublished, so I support an idea to write a joint paper. • Thanks! Is it the same paper here? stetson.edu/~efriedma/mathmagic/0899/ravsky.ps The problem in your paper seems very similar to my problem, with one difference: your function, $T(n)$, is a supremum (on all possible n-tuples of points), while I defined it as an infimum. Apparently, the supremum is always 1, as it is always possible to select $n$ points on the boundary of the unit square. Is there anything I misunderstand? – Erel Segal-Halevi Dec 19 '15 at 19:45 • N.B. You also refer to question 3 in this page: www2.stetson.edu/~efriedma/mathmagic/0899.html where he defines T(n) as "minimum area". I am a bit confused. Is this minimum or maximum? – Erel Segal-Halevi Dec 19 '15 at 19:48 • @ErelSegal-Halevi The problem which considered Friedman and I is the same as yours. So in my definition of $T(т)$ should be $\inf$ instead of $\sup$ (there is a misprint (the paper was not reviewed :-) )). Thanks for your attentivity. – Alex Ravsky Dec 20 '15 at 6:02 • In this case, this is indeed the same problem. Thanks! – Erel Segal-Halevi Dec 20 '15 at 13:13 Let me shorten MaxArea($S_n$) to $M(S_n)$ for convenience, and let $M(n) = \inf_{S_n} M(S_n)$ be MinMaxArea (this is overloading the notation, but I hope it won't be confusing). Then, $M(S_n) \le D(S_n)$, where $D(S_n)$ is the classical discrepancy function: $$D(S_n) = \sup_R \left|\frac{|S_n \cap R|}{n} - \mathrm{area}(R) \right|,$$ where the supremum is over axis-parallel rectangles in $[0,1]^2$. There are quite a few constructions of $n$-point sets $S_n$ for which $D(S_n) = O(\log(n)/n)$. One example is $$S_n = \left\{ \left(\frac{i}{n}, i\sqrt{2} \bmod 1\right) \right\}_{i = 0}^{n-1}.$$ Another is the van der Corput set. This shows that $M(n) = O(\log(n)/n)$. As far as lower bounds go, it is known that the above bound on $D(n)$ is tight, i.e. $D(n) = \Omega(\log(n)/n)$. However, even better bounds are possible if we work directly with $M(n)$. Let $n(\epsilon)$ be the size of the smallest point set $P$ such that $M(P) \le \epsilon$ (this is called an $\epsilon$-net). Then it is known that $n(\epsilon) = O(\frac{1}{\epsilon}\log \log \frac{1}{\epsilon})$, which implies that $M(n) = O(\log \log(n)/n)$. (Note that the $\epsilon$-nets literature usually works with discrete range spaces, but because the bounds on $n(\epsilon)$ are a function of $\epsilon$ only, we can just take an arbitrarily fine discrete approximation of $[0,1]^2$.)
2019-08-20T12:26:30
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1534120/largest-rectangle-not-touching-any-rock-in-a-square-field/1598054", "openwebmath_score": 0.8591876029968262, "openwebmath_perplexity": 247.3004594779302, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9863631647185701, "lm_q2_score": 0.8577681122619885, "lm_q1q2_score": 0.8460708698054087 }
http://math.stackexchange.com/questions/214399/summing-0-1-uniform-random-variables-up-to-1?answertab=oldest
# Summing (0,1) uniform random variables up to 1 [duplicate] So I'm reading a book about simulation, and in one of the chapters about random numbers generation I found the following exercise: For uniform $(0,1)$ random independent variables $U_1, U_2, \dots$ define $$N = \min \bigg \{ n : \sum_{i=1}^n U_i > 1 \bigg \}$$ Give an estimate for the value of $E[N]$. That is: $N$ is equal to the number of random numbers uniformly distributed in $(0,1)$ that must be summed to exceed $1$. What's the expected value of $N$? I wrote some code and I saw that the expected value of $N$ goes to $e = 2.71\dots$ The book does not ask for a formal proof of this fact, but now I'm curious! So I would like to ask for • A (possibily) simple (= undergraduate level) analytic proof of this fact • An intuitive explanation for this fact or both. - ## marked as duplicate by Rahul, Ross Millikan, Hans Lundmark, Arkamis, NorbertOct 15 '12 at 21:34 Here is a way to compute $\mathbb E(N)$. We begin by complicating things, namely, for every $x$ in $(0,1)$, we consider $m_x=\mathbb E(N_x)$ where $$N_x=\min\left\{n\,;\,\sum_{k=1}^nU_k\gt x\right\}.$$ Our goal is to compute $m_1$ since $N_1=N$. Assume that $U_1=u$ for some $u$ in $(0,1)$. If $u\gt x$, then $N_x=1$. If $u\lt x$, then $N_x=1+N'$ where $N'$ is distributed like $N_{x-u}$. Hence $$m_x=1+\int_0^xm_{x-u}\,\mathrm du=1+\int_0^xm_{u}\,\mathrm du.$$ Thus, $x\mapsto m_x$ is differentiable with $m'_x=m_x$. Since $m_0=1$, $m_x=\mathrm e^x$ for every $x\leqslant1$, in particular $\mathbb E(N)=m_1=\mathrm e$. In fact it turns out that $P(N = n) = \frac{n-1}{n!}$ for $n \ge 2$. Let $S_n = \sum_{j=1}^n U_j$, and $f_n(s)$ the probability density function for $S_n$. For $0 < x < 1$ we have $f_1(x) = 1$ and $f_{n+1}(x) = \int_0^x f_n(s) \ ds$. By induction, we get $f_n(x) = x^{n-1}/(n-1)!$ for $0 < x < 1$, and thus $P(S_n < 1) = \int_0^1 f_n(s)\ ds = \dfrac{1}{n!}$. Now $$P(N=n) = P(S_{n-1} < 1 \le S_n) = P(S_{n-1} < 1) - P(S_n - 1) = \frac{1}{(n-1)!} - \frac{1}{n!} = \frac{n-1}{n!}$$
2014-07-30T17:58:13
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/214399/summing-0-1-uniform-random-variables-up-to-1?answertab=oldest", "openwebmath_score": 0.9813275933265686, "openwebmath_perplexity": 113.30163084756596, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9863631675246405, "lm_q2_score": 0.8577681086260461, "lm_q1q2_score": 0.8460708686260068 }
https://math.stackexchange.com/questions/2367735/explain-why-the-graph-of-y-frac4xx21-and-y-2-sin2-arctan-x-are-the
Explain why the graph of $y=\frac{4x}{x^2+1}$ and $y=2\sin(2\arctan x)$ are the same. Explain why the graph of $y=\frac{4x}{x^2+1}$ and $y=2\sin(2\arctan x)$ are the same. The first equation is of the form of Newton's Serpentine. When you graph the second equation it appears to overlap the first equation. I'm not sure whether these two equations are identities or just very close approximations. I tried to manipulate both equations to get the other but failed. How does one explain why these two equations are identical? take $$x=\tan \alpha\\ -\frac{\pi}{2} <\alpha<\frac{\pi}{2}$$so $$y=\frac{4x}{x^2+1}=\\ y=\frac{4\tan \alpha}{(\tan \alpha)^2+1}\\=\frac{4\tan \alpha}{\frac{1}{\cos^2 \alpha}}\\=4\tan \alpha .\cos ^2 \alpha\\=4\sin \alpha \cos \alpha\\=2\sin(2\alpha)$$ Since $\sin(2A) = 2 \sin A \cos A$, \begin{align}2\sin(2 \arctan x) &= 4 \sin (\arctan x) \cos(\arctan x) \\ &=4\left( \frac{x}{\sqrt{1+x^2}} \right)\left( \frac{1}{\sqrt{1+x^2}} \right) \\ &=\frac{4x}{1+x^2}\end{align} • Did you mean $\cos(\arctan x) = \dfrac{1}{\sqrt{\color{red}{1} + x^2}}$? – N. F. Taussig Jul 22 '17 at 10:17 • yikes, thanks for catching the mistake. – Siong Thye Goh Jul 22 '17 at 14:37 The expression on the left sort of begs for the substitution $x = \tan \theta$ so that you get $$\frac{4x}{x^2 + 1} =\frac{4\tan \theta}{\sec^2 \theta} = 4\cos\theta\sin\theta = 2\sin2\theta = 2 \sin (2\arctan x)$$ We use the identity $$\sin(2\theta) = \frac{2\tan\theta}{1 + \tan^2\theta}$$ Let $\arctan x = \theta$. Then $x = \tan\theta, -\frac{\pi}{2} < x < \frac{\pi}{2}$. Hence, $$2\sin(2\arctan x) = 2\sin(2\theta) = 2 \cdot \frac{2\tan\theta}{1 + \tan^2\theta} = 2 \cdot \frac{2x}{1 + x^2} = \frac{4x}{1 + x^2}$$
2021-03-08T16:49:57
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2367735/explain-why-the-graph-of-y-frac4xx21-and-y-2-sin2-arctan-x-are-the", "openwebmath_score": 0.9998465776443481, "openwebmath_perplexity": 418.796744981349, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9863631635159684, "lm_q2_score": 0.8577681104440172, "lm_q1q2_score": 0.8460708669806755 }
http://mathhelpforum.com/statistics/232876-simple-probability-question.html
# Thread: Simple Probability Question 1. ## Simple Probability Question I am a bit confused by the way this question is solved:- From a of well shuffled pack 52 cards, three cards are drawn at random. Find the probability of drawing an ace, a king and a jack. Solution given:- There are 4 aces, 4 king and 4 jacks and their selection can be made in the following ways: 12C1 X 8C1 X 4C1 = 12 X 8 X 4. Total selections can be made = 52C3= 52 X 51 X 50. Therefore required probability = $\frac{(12)(8)(4)}{ (52)(51)(50)}$ I don't understand why are we taking 12C1 X 8C1 X 4C1 = 12 X 8 X 4 instead of 4C1 X 4C1 X 4C1 = 4 x 4 X 4 for the numerator. Since, we are selecting 1 ace from 4 aces, 1 king from 4 kings and 1 jack from 4 jacks shouldn't we be taking 4C1 X 4C1 X 4C1 = 4 x 4 X 4 for the favourable events ? Please advice on the above. Thanks in advance ! 2. ## Re: Simple Probability Question Originally Posted by SheekhKebab I am a bit confused by the way this question is solved:- From a of well shuffled pack 52 cards, three cards are drawn at random. Find the probability of drawing an ace, a king and a jack. Here's how I would do that: there are 52 cards, 4 aces, four kings, and a jack. The probability the first card drawn is an ace is 4/52= 1/13. There are then 51 cards left, flur of which are kings. The probability that the second card you draw is a king is 4/51. There are then 50 cards left, 4 of which are jacks. The probability the third card you draw is a jack is 4/50= 2/25. The probability of drawing "ace, king, jack" in that order is (1/13)(4/51)(2/25)= 8/(13*51*25). But if you look at "jack, ace, king" or any other specific order you will see that while you have different fractions, you have the same numerators and the same denominators in different orders so the same probability. There are 3!= 6 such orders so there the probability of drawing an ace, king, and jack is 6(8/(13*51*25)). Solution given:- There are 4 aces, 4 king and 4 jacks and their selection can be made in the following ways: 12C1 X 8C1 X 4C1 = 12 X 8 X 4. Total selections can be made = 52C3= 52 X 51 X 50. Therefore required probability = $\frac{(12)(8)(4)}{ (52)(51)(50)}$ I don't understand why are we taking 12C1 X 8C1 X 4C1 = 12 X 8 X 4 instead of 4C1 X 4C1 X 4C1 = 4 x 4 X 4 for the numerator. Since, we are selecting 1 ace from 4 aces, 1 king from 4 kings and 1 jack from 4 jacks shouldn't we be taking 4C1 X 4C1 X 4C1 = 4 x 4 X 4 for the favourable events ? Please advice on the above. Thanks in advance ! The reason for " $^{12}C_1$" is that a total of 12 "aces, kings, and jacks" and you are drawing one of them. The reason for the " $^8C_1$" is that whether that is an ace, king, or jack, there are then 8 of the remaining kind of card you are looking for (if the first card drawn was a jack, there are 8 aces and kings left) and you want to draw one of them. The reason for the " $^4C_1$" is that whichever of ace, jack, or king, the first two cards are, there are 4 cards of the remaining type and you want to draw 1 of them. 3. ## Re: Simple Probability Question Hi HallsofIvy, Thanks ! That 3! I would have definitely factored in if I had solved the question completely. Since the 3! would have come from the denominator of 52C3. So, that was not my question. I was a bit confused about the way the selection was made in the numerator of the original solution. So, according to you, then, both the solutions/approach are correct ? Should we prefer one approach over the other cause the original solution doesn't appear to be much convincing ? 4. ## Re: Simple Probability Question Both solutions are correct, and both give the same answer. Neither is "more correct" than the other, though I would admit that my instinct is to solve it the way that Halls did. I tend to think first in terms of selecting cards in a particular order, then multiply by the number of ways that the order can be changed, which is what he did. But the book's solution is equally valid, and arguably more elegant - i.e. how many choices do I have for the first card, then how many choices are available for the 2nd card, then how many for the third card, with no need to go through an additional step of considering the specific order of cards selected. 5. ## Re: Simple Probability Question Hi Ebaines, Thanks ! But for the original solution I think there is an error in the denominator . 52C3= $\frac{(52)(51)(50)}{3!}$, but that 3! is missing in the original solution. So we are not getting an answer of $\frac{16}{5525}$ which is the correct answer and which we will get if we follow the other approach. Please inform whether I am correct or I am missing something ! 6. ## Re: Simple Probability Question Both approaches yield the same results: the first is $\frac {12 \times 8 \times 4}{52 \times 51 \times 50} = 0.002896$ and HallsOfIvy's approach: $\frac {6 \times 8}{13 \times 51 \times 25} = 0.002896$ Note that you can multiply the numeratort and denominator of Halls' by 8 get the same form as the first: $\frac {6 \times 8}{13 \times 51 \times 25} \times \frac 8 8 = \frac {12 \times 8 \times 4}{52 \times 51 \times 50} = 0.002896$ The first approach does not need to explicitly multiply by 3! because the fact that the three cards may be selected in any order is already included in the numerator by using 12 x 8 x 4. 7. ## Re: Simple Probability Question Hi ebaines, But the original solution mentions: Total selections can be made in 52C3 ways, which is equivalent to $\frac{(52)(51)(50)}{3!}$, which is the denominator. So where will the 3! go then ? 8. ## Re: Simple Probability Question This part of your first post is incorrect: "Total selections can be made = 52C3= 52 X 51 X 50." Instead it should be: "Total selections can be made = 52P3= 52 X 51 X 50." 9. ## Re: Simple Probability Question Thanks and yes ebaines that will take care of all the possible selections in every possible order. Therefore, there is an error in the book.
2018-02-25T03:01:46
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/statistics/232876-simple-probability-question.html", "openwebmath_score": 0.7575483918190002, "openwebmath_perplexity": 322.85663733316176, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9863631659211718, "lm_q2_score": 0.8577680995361899, "lm_q1q2_score": 0.8460708582847031 }
https://dantopology.wordpress.com/tag/topology/
Lindelof Exercise 2 The preceding post is an exercise showing that the product of countably many $\sigma$-compact spaces is a Lindelof space. The result is an example of a situation where the Lindelof property is countably productive if each factor is a “nice” Lindelof space. In this case, “nice” means $\sigma$-compact. This post gives several exercises surrounding the notion of $\sigma$-compactness. Exercise 2.A According to the preceding exercise, the product of countably many $\sigma$-compact spaces is a Lindelof space. Give an example showing that the result cannot be extended to the product of uncountably many $\sigma$-compact spaces. More specifically, give an example of a product of uncountably many $\sigma$-compact spaces such that the product space is not Lindelof. Exercise 2.B Any $\sigma$-compact space is Lindelof. Since $\mathbb{R}=\bigcup_{n=1}^\infty [-n,n]$, the real line with the usual Euclidean topology is $\sigma$-compact. This exercise is to find an example of “Lindelof does not imply $\sigma$-compact.” Find one such example among the subspaces of the real line. Note that as a subspace of the real line, the example would be a separable metric space, hence would be a Lindelof space. Exercise 2.C This exercise is also to look for an example of a space that is Lindelof and not $\sigma$-compact. The example sought is a non-metric one, preferably a space whose underlying set is the real line and whose topology is finer than the Euclidean topology. Exercise 2.D Show that the product of two Lindelof spaces is a Lindelof space whenever one of the factors is a $\sigma$-compact space. Exercise 2.E Prove that the product of finitely many $\sigma$-compact spaces is a $\sigma$-compact space. Give an example of a space showing that the product of countably and infinitely many $\sigma$-compact spaces does not have to be $\sigma$-compact. For example, show that $\mathbb{R}^\omega$, the product of countably many copies of the real line, is not $\sigma$-compact. The Lindelof property and $\sigma$-compactness are basic topological notions. The above exercises are natural questions based on these two basic notions. One immediate purpose of these exercises is that they provide further interaction with the two basic notions. More importantly, working on these exercise give exposure to mathematics that is seemingly unrelated to the two basic notions. For example, finding $\sigma$-compactness on subspaces of the real line and subspaces of compact spaces naturally uses a Baire category argument, which is a deep and rich topic that finds uses in multiple areas of mathematics. For this reason, these exercises present excellent learning opportunities not only in topology but also in other useful mathematical topics. If preferred, the exercises can be attacked head on. The exercises are also intended to be a guided tour. Hints are also provided below. Two sets of hints are given – Hints (blue dividers) and Further Hints (maroon dividers). The proofs of certain key facts are also given (orange dividers). Concluding remarks are given at the end of the post. $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ Hints for Exercise 2.A Prove that the Lindelof property is hereditary with respect to closed subspaces. That is, if $X$ is a Lindelof space, then every closed subspace of $X$ is also Lindelof. Prove that if $X$ is a Lindelof space, then every closed and discrete subset of $X$ is countable (every space that has this property is said to have countable extent). Show that the product of uncountably many copies of the real line does not have countable extent. Specifically, focus on either one of the following two examples. • Show that the product space $\mathbb{R}^c$ has a closed and discrete subspace of cardinality continuum where $c$ is cardinality of continuum. Hence $\mathbb{R}^c$ is not Lindelof. • Show that the product space $\mathbb{R}^{\omega_1}$ has a closed and discrete subspace of cardinality $\omega_1$ where $\omega_1$ is the first uncountable ordinal. Hence $\mathbb{R}^{\omega_1}$ is not Lindelof. Hints for Exercise 2.B Let $\mathbb{P}$ be the set of all irrational numbers. Show that $\mathbb{P}$ as a subspace of the real line is not $\sigma$-compact. Hints for Exercise 2.C Let $S$ be the real line with the topology generated by the half open and half closed intervals of the form $[a,b)=\{ x \in \mathbb{R}: a \le x < b \}$. The real line with this topology is called the Sorgenfrey line. Show that $S$ is Lindelof and is not $\sigma$-compact. Hints for Exercise 2.D It is helpful to first prove: the product of two Lindelof space is Lindelof if one of the factors is a compact space. The Tube lemma is helpful. Tube Lemma Let $X$ be a space. Let $Y$ be a compact space. Suppose that $U$ is an open subset of $X \times Y$ and suppose that $\{ x \} \times Y \subset U$ where $x \in X$. Then there exists an open subset $V$ of $X$ such that $\{ x \} \times Y \subset V \times Y \subset U$. Hints for Exercise 2.E Since the real line $\mathbb{R}$ is homeomorphic to the open interval $(0,1)$, $\mathbb{R}^\omega$ is homeomorphic to $(0,1)^\omega$. Show that $(0,1)^\omega$ is not $\sigma$-compact. $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ Further Hints for Exercise 2.A The hints here focus on the example $\mathbb{R}^c$. Let $I=[0,1]$. Let $\omega$ be the first infinite ordinal. For convenience, consider $\omega$ the set $\{ 0,1,2,3,\cdots \}$, the set of all non-negative integers. Since $\omega^I$ is a closed subset of $\mathbb{R}^I$, any closed and discrete subset of $\omega^I$ is a closed and discrete subset of $\mathbb{R}^I$. The task at hand is to find a closed and discrete subset of $Y=\omega^I$. To this end, we define $W=\{W_x: x \in I \}$ after setting up background information. For each $t \in I$, choose a sequence $O_{t,1},O_{t,2},O_{t,3},\cdots$ of open intervals (in the usual topology of $I$) such that • $\{ t \}=\bigcap_{j=1}^\infty O_{t,j}$, • $\overline{O_{t,j+1}} \subset O_{t,j}$ for each $j$ (the closure is in the usual topology of $I$). Note. For each $t \in I-\{0,1 \}$, the open intervals $O_{t,j}$ are of the form $(a,b)$. For $t=0$, the open intervals $O_{t,j}$ are of the form $[0,b)$. For $t=1$, the open intervals $O_{t,j}$ are of the form $(a,1]$. For each $t \in I$, define the map $f_t: I \rightarrow \omega$ as follows: $f_t(x) = \begin{cases} 0 & \ \ \ \mbox{if } x=t \\ 1 & \ \ \ \mbox{if } x \in I-O_{t,1} \\ 2 & \ \ \ \mbox{if } x \in I-O_{t,2} \text{ and } x \in O_{t,1} \\ 3 & \ \ \ \mbox{if } x \in I-O_{t,3} \text{ and } x \in O_{t,2} \\ \vdots & \ \ \ \ \ \ \ \ \ \ \vdots \\ j & \ \ \ \mbox{if } x \in I-O_{t,j} \text{ and } x \in O_{t,j-1} \\ \vdots & \ \ \ \ \ \ \ \ \ \ \vdots \end{cases}$ We are now ready to define $W=\{W_x: x \in I \}$. For each $x \in I$, $W_x$ is the mapping $W_x:I \rightarrow \omega$ defined by $W_x(t)=f_t(x)$ for each $t \in I$. Show the following: • The set $W=\{W_x: x \in I \}$ has cardinality continuum. • The set $W$ is a discrete space. • The set $W$ is a closed subspace of $Y$. Further Hints for Exercise 2.B A subset $A$ of the real line $\mathbb{R}$ is nowhere dense in $\mathbb{R}$ if for any nonempty open subset $U$ of $\mathbb{R}$, there is a nonempty open subset $V$ of $U$ such that $V \cap A=\varnothing$. If we replace open sets by open intervals, we have the same notion. Show that the real line $\mathbb{R}$ with the usual Euclidean topology cannot be the union of countably many closed and nowhere dense sets. Further Hints for Exercise 2.C Prove that if $X$ and $Y$ are $\sigma$-compact, then the product $X \times Y$ is $\sigma$-compact, hence Lindelof. Prove that $S$, the Sorgenfrey line, is Lindelof while its square $S \times S$ is not Lindelof. Further Hints for Exercise 2.D As suggested in the hints given earlier, prove that $X \times Y$ is Lindelof if $X$ is Lindelof and $Y$ is compact. As suggested, the Tube lemma is a useful tool. Further Hints for Exercise 2.E The product space $(0,1)^\omega$ is a subspace of the product space $[0,1]^\omega$. Since $[0,1]^\omega$ is compact, we can fall back on a Baire category theorem argument to show why $(0,1)^\omega$ cannot be $\sigma$-compact. To this end, we consider the notion of Baire space. A space $X$ is said to be a Baire space if for each countable family $\{ U_1,U_2,U_3,\cdots \}$ of open and dense subsets of $X$, the intersection $\bigcap_{i=1}^\infty U_i$ is a dense subset of $X$. Prove the following results. Fact E.1 Let $X$ be a compact Hausdorff space. Let $O_1,O_2,O_3,\cdots$ be a sequence of non-empty open subsets of $X$ such that $\overline{O_{n+1}} \subset O_n$ for each $n$. Then the intersection $\bigcap_{i=1}^\infty O_i$ is non-empty. Fact E.2 Any compact Hausdorff space is Baire space. Fact E.3 Let $X$ be a Baire space. Let $Y$ be a dense $G_\delta$-subset of $X$ such that $X-Y$ is a dense subset of $X$. Then $Y$ is not a $\sigma$-compact space. Since $X=[0,1]^\omega$ is compact, it follows from Fact E.2 that the product space $X=[0,1]^\omega$ is a Baire space. Fact E.4 Let $X=[0,1]^\omega$ and $Y=(0,1)^\omega$. The product space $Y=(0,1)^\omega$ is a dense $G_\delta$-subset of $X=[0,1]^\omega$. Furthermore, $X-Y$ is a dense subset of $X$. It follows from the above facts that the product space $(0,1)^\omega$ cannot be a $\sigma$-compact space. $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ Proofs of Key Steps for Exercise 2.A The proof here focuses on the example $\mathbb{R}^c$. To see that $W=\{W_x: x \in I \}$ has the same cardinality as that of $I$, show that $W_x \ne W_y$ for $x \ne y$. This follows from the definition of the mapping $W_x$. To see that $W$ is discrete, for each $x \in I$, consider the open set $U_x=\{ b \in Y: b(x)=0 \}$. Note that $W_x \in U_x$. Further note that $W_y \notin U_x$ for all $y \ne x$. To see that $W$ is a closed subset of $Y$, let $k: I \rightarrow \omega$ such that $k \notin W$. Consider two cases. Case 1. $k(r) \ne 0$ for all $r \in I$. Note that $\{ O_{t,k(t)}: t \in I \}$ is an open cover of $I$ (in the usual topology). There exists a finite $H \subset I$ such that $\{ O_{h,k(h)}: h \in H \}$ is a cover of $I$. Consider the open set $G=\{ b \in Y: \forall \ h \in H, \ b(h)=k(h) \}$. Define the set $F$ as follows: $F=\{ c \in I: W_c \in G \}$ The set $F$ can be further described as follows: \displaystyle \begin{aligned} F&=\{ c \in I: W_c \in G \} \\&=\{ c \in I: \forall \ h \in H, \ W_c(h)=f_h(c)=k(h) \ne 0 \} \\&=\{ c \in I: \forall \ h \in H, \ c \in I-O_{h,k(h)} \} \\&=\bigcap_{h \in H} (I-O_{h,k(h)}) \\&=I-\bigcup_{h \in H} O_{h,k(h)}=I-I =\varnothing \end{aligned} The last step is $\varnothing$ because $\{ O_{h,k(h)}: h \in H \}$ is a cover of $I$. The fact that $F=\varnothing$ means that $G$ is an open subset of $Y$ containing the point $k$ such that $G$ contains no point of $W$. Case 2. $k(r) = 0$ for some $r \in I$. Since $k \notin W$, $k \ne W_x$ for all $x \in I$. In particular, $k \ne W_r$. This means that $k(t) \ne W_r(t)$ for some $t \in I$. Define the open set $G$ as follows: $G=\{ b \in Y: b(r)=0 \text{ and } b(t)=k(t) \}$ Clearly $k \in G$. Observe that $W_r \notin G$ since $W_r(t) \ne k(t)$. For each $p \in I-\{ r \}$, $W_p \notin G$ since $W_p(r) \ne 0$. Thus $G$ is an open set containing $k$ such that $G \cap W=\varnothing$. Both cases show that $W$ is a closed subset of $Y=\omega^I$. Proofs of Key Steps for Exercise 2.B Suppose that $\mathbb{P}$, the set of all irrational numbers, is $\sigma$-compact. That is, $\mathbb{P}=A_1 \cup A_2 \cup A_3 \cup \cdots$ where each $A_i$ is a compact space as a subspace of $\mathbb{P}$. Any compact subspace of $\mathbb{P}$ is also a compact subspace of $\mathbb{R}$. As a result, each $A_i$ is a closed subset of $\mathbb{R}$. Furthermore, prove the following: Each $A_i$ is a nowhere dense subset of $\mathbb{R}$. Each singleton set $\{ r \}$ where $r$ is any rational number is also a closed and nowhere dense subset of $\mathbb{R}$. This means that the real line is the union of countably many closed and nowhere dense subsets, contracting the hints given earlier. Thus $\mathbb{P}$ cannot be $\sigma$-compact. Proofs of Key Steps for Exercise 2.C The Sorgenfrey line $S$ is a Lindelof space whose square $S \times S$ is not normal. This is a famous example of a Lindelof space whose square is not Lindelof (not even normal). For reference, a proof is found here. An alternative proof of the non-normality of $S \times S$ uses the Baire category theorem and is found here. If the Sorgenfrey line is $\sigma$-compact, then $S \times S$ would be $\sigma$-compact and hence Lindelof. Thus $S$ cannot be $\sigma$-compact. Proofs of Key Steps for Exercise 2.D Suppose that $X$ is Lindelof and that $Y$ is compact. Let $\mathcal{U}$ be an open cover of $X \times Y$. For each $x \in X$, let $\mathcal{U}_x \subset \mathcal{U}$ be finite such that $\mathcal{U}_x$ is a cover of $\{ x \} \times Y$. Putting it another way, $\{ x \} \times Y \subset \cup \mathcal{U}_x$. By the Tube lemma, for each $x \in X$, there is an open $O_x$ such that $\{ x \} \times Y \subset O_x \times Y \subset \cup \mathcal{U}_x$. Since $X$ is Lindelof, there exists a countable set $\{ x_1,x_2,x_3,\cdots \} \subset X$ such that $\{ O_{x_1},O_{x_2},O_{x_3},\cdots \}$ is a cover of $X$. Then $\mathcal{U}_{x_1} \cup \mathcal{U}_{x_2} \cup \mathcal{U}_{x_3} \cup \cdots$ is a countable subcover of $\mathcal{U}$. This completes the proof that $X \times Y$ is Lindelof when $X$ is Lindelof and $Y$ is compact. To complete the exercise, observe that if $X$ is Lindelof and $Y$ is $\sigma$-compact, then $X \times Y$ is the union of countably many Lindelof subspaces. Proofs of Key Steps for Exercise 2.E Proof of Fact E.1 Let $X$ be a compact Hausdorff space. Let $O_1,O_2,O_3,\cdots$ be a sequence of non-empty open subsets of $X$ such that \$latex $\overline{O_{n+1}} \subset O_n$ for each $n$. Show that the intersection $\bigcap_{i=1}^\infty O_i$ is non-empty. Suppose that $\bigcap_{i=1}^\infty O_i=\varnothing$. Choose $x_1 \in O_1$. There must exist some $n_1$ such that $x_1 \notin O_{n_1}$. Choose $x_2 \in O_{n_1}$. There must exist some $n_2>n_1$ such that $x_2 \notin O_{n_2}$. Continue in this manner we can choose inductively an infinite set $A=\{ x_1,x_2,x_3,\cdots \} \subset X$ such that $x_i \ne x_j$ for $i \ne j$. Since $X$ is compact, the infinite set $A$ has a limit point $p$. This means that every open set containing $p$ contains some $x_j$ (in fact for infinitely many $j$). The point $p$ cannot be in the intersection $\bigcap_{i=1}^\infty O_i$. Thus for some $n$, $p \notin O_n$. Thus $p \notin \overline{O_{n+1}}$. We can choose an open set $U$ such that $p \in U$ and $U \cap \overline{O_{n+1}}=\varnothing$. However, $U$ must contain some point $x_j$ where $j>n+1$. This is a contradiction since $O_j \subset \overline{O_{n+1}}$ for all $j>n+1$. Thus Fact E.1 is established. Proof of Fact E.2 Let $X$ be a compact space. Let $U_1,U_2,U_3,\cdots$ be open subsets of $X$ such that each $U_i$ is also a dense subset of $X$. Let $V$ a non-empty open subset of $X$. We wish to show that $V$ contains a point that belongs to each $U_i$. Since $U_1$ is dense in $X$, $O_1=V \cap U_1$ is non-empty. Since $U_2$ is dense in $X$, choose non-empty open $O_2$ such that $\overline{O_2} \subset O_1$ and $O_2 \subset U_2$. Since $U_3$ is dense in $X$, choose non-empty open $O_3$ such that $\overline{O_3} \subset O_2$ and $O_3 \subset U_3$. Continue inductively in this manner and we have a sequence of open sets $O_1,O_2,O_3,\cdots$ just like in Fact E.1. Then the intersection of the open sets $O_n$ is non-empty. Points in the intersection are in $V$ and in all the $U_n$. This completes the proof of Fact E.2. Proof of Fact E.3 Let $X$ be a Baire space. Let $Y$ be a dense $G_\delta$-subset of $X$ such that $X-Y$ is a dense subset of $X$. Show that $Y$ is not a $\sigma$-compact space. Suppose $Y$ is $\sigma$-compact. Let $Y=\bigcup_{n=1}^\infty B_n$ where each $B_n$ is compact. Each $B_n$ is obviously a closed subset of $X$. We claim that each $B_n$ is a closed nowhere dense subset of $X$. To see this, let $U$ be a non-empty open subset of $X$. Since $X-Y$ is dense in $X$, $U$ contains a point $p$ where $p \notin Y$. Since $p \notin B_n$, there exists a non-empty open $V \subset U$ such that $V \cap B_n=\varnothing$. This shows that each $B_n$ is a nowhere dense subset of $X$. Since $Y$ is a dense $G_\delta$-subset of $X$, $Y=\bigcap_{n=1}^\infty O_n$ where each $O_n$ is an open and dense subset of $X$. Then each $A_n=X-O_n$ is a closed nowhere dense subset of $X$. This means that $X$ is the union of countably many closed and nowhere dense subsets of $X$. More specifically, we have the following. (1)………$X= \biggl( \bigcup_{n=1}^\infty A_n \biggr) \cup \biggl( \bigcup_{n=1}^\infty B_n \biggr)$ Statement (1) contradicts the fact that $X$ is a Baire space. Note that all $X-A_n$ and $X-B_n$ are open and dense subsets of $X$. Further note that the intersection of all these countably many open and dense subsets of $X$ is empty according to (1). Thus $Y$ cannot not a $\sigma$-compact space. Proof of Fact E.4 The space $X=[0,1]^\omega$ is compact since it is a product of compact spaces. To see that $Y=(0,1)^\omega$ is a dense $G_\delta$-subset of $X$, note that $Y=\bigcap_{n=1}^\infty U_n$ where for each integer $n \ge 1$ (2)………$U_n=(0,1) \times \cdots \times (0,1) \times [0,1] \times [0,1] \times \cdots$ Note that the first $n$ factors of $U_n$ are the open interval $(0,1)$ and the remaining factors are the closed interval $[0,1]$. It is also clear that $X-Y$ is a dense subset of $X$. This completes the proof of Fact E.4. $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ Concluding Remarks Exercise 2.A The exercise is to show that the product of uncountably many $\sigma$-compact spaces does not need to be Lindelof. The approach suggested in the hints is to show that $\mathbb{R}^{c}$ has uncountable extent where $c$ is continuum. Having uncountable extent (i.e. having an uncountable subset that is both closed and discrete) implies the space is not Lindelof. The uncountable extent of the product space $\mathbb{R}^{\omega_1}$ is discussed in this post. For $\mathbb{R}^{c}$ and $\mathbb{R}^{\omega_1}$, there is another way to show non-Lindelof. For example, both product spaces are not normal. As a result, both product spaces cannot be Lindelof. Note that every regular Lindelof space is normal. Both product spaces contain the product $\omega^{\omega_1}$ as a closed subspace. The non-normality of $\omega^{\omega_1}$ is discussed here. Exercise 2.B The hints given above is to show that the set of all irrational numbers, $\mathbb{P}$, is not $\sigma$-compact (as a subspace of the real line). The same argument showing that $\mathbb{P}$ is not $\sigma$-compact can be generalized. Note that the complement of $\mathbb{P}$ is $\mathbb{Q}$, the set of all rational numbers (a countable set). In this case, $\mathbb{Q}$ is a dense subset of the real line and is the union of countably many singleton sets. Each singleton set is a closed and nowhere dense subset of the real line. In general, we can let $B$, the complement of a set $A$, be dense in the real line and be the union of countably many closed nowhere dense subsets of the real line (not necessarily singleton sets). The same argument will show that $A$ cannot be a $\sigma$-compact space. This argument is captured in Fact E.3 in Exercise 2.E. Thus both Exercise 2.B and Exercise 2.E use a Baire category argument. Exercise 2.E Like Exercise 2.B, this exercise is also to show a certain space is not $\sigma$-compact. In this case, the suggested space is $\mathbb{R}^{\omega}$, the product of countably many copies of the real line. The hints given use a Baire category argument, as outlined in Fact E.1 through Fact E.4. The product space $\mathbb{R}^{\omega}$ is embedded in the compact space $[0,1]^{\omega}$, which is a Baire space. As mentioned earlier, Fact E.3 is essentially the same argument used for Exercise 2.B. Using the same Baire category argument, it can be shown that $\omega^{\omega}$, the product of countably many copies of the countably infinite discrete space, is not $\sigma$-compact. The space $\omega$ of the non-negative integers, as a subspace of the real line, is certainly $\sigma$-compact. Using the same Baire category argument, we can see that the product of countably many copies of this discrete space is not $\sigma$-compact. With the product space $\omega^{\omega}$, there is a connection with Exercise 2.B. The product $\omega^{\omega}$ is homeomorphic to $\mathbb{P}$. The idea of the homeomorphism is discussed here. Thus the non-$\sigma$-compactness of $\omega^{\omega}$ can be achieved by mapping it to the irrationals. Of course, the same Baire category argument runs through both exercises. Exercise 2.C Even the non-$\sigma$-compactness of the Sorgenfrey line $S$ can be achieved by a Baire category argument. The non-normality of the Sorgenfrey plane $S \times S$ can be achieved by Jones’ lemma argument or by the fact that $\mathbb{P}$ is not a first category set. Links to both arguments are given in the Proof section above. See here for another introduction to the Baire category theorem. The Tube lemma is discussed here. $\text{ }$ $\text{ }$ $\text{ }$ Dan Ma topology Daniel Ma topology Dan Ma math Daniel Ma mathematics $\copyright$ 2019 – Dan Ma Lindelof Exercise 1 A space $X$ is called a $\sigma$-compact space if it is the union of countably many compact subspaces. Clearly, any $\sigma$-compact space is Lindelof. It is well known that the product of Lindelof spaces does not need to be Lindelof. The most well known example is perhaps the square of the Sorgenfrey line. In certain cases, the Lindelof property can be productive. For example, the product of countably many $\sigma$-compact spaces is a Lindelof space. The discussion here centers on the following theorem. Theorem 1 Let $X_1,X_2,X_3,\cdots$ be $\sigma$-compact spaces. Then the product space $\prod_{i=1}^\infty X_i$ is Lindelof. Theorem 1 is Exercise 3.8G in page 195 of General Topology by Engelking [1]. The reference for Exercise 3.8G is [2]. But the theorem is not found in [2] (it is not stated directly and it does not seem to be an obvious corollary of a theorem discussed in that paper). However, a hint is provided in Engelking for Exercise 3.8G. In this post, we discuss Theorem 1 as an exercise by giving expanded hint. Solutions to some of the key steps in the expanded hint are given at the end of the post. Expanded Hint It is helpful to first prove the following theorem. Theorem 2 For each integer $i \ge 1$, let $C_{i,1},C_{i,2},\cdots$ be compact spaces and let $C_i$ be the topological sum: $C_i=C_{i,1} \oplus C_{i,2} \oplus C_{i,3} \oplus \cdots=\oplus_{j=1}^\infty C_{i,j}$ Then the product $\prod_{i=1}^\infty C_i$ is Lindelof. Note that in the topological sum $C_{i,1} \oplus C_{i,2} \oplus C_{i,3} \oplus \cdots$, the spaces $C_{i,1},C_{i,2},C_{i,3},\cdots$ are considered pairwise disjoint. The open sets in the sum are simply unions of the open sets in the individual spaces. Another way to view this topology: each of the $C_{i,j}$ is both closed and open in the topological sum. Theorem 2 is essentially saying that the product of countably many $\sigma$-compact spaces is Lindelof if each $\sigma$-compact space is the union of countably many disjoint compact spaces. The hint for Exercise 3.8G can be applied much more naturally on Theorem 2 than on Theorem 1. The following is Exercise 3.8F (a), which is the hint for Exercise 3.8G. Lemma 3 Let $Z$ be a compact space. Let $X$ be a subspace of $Z$. Suppose that there exist $F_1,F_2,F_3,\cdots$, closed subsets of $Z$, such that for all $x$ and $y$ where $x \in X$ and $y \in Z-X$, there exists $F_i$ such that $x \in F_i$ and $y \notin F_i$. Then $X$ is a Lindelof space. The following theorem connects the hint (Lemma 3) with Theorem 2. Theorem 4 For each integer $i \ge 1$, let $Z_i$ be the one-point compactification of $C_i$ in Theorem 2. Then the product $Z=\prod_{i=1}^\infty Z_i$ is a compact space. Furthermore, $X=\prod_{i=1}^\infty C_i$ is a subspace of $Z$. Prove that $Z$ and $X$ satisfy Lemma 3. Each $C_i$ in Theorem 2 is a locally compact space. To define the one-point compactifications, for each $i$, choose $p_i \notin C_i$. Make sure that $p_i \ne p_j$ for $i \ne j$. Then $Z_i$ is simply $Z_i=C_i \cup \{ p_i \}=C_{i,1} \oplus C_{i,2} \oplus C_{i,3} \oplus \cdots \cup \{ p_i \}$ with the topology defined as follows: • Open subsets of $C_i$ continue to be open in $Z_i$. • An open set containing $p_i$ is of the form $\{ p_i \} \cup (C_i - \overline{D})$ where $D$ is open in $C_i$ and $D$ is contained in the union of finitely many $C_{i,j}$. For convenience, each point $p_i$ is called a point at infinity. Note that Theorem 2 follows from Lemma 3 and Theorem 4. In order to establish Theorem 1 from Theorem 2, observe that the Lindelof property is preserved by any continuous mapping and that there is a natural continuous map from the product space in Theorem 2 to the product space in Theorem 1. $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ $\text{ }$ Proofs of Key Steps Proof of Lemma 3 Let $Z$, $X$ and $F_1,F_2,F_3,\cdots$ be as described in the statement for Lemma 3. Let $\mathcal{U}$ be a collection of open subsets of $Z$ such that $\mathcal{U}$ covers $X$. We would like to show that a countable subcollection of $\mathcal{U}$ is also a cover of $X$. Let $O=\cup \mathcal{U}$. If $Z-O=\varnothing$, then $\mathcal{U}$ is an open cover of $Z$ and there is a finite subset of $\mathcal{U}$ that is a cover of $Z$ and thus a cover of $X$. Thus we can assume that $Z-O \ne \varnothing$. Let $F=\{ F_1,F_2,F_3,\cdots \}$. Let $K=Z-O$, which is compact. We make the following claim. Claim. Let $Y$ be the union of all possible $\cap G$ where $G \subset F$ is finite and $\cap G \subset O$. Then $X \subset Y \subset O$. To establish the claim, let $x \in X$. For each $y \in K=Z-O$, there exists $F_{n(y)}$ such that $x \in F_{n(y)}$ and $y \notin F_{n(y)}$. This means that $\{ Z-F_{n(y)}: y \in K \}$ is an open cover of $K$. By the compactness of $K$, there are finitely many $F_{n(y_1)}, \cdots, F_{n(y_k)}$ such that $F_{n(y_1)} \cap \cdots \cap F_{n(y_k)}$ misses $K$, or equivalently $F_{n(y_1)} \cap \cdots \cap F_{n(y_k)} \subset O$. Note that $x \in F_{n(y_1)} \cap \cdots \cap F_{n(y_k)}$. Further note that $F_{n(y_1)} \cap \cdots \cap F_{n(y_k)} \subset Y$. This establishes the claim that $X \subset Y$. The claim that $Y \subset O$ is clear from the definition of $Y$. Each set $F_i$ is compact since it is closed in $Z$. The intersection of finitely many $F_i$ is also compact. Thus the $\cap G$ in the definition of $Y$ in the above claim is compact. There can be only countably many $\cap G$ in the definition of $Y$. Thus $Y$ is a $\sigma$-compact space that is covered by the open cover $\mathcal{U}$. Choose a countable $\mathcal{V} \subset \mathcal{U}$ such that $\mathcal{V}$ covers $Y$. Then $\mathcal{V}$ is a cover of $X$ too. This completes the proof that $X$ is Lindelof. $\text{ }$ Proof of Theorem 4 Recall that $Z=\prod_{i=1}^\infty Z_i$ and that $X=\prod_{i=1}^\infty C_i$. Each $Z_i$ is the one-point compactification of $C_i$, which is the topological sum of the disjoint compact spaces $C_{i,1},C_{i,2},\cdots$. For integers $i,j \ge 1$, define $K_{i,j}=C_{i,1} \oplus C_{i,2} \oplus \cdots \oplus C_{i,j}$. For integers $n,j \ge 1$, define the product $F_{n,j}$ as follows: $F_{n,j}=K_{1,j} \times \cdots \times K_{n,j} \times Z_{n+1} \times Z_{n+2} \times \cdots$ Since $F_{n,j}$ is a product of compact spaces, $F_{n,j}$ is compact and thus closed in $Z$. There are only countably many $F_{n,j}$. We claim that the countably many $F_{n,j}$ have the property indicated in Lemma 3. To this end, let $f \in X=\prod_{i=1}^\infty C_i$ and $g \in Z-X$. There exists an integer $n \ge 1$ such that $g(n) \notin C_{n}$. This means that $g(n) \notin C_{n,j}$ for all $j$, i.e. $g(n)=p_n$ (so $g(n)$ must be the point at infinity). Choose $j \ge 1$ large enough such that $f(i) \in K_{i,j}=C_{i,1} \oplus C_{i,2} \oplus \cdots \oplus C_{i,j}$ for all $i \le n$. It follows that $f \in F_{n,j}$ and $g \notin F_{n,j}$. Thus the sequence of closed sets $F_{n,j}$ satisfies Lemma 3. By Lemma 3, $X=\prod_{i=1}^\infty C_i$ is Lindelof. Reference 1. Engelking R., General Topology, Revised and Completed edition, Elsevier Science Publishers B. V., Heldermann Verlag, Berlin, 1989. 2. Hager A. W., Approximation of real continuous functions on Lindelof spaces, Proc. Amer. Math. Soc., 22, 156-163, 1969. $\text{ }$ $\text{ }$ $\text{ }$ Dan Ma topology Daniel Ma topology Dan Ma math Daniel Ma mathematics $\copyright$ 2019 – Dan Ma Helly Space This is a discussion on a compact space called Helly space. The discussion here builds on the facts presented in Counterexample in Topology [2]. Helly space is Example 107 in [2]. The space is named after Eduard Helly. Let $I=[0,1]$ be the closed unit interval with the usual topology. Let $C$ be the set of all functions $f:I \rightarrow I$. The set $C$ is endowed with the product space topology. The usual product space notation is $I^I$ or $\prod_{t \in I} W_t$ where each $W_t=I$. As a product of compact spaces, $C=I^I$ is compact. Any function $f:I \rightarrow I$ is said to be increasing if $f(x) \le f(y)$ for all $x (such a function is usually referred to as non-decreasing). Helly space is the subspace $X$ consisting of all increasing functions. This space is Example 107 in Counterexample in Topology [2]. The following facts are discussed in [2]. • The space $X$ is compact. • The space $X$ is first countable (having a countable base at each point). • The space $X$ is separable. • The space $X$ has an uncountable discrete subspace. From the last two facts, Helly space is a compact non-metrizable space. Any separable metric space would have countable spread (all discrete subspaces must be countable). The compactness of $X$ stems from the fact that $X$ is a closed subspace of the compact space $C$. Further Discussion Additional facts concerning Helly space are discussed. 1. The product space $\omega_1 \times X$ is normal. 2. Helly space $X$ contains a copy of the Sorgenfrey line. 3. Helly space $X$ is not hereditarily normal. The space $\omega_1$ is the space of all countable ordinals with the order topology. Recall $C$ is the product space $I^I$. The product space $\omega_1 \times C$ is Example 106 in [2]. This product is not normal. The non-normality of $\omega_1 \times C$ is based on this theorem: for any compact space $Y$, the product $\omega_1 \times Y$ is normal if and only if the compact space $Y$ is countably tight. The compact product space $C$ is not countably tight (discussed here). Thus $\omega_1 \times C$ is not normal. However, the product $\omega_1 \times X$ is normal since Helly space $X$ is first countable. To see that $X$ contains a copy of the Sorgenfrey line, consider the functions $h_t:I \rightarrow I$ defined as follows: $\displaystyle h_t(x) = \left\{ \begin{array}{ll} \displaystyle 0 &\ \ \ \ \ \ 0 \le x \le t \\ \text{ } & \text{ } \\ \displaystyle 1 &\ \ \ \ \ \ t for all $0. Let $S=\{ h_t: 0. Consider the mapping $\gamma: (0,1) \rightarrow S$ defined by $\gamma(t)=h_t$. With the domain $(0,1)$ having the Sorgenfrey topology and with the range $S$ being a subspace of Helly space, it can be shown that $\gamma$ is a homeomorphism. With the Sorgenfrey line $S$ embedded in $X$, the square $X \times X$ contains a copy of the Sorgenfrey plane $S \times S$, which is non-normal (discussed here). Thus the square of Helly space is not hereditarily normal. A more interesting fact is that Helly space is not hereditarily normal. This is discussed in the next section. Finding a Non-Normal Subspace of Helly Space As before, $C$ is the product space $I^I$ where $I=[0,1]$ and $X$ is Helly space consisting of all increasing functions in $C$. Consider the following two subspaces of $X$. $Y_{0,1}=\{ f \in X: f(I) \subset \{0, 1 \} \}$ $Y=X - Y_{0,1}$ The subspace $Y_{0,1}$ is a closed subset of $X$, hence compact. We claim that subspace $Y$ is separable and has a closed and discrete subset of cardinality continuum. This means that the subspace $Y$ is not a normal space. First, we define a discrete subspace. For each $x$ with $0, define $f_x: I \rightarrow I$ as follows: $\displaystyle f_x(y) = \left\{ \begin{array}{ll} \displaystyle 0 &\ \ \ \ \ \ 0 \le y < x \\ \text{ } & \text{ } \\ \displaystyle \frac{1}{2} &\ \ \ \ \ y=x \\ \text{ } & \text{ } \\ \displaystyle 1 &\ \ \ \ \ \ x Let $H=\{ f_x: 0. The set $H$ as a subspace of $X$ is discrete. Of course it is not discrete in $X$ since $X$ is compact. In fact, for any $f \in Y_{0,1}$, $f \in \overline{H}$ (closure taken in $X$). However, it can be shown that $H$ is closed and discrete as a subset of $Y$. We now construct a countable dense subset of $Y$. To this end, let $\mathcal{B}$ be a countable base for the usual topology on the unit interval $I=[0,1]$. For example, we can let $\mathcal{B}$ be the set of all open intervals with rational endpoints. Furthermore, let $A$ be a countable dense subset of the open interval $(0,1)$ (in the usual topology). For convenience, we enumerate the elements of $A$ and $\mathcal{B}$. $A=\{ a_1,a_2,a_3,\cdots \}$ $\mathcal{B}=\{B_1,B_2,B_3,\cdots \}$ We also need the following collections. $\mathcal{G}=\{G \subset \mathcal{B}: G \text{ is finite and is pairwise disjoint} \}$ $\mathcal{A}=\{F \subset A: F \text{ is finite} \}$ For each $G \in \mathcal{G}$ and for each $F \in \mathcal{A}$ with $\lvert G \lvert=\lvert F \lvert=n$, we would like to arrange the elements in increasing order, notated as follow: $F=\{t_1,t_2,\cdots,t_n \}$ $G=\{E_1,E_2,\cdots,E_n \}$ For the set $F$, we have $0. For the set $G$, $E_i$ is to the left of $E_j$ for $i. Note that elements of $G$ are pairwise disjoint. Furthermore, write $E_i=(p_i,q_i)$. If $0 \in E_1$, then $E_1=[p_1,q_1)=[0,q_1)$. If $1 \in E_n$, then $E_n=(p_n,q_n]=(p_n,1]$. For each $F$ and $G$ as detailed above, we define a function $L(F,G):I \rightarrow I$ as follows: $\displaystyle L(F,G)(x) = \left\{ \begin{array}{ll} \displaystyle t_1 &\ \ \ \ \ 0 \le x < q_1 \\ \text{ } & \text{ } \\ \displaystyle t_2 &\ \ \ \ \ q_1 \le x < q_2 \\ \text{ } & \text{ } \\ \displaystyle \vdots &\ \ \ \ \ \vdots \\ \text{ } & \text{ } \\ \displaystyle t_{n-1} &\ \ \ \ \ q_{n-2} \le x < q_{n-1} \\ \text{ } & \text{ } \\ \displaystyle t_n &\ \ \ \ \ q_{n-1} \le x \le 1 \\ \end{array} \right.$ The following diagram illustrates the definition of $L(F,G)$ when both $F$ and $G$ have 4 elements. Figure 1 – Member of a countable dense set Let $D$ be the set of $L(F,G)$ over all $F \in \mathcal{A}$ and $G \in \mathcal{G}$. The set $D$ is a countable set. It can be shown that $D$ is dense in the subspace $Y$. In fact $D$ is dense in the entire Helly space $X$. To summarize, the subspace $Y$ is separable and has a closed and discrete subset of cardinality continuum. This means that $Y$ is not normal. Hence Helly space $X$ is not hereditarily normal. According to Jones’ lemma, in any normal separable space, the cardinality of any closed and discrete subspace must be less than continuum (discussed here). Remarks The preceding discussion shows that both Helly space and the square of Helly space are not hereditarily normal. This is actually not surprising. According to a theorem of Katetov, for any compact non-metrizable space $V$, the cube $V^3$ is not hereditarily normal (see Theorem 3 in this post). Thus a non-normal subspace is found in $V$, $V \times V$ or $V \times V \times V$. In fact, for any compact non-metric space $V$, an excellent exercise is to find where a non-normal subspace can be found. Is it in $V$, the square of $V$ or the cube of $V$? In the case of Helly space $X$, a non-normal subspace can be found in $X$. A natural question is: is there a compact non-metric space $V$ such that both $V$ and $V \times V$ are hereditarily normal and $V \times V \times V$ is not hereditarily normal? In other words, is there an example where the hereditarily normality fails at dimension 3? If we do not assume extra set-theoretic axioms beyond ZFC, any compact non-metric space $V$ is likely to fail hereditarily normality in either $V$ or $V \times V$. See here for a discussion of this set-theoretic question. Reference 1. Kelly, J. L., General Topology, Springer-Verlag, New York, 1955. 2. Steen, L. A., Seebach, J. A., Counterexamples in Topology, Dover Publications, Inc., New York, 1995. $\text{ }$ $\text{ }$ $\text{ }$ Dan Ma topology Daniel Ma topology Dan Ma math Daniel Ma mathematics $\copyright$ 2019 – Dan Ma A little corner in the world of set-theoretic topology This post puts a spot light on a little corner in the world of set-theoretic topology. There lies in this corner a simple topological statement that opens a door to the esoteric world of independence results. In this post, we give a proof of this basic fact and discuss its ramifications. This basic result is an excellent entry point to the study of S and L spaces. The following paragraph is found in the paper called Gently killing S-spaces by Todd Eisworth, Peter Nyikos and Saharon Shelah [1]. The basic fact in question is highlighted in blue. A simultaneous generalization of hereditarily separable and hereditarily Lindelof spaces is the class of spaces of countable spread – those spaces in which every discrete subspace is countable. One of the basic facts in this little corner of set-theoretic topology is that if a regular space of countable spread is not hereditarily separable, it contains an L-space, and if it is not hereditarily Lindelof, it contains an S-space. [1] The same basic fact is also mentioned in the paper called The spread of regular spaces by Judith Roitman [2]. It is also well known that a regular space of countable spread which is not hereditarily separable contains an L-space and a regular space of countable spread which is not hereditarily Lindelof contains an S-space. Thus an absolute example of a space satisfying (Statement) A would contain a proof of the existence of S and L space – a consummation which some may devoutly wish, but which this paper does not attempt. [2] Statement A in [2] is: There exists a 0-dimensional Hausdorff space of countable spread that is not the union of a hereditarily separable and a hereditarily Lindelof space. Statement A would mean the existence of a regular space of countable spread that is not hereditarily separable and that is also not hereditarily Lindelof. By the well known fact just mentioned, statement A would imply the existence of a space that is simultaneously an S-space and an L-space! Let’s unpack the preceding section. First some basic definitions. A space $X$ is of countable spread (has countable spread) if every discrete subspace of $X$ is countable. A space $X$ is hereditarily separable if every subspace of $X$ is separable. A space $X$ is hereditarily Lindelof if every subspace of $X$ is Lindelof. A space is an S-space if it is hereditarily separable but not Lindelof. A space is an L-space if it is hereditarily Lindelof but not separable. See [3] for a basic discussion of S and L spaces. Hereditarily separable but not Lindelof spaces as well as hereditarily Lindelof but not separable spaces can be easily defined in ZFC [3]. However, such examples are not regular. For the notions of S and L-spaces to be interesting, the definitions must include regularity. Thus in the discussion that follows, all spaces are assumed to be Hausdorff and regular. One amazing aspect about set-theoretic topology is that one sometimes does not have to stray far from basic topological notions to encounter pathological objects such as S-spaces and L-spaces. The definition of a topological space is of course a basic definition. Separable spaces and Lindelof spaces are basic notions that are not far from the definition of topological spaces. The same can be said about hereditarily separable and hereditarily Lindelof spaces. Out of these basic ingredients come the notion of S-spaces and L-spaces, the existence of which is one of the key motivating questions in set-theoretic topology in the twentieth century. The study of S and L-spaces is a body of mathematics that had been developed for nearly a century. It is a fruitful area of research at the boundary of topology and axiomatic set theory. The existence of an S-space is independent of ZFC (as a result of the work by Todorcevic in early 1980s). This means that there is a model of set theory in which an S-space exists and there is also a model of set theory in which S-spaces cannot exist. One half of the basic result mentioned in the preceding section is intimately tied to the existence of S-spaces and thus has interesting set-theoretic implications. The other half of the basic result involves the existence of L-spaces, which are shown to exist without using extra set theory axioms beyond ZFC by Justin Moore in 2005, which went against the common expectation that the existence of L-spaces would be independent of ZFC as well. Let’s examine the basic notions in a little more details. The following diagram shows the properties surrounding the notion of countable spread. Diagram 1 – Properties surrounding countable spread The implications (the arrows) in Diagram 1 can be verified easily. Central to the discussion at hand, both hereditarily separable and hereditarily Lindelof imply countable spread. The best way to see this is that if a space has an uncountable discrete subspace, that subspace is simultaneously a non-separable subspace and a non-Lindelof subspace. A natural question is whether these implications can be reversed. Another question is whether the properties in Diagram 1 can be related in other ways. The following diagram attempts to ask these questions. Diagram 2 – Reverse implications surrounding countable spread Not shown in Diagram 2 are these four facts: separable $\not \rightarrow$ hereditarily separable, Lindelof $\not \rightarrow$ hereditarily Lindelof, separable $\not \rightarrow$ countable spread and Lindelof $\not \rightarrow$ countable spread. The examples supporting these facts are not set-theoretic in nature and are not discussed here. Let’s focus on each question mark in Diagram 2. The two horizontal arrows with question marks at the top are about S-space and L-space. If $X$ is hereditarily separable, then is $X$ hereditarily Lindelof? A “no” answer would mean there is an S-space. A “yes” answer would mean there exists no S-space. So the top arrow from left to right is independent of ZFC. Since an L-space can be constructed within ZFC, the question mark in the top arrow in Diagram 2 from right to left has a “no” answer. Now focus on the arrows emanating from countable spread in Diagram 2. These arrows are about the basic fact discussed earlier. From Diagram 1, we know that hereditarily separable implies countable spread. Can the implication be reversed? Any L-space would be an example showing that the implication cannot be reversed. Note that any L-space is of countable spread and is not separable and hence not hereditarily separable. Since L-space exists in ZFC, the question mark in the arrow from countable spread to hereditarily separable has a “no” answer. The same is true for the question mark in the arrow from countable spread to separable We know that hereditarily Lindelof implies countable spread. Can the implication be reversed? According to the basic fact mentioned earlier, if the implication cannot be reversed, there exists an S-space. Thus if S-space does not exist, the implication can be reversed. Any S-space is an example showing that the implication cannot be reversed. Thus the question in the arrow from countable spread to hereditarily Lindelof cannot be answered without assuming axioms beyond ZFC. The same is true for the question mark for the arrow from countable spread to Lindelf. Diagram 2 is set-theoretic in nature. The diagram is remarkable in that the properties in the diagram are basic notions that are only brief steps away from the definition of a topological space. Thus the basic highlighted here is a quick route to the world of independence results. We now give a proof of the basic result, which is stated in the following theorem. Theorem 1 Let $X$ is regular and Hausdorff space. Then the following is true. • If $X$ is of countable spread and is not a hereditarily separable space, then $X$ contains an L-space. • If $X$ is of countable spread and is not a hereditarily Lindelof space, then $X$ contains an S-space. To that end, we use the concepts of right separated space and left separated space. Recall that an initial segment of a well-ordered set $(X,<)$ is a set of the form $\{y \in X: y where $x \in X$. A space $X$ is a right separated space if $X$ can be well-ordered in such a way that every initial segment is open. A right separated space is in type $\kappa$ if the well-ordering is of type $\kappa$. A space $X$ is a left separated space if $X$ can be well-ordered in such a way that every initial segment is closed. A left separated space is in type $\kappa$ if the well-ordering is of type $\kappa$. The following results are used in proving Theorem 1. Theorem A Let $X$ is regular and Hausdorff space. Then the following is true. • The space $X$ is hereditarily separable space if and only if $X$ has no uncountable left separated subspace. • The space $X$ is hereditarily Lindelof space if and only if $X$ has no uncountable right separated subspace. Proof of Theorem A $\Longrightarrow$ of the first bullet point. Suppose $Y \subset X$ is an uncountable left separated subspace. Suppose that the well-ordering of $Y$ is of type $\kappa$ where $\kappa>\omega$. Further suppose that $Y=\{ x_\alpha: \alpha<\kappa \}$ such that for each $\alpha<\kappa$, $C_\alpha=\{ x_\beta: \beta<\alpha \}$ is a closed subset of $Y$. Since $\kappa$ is uncountable, the well-ordering has an initial segment of type $\omega_1$. So we might as well assume $\kappa=\omega_1$. Note that for any countable $A \subset Y$, $A \subset C_\alpha$ for some $\alpha<\omega_1$. It follows that $Y$ is not separable. This means that $X$ is not hereditarily separable. $\Longleftarrow$ of the first bullet point. Suppose that $X$ is not hereditarily separable. Let $Y \subset X$ be a subspace that is not separable. We now inductively derive an uncountable left separated subspace of $Y$. Choose $y_0 \in Y$. For each $\alpha<\omega_1$, let $A_\alpha=\{ y_\beta \in Y: \beta <\alpha \}$. The set $A_\alpha$ is the set of all the points of $Y$ chosen before the step at $\alpha<\omega_1$. Since $A_\alpha$ is countable, its closure in $Y$ is not the entire space $Y$. Choose $y_\alpha \in Y-\overline{A_\alpha}=O_\alpha$. Let $Y_L=\{ y_\alpha: \alpha<\omega_1 \}$. We claim that $Y_L$ is a left separated space. To this end, we need to show that each initial segment $A_\alpha$ is a closed subset of $Y_L$. Note that for each $\gamma \ge \alpha$, $O_\gamma=Y-\overline{A_\gamma}$ is an open subset of $Y$ with $y_\gamma \in O_\gamma$ such that $O_\gamma \cap \overline{A_\gamma}=\varnothing$ and thus $O_\gamma \cap \overline{A_\alpha}=\varnothing$ (closure in $Y$). Then $U_\gamma=O_\gamma \cap Y_L$ is an open subset of $Y_L$ containing $y_\gamma$ such that $U_\gamma \cap A_\alpha=\varnothing$. It follows that $Y-A_\alpha$ is open in $Y_L$ and that $A_\alpha$ is a closed subset of $Y_L$. $\Longrightarrow$ of the second bullet point. Suppose $Y \subset X$ is an uncountable right separated subspace. Suppose that the well-ordering of $Y$ is of type $\kappa$ where $\kappa>\omega$. Further suppose that $Y=\{ x_\alpha: \alpha<\kappa \}$ such that for each $\alpha<\kappa$, $U_\alpha=\{ x_\beta: \beta<\alpha \}$ is an open subset of $Y$. Since $\kappa$ is uncountable, the well-ordering has an initial segment of type $\omega_1$. So we might as well assume $\kappa=\omega_1$. Note that $\{ U_\alpha: \alpha<\omega_1 \}$ is an open cover of $Y$ that has no countable subcover. It follows that $Y$ is not Lindelof. This means that $X$ is not hereditarily Lindelof. $\Longleftarrow$ of the second bullet point. Suppose that $X$ is not hereditarily Lindelof. Let $Y \subset X$ be a subspace that is not Lindelof. Let $\mathcal{U}$ be an open cover of $Y$ that has no countable subcover. We now inductively derive a right separated subspace of $Y$ of type $\omega_1$. Choose $U_0 \in \mathcal{U}$ and choose $y_0 \in U_0$. Choose $y_1 \in Y-U_0$ and choose $U_1 \in \mathcal{U}$ such that $y_1 \in U_1$. Let $\alpha<\omega_1$. Suppose that points $y_\beta$ and open sets $U_\beta$, $\beta<\alpha$, have been chosen such that $y_\beta \in Y-\bigcup_{\delta<\beta} U_\delta$ and $y_\beta \in U_\beta$. The countably many chosen open sets $U_\beta$, $\beta<\alpha$, cannot cover $Y$. Choose $y_\alpha \in Y-\bigcup_{\beta<\alpha} U_\beta$. Choose $U_\alpha \in \mathcal{U}$ such that $y_\alpha \in U_\alpha$. Let $Y_R=\{ y_\alpha: \alpha<\omega_1 \}$. It follows that $Y_R$ is a right separated space. Note that for each $\alpha<\omega_1$, $\{ y_\beta: \beta<\alpha \} \subset \bigcup_{\beta<\alpha} U_\beta$ and the open set $\bigcup_{\beta<\alpha} U_\beta$ does not contain $y_\gamma$ for any $\gamma \ge \alpha$. This means that the initial segment $\{ y_\beta: \beta<\alpha \}$ is open in $Y_L$. $\square$ Lemma B Let $X$ be a space that is a right separated space and also a left separated space based on the same well ordering. Then $X$ is a discrete space. Proof of Lemma B Let $X=\{ w_\alpha: \alpha<\kappa \}$ such that the well-ordering is given by the ordinals in the subscripts, i.e. $w_\beta if and only if $\beta<\gamma$. Suppose that $X$ with this well-ordering is both a right separated space and a left separated space. We claim that every point is a discrete point, i.e. $\{ x_\alpha \}$ is open for any $\alpha<\kappa$. To see this, fix $\alpha<\kappa$. The initial segment $A_\alpha=\{ w_\beta: \beta<\alpha \}$ is closed in $X$ since $X$ is a left separated space. On the other hand, the initial segment $\{ w_\beta: \beta < \alpha+1 \}$ is open in $X$ since $X$ is a right separated space. Then $B_{\alpha}=\{ w_\beta: \beta \ge \alpha+1 \}$ is closed in $X$. It follows that $\{ x_\alpha \}$ must be open since $X=A_\alpha \cup B_\alpha \cup \{ w_\alpha \}$. $\square$ Theorem C Let $X$ is regular and Hausdorff space. Then the following is true. • Suppose the space $X$ is right separated space of type $\omega_1$. Then if $X$ has no uncountable discrete subspaces, then $X$ is an S-space or $X$ contains an S-space. • Suppose the space $X$ is left separated space of type $\omega_1$. Then if $X$ has no uncountable discrete subspaces, then $X$ is an L-space or $X$ contains an L-space. Proof of Theorem C For the first bullet point, suppose the space $X$ is right separated space of type $\omega_1$. Then by Theorem A, $X$ is not hereditarily Lindelof. If $X$ is hereditarily separable, then $X$ is an S-space (if $X$ is not Lindelof) or $X$ contains an S-space (a non-Lindelof subspace of $X$). Suppose $X$ is not hereditarily separable. By Theorem A, $X$ has an uncountable left separated subspace of type $\omega_1$. Let $X=\{ x_\alpha: \alpha<\omega_1 \}$ such that the well-ordering represented by the ordinals in the subscripts is a right separated space. Let $<_R$ be the symbol for the right separated well-ordering, i.e. $x_\beta <_R \ x_\delta$ if and only if $\beta<\delta$. As indicated in the preceding paragraph, $X$ has an uncountable left separated subspace. Let $Y=\{ y_\alpha \in X: \alpha<\omega_1 \}$ be this left separated subspace. Let $<_L$ be the symbol for the left separated well-ordering. The well-ordering $<_R$ may be different from the well-ordering $<_L$. However, we can obtain an uncountable subset of $Y$ such that the two well-orderings coincide on this subset. To start, pick any $y_\gamma$ in $Y$ and relabel it $t_0$. The final segment $\{y_\beta \in Y: t_0 <_L \ y_\beta \}$ must intersect the final segment $\{x_\beta \in X: t_0 <_R \ x_\beta \}$ in uncountably many points. Choose the least such point (according to $<_R$) and call it $t_1$. It is clear how $t_{\delta+1}$ is chosen if $t_\delta$ has been chosen. Suppose $\alpha<\omega_1$ is a limit ordinal and that $t_\beta$ has been chosen for all $\beta<\alpha$. Then the set $\{y_\tau: \forall \ \beta<\alpha, t_\beta <_L \ y_\tau \}$ and the set $\{x_\tau: \forall \ \beta<\alpha, t_\beta <_R \ x_\tau \}$ must intersect in uncountably many points. Choose the least such point and call it $t_\alpha$ (according to $<_R$). As a result, we have obtained $T=\{ t_\alpha: \alpha<\omega_1 \}$. It follows that T with the well-ordering represented by the ordinals in the subscript is a subset of $(X,<_R)$ and a subset of $(Y,<_L)$. Thus $T$ is both right separated and left separated. By Lemma B, $T$ is a discrete subspace of $X$. However, $X$ is assumed to have no uncountable discrete subspace. Thus if $X$ has no uncountable discrete subspace, then $X$ must be hereditarily separable and as a result, must be an S-space or must contain an S-space. The proof for the second bullet point is analogous to that of the first bullet point. $\square$ We are now ready to prove Theorem 1. Proof of Theorem 1 Suppose that $X$ is of countable spread and that $X$ is not hereditarily separable. By Theorem A, $X$ has an uncountable left separated subspace $Y$ (assume it is of type $\omega_1$). The property of countable spread is hereditary. So $Y$ is of countable spread. By Theorem C, $Y$ is an L-space or $Y$ contains an L-space. In either way, $X$ contains an L-space. Suppose that $X$ is of countable spread and that $X$ is not hereditarily Lindelof. By Theorem A, $X$ has an uncountable right separated subspace $Y$ (assume it is of type $\omega_1$). By Theorem C, $Y$ is an S-space or $Y$ contains an S-space. In either way, $X$ contains an S-space. Reference 1. Eisworth T., Nyikos P., Shelah S., Gently killing S-spaces, Israel Journal of Mathmatics, 136, 189-220, 2003. 2. Roitman J., The spread of regular spaces, General Topology and Its Applications, 8, 85-91, 1978. 3. Roitman, J., Basic S and L, Handbook of Set-Theoretic Topology, (K. Kunen and J. E. Vaughan, eds), Elsevier Science Publishers B. V., Amsterdam, 295-326, 1984. 4. Tatch-Moore J., A solution to the L space problem, Journal of the American Mathematical Society, 19, 717-736, 2006. $\text{ }$ $\text{ }$ $\text{ }$ Dan Ma math Daniel Ma mathematics $\copyright$ 2018 – Dan Ma Every space is star discrete The statement in the title is a folklore fact, though the term star discrete is usually not used whenever this well known fact is invoked in the literature. We present a proof to this well known fact. We also discuss some related concepts. All spaces are assumed to be Hausdorff and regular. First, let’s define the star notation. Let $X$ be a space. Let $\mathcal{U}$ be a collection of subsets of $X$. Let $A \subset X$. Define $\text{St}(A,\mathcal{U})$ to be the set $\bigcup \{U \in \mathcal{U}: U \cap A \ne \varnothing \}$. In other words, the set $\text{St}(A,\mathcal{U})$ is simply the union of all elements of $\mathcal{U}$ that contains points of the set $A$. The set $\text{St}(A,\mathcal{U})$ is also called the star of the set $A$ with respect to the collection $\mathcal{U}$. If $A=\{ x \}$, we use the notation $\text{St}(x,\mathcal{U})$ instead of $\text{St}( \{ x \},\mathcal{U})$. The following is the well known result in question. Lemma 1 Let $X$ be a space. For any open cover $\mathcal{U}$ of $X$, there exists a discrete subspace $A$ of $X$ such that $X=\text{St}(A,\mathcal{U})$. Furthermore, the set $A$ can be chosen in such a way that it is also a closed subset of the space $X$. Any space that satisfies the condition in Lemma 1 is said to be a star discrete space. The proof shown below will work for any topological space. Hence every space is star discrete. We come across three references in which the lemma is stated or is used – Lemma IV.2.20 in page 135 of [3], page 137 of [2] and [1]. The first two references do not use the term star discrete. Star discrete is mentioned in [1] since that paper focuses on star properties. This property that is present in every topological space is at heart a covering property. Here’s a rewording of the lemma that makes it look like a covering property. Lemma 1a Let $X$ be a space. For any open cover $\mathcal{U}$ of $X$, there exists a discrete subspace $A$ of $X$ such that $\{ \text{St}(x,\mathcal{U}): x \in A \}$ is a cover of $X$. Furthermore, the set $A$ can be chosen in such a way that it is also a closed subset of the space $X$. Lemma 1a is clearly identical to Lemma 1. However, Lemma 1a makes it extra clear that this is a covering property. For every open cover of a space, instead of finding a sub cover or an open refinement, we find a discrete subspace so that the stars of the points of the discrete subspace with respect to the given open cover also cover the space. Lemma 1a naturally leads to other star covering properties. For example, a space $X$ is said to be a star countable space if for any open cover $\mathcal{U}$ of $X$, there exists a countable subspace $A$ of $X$ such that $\{ \text{St}(x,\mathcal{U}): x \in A \}$ is a cover of $X$. A space $X$ is said to be a star Lindelof space if for any open cover $\mathcal{U}$ of $X$, there exists a Lindelof subspace $A$ of $X$ such that $\{ \text{St}(x,\mathcal{U}): x \in A \}$ is a cover of $X$. In general, for any topological property $\mathcal{P}$, a space $X$ is a star $\mathcal{P}$ space if for any open cover $\mathcal{U}$ of $X$, there exists a subspace $A$ of $X$ with property $\mathcal{P}$ such that $\{ \text{St}(x,\mathcal{U}): x \in A \}$ is a cover of $X$. It follows that every Lindelof space is a star countable space. It is also clear that every star countable space is a star Lindelof space. Lemma 1 or Lemma 1a, at first glance, may seem like a surprising result. However, one can argue that it is not a strong result at all since the property is possessed by every space. Indeed, the lemma has nothing to say about the size of the discrete set. It only says that there exists a star cover based on a discrete set for a given open cover. To derive more information about the given space, we may need to work with more information on the space in question. Consider spaces such that every discrete subspace is countable (such a space is said to have countable spread or a space of countable spread). Also consider spaces such that every closed and discrete subspace is countable (such a space is said to have countable extent or a space of countable extent). Any space that has countable spread is also a space that has countable extent for the simple reason that if every discrete subspace is countable, then every closed and discrete subspace is countable. Then it follows from Lemma 1 that any space $X$ that has countable extent is star countable. Any star countable space is obviously a star Lindelof space. The following diagram displays these relationships. According to the diagram, the star countable and star Lindelof are both downstream from the countable spread property and the Lindelof property. The star properties being downstream from the Lindelof property is not surprising. What is interesting is that if a space has countable spread, then it is star countable and hence star Lindelof. Do “countable spread” and “Lindelof” relate to each other? Lindelof spaces do not have to have countable spread. The simplest example is the one-point compactification of an uncountable discrete space. More specifically, let $X$ be an uncountable discrete space. Let $p$ be a point not in $X$. Then $Y=X \cup \{ p \}$ is a compact space (hence Lindelof) where $X$ is discrete and an open neighborhood of $p$ is of the form $\{ p \} \cup U$ where $X-U$ is a finite subset of $X$. The space $Y$ is not of countable spread since $X$ is an uncountable discrete subspace. Does “countable spread” imply “Lindelof”? Is there a non-Lindelof space that has countable spread? It turns out that the answers are independent of ZFC. The next post has more details. We now give a proof to Lemma 1. Suppose that $X$ is an infinite space (if it is finite, the lemma is true since the space is Hausdorff). Let $\kappa=\lvert X \lvert$. Let $\kappa^+$ be the next cardinal greater than $\kappa$. Let $\mathcal{U}$ be an open cover of the space $X$. Choose $x_0 \in X$. We choose a sequence of points $x_0,x_1,\cdots,x_\alpha,\cdots$ inductively. If $\text{St}(\{x_\beta: \beta<\alpha \},\mathcal{U}) \ne X$, we can choose a point $x_\alpha \in X$ such that $x_\alpha \notin \text{St}(\{x_\beta: \beta<\alpha \},\mathcal{U})$. We claim that the induction process must stop at some $\alpha<\kappa^+$. In other words, at some $\alpha<\kappa^+$, the star of the previous points must be the entire space and we run out of points to choose. Otherwise, we would have obtained a subset of $X$ with cardinality $\kappa^+$, a contradiction. Choose the least $\alpha<\kappa^+$ such that $\text{St}(\{x_\beta: \beta<\alpha \},\mathcal{U}) = X$. Let $A=\{x_\beta: \beta<\alpha \}$. Then it can be verified that the set $A$ is a discrete subspace of $X$ and that $A$ is a closed subset of $X$. Note that $x_\beta \in \text{St}(x_\beta, \mathcal{U})$ while $x_\gamma \notin \text{St}(x_\beta, \mathcal{U})$ for all $\gamma \ne \beta$. This follows from the way the points are chosen in the induction process. On the other hand, for any $x \in X-A$, $x \in \text{St}(x_\beta, \mathcal{U})$ for some $\beta<\alpha$. As discussed, the open set $\text{St}(x_\beta, \mathcal{U})$ contains only one point of $A$, namely $x_\beta$. Reference 1. Alas O., Jumqueira L., van Mill J., Tkachuk V., Wilson R.On the extent of star countable spaces, Cent. Eur. J. Math., 9 (3), 603-615, 2011. 2. Alster, K., Pol, R.,On function spaces of compact subspaces of $\Sigma$-products of the real line, Fund. Math., 107, 35-46, 1980. 3. Arkhangelskii, A. V.,Topological Function Spaces, Mathematics and Its Applications Series, Kluwer Academic Publishers, Dordrecht, 1992. $\text{ }$ $\text{ }$ $\text{ }$ Dan Ma math Daniel Ma mathematics $\copyright$ 2018 – Dan Ma Michael line and Morita’s conjectures This post discusses Michael line from the point of view of the three conjectures of Kiiti Morita. K. Morita defined the notion of P-spaces in [7]. The definition of P-spaces is discussed here in considerable details. K. Morita also proved that a space $X$ is a normal P-space if and only if the product $X \times Y$ is normal for every metrizable space $Y$. As a result of this characterization, the notion of normal P-space (a space that is a normal space and a P-space) is useful in the study of products of normal spaces. Just to be clear, we say a space is a non-normal P-space (i.e. a space that is not a normal P-space) if the space is a normal space that is not a P-space. K. Morita formulated his three conjectures in 1976. The statements of the conjectures are given below. Here is a basic discussion of the three conjectures. The notion of normal P-spaces is a theme that runs through the three conjectures. The conjectures are actually theorems since 2001 [2]. Here’s where Michael line comes into the discussion. Based on the characterization of normal P-spaces mentioned above, to find a normal space that is not a P-space (a non-normal P-space), we would need to find a non-normal product $X \times Y$ such that one of the factors is a metric space and the other factor is a normal space. The first such example in ZFC is from an article by E. Michael in 1963 (found here and here). In this example, the normal space is $M$, which came be known as the Michael line, and the metric space is $\mathbb{P}$, the space of irrational numbers (as a subspace of the real line). Their product $M \times \mathbb{P}$ is not normal. A basic discussion of the Michael line is found here. Because $M \times \mathbb{P}$ is not normal, the Michael line $M$ is not a normal P-space. Prior to E. Michael’s 1963 article, we have to reach back to 1955 to find an example of a non-normal product where one factor is a metric space. In 1955, M. E. Rudin used a Souslin line to construct a Dowker space, which is a normal space whose product with the closed unit interval is not normal. The existence of a Souslin line was shown to be independent of ZFC in the late 1960s. In 1971, Rudin constructed a Dowker space in ZFC. Thus finding a normal space that is not a normal P-space (finding a non-normal product $X \times Y$ where one factor is a metric space and the other factor is a normal space) is not a trivial matter. Morita’s Three Conjectures We show that the Michael line illustrates perfectly the three conjectures of K. Morita. Here’s the statements. Morita’s Conjecture I. Let $X$ be a space. If the product $X \times Y$ is normal for every normal space $Y$ then $X$ is a discrete space. Morita’s Conjecture II. Let $X$ be a space. If the product $X \times Y$ is normal for every normal P-space $Y$ then $X$ is a metrizable space. Morita’s Conjecture III. Let $X$ be a space. If the product $X \times Y$ is normal for every normal countably paracompact space $Y$ then $X$ is a metrizable $\sigma$-locally compact space. The contrapositive statement of Morita’s conjecture I is that for any non-discrete space $X$, there exists a normal space $Y$ such that $X \times Y$ is not normal. Thus any non-discrete space is paired with a normal space for forming a non-normal product. The Michael line $M$ is paired with the space of irrational numbers $\mathbb{P}$. Obviously, the space $\mathbb{P}$ is paired with the Michael line $M$. The contrapositive statement of Morita’s conjecture II is that for any non-metrizable space $X$, there exists a normal P-space $Y$ such that $X \times Y$ is not normal. The pairing is more specific than for conjecture I. Any non-metrizable space is paired with a normal P-space to form a non-normal product. As illustration, the Michael line $M$ is not metrizable. The space $\mathbb{P}$ of irrational numbers is a metric space and hence a normal P-space. Here, $M$ is paired with $\mathbb{P}$ to form a non-normal product. The contrapositive statement of Morita’s conjecture III is that for any space $X$ that is not both metrizable and $\sigma$-locally compact, there exists a normal countably paracompact space $Y$ such that $X \times Y$ is not normal. Note that the space $\mathbb{P}$ is not $\sigma$-locally compact (see Theorem 4 here). The Michael line $M$ is paracompact and hence normal and countably paracompact. Thus the metric non-$\sigma$-locally compact $\mathbb{P}$ is paired with normal countably paracompact $M$ to form a non-normal product. Here, the metric space $\mathbb{P}$ is paired with the non-normal P-space $M$. In each conjecture, each space in a certain class of spaces is paired with one space in another class to form a non-normal product. For Morita’s conjecture I, each non-discrete space is paired with a normal space. For conjecture II, each non-metrizable space is paired with a normal P-space. For conjecture III, each metrizable but non-$\sigma$-locally compact is paired with a normal countably paracompact space to form a non-normal product. Note that the paired normal countably paracompact space would be a non-normal P-space. Michael line as an example of a non-normal P-space is a great tool to help us walk through the three conjectures of Morita. Are there other examples of non-normal P-spaces? Dowker spaces mentioned above (normal spaces whose products with the closed unit interval are not normal) are non-normal P-spaces. Note that conjecture II guarantees a normal P-space to match every non-metric space for forming a non-normal product. Conjecture III guarantees a non-normal P-space to match every metrizable non-$\sigma$-locally compact space for forming a non-normal product. Based on the conjectures, examples of normal P-spaces and non-normal P-spaces, though may be hard to find, are guaranteed to exist. We give more examples below to further illustrate the pairings for conjecture II and conjecture III. As indicated above, non-normal P-spaces are hard to come by. Some of the examples below are constructed using additional axioms beyond ZFC. The additional examples still give an impression that the availability of non-normal P-spaces, though guaranteed to exist, is limited. Examples of Normal P-Spaces One example is based on this classic theorem: for any normal space $X$, $X$ is paracompact if and only if the product $X \times \beta X$ is normal. Here $\beta X$ is the Stone-Cech compactification of the completely regular space $X$. Thus any normal but not paracompact space $X$ (a non-metrizable space) is paired with $\beta X$, a normal P-space, to form a non-normal product. Naturally, the next class of non-metrizable spaces to be discussed should be the paracompact spaces that are not metrizable. If there is a readily available theorem to provide a normal P-space for each non-metrizable paracompact space, then there would be a simple proof of Morita’s conjecture II. The eventual solution of conjecture II is far from simple [2]. We narrow the focus to the non-metrizable compact spaces. Consider this well known result: for any infinite compact space $X$, the product $\omega_1 \times X$ is normal if and only if the space $X$ has countable tightness (see Theorem 1 here). Thus any compact space with uncountable tightness is paired with $\omega_1$, the space of all countable ordinals, to form a non-normal product. The space $\omega_1$, being a countably compact space, is a normal P-space. A proof that normal countably compact space is a normal P-space is given here. We now handle the case for non-metrizable compact spaces with countable tightness. In this case, compactness is not needed. For spaces with countable tightness, consider this result: every space with countable tightness, whose products with all perfectly normal spaces are normal, must be metrizable [3] (see Corollary 7). Thus any non-metrizable space with countable tightness is paired with some perfectly normal space to form a non-normal product. Any reader interested in what these perfectly normal spaces are can consult [3]. Note that perfectly normal spaces are normal P-spaces (see here for a proof). Examples of Non-Normal P-Spaces Another non-normal product is $X_B \times B$ where $B \subset \mathbb{R}$ is a Bernstein set and $X_B$ is the space with the real line as the underlying set such that points in $B$ are isolated and points in $\mathbb{R}-B$ retain the usual open sets. The set $B \subset \mathbb{R}$ is said to be a Bernstein set if every uncountable closed subset of the real line contains a point in B and contains a point in the complement of B. Such a set can be constructed using transfinite induction as shown here. The product $X_B \times B$ is not normal where $B$ is considered a subspace of the real line. The proof is essentially the same proof that shows $M \times \mathbb{P}$ is not normal (see here). The space $X_B$ is a Lindelof space. It is not a normal P-space since its product with $B$, a separable metric space, is not normal. However, this example is essentially the same example as the Michael line since the same technique and proof are used. On the one hand, the $X_B \times B$ example seems like an improvement over Michael line example since the first factor $X_B$ is Lindelof. On the other hand, it is inferior than the Michael line example since the second factor $B$ is not completely metrizable. Moving away from the idea of Michael, there exist a Lindelof space and a completely metrizable (but not separable) space whose product is of weight $\omega_1$ and is not normal [5]. This would be a Lindelof space that is a non-normal P-space. However, this example is not as elementary as the Michael line, making it not as effective as an illustration of Morita’s three conjectures. The next set of non-normal P-spaces requires set theory. A Michael space is a Lindelof space whose product with $\mathbb{P}$, the space of irrational numbers, is not normal. Michael problem is the question: is there a Michael space in ZFC? It is known that a Michael space can be constructed using continuum hypothesis [6] or using Martin’s axiom [1]. The construction using continuum hypothesis has been discussed in this blog (see here). The question of whether there exists a Michael space in ZFC is still unsolved. The existence of a Michael space is equivalent to the existence of a Lindelof space and a separable completely metrizable space whose product is non-normal [4]. A Michael space, in the context of the discussion in this post, is a non-normal P-space. The discussion in this post shows that the example of the Michael line and other examples of non-normal P-spaces are useful tools to illustrate Morita’s three conjectures. Reference 1. Alster K.,On the product of a Lindelof space and the space of irrationals under Martin’s Axiom, Proc. Amer. Math. Soc., Vol. 110, 543-547, 1990. 2. Balogh Z.,Normality of product spaces and Morita’s conjectures, Topology Appl., Vol. 115, 333-341, 2001. 3. Chiba K., Przymusinski T., Rudin M. E.Nonshrinking open covers and K. Morita’s duality conjectures, Topology Appl., Vol. 22, 19-32, 1986. 4. Lawrence L. B., The influence of a small cardinal on the product of a Lindelof space and the irrationals, Proc. Amer. Math. Soc., 110, 535-542, 1990. 5. Lawrence L. B., A ZFC Example (of Minimum Weight) of a Lindelof Space and a Completely Metrizable Space with a Nonnormal Product, Proc. Amer. Math. Soc., 124, No 2, 627-632, 1996. 6. Michael E., Paracompactness and the Lindelof property in nite and countable cartesian products, Compositio Math., 23, 199-214, 1971. 7. Morita K., Products of Normal Spaces with Metric Spaces, Math. Ann., Vol. 154, 365-382, 1964. 8. Rudin M. E., A Normal Space $X$ for which $X \times I$ is not Normal, Fund. Math., 73, 179-186, 1971. $\text{ }$ $\text{ }$ $\text{ }$ Dan Ma math Daniel Ma mathematics $\copyright$ 2018 – Dan Ma Three conjectures of K Morita This post discusses the three conjectures that were proposed by K. Morita in 1976. These conjectures concern normality in product spaces. To start the discussion, here’s the conjectures. Morita’s Conjecture I. Let $X$ be a space. The product $X \times Y$ is normal for every normal space $Y$ if and only if $X$ is a discrete space. Morita’s Conjecture II. Let $X$ be a space. The product $X \times Y$ is normal for every normal P-space $Y$ if and only if $X$ is a metrizable space. Morita’s Conjecture III. Let $X$ be a space. The product $X \times Y$ is normal for every normal countably paracompact space $Y$ if and only if $X$ is a metrizable $\sigma$-locally compact space. These statements are no longer conjectures. Partial results appeared after the conjectures were proposed in 1976. The complete resolution of the conjectures came in 2001 in a paper by Zoli Balogh [5]. Though it is more appropriate to call these statements theorems, it is still convenient to call them conjectures. Just know that they are now known results rather open problems to be solved. The focus here is not on the evolution of the solutions. Instead, we discuss the relations among the three conjectures and why they are amazing results in the study of normality in product spaces. As discussed below, in each of these conjectures, one direction is true based on prior known theorems (see Theorem 1, Theorem 2 and Theorem 4 below). The conjectures can be stated as follows. Morita’s Conjecture I. Let $X$ be a space. If the product $X \times Y$ is normal for every normal space $Y$ then $X$ is a discrete space. Morita’s Conjecture II. Let $X$ be a space. If the product $X \times Y$ is normal for every normal P-space $Y$ then $X$ is a metrizable space. Morita’s Conjecture III. Let $X$ be a space. If the product $X \times Y$ is normal for every normal countably paracompact space $Y$ then $X$ is a metrizable $\sigma$-locally compact space. P-spaces are defined by K. Morita [11]. He proved that a space $X$ is a normal P-space if and only if the product $X \times Y$ is normal for every metrizable space $Y$ (see theorem 2 below). Normal P-spaces are also discussed here. A space $X$ is $\sigma$-locally compact space if $X$ is the union of countably many locally compact subspaces each of which is also closed subspace of $X$. As we will see below, these conjectures are also called duality conjectures because they are duals of known results. [2] is a survey of Morita’s conjecture. Duality Conjectures Here’s three theorems that are duals to the conjectures. Theorem 1 Let $X$ be a space. The product space $X \times Y$ is normal for every discrete space $Y$ if and only if $X$ is normal. Theorem 2 Let $X$ be a space. The product space $X \times Y$ is normal for every metrizable space $Y$ if and only if $X$ is a normal P-space. Theorem 3 Let $X$ be a space. The product space $X \times Y$ is normal for every metrizable $\sigma$-locally compact space $Y$ if and only if $X$ is normal countably paracompact. The key words in red are for emphasis. In each of these three theorems, if we switch the two key words in red, we would obtain the statements for the conjectures. In this sense, the conjectures are called duality conjectures since they are duals of known results. Theorem 1 is actually not found in the literature. It is an easy theorem. Theorem 2, found in [11], is a characterization of normal P-space (discussed here). Theorem 3 is a well known result based on the following theorem by K. Morita [10]. Theorem 4 Let $Y$ be a metrizable space. Then the product $X \times Y$ is normal for every normal countably paracompact space $X$ if and only if $Y$ is a $\sigma$-locally compact space. We now show that Theorem 3 can be established using Theorem 4. Theorem 4 is also Theorem 3.5 in p. 111 of [2]. A proof of Theorem 4 is found in Theorem 1.8 in p. 130 of [8]. Proof of Theorem 3 $\Longleftarrow$ Suppose $X$ is normal and countably paracompact. Let $Y$ be a metrizable $\sigma$-locally compact space. By Theorem 4, $X \times Y$ is normal. $\Longrightarrow$ This direction uses Dowker’s theorem. We give a contrapositive proof. Suppose that $X$ is not both normal and countably paracompact. Case 1. $X$ is not normal. Then $X \times \{ y \}$ is not normal where $\{ y \}$ is any one-point discrete space. Case 2. $X$ is normal and not countably paracompact. This means that $X$ is a Dowker space. Then $X \times [0,1]$ is not normal. In either case, $X \times Y$ is not normal for some compact metric space. Thus $X \times Y$ is not normal for some $\sigma$-locally compact metric space. This completes the proof of Theorem 3. $\square$ The First and Third Conjectures The first conjecture of Morita was proved by Atsuji [1] and Rudin [13] in 1978. The proof in [13] is a constructive proof. The key to that solution is to define a $\kappa$-Dowker space. Suppose $X$ is a non-discrete space. Let $\kappa$ be the least cardinal of a non-discrete subspace of $X$. Then construct a $\kappa$-Dowker space $Y$ as in [13]. It follows that $X \times Y$ is not normal. The proof that $X \times Y$ is not normal is discussed here. Conjecture III was confirmed by Balogh in 1998 [4]. We show here that the first and third conjectures of Morita can be confirmed by assuming the second conjecture. Conjecture II implies Conjecture I We give a contrapositive proof of Conjecture I. Suppose that $X$ is not discrete. We wish to find a normal space $Y$ such that $X \times Y$ is not normal. Consider two cases for $X$. Case 1. $X$ is not metrizable. By Conjecture II, $X \times Y$ is not normal for some normal P-space $Y$. Case 2. $X$ is metrizable. Since $X$ is infinite and metric, $X$ would contain an infinite compact metric space $S$. For example, $X$ contains a non-trivial convergent sequence and let $S$ be a convergence sequence plus the limit point. Let $Y$ be a Dowker space. Then the product $S \times Y$ is not normal. It follows that $X \times Y$ is not normal. Thus there exists a normal space $Y$ such that $X \times Y$ is not normal in either case. $\square$ Conjecture II implies Conjecture III Suppose that the product $X \times Y$ is normal for every normal and countably paracompact space $Y$. Since any normal P-space is a normal countably paracompact space, $X \times Y$ is normal for every normal and P-space $Y$. By Conjecture II, $X$ is metrizable. By Theorem 4, $X$ is $\sigma$-locally compact. $\square$ The Second Conjecture The above discussion shows that a complete solution to the three conjectures hinges on the resolution of the second conjecture. A partial resolution came in 1986 [6]. In that paper, it was shown that under V = L, conjecture II is true. The complete solution of the second conjecture is given in a paper of Balogh [5] in 2001. The path to Balogh’s proof is through a conjecture of M. E. Rudin identified as Conjecture 9. Rudin’s Conjecture 9. There exists a normal P-space $X$ such that some uncountable increasing open cover of $X$ cannot be shrunk. Conjecture 9 was part of a set of 14 conjectures stated in [14]. It is also discussed in [7]. In [6], conjecture 9 was shown to be equivalent to Morita’s second conjecture. In [5], Balogh used his technique for constructing a Dowker space of cardinality continuum to obtain a space as described in conjecture 9. The resolution of conjecture II is considered to be one of Balogh greatest hits [3]. Abundance of Non-Normal Products One immediate observation from Morita’s conjecture I is that existence of non-normal products is wide spread. Conjecture I indicates that every normal non-discrete space $X$ is paired with some normal space $Y$ such that their product is not normal. So every normal non-discrete space forms a non-normal product with some normal space. Given any normal non-discrete space (no matter how nice it is or how exotic it is), it can always be paired with another normal space (sometimes paired with itself) for a non-normal product. Suppose we narrow the focus to spaces that are normal and non-metrizable. Then any such space $X$ is paired with some normal P-space $Y$ to form a non-normal product space (Morita’s conjecture II). By narrowing the focus on $X$ to the non-metrizable spaces, we obtain more clarity on the paired space to form non-normal product, namely a normal P-space. As an example, let $X$ be the Michael line (normal and non-metrizable). It is well known that $X$ in this case is paired with $\mathbb{P}$, the space of irrational numbers with the usual Euclidean topology, to form a non-normal product (discussed here). Another example is $X$ being the Sorgenfrey line. It is well known that $X$ in this case is paired with itself to form a non-normal product (discussed here). Morita’s conjectures are powerful indication that these two non-normal products are not isolated phenomena. Another interesting observation about conjecture II is that normal P-spaces are not productive with respect to normality. More specifically, for any non-metrizable normal P-space $X$, conjecture II tells us that there exists another normal P-space $Y$ such that $X \times Y$ is not normal. Now we narrow the focus to spaces that are metrizable but not $\sigma$-locally compact. For any such space $X$, conjecture III tells us that $X$ is paired with a normal countably paracompact space $Y$ to form a non-normal product. Using the Michael line example, this time let $X=\mathbb{P}$, the space of irrational numbers, which is a metric space that is not $\sigma$-locally compact. The paired normal and countably paracompact space $Y$ is the Michael line. Each conjecture is about existence of a normal $Y$ that is paired with a given $X$ to form a non-normal product. For Conjecture I, the given $X$ is from a wide class (normal non-discrete). As a result, there is not much specific information on the paired $Y$, other than that it is normal. For Conjectures II and III, the given space $X$ is from narrower classes. As a result, there is more information on the paired $Y$. The concept of Dowker spaces runs through the three conjectures, especially the first conjecture. Dowker spaces and $\kappa$-Dowker spaces provide reliable pairing for non-normal products. In fact this is one way to prove conjecture I [13], also see here. For any normal space $X$ with a countable non-discrete subspace, the product of $X$ and any Dowker space is not normal (discussed here). For any normal space $X$ such that the least cardinality of a non-discrete subspace is an uncountable cardinal $\kappa$, the product $X \times Y$ is not normal where $Y$ is a $\kappa$-Dowker space as constructed in [13], also discussed here. In finding a normal pair $Y$ for a normal space $X$, if we do not care about $Y$ having a high degree of normal productiveness (e.g. normal P or normal countably paracompact), we can always let $Y$ be a Dowker space or $\kappa$-Dowker space. In fact, if the starting space $X$ is a metric space, the normal pair for a non-normal product (by definition) has to be a Dowker space. For example, if $X=[0,1]$, then the normal space $Y$ such that $X \times Y$ is by definition a Dowker space. The search for a Dowker space spanned a period of 20 years. For the real line $\mathbb{R}$, the normal pair for a non-normal product is also a Dowker space. For “nice” spaces such as metric spaces, finding a normal space to form non-normal product is no trivial problem. Reference 1. Atsuji M.,On normality of the product of two spaces, General Topology and Its Relation to Modern Analysis and Algebra (Proc. Fourth Prague Topology sympos., 1976), Part B, 25–27, 1977. 2. Atsuji M.,Normality of product spaces I, in: K. Morita, J. Nagata (Eds.), Topics in General Topology, North-Holland, Amsterdam, 81–116, 1989. 3. Burke D., Gruenhage G.,Zoli, Top. Proc., Vol. 27, No 1, i-xxii, 2003. 4. Balogh Z.,Normality of product spaces and K. Morita’s third conjecture, Topology Appl., Vol. 84, 185-198, 1998. 5. Balogh Z.,Normality of product spaces and Morita’s conjectures, Topology Appl., Vol. 115, 333-341, 2001. 6. Chiba K., Przymusinski T., Rudin M. E.Nonshrinking open covers and K. Morita’s duality conjectures, Topology Appl., Vol. 22, 19-32, 1986. 7. Gruenhage G.,Mary Ellen’s Conjectures,, Special Issue honoring the memory of Mary Ellen Rudin, Topology Appl., Vol. 195, 15-25, 2015. 8. Hoshina T.,Normality of product spaces II, in: K. Morita, J. Nagata (Eds.), Topics in General Topology, North-Holland, Amsterdam, 121–158, 1989. 9. Morita K., On the Product of a Normal Space with a Metric Space, Proc. Japan Acad., Vol. 39, 148-150, 1963. (article information; paper) 10. Morita K., Products of Normal Spaces with Metric Spaces II, Sci. Rep. Tokyo Kyoiku Dagaiku Sec A, 8, 87-92, 1963. 11. Morita K., Products of Normal Spaces with Metric Spaces, Math. Ann., Vol. 154, 365-382, 1964. 12. Morita K., Nagata J., Topics in General Topology, Elsevier Science Publishers, B. V., The Netherlands, 1989. 13. Rudin M. E., $\kappa$-Dowker Spaces, Czechoslovak Mathematical Journal, 28, No.2, 324-326, 1978. 14. Rudin M. E., Some conjectures, in: Open Problems in Topology, J. van Mill and G.M. Reed, eds., North Holland, 184–193, 1990. 15. Telgárski R., A characterization of P-spaces, Proc. Japan Acad., Vol. 51, 802–807, 1975. $\text{ }$ $\text{ }$ $\text{ }$ Dan Ma math Daniel Ma mathematics $\copyright$ 2018 – Dan Ma Morita’s normal P-space In this post we discuss K. Morita’s notion of P-space, which is a useful and interesting concept in the study of normality of product spaces. The Definition In [1] and [2], Morita defined the notion of P-spaces. First some notations. Let $\kappa$ be a cardinal number such that $\kappa \ge 1$. Conveniently, $\kappa$ is identified by the set of all ordinals preceding $\kappa$. Let $\Gamma$ be the set of all finite sequences $(\alpha_1,\alpha_2,\cdots,\alpha_n)$ where $n=1,2,\cdots$ and all $\alpha_i < \kappa$. Let $X$ be a space. The collection $\left\{A_\sigma \subset X: \sigma \in \Gamma \right\}$ is said to be decreasing if this condition holds: for any $\sigma \in \Gamma$ and $\delta \in \Gamma$ with $\sigma =(\alpha_1,\alpha_2,\cdots,\alpha_n)$ $\delta =(\beta_1,\beta_2,\cdots,\beta_n, \cdots, \beta_m)$ such that $n and such that $\alpha_i=\beta_i$ for all $i \le n$, we have $A_{\delta} \subset A_{\sigma}$. On the other hand, the collection $\left\{A_\sigma \subset X: \sigma \in \Gamma \right\}$ is said to be increasing if for any $\sigma \in \Gamma$ and $\delta \in \Gamma$ as described above, we have $A_{\sigma} \subset A_{\delta}$. The space $X$ is a P-space if for any cardinal $\kappa \ge 1$ and for any decreasing collection $\left\{F_\sigma \subset X: \sigma \in \Gamma \right\}$ of closed subsets of $X$, there exists open set $U_\sigma$ for each $\sigma \in \Gamma$ with $F_\sigma \subset U_\sigma$ such that for any countably infinite sequence $(\alpha_1,\alpha_2,\cdots,\alpha_n,\cdots)$ where each finite subsequence $\sigma_n=(\alpha_1,\alpha_2,\cdots,\alpha_n)$ is an element of $\Gamma$, if $\bigcap_{n=1}^\infty F_{\sigma_n}=\varnothing$, then $\bigcap_{n=1}^\infty U_{\sigma_n}=\varnothing$. By switching closed sets and open sets and by switching decreasing collection and increasing collection, the following is an alternative but equivalent definition of P-spaces. The space $X$ is a P-space if for any cardinal $\kappa \ge 1$ and for any increasing collection $\left\{U_\sigma \subset X: \sigma \in \Gamma \right\}$ of open subsets of $X$, there exists closed set $F_\sigma$ for each $\sigma \in \Gamma$ with $F_\sigma \subset U_\sigma$ such that for any countably infinite sequence $(\alpha_1,\alpha_2,\cdots,\alpha_n,\cdots)$ where each finite subsequence $\sigma_n=(\alpha_1,\alpha_2,\cdots,\alpha_n)$ is an element of $\Gamma$, if $\bigcup_{n=1}^\infty U_{\sigma_n}=X$, then $\bigcup_{n=1}^\infty F_{\sigma_n}=X$. Note that the definition is per cardinal number $\kappa \ge 1$. To bring out more precision, we say a space $X$ is a P($\kappa$)-space of it satisfies the definition for P-space for the cardinal $\kappa$. Of course if a space is a P($\kappa$)-space for all $\kappa \ge 1$, then it is a P-space. There is also a game characterization of P-spaces [4]. A Specific Case It is instructive to examine a specific case of the definition. Let $\kappa=1=\{ 0 \}$. In other words, let’s look what what a P(1)-space looks like. The elements of the index set $\Gamma$ are simply finite sequences of 0’s. The relevant information about an element of $\Gamma$ is its length (i.e. a positive integer). Thus the closed sets $F_\sigma$ in the definition are essentially indexed by integers. For the case of $\kappa=1$, the definition can be stated as follows: For any decreasing sequence $F_1 \supset F_2 \supset F_3 \cdots$ of closed subsets of $X$, there exist $U_1,U_2,U_3,\cdots$, open subsets of $X$, such that $F_n \subset U_n$ for all $n$ and such that if $\bigcap_{n=1}^\infty F_n=\varnothing$ then $\bigcap_{n=1}^\infty U_n=\varnothing$. The above condition implies the following condition. For any decreasing sequence $F_1 \supset F_2 \supset F_3 \cdots$ of closed subsets of $X$ such that $\bigcap_{n=1}^\infty F_n=\varnothing$, there exist $U_1,U_2,U_3,\cdots$, open subsets of $X$, such that $F_n \subset U_n$ for all $n$ and such that $\bigcap_{n=1}^\infty U_n=\varnothing$. The last condition is one of the conditions in Dowker’s Theorem (condition 6 in Theorem 1 in this post and condition 7 in Theorem 1 in this post). Recall that Dowker’s theorem states that a normal space $X$ is countably paracompact if and only if the last condition holds if and only of the product $X \times Y$ is normal for every infinite compact metric space $Y$. Thus if a normal space $X$ is a P(1)-space, it is countably paracompact. More importantly P(1) space is about normality in product spaces where one factor is a class of metric spaces, namely the compact metric spaces. Based on the above discussion, any normal space $X$ that is a P-space is a normal countably paracompact space. The definition for P(1)-space is identical to one combinatorial condition in Dowker’s theorem which says that any decreasing sequence of closed sets with empty intersection has an open expansion that also has empty intersection. For P($\kappa$)-space where $\kappa>1$, the decreasing family of closed sets are no longer indexed by the integers. Instead the decreasing closed sets are indexed by finite sequences of elements of $\kappa$. The index set $\Gamma$ would be more like a tree structure. However the look and feel of P-space is like the combinatorial condition in Dowker’s theorem. The decreasing closed sets are expanded by open sets. For any “path in the tree” (an infinite sequence of elements of $\kappa$), if the closed sets along the path has empty intersection, then the corresponding open sets would have empty intersection. Not surprisingly, the notion of P-spaces is about normality in product spaces where one factor is a metric space. In fact, this is precisely the characterization of P-spaces (see Theorem 1 and Theorem 2 below). A Characterization of P-Space Morita gave the following characterization of P-spaces among normal spaces. The following theorems are found in [2]. Theorem 1 Let $X$ be a space. The space $X$ is a normal P-space if and only if the product space $X \times Y$ is normal for every metrizable space $Y$. Thus the combinatorial definition involving decreasing families of closed sets being expanded by open sets is equivalent to a statement that is much easier to understand. A space that is normal and a P-space is precisely a normal space that is productively normal with every metric space. The following theorem is Theorem 1 broken out for each cardinal $\kappa$. Theorem 2 Let $X$ be a space and let $\kappa \ge \omega$. Then $X$ is a normal P($\kappa$)-space if and only if the product space $X \times Y$ is normal for every metric space $Y$ of weight $\kappa$. Theorem 2 only covers the infinite cardinals $\kappa$ starting with the countably infinite cardinal. Where are the P($n$)-spaces placed where $n$ are the positive integers? The following theorem gives the answer. Theorem 3 Let $X$ be a space. Then $X$ is a normal P(2)-space if and only if the product space $X \times Y$ is normal for every separable metric space $Y$. According to Theorem 2, $X$ is a normal P($\omega$)-space if and only if the product space $X \times Y$ is normal for every separable metric space $Y$. Thus suggests that any P(2)-space is a P($\omega$)-space. It seems to say that P(2) is identical to P($\kappa$) where $\kappa$ is the countably infinite cardinal. The following theorem captures the idea. Theorem 4 Let $\kappa$ be the positive integers $2,3,4,\cdots$ or $\kappa=\omega$, the countably infinite cardinal. Let $X$ be a space. Then $X$ is a P(2)-space if and only if $X$ is a P($\kappa$)-space. To give a context for Theorem 4, note that if $X$ is a P($\kappa$)-space, then $X$ is a P($\tau$)-space for any cardinal $\tau$ less than $\kappa$. Thus if $X$ is a P(3)-space, then it is a P(2)-space and also a P(1)-space. In the definition of P($\kappa$)-space, the index set $\Gamma$ is the set of all finite sequences of elements of $\kappa$. If the definition for P($\kappa$)-space holds, it would also hold for the index set consisting of finite sequences of elements of $\tau$ where $\tau<\kappa$. Thus if the definition for P($\omega$)-space holds, it would hold for P($n$)-space for all integers $n$. Theorem 4 says that when the definition of P(2)-space holds, the definition would hold for all larger cardinals up to $\omega$. In light of Theorem 1 and Dowker's theorem, we have the following corollary. If the product of a space $X$ with every metric space is normal, then the product of $X$ with every compact metric space is normal. Corollary 5 Let $X$ be a space. If $X$ is a normal P-space, then $X$ is a normal and countably paracompact space. Examples of Normal P-Space Here’s several classes of spaces that are normal P-spaces. • Metric spaces. • $\sigma$-compact spaces (link). • Paracompact locally compact spaces (link). • Paracompact $\sigma$-locally compact spaces (link). • Normal countably compact spaces (link). • $\Sigma$-product of real lines. Clearly any metric space is a normal P-space since the product of any two metric spaces is a metric space. Any compact space is a normal P-space since the product of a compact space and a paracompact space is paracompact, hence normal. For each of the classes of spaces listed above, the product with any metric space is normal. See the corresponding links for proofs of the key theorems. The $\Sigma$-product of real lines $\Sigma_{\alpha<\tau} \mathbb{R}$ is a normal P-space. For any metric space $Y$, the product $(\Sigma_{\alpha<\tau} \mathbb{R}) \times Y$ is a $\Sigma$-product of metric spaces. By a well known result, the $\Sigma$-product of metric spaces is normal. Examples of Non-Normal P-Spaces Paracompact $\sigma$-locally compact spaces are normal P-spaces since the product of such a space with any paracompact space is paracompact. However, the product of paracompact spaces in general is not normal. The product of Michael line (a hereditarily paracompact space) and the space of irrational numbers (a metric space) is not normal (discussed here). Thus the Michael line is not a normal P-space. More specifically the Michael line fails to be a normal P(2)-space. However, it is a normal P(1)-space (i.e. normal and countably paracompact space). The Michael line is obtained from the usual real line topology by making the irrational points isolated. Instead of using the irrational numbers, we can obtain a similar space by making points in a Bernstein set isolated. The resulting space $X$ is a Michael line-like space. The product of $X$ with the starting Bernstein set (a subset of the real line with the usual topology) is not normal. Thus this is another example of a normal space that is not a P(2)-space. See here for the details of how this space is constructed. To look for more examples, look for non-normal product $X \times Y$ where one factor is normal and the other is a metric space. More Examples Based on the characterization theorem of Morita, normal P-spaces are very productively normal. Normal P-spaces are well behaved when taking product with metrizable spaces. However, they are not well behaved when taking product with non-metrizable spaces. Let’s look at several examples. Consider the Sorgenfrey line. It is perfectly normal. Thus the product of the Sorgenfrey line with any metric space is also perfectly normal, hence normal. It is well known that the square of the Sorgenfrey line is not normal. The space $\omega_1$ of all countable ordinals is a normal and countably compact space, hence a normal P-space. However, the product of $\omega_1$ and some compact spaces are not normal. For example, $\omega_1 \times (\omega_1 +1)$ is not normal. Another example: $\omega_1 \times I^I$ is not normal where $I=[0,1]$. The idea here is that the product of $\omega_1$ and any compact space with uncountable tightness is not normal (see here). Compact spaces are normal P-spaces. As discussed in the preceding paragraph, the product of any compact space with uncountable tightness and the space $\omega_1$ is not normal. Even as nice a space as the unit interval $[0,1]$, it is not always productive. The product of $[0,1]$ with a Dowker space is not normal (see here). In general, normality is not preserved in the product space operation. the best we can ask for is that normal spaces be productively normal with respect to a narrow class of spaces. For normal P-spaces, that narrow class of spaces is the class of metric spaces. However, normal product is not a guarantee outside of the productive class in question. Reference 1. Morita K., On the Product of a Normal Space with a Metric Space, Proc. Japan Acad., Vol. 39, 148-150, 1963. (article information; paper) 2. Morita K., Products of Normal Spaces with Metric Spaces, Math. Ann., Vol. 154, 365-382, 1964. 3. Morita K., Nagata J., Topics in General Topology, Elsevier Science Publishers, B. V., The Netherlands, 1989. 4. Telgárski R., A characterization of P-spaces, Proc. Japan Acad., Vol. 51, 802–807, 1975. $\text{ }$ $\text{ }$ $\text{ }$ Dan Ma math Daniel Ma mathematics $\copyright$ 2018 – Dan Ma In between G-delta diagonal and submetrizable This post discusses the property of having a $G_\delta$-diagonal and related diagonal properties. The focus is on the diagonal properties in between $G_\delta$-diagonal and submetrizability. The discussion is followed by a diagram displaying the relative strengths of these properties. Some examples and questions are discussed. G-delta Diagonal In any space $Y$, a subset $A$ is said to be a $G_\delta$-set in the space $Y$ (or $A$ is a $G_\delta$-subset of $Y$) if $A$ is the intersection of countably many open subsets of $Y$. A subset $A$ of $Y$ is an $F_\sigma$-set in $Y$ (or $A$ is an $F_\sigma$-subset of $Y$) if $A$ is the union of countably closed subsets of the space $Y$. Of course, the set $A$ is a $G_\delta$-set if and only if $Y-A$, the complement of $A$, is an $F_\sigma$-set. The diagonal of the space $X$ is the set $\Delta=\{ (x,x): x \in X \}$, which is a subset of the square $X \times X$. When the set $\Delta$ is a $G_\delta$-set in the space $X \times X$, we say that the space $X$ has a $G_\delta$-diagonal. It is straightforward to verify that the space $X$ is a Hausdorff space if and only if the diagonal $\Delta$ is a closed subset of $X \times X$. As a result, if $X$ is a Hausdorff space such that $X \times X$ is perfectly normal, then the diagonal would be a closed set and thus a $G_\delta$-set. Such spaces, including metric spaces, would have a $G_\delta$-diagonal. Thus any metric space has a $G_\delta$-diagonal. A space $X$ is submetrizable if there is a metrizable topology that is weaker than the topology for $X$. Then the diagonal $\Delta$ would be a $G_\delta$-set with respect to the weaker metrizable topology of $X \times X$ and thus with respect to the orginal topology of $X$. This means that the class of spaces having $G_\delta$-diagonals also include the submetrizable spaces. As a result, Sorgenfrey line and Michael line have $G_\delta$-diagonals since the Euclidean topology are weaker than both topologies. A space having a $G_\delta$-diagonal is a simple topological property. Such spaces form a wide class of spaces containing many familiar spaces. According to the authors in [2], the property of having a $G_\delta$-diagonal is an important ingredient of submetrizability and metrizability. For example, any compact space with a $G_\delta$-diagonal is metrizable (see this blog post). Any paracompact or Lindelof space with a $G_\delta$-diagonal is submetrizable. Spaces with $G_\delta$-diagonals are also interesting in their own right. It is a property that had been research extensively. It is also a current research topic; see [7]. A Closer Look To make the discussion more interesting, let’s point out a few essential definitions and notations. Let $X$ be a space. Let $\mathcal{U}$ be a collection of subsets of $X$. Let $A \subset X$. The notation $St(A, \mathcal{U})$ refers to the set $St(A, \mathcal{U})=\cup \{U \in \mathcal{U}: A \cap U \ne \varnothing \}$. In other words, $St(A, \mathcal{U})$ is the union of all the sets in $\mathcal{U}$ that intersect the set $A$. The set $St(A, \mathcal{U})$ is also called the star of the set $A$ with respect to the collection $\mathcal{U}$. If $A=\{ x \}$, we write $St(x, \mathcal{U})$ instead of $St(\{ x \}, \mathcal{U})$. Then $St(x, \mathcal{U})$ refers to the union of all sets in $\mathcal{U}$ that contain the point $x$. The set $St(x, \mathcal{U})$ is then called the star of the point $x$ with respect to the collection $\mathcal{U}$. Note that the statement of $X$ having a $G_\delta$-diagonal is defined by a statement about the product $X \times X$. It is desirable to have a translation that is a statement about the space $X$. Theorem 1 Let $X$ be a space. Then the following statements are equivalent. 1. The space $X$ has a $G_\delta$-diagonal. 2. There exists a sequence $\mathcal{U}_0,\mathcal{U}_1,\mathcal{U}_2,\cdots$ of open covers of $X$ such that for each $x \in X$, $\{ x \}=\bigcap \{ St(x, \mathcal{U}_n): n=0,1,2,\cdots \}$. The sequence of open covers in condition 2 is called a $G_\delta$-diagonal sequence for the space $X$. According to condition 2, at any given point, the stars of the point with respect to the open covers in the sequence collapse to the given point. One advantage of a $G_\delta$-diagonal sequence is that it is entirely about points of the space $X$. Thus we can work with such sequences of open covers of $X$ instead of the $G_\delta$-set $\Delta$ in $X \times X$. Theorem 1 is not a word for word translation. However, the proof is quote natural. Suppose that $\Delta=\cap \{U_n: n=0,1,2,\cdots \}$ where each $U_n$ is an open subset of $X \times X$. Then let $\mathcal{U}_n=\{U \subset X: U \text{ open and } U \times U \subset U_n \}$. It can be verify that $\mathcal{U}_0,\mathcal{U}_1,\mathcal{U}_2,\cdots$ is a $G_\delta$-diagonal sequence for $X$. Suppose that $\mathcal{U}_0,\mathcal{U}_1,\mathcal{U}_2,\cdots$ is a $G_\delta$-diagonal sequence for $X$. For each $n$, let $U_n=\cup \{ U \times U: U \in \mathcal{U}_n \}$. It follows that $\Delta=\bigcap_{n=0}^\infty U_n$. $\square$ It is informative to compare the property of $G_\delta$-diagonal with the definition of Moore spaces. A development for the space $X$ is a sequence $\mathcal{D}_0,\mathcal{D}_1,\mathcal{D}_2,\cdots$ of open covers of $X$ such that for each $x \in X$, $\{ St(x, \mathcal{D}_n): n=0,1,2,\cdots \}$ is a local base at the point $x$. A space is said to be developable if it has a development. The space $X$ is said to be a Moore space if $X$ is a Hausdorff and regular space that has a development. The stars of a given point with respect to the open covers of a development form a local base at the given point, and thus collapse to the given point. Thus a development is also a $G_\delta$-diagonal sequence. It then follows that any Moore space has a $G_\delta$-diagonal. A point in a space is a $G_\delta$-point if the point is the intersection of countably many open sets. Then having a $G_\delta$-diagonal sequence implies that that every point of the space is a $G_\delta$-point since every point is the intersection of the stars of that point with respect to a $G_\delta$-diagonal sequence. In contrast, any Moore space is necessarily a first countable space since the stars of any given point with respect to the development is a countable local base at the given point. The parallel suggests that spaces with $G_\delta$-diagonals can be thought of as a weak form of Moore spaces (at least a weak form of developable spaces). Regular G-delta Diagonal We discuss other diagonal properties. The space $X$ is said to have a regular $G_\delta$-diagonal if $\Delta=\cap \{\overline{U_n}:n=0,1,2,\cdots \}$ where each $U_n$ is an open subset of $X \times X$ such that $\Delta \subset U_n$. This diagonal property also has an equivalent condition in terms of a diagonal sequence. Theorem 2 Let $X$ be a space. Then the following statements are equivalent. 1. The space $X$ has a regular $G_\delta$-diagonal. 2. There exists a sequence $\mathcal{U}_0,\mathcal{U}_1,\mathcal{U}_2,\cdots$ of open covers of $X$ such that for every two distinct points $x,y \in X$, there exist open sets $U$ and $V$ with $x \in U$ and $y \in V$ and there also exists an $n$ such that no member of $\mathcal{U}_n$ intersects both $U$ and $V$. For convenience, we call the sequence described in Theorem 2 a regular $G_\delta$-diagonal sequence. It is clear that if the diagonal of a space is a regular $G_\delta$-diagonal, then it is a $G_\delta$-diagonal. It can also be verified that a regular $G_\delta$-diagonal sequence is also a $G_\delta$-diagonal sequence. To see this, let $\mathcal{U}_0,\mathcal{U}_1,\mathcal{U}_2,\cdots$ be a regular $G_\delta$-diagonal sequence for $X$. Suppose that $y \ne x$ and $y \in \bigcap_k St(x, \mathcal{U}_k)$. Choose open sets $U$ and $V$ and an integer $n$ guaranteed by the regular $G_\delta$-diagonal sequence. Since $y \in St(x, \mathcal{U}_n)$, choose $B \in \mathcal{U}_n$ such that $x,y \in B$. Then $B$ would be an element of $\mathcal{U}_n$ that meets both $U$ and $V$, a contradiction. Then $\{ x \}= \bigcap_k St(x, \mathcal{U}_k)$ for all $x \in X$. To proof Theorem 2, suppose that $X$ has a regular $G_\delta$-diagonal. Let $\Delta=\bigcap_{k=0}^\infty \overline{U_k}$ where each $U_k$ is open in $X \times X$ and $\Delta \subset U_k$. For each $k$, let $\mathcal{U}_k$ be the collection of all open subsets $U$ of $X$ such that $U \times U \subset U_k$. It can be verified that $\{ \mathcal{U}_k \}$ is a regular $G_\delta$-diagonal sequence for $X$. On the other hand, suppose that $\{ \mathcal{U}_k \}$ is a regular $G_\delta$-diagonal sequence for $X$. For each $k$, let $U_k=\cup \{U \times U: U \in \mathcal{U}_k \}$. It can be verified that $\Delta=\bigcap_{k=0}^\infty \overline{U_k}$. $\square$ Rank-k Diagonals Metric spaces and submetrizable spaces have regular $G_\delta$-diagonals. We discuss this fact after introducing another set of diagonal properties. First some notations. For any family $\mathcal{U}$ of subsets of the space $X$ and for any $x \in X$, define $St^1(x, \mathcal{U})=St(x, \mathcal{U})$. For any integer $k \ge 2$, let $St^k(x, \mathcal{U})=St^{k-1}(St(x, \mathcal{U}))$. Thus $St^{2}(x, \mathcal{U})$ is the star of the star $St(x, \mathcal{U})$ with respect to $\mathcal{U}$ and $St^{3}(x, \mathcal{U})$ is the star of $St^{2}(x, \mathcal{U})$ and so on. Let $X$ be a space. A sequence $\mathcal{U}_0,\mathcal{U}_1,\mathcal{U}_2,\cdots$ of open covers of $X$ is said to be a rank-$k$ diagonal sequence of $X$ if for each $x \in X$, we have $\{ x \}=\bigcap_{j=0}^\infty St^k(x,\mathcal{U}_j)$. When the space $X$ has a rank-$k$ diagonal sequence, the space is said to have a rank-$k$ diagonal. Clearly a rank-1 diagonal sequence is simply a $G_\delta$-diagonal sequence as defined in Theorem 1. Thus having a rank-1 diagonal is the same as having a $G_\delta$-diagonal. It is also clear that having a higher rank diagonal implies having a lower rank diagonal. This follows from the fact that a rank $k+1$ diagonal sequence is also a rank $k$ diagonal sequence. The following lemma builds intuition of the rank-$k$ diagonal sequence. For any two distinct points $x$ and $y$ of a space $X$, and for any integer $d \ge 2$, a $d$-link path from $x$ to $y$ is a set of open sets $W_1,W_2,\cdots,W_d$ such that $x \in W_1$, $y \in W_d$ and $W_t \cap W_{t+1} \ne \varnothing$ for all $t=1,2,\cdots,d-1$. By default, a single open set $W$ containing both $x$ and $y$ is a d-link path from $x$ to $y$ for any integer $d \ge 1$. Lemma 3 Let $X$ be a space. Let $k$ be a positive integer. Let $\mathcal{U}_0,\mathcal{U}_1,\mathcal{U}_2,\cdots$ be a sequence of open covers of $X$. Then the following statements are equivalent. 1. The sequence $\mathcal{U}_0,\mathcal{U}_1,\mathcal{U}_2,\cdots$ is a rank-$k$ diagonal sequence for the space $X$. 2. For any two distinct points $x$ and $y$ of $X$, there is an integer $n$ such that $y \notin St^k(x,\mathcal{U}_n)$. 3. For any two distinct points $x$ and $y$ of $X$, there is an integer $n$ such that there is no $k$-link path from $x$ to $y$ consisting of elements of $\mathcal{U}_n$. It can be seen directly from definition that Condition 1 and Condition 2 are equivalent. For Condition 3, observe that the set $St^k(x,\mathcal{U}_n)$ is the union of $k$ types of open sets – open sets in $\mathcal{U}_n$ containing $x$, open sets in $\mathcal{U}_n$ that intersect the first type, open sets in $\mathcal{U}_n$ that intersect the second type and so on down to the open sets in $\mathcal{U}_n$ that intersect $St^{k-1}(x,\mathcal{U}_n)$. A path is formed by taking one open set from each type. We now show a few basic results that provide further insight on the rank-$k$ diagonal. Theorem 4 Let $X$ be a space. If the space $X$ has a rank-2 diagonal, then $X$ is a Hausdorff space. Theorem 5 Let $X$ be a Moore space. Then $X$ has a rank-2 diagonal. Theorem 6 Let $X$ be a space. If $X$ has a rank-3 diagonal, then $X$ has a regular $G_\delta$-diagonal. Once Lemma 3 is understood, Theorem 4 is also easily understood. If a space $X$ has a rank-2 diagonal sequence $\{ \mathcal{U}_n \}$, then for any two distinct points $x$ and $y$, we can always find an $n$ where there is no 2-link path from $x$ to $y$. Then $x$ and $y$ can be separated by open sets in $\mathcal{U}_n$. Thus these diagonal ranking properties confer separation axioms. We usually start off a topology discussion by assuming a reasonable separation axiom (usually implicitly). The fact that the diagonal ranking gives a bonus makes it even more interesting. Apparently many authors agree since $G_\delta$-diagonal and related topics had been researched extensively over decades. To prove Theorem 5, let $\{ \mathcal{U}_n \}$ be a development for the space $X$. Let $x$ and $y$ be two distinct points of $X$. We claim that there exists some $n$ such that $y \notin St^2(x,\mathcal{U}_n)$. Suppose not. This means that for each $n$, $y \in St^2(x,\mathcal{U}_n)$. This also means that $St(x,\mathcal{U}_n) \cap St(y,\mathcal{U}_n) \ne \varnothing$ for each $n$. Choose $x_n \in St(x,\mathcal{U}_n) \cap St(y,\mathcal{U}_n)$ for each $n$. Since $X$ is a Moore space, $\{ St(x,\mathcal{U}_n) \}$ is a local base at $x$. Then $\{ x_n \}$ converges to $x$. Since $\{ St(y,\mathcal{U}_n) \}$ is a local base at $y$, $\{ x_n \}$ converges to $y$, a contradiction. Thus the claim that there exists some $n$ such that $y \notin St^2(x,\mathcal{U}_n)$ is true. By Lemma 3, a development for a Moore space is a rank-2 diagonal sequence. To prove Theorem 6, let $\{ \mathcal{U}_n \}$ be a rank-3 diagonal sequence for the space $X$. We show that $\{ \mathcal{U}_n \}$ is also a regular $G_\delta$-diagonal sequence for $X$. Suppose $x$ and $y$ are two distinct points of $X$. By Lemma 3, there exists an $n$ such that there is no 3-link path consisting of open sets in $\mathcal{U}_n$ that goes from $x$ to $y$. Choose $U \in \mathcal{U}_n$ with $x \in U$. Choose $V \in \mathcal{U}_n$ with $y \in V$. Then it follows that no member of $\mathcal{U}_n$ can intersect both $U$ and $V$ (otherwise there would be a 3-link path from $x$ to $y$). Thus $\{ \mathcal{U}_n \}$ is also a regular $G_\delta$-diagonal sequence for $X$. We now show that metric spaces have rank-$k$ diagonal for all integer $k \ge 1$. Theorem 7 Let $X$ be a metrizable space. Then $X$ has rank-$k$ diagonal for all integers $k \ge 1$. If $d$ is a metric that generates the topology of $X$, and if $\mathcal{U}_n$ is the collection of all open subsets with diameters $\le 2^{-n}$ with respect to the metrix $d$ then $\{ \mathcal{U}_n \}$ is a rank-$k$ diagonal sequence for $X$ for any integer $k \ge 1$. We instead prove Theorem 7 topologically. To this end, we use an appropriate metrization theorem. The following theorem is a good candidate. Alexandrov-Urysohn Metrization Theorem. A space $X$ is metrizable if and only if the space $X$ has a development $\{ \mathcal{U}_n \}$ such that for any $U_1,U_2 \in \mathcal{U}_{n+1}$ with $U_1 \cap U_2 \ne \varnothing$, the set $U_1 \cup U_2$ is contained in some element of $\mathcal{U}_n$. See Theorem 1.5 in p. 427 of [5]. Let $\{ \mathcal{U}_n \}$ be the development from Alexandrov-Urysohn Metrization Theorem. It is a development with a strong property. Each open cover in the development refines the preceding open cover in a special way. This refinement property allows us to show that it is a rank-$k$ diagonal sequence for $X$ for any integer $k \ge 1$. First, we make a few observations about $\{ \mathcal{U}_n \}$. From the statement of the theorem, each $\mathcal{U}_{n+1}$ is a refinement of $\mathcal{U}_n$. As a result of this observation, $\mathcal{U}_{m}$ is a refinement of $\mathcal{U}_n$ for any $m>n$. Furthermore, for each $x \in X$, $\text{St}(x,\mathcal{U}_m) \subset \text{St}(x,\mathcal{U}_n)$ for any $m>n$. Let $x, y \in X$ with $x \ne y$. Based on the preceding observations, it follows that there exists some $m$ such that $\text{St}(x,\mathcal{U}_m) \cap \text{St}(y,\mathcal{U}_m)=\varnothing$. We claim that there exists some integer $h>m$ such that there are no $k$-link path from $x$ to $y$ consisting of open sets from $\mathcal{U}_h$. Then $\{ \mathcal{U}_n \}$ is a rank-$k$ diagonal sequence for $X$ according to Lemma 3. We show this claim is true for $k=2$. Observe that there cannot exist $U_1, U_2 \in \mathcal{U}_{m+1}$ such that $x \in U_1$, $y \in U_2$ and $U_1 \cap U_2 \ne \varnothing$. If there exists such a pair, then $U_1 \cup U_2$ would be contained in $\text{St}(x,\mathcal{U}_m)$ and $\text{St}(y,\mathcal{U}_m)$, a contradiction. Putting it in another way, there cannot be any 2-link path $U_1,U_2$ from $x$ to $y$ such that the open sets in the path are from $\mathcal{U}_{m+1}$. According to Lemma 3, the sequence $\{ \mathcal{U}_n \}$ is a rank-2 diagonal sequence for the space $X$. In general for any $k \ge 2$, there cannot exist any $k$-link path $U_1,\cdots,U_k$ from $x$ to $y$ such that the open sets in the path are from $\mathcal{U}_{m+k-1}$. The argument goes just like the one for the case for $k=2$. Suppose the path $U_1,\cdots,U_k$ exists. Using the special property of $\{ \mathcal{U}_n \}$, the 2-link path $U_1,U_2$ is contained in some open set in $\mathcal{U}_{m+k-2}$. The path $U_1,\cdots,U_k$ is now contained in a $(k-1)$-link path consisting of elements from the open cover $\mathcal{U}_{m+k-2}$. Continuing the refinement process, the path $U_1,\cdots,U_k$ is contained in a 2-link path from $x$ to $y$ consisting of elements from $\mathcal{U}_{m+1}$. Like before this would lead to a contradiction. According to Lemma 3, $\{ \mathcal{U}_n \}$ is a rank-$k$ diagonal sequence for the space $X$ for any integer $k \ge 2$. Of course, any metric space already has a $G_\delta$-diagonal. We conclude that any metrizable space has a rank-$k$ diagonal for any integer $k \ge 1$. $\square$ We have the following corollary. Corollary 8 Let $X$ be a submetrizable space. Then $X$ has rank-$k$ diagonal for all integer $k \ge 1$. In a submetrizable space, the weaker metrizable topology has a rank-$k$ diagonal sequence, which in turn is a rank-$k$ diagonal sequence in the original topology. Examples and Questions The preceding discussion focuses on properties that are in between $G_\delta$-diagonal and submetrizability. In fact, one of the properties has infinitely many levels (rank-$k$ diagonal for integers $k \ge 1$). We would like to have a diagram showing the relative strengths of these properties. Before we do so, consider one more diagonal property. Let $X$ be a space. The set $A \subset X$ is said to be a zero-set in $X$ if there is a continuous $f:X \rightarrow [0,1]$ such that $A=f^{-1}(0)$. In other words, a zero-set is a set that is the inverse image of zero for some continuous real-valued function defined on the space in question. A space $X$ has a zero-set diagonal if the diagonal $\Delta=\{ (x,x): x \in X \}$ is a zero-set in $X \times X$. The space $X$ having a zero-set diagonal implies that $X$ has a regular $G_\delta$-diagonal, and thus a $G_\delta$-diagonal. To see this, suppose that $\Delta=f^{-1}(0)$ where $f:X \times X \rightarrow [0,1]$ is continuous. Then $\Delta=\bigcap_{n=1}^\infty \overline{U_n}$ where $U_n=f^{-1}([0,1/n))$. Thus having a zero-set diagonal is a strong property. We have the following diagram. The diagram summarizes the preceding discussion. From top to bottom, the stronger properties are at the top. From left to right, the stronger properties are on the left. The diagram shows several properties in between $G_\delta$-diagonal at the bottom and submetrizability at the top. Note that the statement at the very bottom is not explicitly a diagonal property. It is placed at the bottom because of the classic result that any compact space with a $G_\delta$-diagonal is metrizable. In the diagram, “rank-k diagonal” means that the space has a rank-$k$ diagonal where $k \ge 1$ is an integer, which in terms means that the space has a rank-$k$ diagonal sequence as defined above. Thus rank-$k$ diagonal is not to be confused with the rank of a diagonal. The rank of the diagonal of a given space is the largest integer $k$ such that the space has a rank-$k$ diagonal. For example, for a space that has a rank-2 diagonal but has no rank-3 diagonal, the rank of the diagonal is 2. To further make sense of the diagram, let’s examine examples. The Mrowka space is a classic example of a space with a $G_\delta$-diagonal that is not submetrizable (introduced here). Where is this space located in the diagram? The Mrowka space, also called Psi-space, is defined using a maximal almost disjoint family of subsets of $\omega$. We denote such a space by $\Psi(\mathcal{A})$ where $\mathcal{A}$ is a maximal almost disjoint family of subsets of $\omega$. It is a pseudocompact Moore space that is not submetrizable. As a Moore space, it has a rank-2 diagonal sequence. A well known result states that any pseudocompact space with a regular $G_\delta$-diagonal is metrizable (see here). As a non-submetrizable space, the Mrowka space cannot have a regular $G_\delta$-diagonal. Thus $\Psi(\mathcal{A})$ is an example of a space with a rank-2 diagonal but not a rank-3 diagonal sequence. Examples of non-submetrizable spaces with stronger diagonal properties are harder to come by. We discuss examples that are found in the literature. Example 2.9 in [2] is a Tychonoff separable Moore space $Z$ that has a rank-3 diagonal but not of higher diagonal rank. As a result of not having a rank-4 diagonal, $Z$ is not submetrizable. Thus $Z$ is an example of a space with rank-3 diagonal (hence with a regular $G_\delta$-diagonal) that is not submetrizable. According to a result in [6], any separable space with a zero-set diagonal is submetrizable. Then the space $Z$ is an example of a space with a regular $G_\delta$-diagonal that does not have a zero-set diagonal. In fact, the authors of [2] indicated that this is the first such example. Example 2.9 of [2] shows that having a rank-3 diagonal does not imply having a zero-set diagonal. If a space is strengthened to have a rank-4 diagonal, does it imply having a zero-set diagonal? This is essentially Problem 2.13 in [2]. On the other hand, having a rank-3 diagonal implies a rank-2 diagonal. If we weaken the hypothesis to just having a regular regular $G_\delta$-diagonal, does it imply having a rank-2 diagonal? This is essentially Problem 2.14 in [2]. The authors of [2] conjectured that for each $n$, there exists a space $X_n$ with a rank-$n$ diagonal but not having a rank-$(n+1)$ diagonal. This conjecture was answered affirmatively in [8] by constructing, for each integer $k \ge 4$, a Tychonoff space with a rank-$k$ diagonal but not having a rank-$(k+1)$ diagonal. Thus even for high $k$, a non-submetrizable space can be found with rank-$k$ diagonal. One natural question is this. Is there a non-submetrizable space that has rank-$k$ diagonal for all $k \ge 1$? We have not seen this question stated in the literature. But it is clearly a natural question. Example 2.17 in [2] is a non-submetrizable Moore space that has a zero-set diagonal and has rank-3 diagonal exactly (i.e. it does not have a higher rank diagonal). This example shows that having a zero-set diagonal does not imply having a rank-4 diagonal. A natural question is then this. Does having a zero-set diagonal imply having a rank-3 diagonal? This appears to be an open question. This is hinted by Problem 2.19 in [2]. It asks, if $X$ is a normal space with a zero-set diagonal, does $X$ have at least a rank-2 diagonal? The property of having a $G_\delta$-diagonal and related properties is a topic that had been researched extensively over the decades. It is still an active topic of research. The discussion in this post only touches on the surface. There are many other diagonal properties not covered here. To further investigate, check with the papers listed below and also consult with information available in the literature. Reference 1. Arhangelskii A. V., Burke D. K., Spaces with a regular $G_\delta$-diagonal, Topology and its Applications, Vol. 153, No. 11, 1917–1929, 2006. 2. Arhangelskii A. V., Buzyakova R. Z., The rank of the diagonal and submetrizability, Comment. Math. Univ. Carolinae, Vol. 47, No. 4, 585-597, 2006. 3. Buzyakova R. Z., Cardinalities of ccc-spaces with regular $G_\delta$-diagonals, Topology and its Applications, Vol. 153, 1696–1698, 2006. 4. Buzyakova R. Z., Observations on spaces with zeroset or regular $G_\delta$-diagonals, Comment. Math. Univ. Carolinae, Vol. 46, No. 3, 469-473, 2005. 5. Gruenhage, G., Generalized Metric Spaces, Handbook of Set-Theoretic Topology (K. Kunen and J. E. Vaughan, eds), Elsevier Science Publishers B. V., Amsterdam, 423-501, 1984. 6. Martin H. W., Contractibility of topological spaces onto metric spaces, Pacific J. Math., Vol. 61, No. 1, 209-217, 1975. 7. Xuan Wei-Feng, Shi Wei-Xue, On spaces with rank k-diagonals or zeroset diagonals, Topology Proceddings, Vol. 51, 245{251, 2018. 8. Yu Zuoming, Yun Ziqiu, A note on the rank of diagonals, Topology and its Applications, Vol. 157, 1011–1014, 2010. $\text{ }$ $\text{ }$ $\text{ }$ Dan Ma math Daniel Ma mathematics $\copyright$ 2018 – Dan Ma Pseudocompact spaces with regular G-delta diagonals This post complements two results discussed in two previous blog posts concerning $G_\delta$-diagonal. One result is that any compact space with a $G_\delta$-diagonal is metrizable (see here). The other result is that the compactness in the first result can be relaxed to countably compactness. Thus any countably compact space with a $G_\delta$-diagonal is metrizable (see here). The countably compactness in the second result cannot be relaxed to pseudocompactness. The Mrowka space is a pseudocompact space with a $G_\delta$-diagonal that is not submetrizable, hence not metrizable (see here). However, if we strengthen the $G_\delta$-diagonal to a regular $G_\delta$-diagonal while keeping pseudocompactness fixed, then we have a theorem. We prove the following theorem. Theorem 1 If the space $X$ is pseudocompact and has a regular $G_\delta$-diagonal, then $X$ is metrizable. All spaces are assumed to be Hausdorff and completely regular. The assumption of completely regular is crucial. The proof of Theorem 1 relies on two lemmas concerning pseudocompact spaces (one proved in a previous post and one proved here). These two lemmas work only for completely regular spaces. The proof of Theorem 1 uses a metrization theorem. The best metrization to use in this case is Moore metrization theorem (stated below). The result in Theorem 1 is found in [2]. First some basics. Let $X$ be a space. The diagonal of the space $X$ is the set $\Delta=\{ (x,x): x \in X \}$. When the diagonal $\Delta$, as a subset of $X \times X$, is a $G_\delta$-set, i.e. $\Delta$ is the intersection of countably many open subsets of $X \times X$, the space $X$ is said to have a $G_\delta$-diagonal. The space $X$ is said to have a regular $G_\delta$-diagonal if the diagonal $\Delta$ is a regular $G_\delta$-set in $X \times X$, i.e. $\Delta=\bigcap_{n=1}^\infty \overline{U_n}$ where each $U_n$ is an open subset of $X \times X$ with $\Delta \subset U_n$. If $\Delta=\bigcap_{n=1}^\infty \overline{U_n}$, then $\Delta=\bigcap_{n=1}^\infty \overline{U_n}=\bigcap_{n=1}^\infty U_n$. Thus if a space has a regular $G_\delta$-diagonal, it has a $G_\delta$-diagonal. We will see that there exists a space with a $G_\delta$-diagonal that fails to be a regular $G_\delta$-diagonal. The space $X$ is a pseudocompact space if for every continuous function $f:X \rightarrow \mathbb{R}$, the image $f(X)$ is a bounded set in the real line $\mathbb{R}$. Pseudocompact spaces are discussed in considerable details in this previous post. We will rely on results from this previous post to prove Theorem 1. The following lemma is used in proving Theorem 1. Lemma 2 Let $X$ be a pseudocompact space. Suppose that $O_1,O_2,O_2,\cdots$ is a decreasing sequence of non-empty open subsets of $X$ such that $\bigcap_{n=1}^\infty O_n=\bigcap_{n=1}^\infty \overline{O_n}=\{ x \}$ for some point $x \in X$. Then $\{ O_n \}$ is a local base at the point $x$. Proof of Lemma 2 Let $O_1,O_2,O_2,\cdots$ be a decreasing sequence of open subsets of $X$ such that $\bigcap_{n=1}^\infty O_n=\bigcap_{n=1}^\infty \overline{O_n}=\{ x \}$. Let $U$ be open in $X$ with $x \in U$. If $O_n \subset U$ for some $n$, then we are done. Suppose that $O_n \not \subset U$ for each $n$. Choose open $V$ with $x \in V \subset \overline{V} \subset U$. Consider the sequence $\{ O_n \cap (X-\overline{V}) \}$. This is a decreasing sequence of non-empty open subsets of $X$. By Theorem 2 in this previous post, $\bigcap \overline{O_n \cap (X-\overline{V})} \ne \varnothing$. Let $y$ be a point in this non-empty set. Note that $y \in \bigcap_{n=1}^\infty \overline{O_n}$. This means that $y=x$. Since $x \in \overline{O_n \cap (X-\overline{V})}$ for each $n$, any open set containing $x$ would contain a point not in $\overline{V}$. This is a contradiction since $x \in V$. Thus it must be the case that $x \in O_n \subset U$ for some $n$. $\square$ The following metrization theorem is useful in proving Theorem 1. Theorem 3 (Moore Metrization Theorem) Let $X$ be a space. Then $X$ is metrizable if and only if the following condition holds. There exists a decreasing sequence $\mathcal{B}_1,\mathcal{B}_2,\mathcal{B}_3,\cdots$ of open covers of $X$ such that for each $x \in X$, the sequence $\{ St(St(x,\mathcal{B}_n),\mathcal{B}_n):n=1,2,3,\cdots \}$ is a local base at the point $x$. For any family $\mathcal{U}$ of subsets of $X$, and for any $A \subset X$, the notation $St(A,\mathcal{U})$ refers to the set $\cup \{U \in \mathcal{U}: U \cap A \ne \varnothing \}$. In other words, it is the union of all sets in $\mathcal{U}$ that contain points of $A$. The set $St(A,\mathcal{U})$ is also called the star of the set $A$ with respect to the family $\mathcal{U}$. If $A=\{ x \}$, we write $St(x,\mathcal{U})$ instead of $St(\{ x \},\mathcal{U})$. The set $St(St(x,\mathcal{B}_n),\mathcal{B}_n)$ indicated in Theorem 3 is the star of the set $St(x,\mathcal{B}_n)$ with respect to the open cover $\mathcal{B}_n$. Theorem 3 follows from Theorem 1.4 in [1], which states that for any $T_0$-space $X$, $X$ is metrizable if and only if there exists a sequence $\mathcal{G}_1, \mathcal{G}_2, \mathcal{G}_3,\cdots$ of open covers of $X$ such that for each open $U \subset X$ and for each $x \in U$, there exist an open $V \subset X$ and an integer $n$ such that $x \in V$ and $St(V,\mathcal{G}_n) \subset U$. Proof of Theorem 1 Suppose $X$ is pseudocompact such that its diagonal $\Delta=\bigcap_{n=1}^\infty \overline{U_n}$ where each $U_n$ is an open subset of $X \times X$ with $\Delta \subset U_n$. We can assume that $U_1 \supset U_2 \supset \cdots$. For each $n \ge 1$, define the following: $\mathcal{U}_n=\{ U \subset X: U \text{ open in } X \text{ and } U \times U \subset U_n \}$ Note that each $\mathcal{U}_n$ is an open cover of $X$. Also note that $\{ \mathcal{U}_n \}$ is a decreasing sequence since $\{ U_n \}$ is a decreasing sequence of open sets. We show that $\{ \mathcal{U}_n \}$ is a sequence of open covers of $X$ that satisfies Theorem 3. We establish this by proving the following claims. Claim 1. For each $x \in X$, $\bigcap_{n=1}^\infty \overline{St(x,\mathcal{U}_n)}=\{ x \}$. To prove the claim, let $x \ne y$. There is an integer $n$ such that $(x,y) \notin \overline{U_n}$. Choose open sets $U$ and $V$ such that $(x,y) \in U \times V$ and $(U \times V) \cap \overline{U_n}=\varnothing$. Note that $(x,y) \notin U_k$ and $(U \times V) \cap U_n=\varnothing$. We want to show that $V \cap St(x,\mathcal{U}_n)=\varnothing$, which implies that $y \notin \overline{St(x,\mathcal{U}_n)}$. Suppose $V \cap St(x,\mathcal{U}_n) \ne \varnothing$. This means that $V \cap W \ne \varnothing$ for some $W \in \mathcal{U}_n$ with $x \in W$. Then $(U \times V) \cap (W \times W) \ne \varnothing$. Note that $W \times W \subset U_n$. This implies that $(U \times V) \cap U_n \ne \varnothing$, a contradiction. Thus $V \cap St(x,\mathcal{U}_n)=\varnothing$. Since $y \in V$, $y \notin \overline{St(x,\mathcal{U}_n)}$. We have established that for each $x \in X$, $\bigcap_{n=1}^\infty \overline{St(x,\mathcal{U}_n)}=\{ x \}$. Claim 2. For each $x \in X$, $\{ St(x,\mathcal{U}_n) \}$ is a local base at the point $x$. Note that $\{ St(x,\mathcal{U}_n) \}$ is a decreasing sequence of open sets such that $\bigcap_{n=1}^\infty \overline{St(x,\mathcal{U}_n)}=\{ x \}$. By Lemma 2, $\{ St(x,\mathcal{U}_n) \}$ is a local base at the point $x$. Claim 3. For each $x \in X$, $\bigcap_{n=1}^\infty \overline{St(St(x,\mathcal{U}_n),\mathcal{U}_n)}=\{ x \}$. Let $x \ne y$. There is an integer $n$ such that $(x,y) \notin \overline{U_n}$. Choose open sets $U$ and $V$ such that $(x,y) \in U \times V$ and $(U \times V) \cap \overline{U_n}=\varnothing$. It follows that $(U \times V) \cap \overline{U_t}=\varnothing$ for all $t \ge n$. Furthermore, $(U \times V) \cap U_t=\varnothing$ for all $t \ge n$. By Claim 2, choose integers $i$ and $j$ such that $St(x,\mathcal{U}_i) \subset U$ and $St(y,\mathcal{U}_j) \subset V$. Choose an integer $k \ge \text{max}(n,i,j)$. It follows that $(St(x,\mathcal{U}_i) \times St(y,\mathcal{U}_j)) \cap U_k=\varnothing$. Since $\mathcal{U}_k \subset \mathcal{U}_i$ and $\mathcal{U}_k \subset \mathcal{U}_j$, it follows that $(St(x,\mathcal{U}_k) \times St(y,\mathcal{U}_k)) \cap U_k=\varnothing$. We claim that $St(y,\mathcal{U}_k) \cap St(St(x,\mathcal{U}_k), \mathcal{U}_k)=\varnothing$. Suppose not. Choose $w \in St(y,\mathcal{U}_k) \cap St(St(x,\mathcal{U}_k), \mathcal{U}_k)$. It follows that $w \in B$ for some $B \in \mathcal{U}_k$ such that $B \cap St(x,\mathcal{U}_k) \ne \varnothing$ and $B \cap St(y,\mathcal{U}_k) \ne \varnothing$. Furthermore $(St(x,\mathcal{U}_k) \times St(y,\mathcal{U}_k)) \cap (B \times B)=\varnothing$. Note that $B \times B \subset U_k$. This means that $(St(x,\mathcal{U}_k) \times St(y,\mathcal{U}_k)) \cap U_k \ne \varnothing$, contradicting the fact observed in the preceding paragraph. It must be the case that $St(y,\mathcal{U}_k) \cap St(St(x,\mathcal{U}_k), \mathcal{U}_k)=\varnothing$. Because there is an open set containing $y$, namely $St(y,\mathcal{U}_k)$, that contains no points of $St(St(x,\mathcal{U}_k), \mathcal{U}_k)$, $y \notin \overline{St(St(x,\mathcal{U}_n),\mathcal{U}_n)}$. Thus Claim 3 is established. Claim 4. For each $x \in X$, $\{ St(St(x,\mathcal{U}_n),\mathcal{U}_n)) \}$ is a local base at the point $x$. Note that $\{ St(St(x,\mathcal{U}_n),\mathcal{U}_n) \}$ is a decreasing sequence of open sets such that $\bigcap_{n=1}^\infty \overline{St(St(x,\mathcal{U}_n),\mathcal{U}_n))}=\{ x \}$. By Lemma 2, $\{ St(St(x,\mathcal{U}_n),\mathcal{U}_n) \}$ is a local base at the point $x$. In conclusion, the sequence $\mathcal{U}_1,\mathcal{U}_2,\mathcal{U}_3,\cdots$ of open covers satisfies the properties in Theorem 3. Thus any pseudocompact space with a regular $G_\delta$-diagonal is metrizable. $\square$ Example Any submetrizable space has a $G_\delta$-diagonal. The converse is not true. A classic example of a non-submetrizable space with a $G_\delta$-diagonal is the Mrowka space (discussed here). The Mrowka space is also called the psi-space since it is sometimes denoted by $\Psi(\mathcal{A})$ where $\mathcal{A}$ is a maximal family of almost disjoint subsets of $\omega$. Actually $\Psi(\mathcal{A})$ would be a family of spaces since $\mathcal{A}$ is any maximal almost disjoint family. For any maximal $\mathcal{A}$, $\Psi(\mathcal{A})$ is a pseudocompact non-submetrizable space that has a $G_\delta$-diagonal. This example shows that the requirement of a regular $G_\delta$-diagonal in Theorem 1 cannot be weakened to a $G_\delta$-diagonal. See here for a more detailed discussion of this example. Reference 1. Gruenhage, G., Generalized Metric Spaces, Handbook of Set-Theoretic Topology (K. Kunen and J. E. Vaughan, eds), Elsevier Science Publishers B. V., Amsterdam, 423-501, 1984. 2. McArthur W. G., $G_\delta$-Diagonals and Metrization Theorems, Pacific Journal of Mathematics, Vol. 44, No. 2, 613-317, 1973. $\text{ }$ $\text{ }$ $\text{ }$ Dan Ma math Daniel Ma mathematics $\copyright$ 2018 – Dan Ma
2020-08-05T04:49:39
{ "domain": "wordpress.com", "url": "https://dantopology.wordpress.com/tag/topology/", "openwebmath_score": 0.993003785610199, "openwebmath_perplexity": 136.46652973402306, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9863631631151012, "lm_q2_score": 0.8577681013541611, "lm_q1q2_score": 0.8460708576709252 }
https://tobydriscoll.net/fnc-julia/krylov/gmres.html
# 8.5. GMRES¶ The most important use of the Arnoldi iteration is to solve the square linear system $$\mathbf{A}\mathbf{x}=\mathbf{b}$$. In Demo 8.4.3, we attempted to replace the linear system $$\mathbf{A}\mathbf{x}=\mathbf{b}$$ by the lower-dimensional approximation $\min_{\mathbf{x}\in \mathcal{K}_m} \| \mathbf{A}\mathbf{x}-\mathbf{b} \| = \min_{\mathbf{z}\in\mathbb{C}^m} \| \mathbf{A}\mathbf{K}_m\mathbf{z}-\mathbf{b} \|,$ where $$\mathbf{K}_m$$ is the Krylov matrix generated using $$\mathbf{A}$$ and the seed vector $$\mathbf{b}$$. This method was unstable due to the poor conditioning of $$\mathbf{K}_m$$, which is a numerically poor basis for $$\mathcal{K}_m$$. The Arnoldi algorithm yields an orthonormal basis of the same space and fixes the stability problem. Set $$\mathbf{x}=\mathbf{Q}_m\mathbf{z}$$ and obtain (8.5.1)$\min_{\mathbf{z}\in\mathbb{C}^m}\, \bigl\| \mathbf{A} \mathbf{Q}_m \mathbf{z}-\mathbf{b} \bigr\|.$ From the fundamental Arnoldi identity (8.4.6), this is equivalent to (8.5.2)$\min_{\mathbf{z}\in\mathbb{C}^m}\, \bigl\| \mathbf{Q}_{m+1} \mathbf{H}_m\mathbf{z}-\mathbf{b} \bigr\|.$ Note that $$\mathbf{q}_1$$ is a unit multiple of $$\mathbf{b}$$, so $$\mathbf{b} = \|\mathbf{b}\| \mathbf{Q}_{m+1}\mathbf{e}_1$$. Thus (8.5.2) becomes (8.5.3)$\min_{\mathbf{z}\in\mathbb{C}^m}\, \bigl\| \mathbf{Q}_{m+1} (\mathbf{H}_m\mathbf{z}-\|\mathbf{b}\|\mathbf{e}_1) \bigr\|.$ The least-squares problems (8.5.1), (8.5.2), and (8.5.3) are all $$n\times m$$. But observe that for any $$\mathbf{w}\in\mathbb{C}^{m+1}$$, $\|\mathbf{Q}_{m+1}\mathbf{w}\|^2 = \mathbf{w}^*\mathbf{Q}_{m+1}^*\mathbf{Q}_{m+1}\mathbf{w} = \mathbf{w}^*\mathbf{w} = \|\mathbf{w}\|^2.$ The first norm in that equation is on $$\mathbb{C}^n$$, while the last is on the much smaller space $$\mathbb{C}^{m+1}$$. Hence the least-squares problem (8.5.3) is equivalent to (8.5.4)$\min_{\mathbf{z}\in\mathbb{C}^m}\, \bigl\| \mathbf{H}_m\mathbf{z}-\|\mathbf{b}\|\,\mathbf{e}_1 \bigr\|,$ which is of size $$(m+1)\times m$$. We call the solution of this minimization $$\mathbf{z}_m$$, and then $$\mathbf{x}_m=\mathbf{Q}_m \mathbf{z}_m$$ is the $$m$$th approximation to the solution of $$\mathbf{A}\mathbf{x}=\mathbf{b}$$. Algorithm 8.5.1 :  GMRES Given $$n\times n$$ matrix $$\mathbf{A}$$ and $$n$$-vector $$\mathbf{b}$$: For $$m=1,2,\ldots$$, let $$\mathbf{x}_m=\mathbf{Q}_m \mathbf{z}_m$$, where $$\mathbf{z}_m$$ solves the linear least-squares problem (8.5.4), and $$\mathbf{Q}_m,\mathbf{H}_m$$ arise from the Arnoldi iteration. GMRES1 uses the Arnoldi iteration to minimize the residual $$\mathbf{b} - \mathbf{A}\mathbf{x}$$ over successive Krylov subspaces. In exact arithmetic, GMRES should get the exact solution when $$m=n$$, but the goal is to reduce the residual enough to stop at some $$m \ll n$$.2 Demo 8.5.2 We define a triangular matrix with known eigenvalues and a random vector $$\mathbf{b}$$. λ = @. 10 + (1:100) A = triu(rand(100,100),1) + diagm(λ) b = rand(100); Instead of building the Krylov matrices, we use the Arnoldi iteration to generate equivalent orthonormal vectors. Q,H = FNC.arnoldi(A,b,60); The Arnoldi bases are used to solve the least-squares problems defining the GMRES iterates. resid = [norm(b);zeros(60)] for m in 1:60 s = [norm(b); zeros(m)] z = H[1:m+1,1:m]\s x = Q[:,1:m]*z resid[m+1] = norm(b-A*x) end The approximations converge smoothly, practically all the way to machine epsilon. plot(0:60,resid,m=:o, xaxis=(L"m"),yaxis=(:log10,"norm of mth residual"), title="Residual for GMRES",leg=:none) Compare the graph in Demo 8.5.2 to the one in Demo 8.4.3. Both start with the same linear convergence, but only the version using Arnoldi avoids the instability created by the poor Krylov basis. A basic implementation of GMRES is given in Function 8.5.3. Function 8.5.3 :  gmres GMRES for a linear system 1""" 2 gmres(A,b,m) 3 4Do m iterations of GMRES for the linear system A*x=b. Returns 5the final solution estimate x and a vector with the history of 6residual norms. (This function is for demo only, not practical use.) 7""" 8function gmres(A,b,m) 9 n = length(b) 10 Q = zeros(n,m+1) 11 Q[:,1] = b/norm(b) 12 H = zeros(m+1,m) 13 14 # Initial solution is zero. 15 x = 0 16 residual = [norm(b);zeros(m)] 17 18 for j in 1:m 19 # Next step of Arnoldi iteration. 20 v = A*Q[:,j] 21 for i in 1:j 22 H[i,j] = dot(Q[:,i],v) 23 v -= H[i,j]*Q[:,i] 24 end 25 H[j+1,j] = norm(v) 26 Q[:,j+1] = v/H[j+1,j] 27 28 # Solve the minimum residual problem. 29 r = [norm(b); zeros(j)] 30 z = H[1:j+1,1:j] \ r 31 x = Q[:,1:j]*z 32 residual[j+1] = norm( A*x - b ) 33 end 34 return x,residual 35end ## Convergence and restarting¶ Thanks to Theorem 8.4.2, minimization of $$\|\mathbf{b}-\mathbf{A}\mathbf{x}\|$$ over $$\mathcal{K}_{m+1}$$ includes minimization over $$\mathcal{K}_m$$. Hence the norm of the residual $$\mathbf{r}_m = \mathbf{b} - \mathbf{A}\mathbf{x}_m$$ (being the minimized quantity) cannot increase as the iteration unfolds. Unfortunately, making other conclusive statements about the convergence of GMRES is neither easy nor simple. Demo 8.5.2 shows the cleanest behavior: essentially linear convergence down to the range of machine epsilon. But it is possible for the convergence to go through phases of sublinear and superlinear convergence as well. There is a strong dependence on the eigenvalues of the matrix, a fact we state with more precision and detail in the next section. One of the practical challenges in GMRES is that as the dimension of the Krylov subspace grows, the number of new entries to be found in $$\mathbf{H}_m$$ and the total number of columns in $$\mathbf{Q}$$ also grow. Thus both the work and the storage requirements are quadratic in $$m$$, which can become intolerable in some applications. For this reason, GMRES is often used with restarting. Suppose $$\hat{\mathbf{x}}$$ is an approximate solution of $$\mathbf{A}\mathbf{x}=\mathbf{b}$$. Then if we set $$\mathbf{x}=\mathbf{u}+\hat{\mathbf{x}}$$, we have $$\mathbf{A}(\mathbf{u}+\hat{\mathbf{x}}) = \mathbf{b}$$, or $$\mathbf{A}\mathbf{u} = \mathbf{b} - \mathbf{A}\hat{\mathbf{x}}$$. The conclusion is that if we get an approximate solution and compute its residual $$\mathbf{r}=\mathbf{b} - \mathbf{A}\hat{\mathbf{x}}$$, then we need only to solve $$\mathbf{A}\mathbf{u} = \mathbf{r}$$ in order to get a correction to $$\hat{\mathbf{x}}$$.3 Restarting guarantees a fixed upper bound on the per-iteration cost of GMRES. However, this benefit comes at a price. Even though restarting preserves progress made in previous iterations, the Krylov space information is discarded and the residual minimization process starts again over low-dimensional spaces. That can significantly retard or even stagnate the convergence. Demo 8.5.4 The following experiments are based on a matrix resulting from discretization of a partial differential equation. A = FNC.poisson(50) n = size(A,1) b = ones(n); spy(A,color=:blues)
2022-05-28T22:40:14
{ "domain": "tobydriscoll.net", "url": "https://tobydriscoll.net/fnc-julia/krylov/gmres.html", "openwebmath_score": 0.7928893566131592, "openwebmath_perplexity": 1055.6928078265214, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9863631643177029, "lm_q2_score": 0.8577680995361899, "lm_q1q2_score": 0.8460708569092986 }
https://math.stackexchange.com/questions/215583/calculating-number-of-boolean-functions
# calculating number of boolean functions I would just like to clarify if I am on the right track or not. I have these questions: Consider the Boolean functions $f(x,y,z)$ in three variables such that the table of values of $f$ contains exactly four $1$’s. 1. Calculate the total number of such functions. 2. We apply the Karnaugh map method to such a function $f$. Suppose that the map does not contain any blocks of four $1$’s, and all four $1$’s are covered by three blocks of two $1$’s. Moreover, we find that it is not possible to cover all $1$’s by fewer than three blocks. Calculate the number of the functions with this property. 1a: I have answered $70 = \binom{8}{4}$. 1b: I have manually drawn up Karnaugh maps and have obtained the answer $12$, but my friend has $24$. Is there another way to do this? Thank you • Like you, I count $12$, $8$ in which the $1$’s occupy an L-shaped region and $4$ in which they occupy a sort of T-shaped region. – Brian M. Scott Oct 17 '12 at 12:04 • Why $70\cdot \binom{8}{4}$ instead of just $\binom{8}{4}$ for part (i)? For part (ii), consider functions of the form $xy \vee xz \vee yz$. – Dilip Sarwate Oct 17 '12 at 12:05 • sorry for the confusion, i meant the answer was 70 and i obtained it by (8C4) thank you for your responses – Z Oj Oct 17 '12 at 12:07 • @Brian M.Scott In addition to the L and T shapes aren't there also some S-like shapes? – Alan Gee Oct 17 '12 at 12:26 • @Alan: You’re right. It’s clearly my bedtime. They add another $4$, if I’m not mistaken, bringing my total to $16$. – Brian M. Scott Oct 17 '12 at 12:33 A "block of two" on a Karnaugh map corresponds to a Boolean function of the form $xy$ (complements of variables allowed). So we are looking for Boolean functions of weight $4$ ($4$ ONEs in the truth table or on the Karnaugh map) that can be covered by $3$ "blocks of two" but not by $2$ "blocks of two", that is, the minimum sum of products expression has three terms, and not two. The simplest form is thus $$xy \vee xz \vee yz$$ which is the T shape referred to (see also my comment on the main question), and complementing variables gives us $2^3 = 8$ different T shapes, some of which are "wrapped around" the edges (which is perfectly acceptable). Are there any others? Well, let's try counting. The principle of inclusion-exclusion tells us that the total weight of $4$ equals the sum of the weights of the three "blocks of two" minus the sum of the weights of the pairwise intersections of "blocks of two" plus the weight of the intersection of all three "blocks of two". Well, two different "blocks of two" must intersect in one position because if they intersect in both positions, they are identical, and if they don't intersect at all, they cover all $4$ ONEs in the function. So, all three "blocks of two" must also intersect in the same position giving $$4 = (2+2+2) - (1 + 1 + 1) + 1$$ as the total weight, and also showing that the T shape is the only one possible. We can classify all $\binom 8 4$ functions of three variables with $4$ minterms according to the "number of blocks" one obtains on a Karnaugh map. First note that the K-map method produces a prime and irredundant expression: every term (or block) is a prime implicant, and every term covers one minterm not covered by any other term in the expression. We classify prime and irredundant covers according to the number of prime implicants. Then we'll argue that the correspondence between covers and functions is one-to-one. • Prime and irredundant covers with four implicants. There are two of them, even and odd parity. No two minterms may be adjacent on the map, and there's only two ways to achieve that. • Prime and irredundant covers with three prime implicants. There are eight of them of the type shown by @DilipSarwate, namely $(x \wedge y) \vee (x \wedge z) \vee (y \wedge z)$. There are 24 more of the form $(x \wedge y \wedge z) \vee (\neg x \wedge \neg y) \vee (\neg x \wedge \neg z)$. • Prime and irredundant covers with two prime implicants. We divide them according to the number of variables appearing in the cover. • Two variables: these are the even and odd parity functions of two variables, of which there are six in total. • Three variables: These covers must be of the form $(v \wedge b) \vee (\neg v \wedge c)$, where $v$ is a variable and $b$, $c$ are literals (not of $v$). The variable $v$ can be chosen in three ways; for each choice of $v$, there are four choices of $b$ and then two choices for $c$, for a total of 24 covers. • Prime and irredundant covers with one prime implicant. The implicant is a literal; hence there are six choices in this case. In summary: $2 + (8+24) + (6+24) + 6 = 70 = \binom 8 4$ as expected. The number of covers that consist of three "blocks of two" is 8, as identified by @DilipSarwate. We now argue that the each of the 70 functions has exactly one prime and irredundant cover. This is not necessary if we are convinced we counted all the covers above, but it doesn't hurt either. Recall that the consensus of $a \wedge b$ and $\neg a \wedge c$ is $b \wedge c$. Further recall that a term of a prime cover is essential (i.e., must be part of any prime cover) if it is not covered by the disjunction of its conjunctions and consensus terms with the other prime implicants of the cover. Finally, if a prime cover consists of all essential primes, it is the unique prime and irredundant cover. All these are classic results. Since all covers listed above are made of essential primes, they are all unique. L shapes: --. or --' or '-- or .-- count: 8 T shapes: -'- or -.- count: 4 Z shapes: -.. or ..- count: 4 Each of these shapes is 3-across on the map, so you can shift each across by one to get another boolean function. total: 16 If you treat the side edges as wrap-arounds you get 32 • The wording of that question was quite confusing, but I think this is what it means... – Sean O'Brien Oct 17 '12 at 12:48 • Yeah the wording of the question is very unclear. Keep in mind that the L-blocks can be covered by only two as well. Also I think the wrap-around of the edges should be included because that wrap-around takes into account renaming the variables (ie: making the x column the y column can produce a shape which appears disconnected on the map) – Sean O'Brien Oct 17 '12 at 13:00 • yeah thats whats confusing me a little now; from my understanding of the maps you try to obtain the minimum # of blocks that covers all the 1's (which in this case is 3) but a couple of my 's', and t maps can be undertaken with 2 blocks of 2 which brings my original total down to 8 if they don't need to overlap – Z Oj Oct 17 '12 at 13:16
2021-04-12T17:57:34
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/215583/calculating-number-of-boolean-functions", "openwebmath_score": 0.7420335412025452, "openwebmath_perplexity": 411.36230373794376, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9820137879323496, "lm_q2_score": 0.8615382147637195, "lm_q1q2_score": 0.8460424057285943 }
https://fr.mathworks.com/help/matlab/ref/surface.html
Documentation ### This is machine translation Translated by Mouseover text to see original. Click the button below to return to the English version of the page. # surface Create surface object ## Syntax surface(Z) surface(Z,C) surface(X,Y,Z) surface(X,Y,Z,C) surface(x,y,Z) surface(...'PropertyName',PropertyValue,...) surface(ax,...) h = surface(...) ## Properties For a list of properties, see Surface Properties. ## Description surface is the low-level function for creating surface graphics objects. Surfaces are plots of matrix data created using the row and column indices of each element as the x- and y-coordinates and the value of each element as the z-coordinate. surface(Z) plots the surface specified by the matrix Z. Here, Z is a single-valued function, defined over a geometrically rectangular grid. The values in Z can be numeric, datetime, duration, or categorical values. surface(Z,C) plots the surface specified by Z and colors it according to the data in C (see "Examples"). surface(X,Y,Z) uses C = Z, so color is proportional to surface height above the x-y plane. surface(X,Y,Z,C) plots the parametric surface specified by X, Y, and Z, with color specified by C. The values in X, Y, and Z can be numeric, datetime, duration, or categorical values. surface(x,y,Z), surface(x,y,Z,C) replaces the first two matrix arguments with vectors and must have length(x) = n and length(y) = m where [m,n] = size(Z). In this case, the vertices of the surface facets are the triples (x(j),y(i),Z(i,j)). Note that x corresponds to the columns of Z and y corresponds to the rows of Z. For a complete discussion of parametric surfaces, see the surf function. The values in x, y, and Z can be numeric, datetime, duration, or categorical values. surface(...'PropertyName',PropertyValue,...) follows the X, Y, Z, and C arguments with property name/property value pairs to specify additional surface properties. For a description of the properties, see Surface Properties. surface(ax,...) creates the surface in the axes specified by ax instead of in the current axes (gca). The option ax can precede any of the input argument combinations in the previous syntaxes. h = surface(...) returns a primitive surface object. ## Examples collapse all Plot the function $z=x{e}^{-{x}^{2}-{y}^{2}}$ on the domain $-2\le x\le 2$ and $-2\le y\le 2$. Use meshgrid to define X and Y. Then, define Z and create a surface plot. Change the view of the plot using view. [X,Y] = meshgrid(-2:0.2:2,-2:0.2:2); Z = X.*exp(-X.^2 - Y.^2); figure surface(X,Y,Z) view(3) surface creates the plot from corresponding values in X, Y, and Z. If you do not define the color data C, then surface uses Z to determine the color, so color is proportional to surface height. Use the peaks function to define XD, YD, and ZD as 25-by-25 matrices. [XD,YD,ZD] = peaks(25); Load the clown data set to get the image data X and its associated colormap, map. Flip X using the flipud function and define the flipped image as the color data for the surface, C. C = flipud(X); Create a surface plot and display the image along the surface. Since the surface data ZD and the color data C have different dimensions, you must set the surface FaceColor to 'texturemap'. figure surface(XD,YD,ZD,C,... 'FaceColor','texturemap',... 'EdgeColor','none',... 'CDataMapping','direct') colormap(map) view(-35,45) The clown data is typically viewed with the image function, which uses 'ij' axis numbering. This example reverses the image data in the vertical direction using flipud. ## Tutorials For examples, see Representing Data as a Surface. ## Tips surface does not respect the settings of the figure and axes NextPlot properties. It simply adds the surface object to the current axes. If you do not specify separate color data (C), MATLAB® uses the matrix (Z) to determine the coloring of the surface. In this case, color is proportional to values of Z. You can specify a separate matrix to color the surface independently of the data defining the area of the surface. You can specify properties as property name/property value pairs or using dot notation. surface provides convenience forms that allow you to omit the property name for the XData, YData, ZData, and CData properties. For example, surface('XData',X,'YData',Y,'ZData',Z,'CData',C) is equivalent to surface(X,Y,Z,C) When you specify only a single matrix input argument, surface(Z) MATLAB assigns the data properties as if you specified surface('XData',[1:size(Z,2)],... 'YData',[1:size(Z,1)],... 'ZData',Z,... 'CData',Z) The axis, caxis, colormap, hold, shading, and view commands set graphics properties that affect surfaces. You can also set and query surface property values after creating them using dot notation.
2019-06-24T16:16:09
{ "domain": "mathworks.com", "url": "https://fr.mathworks.com/help/matlab/ref/surface.html", "openwebmath_score": 0.6985472440719604, "openwebmath_perplexity": 1660.585429808924, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9820137889851291, "lm_q2_score": 0.861538211208597, "lm_q1q2_score": 0.8460424031444248 }
https://web2.0calc.com/questions/whats-is-bigger-1-1-or-1-half-half-of-a-half-half-of-a-half-of-a-half-infinitely
+0 # whats is bigger 1+1 or 1 + half +half of a half + half of a half of a half infinitely 0 305 5 whats is bigger 1+1 or 1 + half +half of a half + half of a half of a half infinitely Guest Apr 22, 2015 #2 +91451 +10 These can be quite deceptive MG :) Which is bigger 1+1=2 OR $$1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{16}....$$ This is a Geometric Series,     a=1, r=1/2 = 0.5 $$S_\infty=\frac{a}{1-r}=\frac{1}{1-0.5}=\frac{1}{0.5}=2$$ So they both have exactly the same value Melody  Apr 23, 2015 Sort: #1 +4664 +10 Let's take this question slow. 1+1= 2 1+ 0.5 + 2.5 + (half of a half of a half infinitely) (half of infinite is infinite) + infinte = INFINITE i'm going to the second one MathsGod1  Apr 22, 2015 #2 +91451 +10 These can be quite deceptive MG :) Which is bigger 1+1=2 OR $$1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{16}....$$ This is a Geometric Series,     a=1, r=1/2 = 0.5 $$S_\infty=\frac{a}{1-r}=\frac{1}{1-0.5}=\frac{1}{0.5}=2$$ So they both have exactly the same value Melody  Apr 23, 2015 #3 +4664 +5 0.o I don't get it, one has infinity yet it is equivalent to 2? MathsGod1  Apr 23, 2015 #4 +81014 +5 Prove this for yourself MG1  ...   add up the following.......see that the sum is approaching 2 as we add more and more terms....... 1 + 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 + 1/128 +............ See how the series approaches 2 as more terms are added ??? {When you take Calculus later on, this will be one of the main ideas......it's known as a "limit" } CPhill  Apr 23, 2015 #5 +4664 0 Oh yes! I got the wording wrong! But each time you add those fractions I gets smaller but slowly bigger. PS. I thought he said half of infinity. Not half of a half ... Forever . MathsGod1  Apr 23, 2015 ### 19 Online Users We use cookies to personalise content and ads, to provide social media features and to analyse our traffic. We also share information about your use of our site with our social media, advertising and analytics partners.  See details
2018-01-20T01:07:10
{ "domain": "0calc.com", "url": "https://web2.0calc.com/questions/whats-is-bigger-1-1-or-1-half-half-of-a-half-half-of-a-half-of-a-half-infinitely", "openwebmath_score": 0.9404875040054321, "openwebmath_perplexity": 3279.1126063956303, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9820137884587393, "lm_q2_score": 0.8615382076534742, "lm_q1q2_score": 0.8460423991997402 }
https://math.stackexchange.com/questions/3859541/where-is-the-mistake-in-my-evaluation-of-int-fracxx22x3-dx
# Where is the mistake in my evaluation of $\int\frac{x}{x^2+2x+3}\,dx$? Here is how I did it: First, write $$\int\frac{x}{x^2+2x+3}\,dx=\int\frac{2x+2-x-2}{x^2+2x+3}\,dx=\int\frac{2x+2}{x^2+2x+3}\,dx-\int\frac{x+2}{x^2+2x+3}\,dx.$$ Now consider the integral in the minuend. Letting $$u=x^2+2x+3$$, one finds $$du=(2x+2)\,dx$$, and so $$\int\frac{2x+2}{x^2+2x+3}\,dx=\int\frac{du}{u}=\ln{|x^2+2x+3|}.$$ Next consider the other integral. Put $$t\sqrt{2}=x+1$$. Then $$dx=\sqrt 2\,dt$$. Now \begin{align*} \int\frac{x+2}{x^2+2x+3}\,dx&=\int\frac{(x+1)+1}{(x+1)^2+2}\,dx\\ &=\int\frac{t\sqrt 2+1}{2t^2+2}\sqrt 2\,dt\\ &=\frac{1}{2}\int\frac{2t+\sqrt 2}{t^2+1}\,dt\\ &=\frac{1}{2}\left(\int\frac{2t}{t^2+1}\,dt+\sqrt 2\int\frac{1}{t^2+1}\,dt\right)\\ &=\frac{1}{2}\left(\ln{|t^2+1|}+\sqrt 2\arctan t\right) \end{align*} and hence this is equal to $$\frac{1}{2}\left(\ln{\left|\frac{x^2+2x+1}{2}+1\right|}+\sqrt 2\arctan\frac{x+1}{\sqrt 2}\right)=\frac{1}{2}\left(\ln{\left|\frac{x^2+2x+3}{2}\right|}+\sqrt 2\arctan\frac{x+1}{\sqrt 2}\right)$$ therefore \begin{align*} \int\frac{x}{x^2+2x+3}\,dx&=\int\frac{2x+2}{x^2+2x+3}\,dx-\int\frac{x+2}{x^2+2x+3}\,dx\\ &=\ln{|x^2+2x+3|}-\frac{1}{2}\left(\ln{\left|\frac{x^2+2x+3}{2}\right|}+\sqrt{2}\arctan\frac{x+1}{\sqrt 2}\right)+C. \end{align*} apparently, the correct answer is $$\frac{(\ln|x^2+2x+3|)}{2}-\frac{\sqrt{2}\arctan{\frac{(x+1)}{\sqrt{2}}}}{2}+C.$$ what went wrong? • Surely that's the same as your solution? – Angina Seng Oct 10 '20 at 18:18 It seems that you made no mistake. Actually, your answer and the “correct” one are one and the same, since$$\frac12\log|x^2+2x+3|-\frac12\log\left|\frac{x^2+2x+3}2\right|$$is a constant. Recall $$\ln(a/b) = \ln(a) - \ln(b)$$, so $$\ln\left|\dfrac{x^2+2x+3}{2} \right| = \ln\dfrac{|x^2+2x+3|}{|2|} = \ln|x^2+2x+3|-\ln2$$ so, distributing the $$-\dfrac{1}{2}$$, we obtain $$\ln|x^2+2x+3|-\dfrac{1}{2}\ln|x^2+2x+3|-\dfrac{1}{2}\ln2 - \dfrac{\sqrt{2}\arctan\frac{x+1}{\sqrt 2}}{2}+C$$ which is just $$\dfrac{1}{2}\ln|x^2+2x+3| - \dfrac{\sqrt{2}\arctan\frac{x+1}{\sqrt 2}}{2}-\dfrac{1}{2}\ln 2 + C$$ and because $$-\dfrac{1}{2}\ln 2$$ is a constant, we can absorb that into $$C$$. $$\ln{|x^2+2x+3|}-\frac{1}{2}\left(\ln{\left|\frac{x^2+2x+3}{2}\right|}+\sqrt{2}\arctan\frac{x+1}{\sqrt 2}\right)+C= \ln{|x^2+2x+3|}-\frac{1}{2}\left(\ln{\left|{x^2+2x+3}\right|-\ln{2}}+\sqrt{2}\arctan\frac{x+1}{\sqrt 2}\right)+C=\frac{1}{2}\left(\ln{\left|{x^2+2x+3}\right|}- \sqrt{2}\arctan\frac{x+1}{\sqrt 2}\right)-\dfrac{1}{2}\ln{2} + C$$ $$C_{new}=-\dfrac{1}{2}\ln{2} + C$$ $$\int_{}^{} \frac{x}{x^2 +2x+3}dx=\frac{1}{2}\int_{}^{} \frac{2x+2-2}{x^2 +2x+3} dx =\frac{1}{2}([ln(|x^2 +2x+3|] +c_1) - \frac{1}{2}\int_{}^{} \frac{2}{x^2 +2x+1 +2} dx= \frac{1}{2}([ln(|x^2 +2x+3|] +c_1) - \frac{1}{2}\int_{}^{} \frac{2}{(x+1)^2 +2 } dx =\frac{1}{2}([ln(|x^2 +2x+3|]+c_1) -\frac{1}{2}\int_{}^{} \frac{1}{(\frac{x+1}{\sqrt{2}})^2 +1} dx=\frac{1}{2}([ln(|x^2 +2x+3|]+c_1) - \frac{\sqrt{2}}{2}([arct(\frac{x+1}{\sqrt{2}})] +c_2)$$ We put $$C=\frac{1}{2}(c_1-\sqrt{2}c_2 )$$.
2021-06-22T15:10:35
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3859541/where-is-the-mistake-in-my-evaluation-of-int-fracxx22x3-dx", "openwebmath_score": 0.992367148399353, "openwebmath_perplexity": 582.4028745912073, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9820137910906879, "lm_q2_score": 0.8615382040983516, "lm_q1q2_score": 0.846042397976085 }
https://mathhelpboards.com/threads/rv-uniformly-distributed.24012/
# rv uniformly distributed #### Francobati ##### New member Hello. Let $Y=1-X^2$, where $X~ U(0,1)$. What statement is TRUE? -$E(Y^2)=2$ - $E(Y^2)=1/2$ - $var(Y)=1/12$ - $E(Y)=E(Y^2)$ -None of the remaining statements. Solution: I compute: $E(Y^2)=E(1-X^2)^2=E(1+X^4-2X^2)=1+E(X^4)-2E(X^2)$, then? #### Klaas van Aarsen ##### MHB Seeker Staff member Hello. Let $Y=1-X^2$, where $X~ U(0,1)$. What statement is TRUE? -$E(Y^2)=2$ - $E(Y^2)=1/2$ - $var(Y)=1/12$ - $E(Y)=E(Y^2)$ -None of the remaining statements. Solution: I compute: $E(Y^2)=E(1-X^2)^2=E(1+X^4-2X^2)=1+E(X^4)-2E(X^2)$, then? Apply the definition of expected value. That is: $$EZ = \int z f_Z(z) \, dz$$ So with $X\sim U(0,1)$: $$E(X^2) = \int_0^1 x^2 \cdot 1 \, dx$$ #### Francobati ##### New member $E(X^2)=\frac{1^3-0^3}{3*1}$ $E(X^4)=\frac{1^5}{5}$ $E(Y^2)=1+\frac{1}{5}-2*\frac{1}{3}=1+\frac{1}{5}-\frac{2}{3}=\frac{15+3-10}{15}= \frac{8}{15}\ne2\ne\frac{1}{2}$, so first and second are false. $var(Y)=var(1-X^2)=var(1)-var(X^2)=0-var(X^2)$ $var(X)=\frac{(b-a)^2}{12}=\frac{(1-0)^2}{12}=\frac{1}{12}$ But waht formula I must appky in $E(X^2)$ and in $E(X^4)$ to obtain these values? #### Klaas van Aarsen ##### MHB Seeker Staff member I take it you mean $var(X^2)$? To find it, apply the definition of variance: $$var(Z) = E\big((Z-EZ)^2\big) = E\big(Z^2\big) - (EZ)^2$$ #### Francobati ##### New member Yes and I obtain $E(X^2)=var(X)+(E(X))^2=\frac{(b-a)^2}{12}+(\frac{a+b}{2})^2=\frac{1}{12}+(\frac{1}{2})^2=\frac{1}{12}+\frac{1}{4}=\frac{1}{3}$ This result equal to this $E(X^2)=\frac{1^3-0^3}{3(1)}= \frac{1}{3}$, how I can translate this $E(X^2)=\frac{1^3-0^3}{3(1)}$ in a general formula?
2020-07-05T06:11:51
{ "domain": "mathhelpboards.com", "url": "https://mathhelpboards.com/threads/rv-uniformly-distributed.24012/", "openwebmath_score": 0.9417774677276611, "openwebmath_perplexity": 9915.316856118947, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9820137868795701, "lm_q2_score": 0.8615382076534742, "lm_q1q2_score": 0.8460423978392256 }
https://math.stackexchange.com/questions/3129521/if-f-divides-g-in-sx-show-that-f-divides-g-in-rx-for-r-a-sub
# If $f$ divides $g$ in $S[x]$, show that $f$ divides $g$ in $R[x]$ for $R$ a sub-ring of $S$. Let $$R$$ be a sub-ring of a ring $$S$$. Let $$f,g$$ be non-zero polynomials in $$R[x]$$ and assume that the leading coefficient of $$f$$ is a unit in $$R$$. If $$f$$ divides $$g$$ in $$S[x]$$, show that $$f$$ divides $$g$$ in $$R[x]$$. Thoughts: If $$f$$ divides $$g$$ in $$S[x]$$, then $$g(x) = f(x)q(x)$$ for some $$q(x)$$ in $$S[x]$$. From here I'm not sure what the question is asking. Do I have to show that $$q(x)$$ is an element of $$R[x]$$ ? Insights appreciated. • Yes, that is precisely what it means for $f$ to divide $g$ in $R[x]$. – Servaes Feb 27 at 23:44 • @Servaes how do I know that $q(x)$ is an element of $R[x]$? – IntegrateThis Feb 27 at 23:45 • That is the point of the question. Give it some thought. – Servaes Feb 27 at 23:46 • Your title and your question are different. The title says that $S$ is a subring of $R$, but the body of your question says that $R$ is a subring of $S$. Which one is true? – stressed out Feb 27 at 23:48 • I think I found the solution. Use Euclidean division. Since the leading coefficient of $f$ is invertible, you can do it. In fact, assume that $f$ is a monic polynomial for simplicity. – stressed out Feb 27 at 23:50 Hint (Euclidean) division with remainder works because the lead coef is a unit, and the quotient and remainder are unique (same proof as when the coef ring is a field). Thus by uniqueness, the remainder on division in the subring $$R$$ is the same as in $$S$$, i.e. $$0$$. Remark Results like this can also be derived more generally from persistence of gcds, e.g. see here. Or directly $$g = f q = (u x^i\! +\! f')(s x^j\! +\! q') = us\, x^{i+j}\!+\cdots \in\! R[x]\,$$ so $$\,us\! =\! r\in R$$ so $$\,s = r/u\in R$$. Thus $$\, fq' = g - sx^kf\in R[x]\,$$ and $$\,\deg q' < \deg q\,$$ so by induction $$\,q'\in R[x]\,$$ so $$\,q\in R[x]$$ • Ok. I see. So there couldn't be a q(x), r(x) with deg(r(x))> 0 in R[x] such that g = q f + r because that would contradict the uniqueness of euclidean division? – IntegrateThis Feb 27 at 23:57 • @IntegrateThis Exactly. I added a remark and link on the gcd perspective. – Bill Dubuque Feb 27 at 23:59 • Are $R$ and $S$ assumed to be UFDs? – Servaes Feb 28 at 0:05 • @Servaes No, commutative rings – Bill Dubuque Feb 28 at 0:08 • @stressedout If $\,\deg r,\deg R < \deg f\,$ and $q f + r = Q f + R\,$ then $\,(Q-q)f = r-R.\,$ If $Q\neq q$ then, since lead coef of $f$ is a unit, $\deg {\rm LHS} \ge \deg f > \rm \deg RHS \Rightarrow\!\Leftarrow$ Therefore $\,Q = q\,$ so $\,r-R = 0.\$ – Bill Dubuque Feb 28 at 0:34 Well, as I mentioned in the comments and Bill Dubuque has explained it well in his answer, you can use the Euclidean division algorithm. Note that for the Euclidean division algorithm to work, all that you need is to know that the leading coefficient of the divisor is a unit. To see this, try to divide a polynomial $$g(x)$$ by another polynomial $$f(x)$$ of lower degree and you'll see that you can always cancel the term of highest degree in $$g(x)$$ when the leading coefficient of $$f(x)$$ is a unit. For a better insight and seeing what can go wrong when the leading coefficient is not a unit, try to divide $$3x^2+1$$ by $$2x-1$$ in $$\mathbb{Z}$$. You will immediately see that you can't get rid of $$3x^2$$ because $$2 \not\mid 3$$. However, since in this problem your leading coefficient is a unit, you can assume that $$f(x)$$ is a monic polynomial. Since $$1$$ divides anything in the ring, no such problem can arise. Addendum: without using the uniqueness of the divisor and the remainder polynomials, we can argue as follows: Suppose that $$g(x)=f(x)k(x)+r(x)$$ in $$R[x]$$. Since $$R[x] \subseteq S[x]$$, the same equation holds in $$S[x]$$. On the other hand, you had assumed that $$g(x)=f(x)q(x)$$ in $$S[x]$$; so we get that $$f(x)q(x)=f(x)k(x)+r(x)$$. Hence, $$f(x)\big( q(x) -k(x) \big) = r(x)$$. Since the leading coefficient of $$f(x)$$ is $$1$$ and $$1$$ is never a zero-divisor, unless $$q(x)-k(x) = 0$$, we have $$\deg r(x) \geq \deg f(x)$$ which is a contradiction. So, $$q(x)-k(x)=0$$ and therefore, $$r(x)=0$$. Thanks to Bill Dubuque for point out that $$f \mid r$$ still implies $$\deg r \geq \deg f$$ because $$f$$ is monic. • For a direct proof we can show by comparing lead coefs that the lead coef of the quotient $q$ is in $R$, and then inductively do the same for the rest of the quotient (rest after subtracting its lead term) - see the Remark in my answer. – Bill Dubuque Feb 28 at 1:10 • @BillDubuque Yes, I saw your answer. Your answers are sometimes so spot on and yet comprehensive that voting them up just once is not enough. :) – stressed out Feb 28 at 1:17
2019-07-20T12:03:07
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3129521/if-f-divides-g-in-sx-show-that-f-divides-g-in-rx-for-r-a-sub", "openwebmath_score": 0.8102993369102478, "openwebmath_perplexity": 172.43439284843558, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9820137889851291, "lm_q2_score": 0.8615382040983515, "lm_q1q2_score": 0.8460423961620657 }
https://mathhelpboards.com/threads/if-derivative-is-not-zero-anywhere-then-function-is-injective.7993/
# [SOLVED]If Derivative is Not Zero Anywhere Then Function is Injective. #### caffeinemachine ##### Well-known member MHB Math Scholar Hello MHB. I am sorry that I haven't been able to take part in discussions lately because I have been really busy. I am having trouble with a question. In a past year paper of an exam I am preparing for it read: Let $f: (a,b)\to \mathbb R$ be a differentiable function with $f'(x)\neq 0$ for all $x\in(a,b)$. Then is $f$ necessarily injective? I know that a function can be differentiable at all points and have a discontinuous derivative. This makes me think that $f$ is not necessarily injective. But I am not able to construct a counterexample. Can anybody help? #### Ackbach ##### Indicium Physicus Staff member I know that a function can be differentiable at all points and have a discontinuous derivative. It can? Can you come up with an example of a function that does this? For me, I think of $f(x)=|x|$. It is not differentiable at $0$; its derivative is discontinuous at the origin. #### Opalg ##### MHB Oldtimer Staff member Hello MHB. I am sorry that I haven't been able to take part in discussions lately because I have been really busy. I am having trouble with a question. In a past year paper of an exam I am preparing for it read: Let $f: (a,b)\to \mathbb R$ be a differentiable function with $f'(x)\neq 0$ for all $x\in(a,b)$. Then is $f$ necessarily injective? I know that a function can be differentiable at all points and have a discontinuous derivative. This makes me think that $f$ is not necessarily injective. But I am not able to construct a counterexample. Can anybody help? Rolle's theorem. MHB Math Scholar MHB Math Scholar Thanks. #### Ackbach ##### Indicium Physicus Staff member Another result of interest, which I found here: the Darboux theorem. If a function is differentiable, then its derivative must satisfy the Intermediate Value property. #### Deveno ##### Well-known member MHB Math Scholar Let us suppose by way of contradiction a counter-example exists. Thus we have two points $c < d \in (a,b)$ such that: $f(c) = f(d)$, but $c \neq d$. By supposition, $c$ and $d$ are, of course, interior points of $(a,b)$, and thus since $f$ is differentiable on $(a,b)$, $f$ is continuous on $[c,d]$ and differentiable on $(c,d)$. Hence we may apply the mean value theorem to deduce there exists a point $x_1 \in (c,d)$ such that: $f'(x_1) = \dfrac{f(d) - f(c)}{d - c} = 0$ violating the condition $f'(x) \neq 0$ for all $x \in (a,b)$. Thus no such pair exists, which thus means if for $c,d \in (a,b), f(c) = f(d)$, we must have $c = d$, that is, $f$ is injective. (Note this proof takes advantage of the trichotomy rule, a consequence of the order properties of $\Bbb R$).
2020-09-20T21:03:21
{ "domain": "mathhelpboards.com", "url": "https://mathhelpboards.com/threads/if-derivative-is-not-zero-anywhere-then-function-is-injective.7993/", "openwebmath_score": 0.8908919095993042, "openwebmath_perplexity": 258.866835889385, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9820137868795702, "lm_q2_score": 0.8615382040983515, "lm_q1q2_score": 0.8460423943480462 }
https://math.stackexchange.com/questions/2134903/is-there-any-mathematical-reason-for-this-digit-repetition-show
# Is there any mathematical reason for this "digit-repetition-show"? The number $$\sqrt{308642}$$ has a crazy decimal representation : $$555.5555777777773333333511111102222222719999970133335210666544640008\cdots$$ Is there any mathematical reason for so many repetitions of the digits ? A long block containing only a single digit would be easier to understand. This could mean that there are extremely good rational approximations. But here we have many long one-digit-blocks , some consecutive, some interrupted by a few digits. I did not calculate the probability of such a "digit-repitition-show", but I think it is extremely small. Does anyone have an explanation ? • Hint: $308642=(5000^2+2)/9^2$. Feb 8 '17 at 13:09 • In interestingly the prime factorization of this number is $2 \times 154321$ I wonder if the 54321 has anything to do with it? Feb 8 '17 at 13:18 • On a related note, see Schizophrenic number Feb 10 '17 at 10:29 • Did this come up as an actual problem or just for fun? Feb 10 '17 at 20:52 • @BrianRisk Just for fun! Feb 11 '17 at 14:01 The architect's answer, while explaining the absolutely crucial fact that $$\sqrt{308642}\approx 5000/9=555.555\ldots,$$ didn't quite make it clear why we get several runs of repeating decimals. I try to shed additional light to that using a different tool. I want to emphasize the role of the binomial series. In particular the Taylor expansion $$\sqrt{1+x}=1+\frac x2-\frac{x^2}8+\frac{x^3}{16}-\frac{5x^4}{128}+\frac{7x^5}{256}-\frac{21x^6}{1024}+\cdots$$ If we plug in $$x=2/(5000)^2=8\cdot10^{-8}$$, we get $$M:=\sqrt{1+8\cdot10^{-8}}=1+4\cdot10^{-8}-8\cdot10^{-16}+32\cdot10^{-24}-160\cdot10^{-32}+\cdots.$$ Therefore \begin{aligned} \sqrt{308462}&=\frac{5000}9M=\frac{5000}9+\frac{20000}9\cdot10^{-8}-\frac{40000}9\cdot10^{-16}+\frac{160000}9\cdot10^{-24}+\cdots\\ &=\frac{5}9\cdot10^3+\frac29\cdot10^{-4}-\frac49\cdot10^{-12}+\frac{16}9\cdot10^{-20}+\cdots. \end{aligned} This explains both the runs, their starting points, as well as the origin and location of those extra digits not part of any run. For example, the run of $$5+2=7$$s begins when the first two terms of the above series are "active". When the third term joins in, we need to subtract a $$4$$ and a run of $$3$$s ensues et cetera. • @Peter It is quite common to unaccept an answer after a better answer appears. Doing so helps guide readers to the best answer, which is often not the highest voted one, due to many factors, e.g. earlier answers usually get more votes, and less technical answers usually get more votes from hot-list activity (as here). This is currently (by far) the best explanation you have. Feb 8 '17 at 20:35 • For the record: The reason I support Peter's decision to accept the architect's answer is that mine is building upon it. Without the observation that $5000/9$ is an extremely good approximation I most likely would not have bothered, and most certainly would not have come up with this refinement. IMHO Math.SE works at its best, when different users add different points of view refining earlier answers. The voters very clearly like both the answers. Sunshine and smiles to all! Feb 10 '17 at 7:36 • @JyrkiLahtonen strong agreement. I love to see answers working in tandem, and the checkmark doesn't give all that many points. Best to have the answer that others build on be the one that people read first :) Feb 10 '17 at 8:13 • You may enjoy applying your skills to the Schizophrenic numbers. ;) Feb 10 '17 at 10:35 • @JyrkiLahtonen's answer is the correct one. The repeating digits can be inferred from the Taylor expansion of the square root. the_architect's answer is merely an observation. Mar 12 '18 at 10:37 Repeated same numbers in a decimal representation can be converted to repeated zeros by multiplication with $9$. (try it out) so if we multiply $9 \sqrt{308642} = \sqrt{308642 \times 81} = \sqrt{25 000 002}$ since this number is allmost $5000^2$ it has a lot of zeros in its decimal expansion • Superb answer! (+1) Feb 8 '17 at 13:19 • And the underlying reason here is the series expansion $$\sqrt{a^2+x} = a + \frac{1}{2a}x - \frac{1}{(2a)^3}x^2 + \frac2{(2a)^5}x^3 - \frac{5}{(2a)^7}x^4 + \cdots$$ which can be derived from the generalized binomial theorem. When $2a$ is a large power of $10$, this gives a nice decimal representation of the square root. Feb 8 '17 at 14:04 • To check that this is the "right" explanation I'd find it good to have other, similar examples. And here is another one: $\sqrt{1975308642} = 44444.44444472222222222135416666667209201388884650336371564...$ which can be explained by noting that $1975308642 = (400000^2 + 2)/9^2$. Feb 8 '17 at 14:37 • It is important to note that $25000002$ is not only almost $5000^2$, but also $>5000^2$. Otherwise the same argument would work for $$\sqrt{30864\color{red}1}\approx 555.55467777708433223768894721... .$$ Does not look so very nice! It is because $81\times 308641<5000^2$, but still close to $5000^2$. Jan 25 '18 at 9:57
2022-01-16T21:42:36
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2134903/is-there-any-mathematical-reason-for-this-digit-repetition-show", "openwebmath_score": 0.6596417427062988, "openwebmath_perplexity": 404.9026268813655, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9678992932829918, "lm_q2_score": 0.8740772482857833, "lm_q1q2_score": 0.8460187508905518 }
http://mathhelpforum.com/algebra/31827-two-exponent-root-problems-print.html
# Two exponent and root problems • March 23rd 2008, 05:32 PM ZBomber Two exponent and root problems I just need help with one problem on my homework... Both the directions are to simplify First one: Fifth root of x cubed all over Seventh root of x to the 4th I didnt know how to do that one at all... Second one (Fourth root of x cubed times Fourth root of x to the fifth) with a -2 exponent outside of the paranthesis For that one, my answer was 1 over square root of x to the 8th. Thanks so much to anyone who responds.... any help would be appreciated, let me know if anything needs to be further/more clearly explained. • March 23rd 2008, 06:20 PM o_O 1. $\frac{\sqrt[5]{x}}{\sqrt[7]{x^{4}}}$ Roots can be expressed as exponents: $a^{\frac{m}{n}} = \sqrt[n]{x^{m}} = \left(\sqrt[n]{x}\right)^{m}$ Then simplify as you would with whole number exponents. 2. Show us your work because I'm not sure how you arrived at that answer. • March 23rd 2008, 06:31 PM ZBomber Quote: Originally Posted by o_O 1. $\frac{\sqrt[5]{x}}{\sqrt[7]{x^{4}}}$ Roots can be expressed as exponents: $a^{\frac{m}{n}} = \sqrt[n]{x^{m}} = \left(\sqrt[n]{x}\right)^{m}$ Then simplify as you would with whole number exponents. 2. Show us your work because I'm not sure how you arrived at that answer. Thank you! For the second one... First I multiplied what was in the paranthesis.... so that would be fourth root of x to the 8th. I didnt really know what to do after this. I know that when you have a negative exponent, if it is on the top of a fraction it will go on the bottom and vice versa. But I wasnt really sure how to simplify the fourth root of x to the 8th... maybe when you multiply 8 and -2 you get the fourth root of x to the -16th... and to reduce that further it could possibly be x to the -4th? So maybe it would be 1 over x to the fourth power? I'm not sure, I'm really lost on this one. Sorry if all the text is confusing by the way, I'm not sure how to format my posts to use fractions and roots. • March 23rd 2008, 06:33 PM Soroban Hello, ZBomber We must change the roots and powers into fractional exponents . . . Quote: $1)\;\;\frac{\sqrt[5]{x^3}}{\sqrt[7]{x^4}}$ We have: . $\frac{x^{\frac{3}{5}}}{x^{\frac{4}{7}}}$ Then (as expected) subtract exponents: . $x^{\frac{3}{5}-\frac{4}{7}} \;=\;x^{\frac{21}{35} - \frac{20}{35}} \;=\;x^{\frac{1}{35}}$ Quote: $2)\;\;\left(\sqrt[4]{x^3}\cdot\sqrt[4]{x^5}\right)^{-2}$ We have: . $\left(x^{\frac{3}{4}}\cdot x^{\frac{5}{4}}\right)^{-2} \;=\;\left(x^{\frac{3}{4}+\frac{5}{4}}\right)^{-2} \;=\;\left(x^{\frac{8}{4}}\right)^{-2} \;=\;\left(x^2\right)^{-2} \;= \;x^{-4} \;=\;\frac{1}{x^4}$ • March 23rd 2008, 06:35 PM ZBomber Quote: Originally Posted by Soroban Hello, ZBomber We must change the roots and powers into fractional exponents . . . We have: . $\frac{x^{\frac{3}{5}}}{x^{\frac{4}{7}}}$ Then (as expected) subtract exponents: . $x^{\frac{3}{5}-\frac{4}{7}} \;=\;x^{\frac{21}{35} - \frac{20}{35}} \;=\;x^{\frac{1}{35}}$ We have: . $\left(x^{\frac{3}{4}}\cdot x^{\frac{5}{4}}\right)^{-2} \;=\;\left(x^{\frac{3}{4}+\frac{5}{4}}\right)^{-2} \;=\;\left(x^{\frac{8}{4}}\right)^{-2} \;=\;\left(x^2\right)^{-2} \;= \;x^{-4} \;=\;\frac{1}{x^4}$ So I got the second one right the second time! Ok, thanks a lot! Those answers look spot on to me! (Nod)
2014-09-19T19:43:49
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/algebra/31827-two-exponent-root-problems-print.html", "openwebmath_score": 0.8961395025253296, "openwebmath_perplexity": 945.4647272532674, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.967899295134923, "lm_q2_score": 0.8740772400852111, "lm_q1q2_score": 0.8460187445719547 }
https://forum.math.toronto.edu/index.php?PHPSESSID=o2p636jud5c8pf6shputvetgg3&action=printpage;topic=2404.0
# Toronto Math Forum ## MAT244--2020F => MAT244--Lectures & Home Assignments => Chapter 2 => Topic started by: Julian on September 28, 2020, 12:45:14 PM Title: W3L3 Exact solutions to inexact equations Post by: Julian on September 28, 2020, 12:45:14 PM In week 3 lecture 3, we get the example $(-y\sin(x)+y^3\cos(x))dx+(3\cos(x)+5y^2\sin(x))dy=0$. We determine that this equation is not exact, but that we can make it exact by multiplying the equation by $y^2$. We then find the general solution to the new equation is $y^3\cos(x)+y^5\sin(x)=C$. My question is why is this good enough? We didn't answer the original question. We answered a modified version which we chose specifically because it seems easier to us. Shouldn't we still find a solution to the original equation $(-y\sin(x)+y^3\cos(x))dx+(3\cos(x)+5y^2\sin(x))dy=0$? Is there some way we can "divide out" $y^2$ from $y^3\cos(x)+y^5\sin(x)=C$ to get it? Title: Re: W3L3 Exact solutions to inexact equations Post by: Victor Ivrii on September 29, 2020, 04:31:19 AM In week 3 lecture 3, we get the example $(-y\sin(x)+y^3\cos(x))dx+(3\cos(x)+5y^2\sin(x))dy=0$. We determine that this equation is not exact, but that we can make it exact by multiplying the equation by $y^2$. We then find the general solution to the new equation is $y^3\cos(x)+y^5\sin(x)=C$. My question is why is this good enough? We didn't answer the original question. We answered a modified version which we chose specifically because it seems easier to us. Shouldn't we still find a solution to the original equation $(-y\sin(x)+y^3\cos(x))dx+(3\cos(x)+5y^2\sin(x))dy=0$? Is there some way we can "divide out" $y^2$ from $y^3\cos(x)+y^5\sin(x)=C$ to get it? We have not modified equation, but simply multiplied it by an integrating factor. These two equations are equivalent, except as $y=0$, that means $C=0$. But $y=0$ os also a solution to the original equation. Checking this would give you a 100% correct solution, otherwise it is almost perfect
2022-01-28T11:31:52
{ "domain": "toronto.edu", "url": "https://forum.math.toronto.edu/index.php?PHPSESSID=o2p636jud5c8pf6shputvetgg3&action=printpage;topic=2404.0", "openwebmath_score": 0.8055546879768372, "openwebmath_perplexity": 285.01695067762836, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9678992914310603, "lm_q2_score": 0.87407724336544, "lm_q1q2_score": 0.8460187445094239 }
http://www.cig-beauty.com/3zk0oib/icuity.php?id=6797f7-yield-rate-formula
Using the APY formula, you can compare several interest rates which have varying compounding periods. Another term for effective yield is APY, or annual percentage yield. I = $10,000 x 0.12 x 1 . Assume that the price of the bond is$940 with the face value of bond $1000. Lockheed Martin Corporation has$900 million $1,000 per value bonds payable carrying semi-annual coupon rate of 4.25%. Formula. Money market yield is the rate of return on highly liquid investments with a maturity of less than one year. 1. You must consider the interest rate, the time period of your investment, and the kind of interest involved. Managers can also use the product yield formula to calculate how many units their production process must create to deliver a specific number of good units. Assume that the annual coupons are$100, which is a 10% coupon rate, and that there are 10 years remaining until maturity. The formula for the current yield is – Annual Coupon Payment / Current Bond Price. The total first time yield is equal to FTYofA * FTYofB * FTYofC * FTYofD or 0.9000 * 0.8889 * 0.9375 * 0.9333 = 0.7000. Running yield. The formula to calculate dividend yield, therefore, is =D4/D3. The formula for the EAR is: Effective Annual Rate = (1 + (nominal interest rate / number of compounding periods)) ^ (number of compounding periods) – 1 . The concept of bond yield is very important to understand as it is used in the assessment of its expected performance. Bond prices fluctuate in value as they are bought and sold in the secondary market. Step 3: Finally, the formula for current yield can be derived by dividing the bond’s coupon payment expected in the next one year (step 1) by its current market price (step 2) as shown below. The current yield formula is used to determine the yield on a bond based on its current price. The formula to use will be: Click here to download the sample Excel file. Both par value and periodic coupon payments constitute the potential future cash flows. You're left with a rate of return or "net yield" when you subtract these expenses. It’s easy to work out the rental yield for your property by using our simple rental yield calculator sum. Just as when working out … An account states that its rate is 6% compounded monthly. Based on this information, you are required to calculate the approximate yield to maturity. The capital gains yield and dividend yield is combined to calculate the total stock return. Imagine you received 200 resumes from an agency and only 5% of them passed through your screening call phase. Test a smaller range of interest rates to determine a precise interest rate. Formula for Yield Yield is a measure of cash flow that an investor gets on the amount invested in a security. The rate, or r, would be .06, and the number of times compounded would be 12 as there are 12 months in a year.When we put this into the formula we have Formula To Calculate the Yield Rate(Selection Rate) Starting at the stage of receiving the application and ending at recruitment of the candidate, you can calculate the yield rate (selection rate) by the following formula: Formula: Let us understand it with a small example. Calculate the bond’s current yield if the bond trades at a premium price of $1,020, The bond trades at par and The bond trades at a discounted price of$980. For this example, the current yield formula would be shown as. 1. Percent Yield Formula . This will give you a precise calculation of the yield to maturity. Recommended Articles. Therefore, for the given coupon rate and market price, the YTM of the bond is 3.2%. The expenses or operational costs associated with an investment property can be significant and can include acquisition and transactions costs, management fees, repairs and maintenance costs, rates … The bond is bought at a price of 95 and the redemption value is 100, here it pays the interest on a quarterly basis. Calculating dividend growth in Excel (Current dividend amount ÷ Previous dividend amount) – 1. The formula for capital gains yield does not include dividends paid on the stock, which can be found using the dividend yield. Mathematically, it is represented as, Current Yield = Coupon Payment in Next One Year / Current Market Price * 100% Example of … The n in the annual percentage yield formula would be the number of times that the financial institution compounds. = YIELD(settlement, maturity, rate, pr, redemption, frequency, [basis])This function uses the following arguments: 1. In general, analysts use the term "effective yield" to refer to the annual yield, which is helpful in comparing assets that pay more than once a year. Yield to Maturity (… The term “yield to maturity” or YTM refers to the return expected from a bond over its entire investment period until maturity. Redemption yield. When analyzing which of several savings investments is best, you need to compare their annual rates of yield (APY). Coupon on the bondwill be $1,000 * 8% which is$80. Formula To Calculate the Yield Rate(Selection Rate) Starting at the stage of receiving the application and ending at recruitment of the candidate, you can calculate the yield rate (selection rate) by the following formula: Formula: Let us understand it with a small example. That is, if we provide rate < 0; pr. On the other hand, the term “current yield” means the current rate of return of the bond investment computed on the basis of the coupon payment expected in the next one year and the current market price. The formula then subtracts that number by one. These factors are used to calculate the price of the bond in the primary market. Perform financial forecasting, reporting, and operational metrics tracking, analyze financial data, create financial models, we often calculate the yield on a bond to determine the income that would be generated in a year. YTM is calculated using the formula given below. 2. The coupon rate of a bond usually remains the same; however, the changes in interest rate markets encourage investors to constantly change their required rate of return (Current yield). Conversely, if interest rates decline (the market yield declines), then the price of the bond should rise (all else being equal). Also implicit in the formula derivation, but not obvious in its final form, is that growth from the valuation date until the next review is taken into account in the formula. As a general rule in financial theory, one would expect a higher premium, or return, for a riskier investment. Use a specific formula to figure out the discount yield on your Treasury Bill. By taking the time to learn and master these functions, you’ll significantly speed up your financial modeling. In materials science and engineering, the yield point is the point on a stress-strain curve that indicates the limit of elastic behavior and the beginning of plastic behavior. This has been a guide to YIELD function in Excel. Definition of First Time Yield (FTY): The number of good units produced divided by the number of total units going into the process. Rental yield calculator. To help them calculate the yield rate for multiple job posts easily, we have created a very simple Yield Rate Calculator with predefined formulas. YTM is used in the calculation of bond price wherein all probable future cash flows (periodic coupon payments and par value on maturity) are discounted to present value on the basis of YTM. Yield is the ratio of annual dividends divided by the share price. In most cases, for most small businesses, you can use a basic formulas, such as Y = (I)(G) + (I)(1-G)(R), to calculate yield. The equation for percent yield is: percent yield = (actual yield/theoretical yield) x 100% Where: actual yield is the amount of product obtained from a chemical reaction; theoretical yield is the amount of product obtained from the stoichiometric or balanced equation, using the limiting reactant to determine product; Units for both actual and theoretical yield … To calculate Dividend high yield, we require a dividend amount and stock price. The formula for current yield is expressed as expected coupon payment of the bond in the next one year divided by its current market price. The coupon rate is also known as the interest rate. Start with 6.9 percent, and decrease the annual interest rate amount by a tenth of a percent each time. There is also TIPS (Treasury Inflation Protected Securities), also known as Inflation Linked fixed income. To understand the uses of the function, let’s consider an example: We can use the function to find out the yield. The current yield formula can be used along with the bond yield formula, yield to maturity, yield to call, and other bond yield formulas to compare the returns of various bonds.The current yield formula may also be used with risk ratings and calculations to compare various bonds. Bank discount yield (or simply discount yield) is the annualized rate of return on a purely discount-based financial instrument such as T-bill, commercial paper or a repo. This example using the approximate formula would be The forward rate formula helps in deciphering the yield curve which is a graphical representation of yields on different bonds having different maturity periods. Step 4: Finally, the formula for the bond price can be used to determine the YTM of the bond by using the expected cash flows (step 1), number of years until maturity (step 2) and bond price (step 3) as shown below. ≤ 0; redemption ≤ 0; frequency is any number other than 1, 2, or 4; or [basis] is any number other than 0, 1, 2, 3, or 4. Interest can be compounded daily, monthly, or annually. The yield of a bond is inversely related to its price today: if the price of a bond falls, its yield goes up. It is calculated to compare the attractiveness of investing in a bond with other investment opportunities. which would return a current yield … To learn more, check out these additional CFI resources: To master the art of Excel, check out CFI’s FREE Excel Crash Course, which teaches you how to become an Excel power user. Example. For example, you could assess an external agency’s services as a candidate source. The annual percentage yield formula would be applied to determine what the effective yield would be if the account was compounded given the stated rate. Step 2: Next, figure out the current market price of the bond. By rearranging the above expression, we can work out the formula for yield to maturity on a zero-coupon bond: $$\text{s} _ \text{n}=\text{YTM}=\left[\left(\frac{\text{FV}}{\text{P}}\right)^\frac{\text{1}}{\text{n}\times \text{m}}-\text{1}\right]\times \text{m}$$ The yield to maturity calculated above is the spot interest rate (s n) for n years. It is the date when the security expires. Any of the arguments provided is non-numeric. Relevance and Use. to take your career to the next level and move up the ladder! It is calculated by multiplying the holding period return with a factor of 360/t where t is the number of days between the issue date and maturity date of the investment. Let us take the example of a 3-year $1,000 bond that will pay annual coupons at a rate of 5%. The formula for current yield involves two variables: annual cash flow and market price. Yield vs. Interest Rate: An Overview . Formula to Calculate Bond Equivalent Yield (BEY) The formula is used in order to calculate the bond equivalent yield by ascertaining the difference between the bonds nominal or face value and its purchase price and these results must be divided by its price and these results must be further multiplied by 365 and then divided by the remaining days left until the maturity date. THE CERTIFICATION NAMES ARE THE TRADEMARKS OF THEIR RESPECTIVE OWNERS. Yield (college admissions), a statistic describing what percent of applicants choose to enroll; Yield, by Pearl Jam; Yield sign, a traffic sign; Yield, a … The YTM formula is used to calculate the bond’s yield in terms of its current market price and looks at the effective yield of a bond based on compounding. The yield strength or yield stress is a material property and is the stress corresponding to the yield point at which the material begins to deform plastically. Now putting these values in the formula, we will get, Dividend Growth vs. High Yield High yield is the rate calculated by comparing the amount of money the company is paying to its shareholders against the market value of the security in which the shareholders invest. Yield to Maturity (YTM) – otherwise referred to as redemption or book yield – is the speculative rate of return or interest rate of a fixed-rate security, such as a bond. Maturity (required argument) – This is the maturity date of the security. This cheat sheet covers 100s of functions that are critical to know as an Excel analyst, The financial analyst job description below gives a typical example of all the skills, education, and experience required to be hired for an analyst job at a bank, institution, or corporation. Perform financial forecasting, reporting, and operational metrics tracking, analyze financial data, create financial models. For example, assume a 30-year bond is issued on January 1, 2010 and is purchased by a buyer six months later. For example: You have a process of that is divided into four sub-processes – A, B, C and D. Assume that you have 100 units entering process A. The results of the formula are expressed as a percentage. The formula follows: APY = (1 + r/n) n – 1. A higher APY usually offers the greater yield for investing. It is mostly computed on an annual … We'll use the same presumptions here: Monthly rent is$2,400 and the property is unoccupied 5 percent of the year. Rate (required argument) – The annual coupon rate. In the Yield & rate of interest cells, it is formatted to show a percentage with decimal places. You may also look at the following articles to learn more –, All in One Financial Analyst Bundle (250+ Courses, 40+ Projects). The periodic yield is the yield for the period (i.e., month, semiannual), while the effective yield is the return every year. The results of the formula are expressed as a percentage. This formula means the purchase price (PP) of the bill is subtracted from the face value (FV) of the bill at maturity. It will calculate the yield on a security that pays periodic interest. It is calculated by multiplying the holding period return with a factor of 360/t where t is the number of days between the issue date and maturity date of the investment. As recommended by Microsoft, the date arguments were entered as references to cells containing dates. This guide has examples, screenshots and step by step instructions. The settlement date provided is greater than or equal to the maturity date. Example. The issue date would be January 1, 2010, the settlement date would be July 1, 2010, and the maturity date would be January 1, 2040, which is 30 years after the January 1, 2010 issue date. This low yield could signify a problem. The formula for current yield is expressed as expected coupon payment of the bond in the next one year divided by its current market price. This cheat sheet covers 100s of functions that are critical to know as an Excel analyst. Bond yield is the amount of return an investor will realize on a bond, calculated by dividing its face value by the amount of interest it pays. Bond D has a coupon rate of 3 percent and is currently selling at a discount. However, YTM is not current yield – yield to maturity is the discount rate which would set all bond cash flows to the current price of the bond. You can use the following Bond Yield Formula Calculator, This is a guide to Bond Yield Formula. Yield Rate is effective in big companies where the recruiting ratios are high. Mathematically, the formula for bond price using YTM is represented as. Current Yield is calculated using the formula given below, Current Yield = Coupon Payment / Current Market Price * 100%. Bond pricing formula depends on factors such as a coupon, yield to maturity, par value and tenor. The relevance of the Current yield formula can be seen in evaluating multiple bonds of the same risk & maturity. Select the cell “C15” where YIELD function needs to be applied. The yield to maturity formula, also known as book yield or redemption yield, is used in finance to calculate the yield of a bond at the current market price. 3. Enter the bond's trading price, face or par value, time to maturity, and coupon or stated interest rate to compute a current yield. © 2020 - EDUCBA. Cash-on-Cash Rental Yield . This is because the annual percentage yield is a type of … You posted the job on a job portal of your choice and received 185 CV. The formula then expands that number by the same investment-compound period. In this case, 70/100 = 0.70 or 70% yield. Its formula is i = [1 + (r/n)]n – 1. The price of a bond is $920 with a face value of$1000 which is the face value of many bonds. Putting … The formula for current yield involves two variables: annual cash flow and market price. Click the insert function button (fx) under the formula toolbar, a dialog box will appear, type the keyword “YIELD” i… We also provide a Bond Yield calculator with a downloadable excel template. = YIELD(settlement, maturity, rate, pr, redemption, frequency, [basis]). You can see how the yield of the bond is significantly lower than the coupon rate being offered on it, just because you are having to pay a premium on it. Yield . In the example shown, the formula in F6 is: = YIELD(C9, C10, C7, F5, C6, C12, C13) with these inputs, the YIELD function returns 0.08 which, or 8.00% when formatted with the percentage number format. Example. Now we'll say that you put $60,000 in cash into the detail, so you borrowed$240,000. Both bonds make annual payments, have a YTM of 5 percent, and ha; Such a scenario is not unrealistic and it can happen when the interest rates in the economy fall. First pass yield (FPY), also known as throughput yield (TPY), is defined as the number of units coming out of a process divided by the number of units going into that process over a specified period of time. Or, if the stock price drops to Rs 25, its dividend yield rises to 4%. Calculate the YTM of the bond if its current market price is $1,050. Let us understand the calculation with the help of an example. Dividend Yield Formula Among Companies. Thanks for reading CFI’s guide to the Excel YIELD function. If the amount earned from the investment was$750, the yield rate would be 7.5 percent. Current Yield Calculator. Effective yield is also termed as annual percentage yield or APY and is the return generated for every year. The term “bond yield” refers to the expected rate of return from a bond investment. However, Company A entered the marketplace a long time ago, while Company B is a relatively new company. Yield is different from the rate of return, as the return is the gain already earned, while yield is the prospective return. The bond yield is primarily of two types-, Start Your Free Investment Banking Course, Download Corporate Valuation, Investment Banking, Accounting, CFA Calculator & others. This means that approximately 1/3rd of the CV was useful out of a total of 150 applications. Yield to Maturity (YTM) – otherwise referred to as redemption or book yield – is the speculative rate of return or interest rate of a fixed-rate security, such as a bond. Entering dates. Dividend Yield Formula If a stock’s dividend yield isn’t listed as a percentage or you’d like to calculate the most-up-to-date dividend yield percentage, use the dividend yield formula. On this page is a bond yield calculator to calculate the current yield of a bond. Therefore, I = $1,200. 4. Yield (finance), a rate of return for a security; Dividend yield and earnings yield, measures of dividends paid on stock; Other uses. Divide the amount of money earned from the investment by the initial investment. By closing this banner, scrolling this page, clicking a link or continuing to browse otherwise, you agree to our Privacy Policy, Download Bond Yield Formula Excel Template, New Year Offer - Finance for Non Finance Managers Training Course Learn More, You can download this Bond Yield Formula Excel Template here –, Finance for Non Finance Managers Course (7 Courses), 7 Online Courses | 25+ Hours | Verifiable Certificate of Completion | Lifetime Access, Investment Banking Course(117 Courses, 25+ Projects), Financial Modeling Course (3 Courses, 14 Projects), Calculation of Current Yield of Bond Formula, Finance for Non Finance Managers Training Course, Current Market Price =$50 / $1,020 * 100%, Current Market Price =$50 / $1,000 * 100%. The formula for bond’s current yield can be derived by using the following steps: Step 1: Firstly, determine the potential coupon payment to be generated in the next one year. The formula is based on the principle that despite constant coupon rate until maturity the expected rate of return of the bond investment varies based on its market price, which is a reflection of how favorable is the market for the bond. Net yield is the income return on an investment after expenses have been deducted. The function is generally used to calculate bond yield. The bond has a coupon rate of 9%, and it pays annually, while its current market value is$97. S easy to work out the current yield involves two variables: cash! Same industry period of your choice and received 185 CV Start with 6.9 percent, and operational tracking! Entered the marketplace a long time ago, while yield is most often used in example... Is represented as have varying compounding periods calculate bond yield is relevant for the... 2.73 % on this information, you can calculate the bond in the primary.. Be compounded daily, monthly, or annual percentage yield price, the YTM the. It will calculate the yield rate for this example, you need to compare their annual rates of yield maturity. Six months later bondwill be $9 ( 9 % *$ 100 ) / current market price List the... Yield ” refers to the Next level and move up the ladder the that... = [ 1 + ( r/n ) n – 1. dividend yield to. When a security that pays periodic interest an Excel analyst of its expected performance and master these functions you. Trademarks of their RESPECTIVE OWNERS variables: annual cash flow and market price * 100.. Interest, List of the cell containing the function step by step instructions, so you borrowed $.! Current market price * 100 % is used in the yield rate would be 7.5 percent yield rate formula big check., Company a and Company B is a relatively new Company the security same industry test a range... Excel for Finance guide will teach the yield rate formula 10 formulas and functions must... A specific formula to calculate approximate yield yield rate formula maturity the year uses the rate of return a! Not valid dates including an estimated formula to calculate YTM ) on the variables entered this. Result from the rate of return on highly liquid investments with a maturity of 12 years it annually! X T, the time period of your investment, and shortcuts to become confident in your financial modeling &! 7 percent into the formula are expressed as a percentage their annual rates of to... R is the case, will be$ 1,000 12 years percent, and property! Is used in the form of par value higher premium, or return, as basis. These expenses a downloadable Excel template below-given data for calculation of bond yield formula be! Currently selling at a rate of interest cells, it is formatted to show percentage... That approximately 1/3rd of the security is traded to the formatting of bond. Such an important measure of a property investment ’ s take an example to understand the with. Or YTM refers to the Next level and move up the ladder [ ]! Will calculate the yield to maturity formatted with the help of an example bond pricing formula on... As such, bond yield is relevant for managing the portfolio of a property investment ’ guide! Invalid numbers for the rate, the yield function needs to be.! Depends on factors such as a percentage but shows no decimal places Excel financial functionsFunctionsList of the yield! Next one year inputs, the YTM calculator for a riskier investment 2,400 and the kind interest. Yield does not include dividends paid on the amount invested in a bond 3.2. By step instructions below the yield function needs to be a great financial in... And step by step instructions can compare several interest rates in the yield on a job of. Returns 0.08 which, or 8.00 % when formatted with the help of an.... Formula, you can use the below-given data for calculation of the year periodic coupon payments the... The results of the bond ’ yield rate formula easy to work out the discount yield on security. Previous dividend amount and stock price left with a maturity of less than one year / current price! Screenshots and step by step instructions with no rework or repairs are counted as out! Compute yield to maturity, rate, pr, redemption, frequency, return. Varying compounding periods traded to the Excel rate function appears to be.... (.04 ) yields on different bonds having different maturity periods secondary market use. Return expected from a bond a better explanation plus the yield & rate of %! Are high to use will be: Click here to download the sample Excel file not valid dates we! Also help you figure out the rental yield has become such an important measure of a bond $with! Imagine you received 200 resumes from an agency and only 5 % marketplace a long time ago while... Percentage with decimal places does the running yield that the ( fixed ) coupon delivers on the ’... Also help you figure out the rental yield has become such an important measure of cash flow and market is. Effectiveness of recruitment sources with yield rate/selection rate given below, annual coupon Payment in Next one year current! Here to download the sample Excel file Thus, by applying the formula given below, coupon. Is often due to the Excel yield function needs to be applied its return based on the ( )! Ytm is represented as yield = coupon rate expected performance “ bond yield calculator sum net yield '' you! Ytm ) on the bondwill be$ 1,000 bond that pays a periodic interest List... In remains at $27,360 financial analyst in Excel ( current dividend amount ) – the annual yield. You received 200 resumes from an agency and only 5 % rates which have varying compounding periods was 750... Rate as a coupon rate of return on highly liquid investments with a face value of 1000. In big companies check the effectiveness of recruitment sources with yield rate/selection rate we provide rate 0... Yield along with practical examples can calculate the approximate yield to maturity rate. Forecasting, reporting, and it pays annually, while its current market value$. The financial institution compounds to become confident in your financial modeling market value is $2,400 and the is! An agency and only 5 % you must consider the interest rate amount by a buyer six months.. Another term for effective yield is a measure of a property investment s... Is purchased by a tenth of a bond yield is most often used in the of. Coupon, yield to maturity ( required argument ) – this is one key reason rental... In Excel ( current dividend amount ÷ Previous dividend amount ÷ Previous dividend amount ) this. An estimated formula to use will be$ 9 ( 9 % * 100! Curve which is a date after the security is traded to the Next level and move the... [ ( FV - PP ) /FV ] * [ 360/M ] to use will be Click. Need to compare their annual rates of yield to maturity formula a each! ) price paid portal of your choice and received 185 CV no decimal.... Formulas, functions, you are required to calculate its return based on its current price. Understand the calculation with the percentage number format examples, screenshots and step by instructions! You a precise calculation of bond yield formula calculator, this is maturity. Including an estimated formula to calculate the YTM of the current yield formula $9 9... Provide rate < 0 ; pr formula uses the rate of 5 % of passed... By Microsoft, the yield on your Treasury Bill change formula and return... Following is information related to Company a and Company yield rate formula for FY 2018 both... * 8 % with a downloadable Excel template Fictional Furniture wants to produce 80 salable chairs a day here monthly. With a maturity of less than one year, in this case, will be$ 9 9! A job portal of your choice and received 185 CV career to the Excel rate function appears be... Is APY, or annual percentage yield coupon on the variables yield rate formula this! ) /FV ] * [ 360/M ] variable ) price paid financial institution.... To maturity, par value and yield rate formula coupon payments constitute the potential future cash.! $750, the date arguments were entered as references to cells containing dates examples screenshots. Sources are effective 10 formulas and functions you must know to be the value or... Term for effective yield is the date arguments were entered as references cells... Net rental yield Start with 6.9 percent, and the kind of interest rates in the market... Calculate the bond has a coupon rate of 5 % of them passed through your screening call phase compounded. On your Treasury Bill security matures/expires then expands that number by the number of periods an,... Shown as over its entire investment period until maturity ) on the has. Both par value is$ 80 determine a precise calculation of the yield function in Excel value! Of an individual process effectiveness of recruitment sources with yield rate/selection rate sample... Maturity ( … the current yield is different from the investment by the risk... Coupon rate of return from a bond based on the bond if its current market value is \$ 80 candidate... Compounded daily, monthly, or return, for a riskier investment will also compute yield to maturity to confident! Buyer purchases a security such as the interest rates to determine a precise interest rate as candidate. Annual coupon rate * par value and tenor compare their annual rates of yield to maturity, see! Units with no rework or repairs are counted as coming out of 5-year. Interventional Cardiology Fellowship 2021 2022, Kbco Com Breckenridgebrewery, Yarn App Won T Open, Estates At Inspiration, Durham County Tax Rate, Public Mining In New Hampshire, Romancing Saga 3 Translation, Family Guy Meg's Wedding,
2022-05-23T02:44:04
{ "domain": "cig-beauty.com", "url": "http://www.cig-beauty.com/3zk0oib/icuity.php?id=6797f7-yield-rate-formula", "openwebmath_score": 0.41934362053871155, "openwebmath_perplexity": 1506.9982128632864, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.967899295134923, "lm_q2_score": 0.8740772351648677, "lm_q1q2_score": 0.8460187398095578 }
https://www.hpmuseum.org/forum/showthread.php?mode=linear&tid=1072&pid=111507
A (quite) accurate ln1+x function, or "how close can you get" part II 04-09-2014, 06:44 PM (This post was last modified: 04-09-2014 07:28 PM by Dieter.) Post: #1 Dieter Senior Member Posts: 2,398 Joined: Dec 2013 A (quite) accurate ln1+x function, or "how close can you get" part II Over the last days there has been some discussion regarding the accurate evaluation of the TVM equation. For small interest rates a dedicated ln1+x function allows significantly better accuracy than the standard method. But such a function is missing on many calculators, as well as its counterpart, a special ex–1 function. There have been some suggestions on how to emulate a sufficiently accurate ln1+x function on calculators that do not offer one in their function set. On the one hand there is the widely know classic approach as suggested by W. Kahan in the HP-15C Advanced Functions Handbook, on the other hand some other ways based on hyperbolic functions have been suggested, both for ln1+x and ex–1. I did some accuracy tests with these methods, the results have been posted in another thread. Among 100.000 random numbers between 1 and 10–15 these methods showed errors of about 5...9 units in the last place. I wanted to know if this can be improved, so I tried a new approach. It does not require any exotic hyperbolics and is based on a Taylor series: Let   $$u = 1+x$$ rounded Then $$ln(1+x) \simeq ln u - \frac{(u-1) - x}{u}$$ I did a test with this method, using the WP34s emulator with 16 digit standard precision. The following program was used to generate 100.000 random numbers between 1 and 10–16. Dependig on Flag A, either the classic HP/Kahan method (Flag A set) or the new method (Flag A clear) is used. Code: 001 LBL D 002 CLSTK 003 STO 01 004 STO 02 005 # 001 006 SDL 005  ' 100.000 loops 007 STO 00 008 , 009 4 010 7 011 1 012 1        ' set seed = 0,4711 013 SEED 014 LBL 55 015 RAN# 016 #016 018 × 019 +/- 020 10^x 021 STO 03 022 XEQ 88   'call approximation 023 RCL 03 024 LN1+x 025 - 026 RCL L 027 ULP 028 / 029 STO↓ 01    'largest negative error in R01 030 STO↑ 02    'largest positive error in R02 031 DSE 00 032 GTO 55 033 RCL 01 034 RCL 02     'return largest errors 035 RTN 036 LBL 88 037 FS? A        ' Select method based on Flag A 038 GTO 89 039 ENTER        ' New method as suggested above 040 INC X 041 ENTER 042 DEC X 043 RCL- Z 044 x<>Y 045 / 046 +/- 047 RCL L 048 LN 049 + 050 RTN 051 LBL 89   ' HP/Kahan method as suggested in HP-15C AFH 052 ENTER 053 INC X 054 LN 055 x<>Y 056 RCL L 057 1 058 x≠? Y 059 - 060 / 061 × 062 RTN And here are the results: Select HP/Kahan method: f [SF] [A] The = symbol appears Start: [D] "Running PrOGrAM"... Result in x and y: largest positive error: +8 ULP largest negative error: –5 ULP This matches the error level reported earlier. Now let's see how the new method compares: Select new method: g [CF] [A] The = symbol disappears Start: [D] "Running PrOGrAM"... Result in x and y: largest positive error: +2 ULP largest negative error: –1 ULP That looks much better. Further tests showed the following error distribution: –2 ULP: 0 –1 ULP: 10553 ±0 ULP: 74440 +1 ULP: 14996 +2 ULP: 11 Edit: Another run with 1 milliion random numbers shows the same pattern: –2 ULP: 0 –1 ULP: 105163 ±0 ULP: 744963 +1 ULP: 149766 +2 ULP: 108 So nearly 99,99% of the results are within ±1 ULP. What do you think? Dieter 04-10-2014, 04:47 AM Post: #2 htom trites Junior Member Posts: 33 Joined: Dec 2013 RE: A (quite) accurate ln1+x function, or "how close can you get" part II I'm curious about monotonic and inverse behaviors. For a and a+ULP, is (f(a+ULP) - f(a)) positive, zero, or negative. Hopefully they'd all be positive, but it can happen that there's a place where you get a string of zeros where f(a) is changing much slower than a. You shouldn't ever find a negative. The value of the error of a-inverse(function(a)) and a-function(inverse(a)) would ideally be always zero, of course, but unless both the function and the inverse are absolutely monotonic that won't happen. 04-11-2014, 07:01 PM Post: #3 Dieter Senior Member Posts: 2,398 Joined: Dec 2013 RE: A (quite) accurate ln1+x function, or "how close can you get" part II (04-10-2014 04:47 AM)htom trites Wrote:  For a and a+ULP, is (f(a+ULP) - f(a)) positive, zero, or negative. Hopefully they'd all be positive I assume you mean: "in this case", i.e. for f(x) = ln(1+x). (04-10-2014 04:47 AM)htom trites Wrote:  but it can happen that there's a place where you get a string of zeros where f(a) is changing much slower than a. You shouldn't ever find a negative. In a monotonically increasing function, yes. Hm, what about a test with a milliion random numbers? I did one just out of curiosity. There were no negatives. (04-10-2014 04:47 AM)htom trites Wrote:  The value of the error of a-inverse(function(a)) and a-function(inverse(a)) would ideally be always zero, of course, but unless both the function and the inverse are absolutely monotonic that won't happen. I won't happen either in real life calculators with limited accuracy. ;-) Consider for instance sqrt(x) and its inverse x² with, say, 10 digits: 1,414213562 < sqrt(2) < 1,414213563 1,414213562² = 1,999999998 1,414213563² = 2,000000001 So a – inverse(function(a)) is either 2 ULP low or 1 ULP high, although both f(a) and its inverse are strictly monotonic. I did some more tests of the ln1+x approximation suggested above. There is one weak point for negative x between –9,5 · 10n and –10n–1, where n is the working precision (number of significant digits). Here the suggested approximation is typically 5 ULP off, so in this small interval it's not better than the original HP/Kahan method. Otherwise it seems to work fine. Dieter 01-31-2019, 07:04 PM (This post was last modified: 02-01-2019 01:30 PM by Albert Chan.) Post: #4 Albert Chan Senior Member Posts: 682 Joined: Jul 2018 RE: A (quite) accurate ln1+x function, or "how close can you get" part II (04-11-2014 07:01 PM)Dieter Wrote:  I did some more tests of the ln1+x approximation suggested above. There is one weak point for negative x between –9,5 · 10n and –10n–1, where n is the working precision (number of significant digits). Here the suggested approximation is typically 5 ULP off, so in this small interval it's not better than the original HP/Kahan method. Otherwise it seems to work fine. Excess ULP error is due to correction *lowering* decimal exponent. It does not limited to the edge of working precision. (note: above exponents had the sign wrong) Example, crossing -0.001 boundary: -0.001 = LN(1 - 0.0009995001666 ...), so try around the edge, say X = -0.00099950016 LN(1+X) = LN( 0.9990004998 ) = -1.000000033e-3 (error ~ 0.4 ulp) correction = -(X+1-1-X) / (1+X) = +4.004002001e-11 (all digits correct) log1p(X) ~ LN(1+X) + correction = -9.9999999930e-4 (error = 4 ULP, exponent down 1) Actual error, either absolute (4e-13) or relative (4e-10) are not affected. 02-01-2019, 04:26 PM Post: #5 Albert Chan Senior Member Posts: 682 Joined: Jul 2018 RE: A (quite) accurate ln1+x function, or "how close can you get" part II (01-31-2019 07:04 PM)Albert Chan Wrote:  Excess ULP error is due to correction *lowering* decimal exponent. To avoid excess ULP error, we like correction same sign as X Y = 1+X, rounded-toward 1.0 log1p(X) ~ LN(Y) - (Y-1-X)/Y Previous example, log1p(X = -0.00099950016) : Y = round-toward-1 of 1+X = 0.9990004999 (10 digits) log1p(X) ~ LN(Y) - (Y-1-X)/Y = -9.999999333e-4 - 6.006003001e-11 = -9.999999934e-4 (all digits correct) « Next Oldest | Next Newest » User(s) browsing this thread: 1 Guest(s)
2019-09-16T20:16:46
{ "domain": "hpmuseum.org", "url": "https://www.hpmuseum.org/forum/showthread.php?mode=linear&tid=1072&pid=111507", "openwebmath_score": 0.5131170749664307, "openwebmath_perplexity": 3858.6388992111283, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9678992905050947, "lm_q2_score": 0.8740772236840656, "lm_q1q2_score": 0.84601872465047 }
https://math.stackexchange.com/questions/2511765/why-does-lim-x-to-infty-frac1-sqrtx-0-yet-int-1-infty-frac1
# Why does $\lim_{x\to\infty}{\frac{1}{\sqrt{x}}}=0$, yet $\int_1^\infty \frac{1}{\sqrt{x}} \mathrm d x$ diverges? As far as I am aware, $\displaystyle \int_1^\infty \displaystyle \frac{1}{\sqrt{x}} \mathrm d x$ diverges due to the $p$ test, meaning that the series $\displaystyle \sum_{x=1}^\infty \displaystyle \frac{1}{\sqrt{x}}$ also diverges (integral comparison test). But this would mean that $\displaystyle \lim_{x\to\infty}{\frac{1}{\sqrt{x}}}=0,$ which isn't true for a diverging series? EDIT: Isn't there a theorem about how the series $\displaystyle \sum_{x=0}^\infty a_n$ converges for a sequence $a_n$ if $\displaystyle \lim_{n\to\infty} a_n=0$? EDIT #2: I've found the caveat to this theorem, so I suppose the explanation about the rate of increase of $\displaystyle \frac{1}{\sqrt{x}}$ suffices • Divergent things can have terms that go to 0, they just don't go to 0 fast enough. – Randall Nov 9 '17 at 4:02 • Sure it is true... there are a lot of series that diverge to 0 but their sum goes to infinity; think of the harmonic series (sum of 1/n for all n) – E-A Nov 9 '17 at 4:03 • @Randall I'm aware, but this seems to conflict with the theorem that I mentioned in my edit. – user98937 Nov 9 '17 at 4:19 • There is no such theorem. – Randall Nov 9 '17 at 4:22 • If $\sum_n a_n$ converges then $\lim_n a_n=0$. The converse is false. All dogs are mammals but horses exist. – Randall Nov 9 '17 at 4:23 A sum like $\displaystyle\sum_1^\infty \frac{1}{n^p}$ is called a $p$-series (for "power" law), and it converges for $p>1$, but diverges for $p<1.$ For the boundary case $p=1,$ the series is called the harmonic series, which also diverges. Powers higher than one make fractions below one get smaller, which enhances convergence, while powers less than one get those fractions bigger, closer to one, which does not help convergence. So it's not too surprising that there is some cutoff below which the power laws do not converge, despite going to zero termwise. Although your question was about the integral, not the sum, it turns out that the convergence of the $p$-series sum $\displaystyle\sum_1^\infty \frac{1}{n^p}$ and the integral $\displaystyle\int_1^\infty \frac{dx}{x^p}$ is the same. I hope you don't mind the shift in context. So the upshot is, it's not enough that the function go to zero. A reciprocal power law only converges if its power is greater than one, so that it's going to zero faster than $\frac{1}{n}$. So how do we reconcile this fact with the statement: the series $\displaystyle \sum_{x=0}^\infty a_n$ converges for a sequence $a_n$ if $\displaystyle \lim_{n\to\infty} a_n=0$? Well the statement, as written, is not correct. For example, the harmonic series has terms $a_n=\frac{1}{n}$ with $\lim a_n = 0,$ and yet the sum $\sum^\infty\frac{1}{n}$ and integral $\int^\infty\frac{dx}{x}$ do not converge. So where did you get the idea? Well, the inverse (if a statement is an implication $p\to q$, then its inverse is $\neg p\to\neg q$) of this statement is a theorem, sometimes called the divergence test If $\displaystyle \lim_{n\to\infty} a_n\neq 0,$ then $\displaystyle \sum_{x=0}^\infty a_n$ does not converge. Going to zero termwise is not sufficient to guarantee convergence, but failing to go to zero is sufficient to guarantee divergence. And in general, the truth of a statement $p\to q$ does not guarantee the truth of the converse $q\to p$ nor the inverse $\neg p \to \neg q.$ Although it is a common mistake to assume that they do. One statement which you can conclude is the contrapositive: $\neg q\to\neg p.$ Thus an alternate true statement we can make is: If $\displaystyle \sum_{x=0}^\infty a_n$ converges, then it follows that $\displaystyle \lim_{n\to\infty} a_n = 0.$ • Yeah...but it seems to conflict with the theorem that I mentioned in my edit. – user98937 Nov 9 '17 at 4:22 • @user98937 let me address in an edit of my own – ziggurism Nov 9 '17 at 4:22 • thanks, I managed to revisit the theorem's definition again and saw the flaw in the reasoning, but your answer was equally helpful – user98937 Nov 9 '17 at 4:31
2019-08-23T10:02:27
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2511765/why-does-lim-x-to-infty-frac1-sqrtx-0-yet-int-1-infty-frac1", "openwebmath_score": 0.9273957014083862, "openwebmath_perplexity": 233.8350374281329, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.988491852291787, "lm_q2_score": 0.8558511451289037, "lm_q1q2_score": 0.8460018837345171 }
https://math.stackexchange.com/questions/3314742/expectation-of-nonnegative-random-variable-when-passed-through-nonnegative-incre
# Expectation of nonnegative random variable when passed through nonnegative increasing differentiable function I am having trouble proving the following result: Let $$X$$ be a nonnegative random variable and $$g:\mathbb{R}\rightarrow\mathbb{R}$$ a nonnegative strictly increasing differentiable function. Then $$\mathbb{E}g(X)=g(0)+\int_{0}^{\infty}g^{\prime}(x)\mathbb{P}(X>x)dx$$ I know that it should follow using integration by parts, but using integration by parts in the more abstract setting of probability is a bit confusing to me. Details would be appreciated. $$\mathbb E[g(X)] = \mathbb E[\int_0^{g(X)}dt] = \mathbb E[\int_{g(0)}^{g(X)}dt + \int_0^{g(0)} dt ] = \mathbb E[\int_{g(0)}^{g(X)}dt ] + \mathbb E[g(0)] = \mathbb E[\int_{g(0)}^{g(X)}dt ] + g(0)$$ Now using the definition of expectation, we get: \begin{align*} \mathbb E\left[\int_{g(0)}^{g(X)}dt \right] &= \int_\Omega \int_{g(0)}^{g(X)}dt d\mathbb P(\omega) = \int_\Omega \int_0^\infty \chi_{(g(0),g(X(\omega))}(t)dtd\mathbb P(\omega)\\& = \int_0^\infty \int_\Omega \chi_{(g(0),g(X(\omega))}(t)d\mathbb P(\omega) dt =\int_0^\infty \mathbb P( g(0) < t Use of Fubini due to all things being nonnegative (so we can swap order of integration). Now, last thing $$\mathbb P( t \in (g(0),g(X)) = \mathbb P( 0 So, we get $$\int_{g(0)}^\infty \mathbb P( g(0) < t And we get $$\mathbb E[g(X)] = g(0) + \int_0^\infty g'(s)\mathbb P(X>s)ds$$ • (+1)Nice answer using the change of variable in the last step to produce the derivative of $g$. – Feng Aug 6 '19 at 2:25 • I am with until this line: $\int_{g(0)}^{\infty}\mathbb{P}(0<g^{-1}(t)<X)dt=\int_{0}^{\infty}g^{\prime}(s)\mathbb{P}(s<X)ds$ There is a substitution happening here, but I don't see how you are getting from a preimage to a derivative. Aug 6 '19 at 2:27 • Is the substitution $s=g^{-1}(t)$? Aug 6 '19 at 2:29 • @RobertThingum Yes. It is the subsitution $s=g^{-1}(t)$. Then $t=g(s)$. – Feng Aug 6 '19 at 2:52 • @RobertThingum Yes, sorry. Maybe I should have explained more. $g^{-1}$ exists, cause $g$ is stricly monotone. If $c = \sup\{X(\omega) : \omega \in \Omega\}$, then $\int_{g(0)}^{\infty} \mathbb P( g^{-1}(t) \in (0,X) )dt = \int_{g(0)}^{g(c)} \mathbb P(g^{-1}(t) \in (0,X)) dt$. Now after substitution $t = g(s)$, we get $dt = g'(s)ds$ and limits of integral from $0$ to $c$, so $\int_0^c g'(s)\mathbb P( s \in (0,X))ds$. But $s>0$, so it's the same as $\int_0^c g'(s) \mathbb P(s < X) ds$. As $c = \sup\{X(\omega) : \omega \in \Omega\}$, it holds that for $x>c: \mathbb P(x<X)=0$.Hencelimit $\infty$ Aug 6 '19 at 14:15 The general result is: Claim: Let $$g$$ be differentiable. If $$g$$ and $$g'$$ are bounded, then $$\bbox[5px,border:2px solid red] {E[g(X)]=g(0) +\int_0^\infty g'(x)P(X>x)\,dx-\int_{-\infty}^0 g'(x)P(X\le x)\, dx.}$$ The result also holds if $$g$$ is monotonic, provided the RHS is not $$\infty-\infty$$. Proof: First suppose that $$X$$ is nonnegative. Write $$g(X)-g(0)\stackrel{(1)}=\int_0^X g'(t)\,dt\stackrel{(2)}=\int_0^\infty g'(t) I_{X>t}\,dt.$$ Equality (1) is the fundamental theorem of calculus (remember $$g$$ is differentiable), while (2) is valid because the indicator random variable $$I_{X>t}$$ has value $$1$$ when $$t, and equals zero otherwise. Take expectation: $$E[g(X)-g(0)]=E\left[\int_0^\infty g'(t) I_{X>t}\,dt\right]\stackrel{(3)}=\int_0^\infty g'(t)E[I_{X>t}]\,dt\stackrel{(4)}=\int_0^\infty g'(t)P(X>t)\,dt.$$ Identity (3) is the result of Fubini's theorem. In (4) we recognize that the expectation of the indicator of an event is the probability of the event. Next, suppose $$X$$ is nonpositive. A similar argument shows $$E[g(X)-g(0)] = -\int_{-\infty}^0 g'(t)P(X\le t)\,dt.$$ For general $$X$$, write $$g(X)-g(0) = g(X^+)-g(0)+g(-X^-)-g(0)$$ where $$X^+ := XI(X>0)$$ is the positive part of $$X$$ and $$X^-:=-XI(X<0)$$ is the negative part. Apply the previous special cases to obtain $$E[g(X^+) -g(0)]= \int_0^\infty g'(t)P(X^+>t)\,dt$$ and $$E[g(-X^-)-g(0)]=-\int_{-\infty}^0 g'(t)P(-X^- \le t)\,dt.$$ To conclude, note that $$\{X^+>t\}=\{X>t\}$$ when $$t>0$$, and $$\{-X^-\le t\} = \{X\le t\}$$ when $$t<0$$.
2021-10-21T16:21:19
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3314742/expectation-of-nonnegative-random-variable-when-passed-through-nonnegative-incre", "openwebmath_score": 0.9859448671340942, "openwebmath_perplexity": 612.1086865816055, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9884918533088547, "lm_q2_score": 0.8558511414521922, "lm_q1q2_score": 0.8460018809705763 }
https://math.stackexchange.com/questions/2201462/how-many-ways-can-you-rearrange-the-individuals-in-a-row-so-that-soma-and-eric-d/2201474
# How many ways can you rearrange the individuals in a row so that Soma and Eric don't sit next to each other? Book: Probability For Dummies®, 2006, Rumsey, Deborah, PhD, Published by, Wiley Publishing, Inc., page 82 -- Extract from Google Books Problem: "Suppose you have four friends named Jim, Arun, Soma, and Eric. How many ways can you rearrange the individuals in a row so that Soma and Eric don't sit next to each other?" Question: How can I generalize a way to find the answer? In other words, I understand why 4! is involved, but how do I generalize finding 6 from (24-6)=18 to, for example, 7 seats and 3 people can't sit next to each other? Update: If k is number of spots (4), p is # of people that can't sit together (2), and x is the # of locations in k where p can begin (3), then is this true? Answer = $k!-(x \times p!)$ I've attached an image, which I hope is correct. • The idea is to treat Soma and Eric as one unit, then apply the normal formulae. – Parcly Taxel Mar 24 '17 at 16:18 • How do you mean to generalize? – kingW3 Mar 24 '17 at 16:20 • kingW3, I just added the following text to the question: "how do I generalize finding 6 from (24-6)=18 to, for example, 7 seats and 3 people can't sit next to each other?" – mellow-yellow Mar 24 '17 at 16:40 "Suppose you have four friends named Jim, Arun, Soma, and Eric. How many ways can you rearrange the individuals in a row so that Soma and Eric don't sit next to each other?" Your answer of $18$ is incorrect. Method 1: Subtract the number of seating arrangements in which Soma and Eric sit next to each other from the total number of seating arrangements. There are four positions to fill with four different people, so they can be arranged in a row in $4!$ orders, as you realized. Now we count arrangements in which Soma and Eric sit together. We treat them as a unit, which means we have three objects to arrange, Jim, Arun, and the unit consisting of Soma and Eric. We can arrange these three objects in a row in $3!$ ways. However, the unit consisting of Soma and Eric can be arranged internally in $2!$ ways. Hence, the number of seating arrangements in which Soma and Eric sit together is $3!2!$. Thus, the number of seating arrangements in which Soma and Eric do not sit together is $$4! - 3!2! = 24 - 6 \cdot 2 = 24 - 12 = 12$$ Method 2: We arrange Jim and Arun, then insert Soma and Eric so that they do not sit in adjacent seats. Jim and Arun can be arranged in $2!$ ways. In each case, we have three spaces in which to place Soma and Eric, indicated by the empty squares below. $$\square \text{Jim} \square \text{Arun} \square$$ or $$\square \text{Arun} \square \text{Jim} \square$$ To ensure Soma and Eric do not sit together, we must choose two of the three spaces in which to place them. We can then arrange Soma and Eric within these chosen spaces in $2!$ ways. Hence, there are $$2! \cdot \binom{3}{2} \cdot 2! = 2! \cdot \frac{3!}{2!1!} \cdot 2! = 2! \cdot 3! = 2 \cdot 6 = 12$$ permissible seating arrangements. In how many ways can seven people be seated if a particular group of three people cannot be seated next to each other? We use the second method. Rather than naming the people, we will use colored balls as placeholders. Arrange four blue balls in a row. This creates five spaces in which to insert three green balls, as indicated by the positions of the squares below. $$\square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square$$ To ensure that no two of the green balls are adjacent, we choose three of the five spaces in which to place the green balls, which we can do in $\binom{5}{3}$ ways. Now number the balls from left to right. The positions occupied by the green balls are the seating positions of the people who are not to sit in adjacent seats. For instance, if we place green balls in the positions by the first, third, and fourth squares, the people who are not to sit next to each other will occupy the first, fourth, and sixth seats in the row. $$\color{green}{\bullet} \color{blue}{\bullet} \color{blue}{\bullet} \color{green}{\bullet} \color{blue}{\bullet} \color{green}{\bullet} \color{blue}{\bullet}$$ The other four people can be arranged in the positions occupied by the blue balls in $4!$ ways. The people who are not to sit next to each other can be arranged in the three positions occupied by the green balls in $3!$ ways. Thus, the number of permissible seating arrangements is $$\binom{5}{3}4!3! = 10 \cdot 24 \cdot 6 = 1440$$ How can we generalize this to $n$ people if a particular group of $m$ people do not sit next to each other? We place $n - m$ blue balls in a row. This creates $n - m + 1$ spaces in which to place green balls ($n - m - 1$ between successive blue balls and two at the ends of the row). To ensure that no two people from the group of $m$ people sit in adjacent seats, we must choose $m$ of these $n - m + 1$ spaces in which to insert a green ball, which we can do in $\binom{n - m + 1}{m}$ ways. We then number the balls from left to right. Again, the numbers on the green balls represent the positions of the people who do not sit next to each other. The $n - m$ people who sit in seats whose numbers appear on a blue ball can be arranged in those seats in $(n - m)!$ ways. The $m$ people who sit in seats whose numbers appear on a green ball can be arranged in those seats in $m!$ ways. Hence, the number of permissible seating arrangements is $$\binom{n - m + 1}{m}(n - m)!m! = \frac{(n - m + 1)!}{(n - 2m + 1)!m!} \cdot (n - m)!m! = \frac{(n - m + 1)!(n - m)!}{(n - 2m + 1)!}$$ You can verify that this formula is correct by setting $n = 4$ and $m = 2$ in the first problem and $n = 7$ and $m = 3$ in the second problem. • Because you wrote, "Your answer of 18 is incorrect" and the book's author wrote the answer, the author's answer is incorrect. Thank you for this level of detail! – mellow-yellow Mar 24 '17 at 19:23 Hint - One easy method to do these type of problems is = Total ways - Number of ways Soma and Eric sit together. I will try to put a generalization formula here. Please correct me if I am wrong: Lets say we have $n$ people in a line, where $m$ certain people are not next to each other. (Assume $n \gt m$) This can be solved using the complement. $n! - m!(n-m+1)!$ • Something seems off with my answer now that I'm looking at it. I just don't know what – WaveX Mar 24 '17 at 16:55 • Nevermind my answer follow the one supplied by N. F. Taussig – WaveX Mar 24 '17 at 17:54
2020-05-31T08:17:06
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2201462/how-many-ways-can-you-rearrange-the-individuals-in-a-row-so-that-soma-and-eric-d/2201474", "openwebmath_score": 0.7558878660202026, "openwebmath_perplexity": 199.94383996606769, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9884918502576514, "lm_q2_score": 0.855851143290548, "lm_q1q2_score": 0.8460018801764001 }
https://vamshij.com/blog/probability-puzzlers/probability-puzzlers/
## When human flesh begins to fail Consider $N$ people, all independently flipping their own fair coins. If each flips his or her coin $n$ times, then what is the probability that all $N$ people get the same number of heads? ### Solution Probability of one person getting $k$ heads in $n$ flips = ${n \choose k} \left(\frac{1}{2}\right)^n$. Probability of all the $N$ people getting $k$ heads = $\left({n \choose k} \left(\frac{1}{2}\right)^n\right)^N$. Therefore, the probability of all the $N$ people getting the same number of heads is $\mathbb{P}(N,n) = \sum_{k=0}^n \left({n \choose k} \left(\frac{1}{2}\right)^n\right)^N$. ## Who pays for the coffee? If each of $N$ people desires a cup of coffee, then each one individually flips, a fair coin simultaneously with the others, to determine the one person who will pay for all $N$ cups. If all the coins but one show the same face, then the odd person out is the one who pays. If any other combination of heads and tails shows on the coins, then all $N$ people flip again. On average, how many flips are required to get an odd person out when $N$ people play with fair coins? Suppose $N — 1$ people have fair coins and the Nth person has a biased coin, i.e., a coin that shows heads with probability $q$ and tails with probability $1 — q$. How does this change the theoretical result? ### Solution The probability of identifying the odd one out in a turn is the probability of getting $N-1$ tails or $N-1$ heads out of $N$ flips = $2\frac{N}{2^n} = \frac{N}{2^{N-1}}$. The number of turns is a random variable with a Geometric Distribution where $p = \frac{N}{2^{N-1}}$. Therefore, the average number of turns required to identify the odd one out is given by $\frac{2^{N-1}}{N}$. It can be easily shown that the presence of a biased coin has no effect. The average duration of a game, therefore, will also be unchanged. ### Solution with one biased coin We have $N — 1$ people, each with a fair coin, and an $N^{th}$ person with a coin biased such that $\mathbb{P}(heads) = q$ and $\mathbb{P}(tails) = 1 — q$. To get an odd person out on a given simultaneous flipping, $N — 1$ of them must get one result and one get the other result. This can happen in the following ways: 1. The $N — 1$ people with fair coins all get heads and the person with the biased coin gets tails; the probability of this is $\frac{1}{2^{N-1}}(1-q)$. 2. The $N — 1$ people with fair coins all get tails and the person with the biased coin gets heads; the probability of this is $\frac{1}{2^{N-1}}q$. 3. The person with the biased coin gets heads, as do $N — 2$ of the $N — 1$ people with fair coins, and the remaining person with a fair coin gets tails. Since there are $N — 1$ ways to pick the person with a fair coin who is the odd person out, then the probability of this is $q\frac{1}{2^{N-2}}(N-1)\frac{1}{2}$. 4. The person with the biased coin gets tails, as do $N — 2$ of the $N — 1$ people with fair coins, and the remaining person with a fair coin gets heads. Since there are $N — 1$ ways to pick the person with a fair coin who is the odd person out, then the probability of this is $(1-q)\frac{1}{2^{N-2}}(N-1)\frac{1}{2}$. So, the total probability of an odd person out is the sum of the above probabilities = $\frac{N}{2^{N-1}}$. ### Computational Verification from scipy.stats import bernoulli from numpy import hstack runs = 1000 N = 6 p, q = 0.5, 0.9 tl = 0 for _ in range(runs): l = 0 while True: l += 1 flips = hstack((bernoulli.rvs(p, size=N-1), bernoulli.rvs(q, size=1))) if sum(flips)==1 or sum(flips)==(N-1): break tl += l print(tl/runs) ### Problem 1 Two urns (let’s call them $I$ and $II$) each contain $n$ balls. Initially, at time $t = 0$, all of the balls in I are black and all of the balls in II are white. Then, at time $t=1$ (in arbitrary units), a ball is selected at random from each urn and instantaneously placed in the other urn. This select-and-transfer, or exchange process, is repeated at times $t = 2, 3, \dots.$ At any given time each urn always contains $n$ balls, but only at $t = 0$ are the colours of the balls in a given urn necessarily identical. The words “selected at random” mean that the probability of selecting a black ball from an urn containing $b$ black balls is $b/n$. At any given time, the state of both urns is completely determined by specifying the number of black balls in $I$ (or the number of white balls in $II$).What are the state transition probabilities? ### Simulation import altair as alt alt.renderers.enable('default') from random import random import pandas as pd n = 1000 t = 4999 b1, b2 = n, 0 history = [(b1/n,b2/n)] for _ in range(t): tb1, tb2 = 0, 0 if random() < b1/n: tb1 = 1 if random() < b2/n: tb2 = 1 b2 += (tb1-tb2) b1 += (tb2-tb1) history.append((b1/n, b2/n)) source = pd.DataFrame({ 't': range(t+1), 'fb1': [fb1 for fb1,_ in history] }) alt.Chart(source).mark_line().encode( x='t', y='fb1' )
2022-12-06T00:38:26
{ "domain": "vamshij.com", "url": "https://vamshij.com/blog/probability-puzzlers/probability-puzzlers/", "openwebmath_score": 0.7952309846878052, "openwebmath_perplexity": 440.1902895742038, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9884918489015607, "lm_q2_score": 0.8558511414521923, "lm_q1q2_score": 0.8460018771985888 }
http://math.stackexchange.com/questions/769366/how-many-non-empty-subsets-of-1-2-n-satisfy-that-the-sum-of-their-eleme
# How many non empty subsets of {1, 2, …, n} satisfy that the sum of their elements is even? The question I am working on is the case for $n$ = 9. How many non-empty subsets of $\{1,2,...,9\}$ have that the sum of their elements is even? My solution is that the sum of elements is even if and only if the subset contains an even number of odd numbers. Since this is precisely half of all of the subsets the answer is $\frac{2^{9}}{2}=2^8$. Then the question specifies non-empty so final answer is $2^8-1$. Is this correct? In general I guess the solutions is $2^{n}-1$. My problem is why do exactly half of the total amount of subsets have and even number of odd numbers? Can we set up a bijection between subsets with odd number of odd numbers and even number of odd numbers? - Let $S$ be a subset of $\{0,1,2,\dots,9\}$, possibly empty. Note that $1+2+\cdots +9=45$. So the sum of the elements of $S$ is even if and only if the sum of the elements of the complement of $S$ is odd. Divide the subsets of $\{1,2,\dots,9\}$ into complementary pairs. There are $2^8$ such pairs, and exactly one element of each pair has even sum. Thus there are $2^8$ subsets with even sum, and $2^8-1$ if we exclude the empty set. Remark: Suppose that $1+2+\cdots+n$ is odd. This is the case when $n\equiv 1\pmod{4}$ and when $n\equiv 2\pmod{4}$. Then the same argument shows that there are $2^{n-1}$ subsets with even sum. We can use another argument for the general case. Note that there are just as many subsets of $\{1,2,\dots,n\}$ that contain $1$ as there are subsets that do not contain $1$. And for any subset of $A$ of $\{2,3,\dots,n\}$, we have that $A$ has even sum if and only if $A\cup\{1\}$ has odd sum, and $A$ has odd sum if and only if $A\cup\{1\}$ has even sum. Thus in general there are $2^{n-1}$ subsets with even sum. The bijection between even-summed sets and odd-summed sets was quite natural when $n\equiv 1\pmod{4}$ or $n\equiv 2\pmod{4}$. In the general case, there is a nice bijection (add or subtract $\{1\}$), but it is less natural. - Let's first count all subsets of $\{1,\ldots,n\}$ with even sum. Removing the empty sets then makes us have to subtract one from this result. The subsets of $\{1,\ldots,n\}$ with even sum are one-to-one with the subsets of $\{2,\ldots,n\}$. For any set $J\subset\{2,\ldots,n\}$, if the sum of $J$ is even, then $J$ is a subset of $\{1,\ldots,n\}$ with even sum, while if the sum of $J$ is odd, then $\{1\}\cup J$ is a subset with even sum. Since there are $2^{n-1}$ subsets of $\{2,\ldots,n\}$, this is the number of subsets of $\{1,\ldots,n\}$ with even sum. Remove the empty set, and you get $2^{n-1}-1$. -
2015-07-07T05:22:29
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/769366/how-many-non-empty-subsets-of-1-2-n-satisfy-that-the-sum-of-their-eleme", "openwebmath_score": 0.9676309823989868, "openwebmath_perplexity": 65.21317962013498, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9907319853876677, "lm_q2_score": 0.8539127492339909, "lm_q1q2_score": 0.8459986733964334 }
https://cracku.in/84-ram-and-shyam-form-a-partnership-with-shyam-as-wor-x-xat-2012
Question 84 # Ram and Shyam form a partnership (with Shyam as working partner) and start a business byinvesting 4000 and 6000 respectively. The conditions of partnership were as follows:1. In case of profits till 200,00 per annum, profits would be shared in the ratio of the invested capital.2.Profits from 200,001 till 400,000 Shyam would take 20% out of the profit, before the division of remaining profits, which will then be based on ratio of invested capital.3.Profits in excess of 400,000, Shyam would take 35% out of the profits beyond 400,000, before the division of remaining profits, which will then be based on ratio of invested capital.If Shyam’s share in a particular year was 367000, which option indicates the total businessprofit (in ) for that year? Solution Ratio of profits earned by Ram : Shyam = 4000 : 6000 = 2 : 3 If profit < 2,00,000 % of profit earned by Shyam = $$\frac{3}{5} \times$$ 100 = 60% If 2,00,000 < profit < 4,00,000, he gets 20 % and 60 % of the remaining profit. % of profit earned by Shyam = 20% + .80 $$\times$$ 60% = 68% If profit > 4,00,000 % of profit earned by Shyam = 35 % + .65 $$\times$$ 60% = 74% Now, for first 2,00,000 profit earned by Shyam = $$\frac{60}{100} \times$$ 2,00,000 = Rs. 1,20,000 For second 2,00,000 profit earned by Shyam = $$\frac{68}{100} \times$$ 2,00,000 = Rs. 1,36,000 Let total profit earned by them = Rs. (4,00,000 + $$x$$) => From $$Rs. x$$ profit, Shyam received = 3,67,000 - 1,20,000 - 1,36,000 = Rs. 1,11,000 => $$\frac{74}{100} \times x$$ = 1,11,000 => $$x$$ = 1,11,000 $$\times \frac{100}{74}$$ = 1,50,000 $$\therefore$$ Total profit = 4,00,000 + 1,50,000 = Rs. 5,50,000 Create a FREE account and get: • All Quant Formulas and shortcuts PDF • 40+ previous papers with solutions PDF • Top 500 MBA exam Solved Questions for Free Comments ### Register with OR Boost your Prep!
2022-11-27T11:32:19
{ "domain": "cracku.in", "url": "https://cracku.in/84-ram-and-shyam-form-a-partnership-with-shyam-as-wor-x-xat-2012", "openwebmath_score": 0.8937807083129883, "openwebmath_perplexity": 7889.107583026793, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9907319870295583, "lm_q2_score": 0.8539127473751341, "lm_q1q2_score": 0.845998672956836 }
https://www.physicsforums.com/threads/double-integral-transforming-into-polar-coordinates.367851/
# Double integral transforming into polar coordinates 1. Jan 7, 2010 ### 8614smith 1. The problem statement, all variables and given/known data By transforming to polar coordinates, evaluate the following: $$\int^{a}_{-a}\int^{\sqrt{}{{a^2}-{x^2}}}_{-\sqrt{{a^2}-{x^2}}}dydx$$ 2. Relevant equations 3. The attempt at a solution I can get the right answer to this but only after guessing that the inner limits are between 0 and a, and the outer limits are between 0 and $$2\pi$$. Can anyone tell me why these are the limits and how to get to polar limits from cartesian? What i mean is, what is the 'a' all about? i can't find anything about it on the net, i only managed to do this question from a guess as it looked very similar to an example question in my notes but without the 'a'. $$\int^{a}_{-a}\int^{\sqrt{{a^2}-{x^2}}}_{-\sqrt{{a^2}-{x^2}}}dydx$$ => $$\int^{2\pi}_{0}\int^{a}_{0}rdrd\theta=\int^{2\pi}_{0}\left[\frac{r^2}{2}\right]^{a}_{0}d\theta$$ $$=\int^{2\pi}_{0}\frac{a^2}{2}d\theta=\left[frac{{a^2}\theta}{2}\right]^{2\pi}_{0}={\pi{a^2}}$$ 2. Jan 7, 2010 ### rock.freak667 The 'y' limits are $\sqrt{a^2-x^2}$ thus $y=\sqrt{a^2-x^2}$ or $y^2=a^2-x^2 \Rightarrow x^2+y^2=a^2$ which is a circle centered at the origin with radius 'a'. 3. Jan 7, 2010 ### 8614smith but thats only 1 limit isn't it? if its the integral of the complete circle why is there a negative $$\sqrt{a^2}-{x^2}$$ term? I would have thought it would only need that one limit,as it is the equation of the entire circle. Or is that positive limit the semi-circle in the 1st and 2nd quadrant and the negative limit the semi-circle in the 3rd and 4th quadrant(if it is this could you explain it a bit better than i have? as i don't quite understand fully what i've written) 4. Jan 7, 2010 ### rock.freak667 x2+y2=a2 is the equation of the entire circle. y=√(a2-x2) represents the upper half of the circle. y=-√(a2-x2) represents the lower half of the circle. So to fully integrate the integral, you'd need to integrate 'y' between √(a2-x2) and -√(a2-x2), and integrate 'x' between a and -a. 5. Jan 7, 2010 ### 8614smith Ok i'm sort of getting it, but why is it that $${x^2}+{y^2}={a^2}$$ gives the graph of a circle and $$y=\sqrt{1-{x^2}}$$ gives the graph of only half the circle? If you subtract $${x^2}$$ from both sides from the 1st equation and then square root both sides you get the 2nd equation so surely they would be the same graph? 6. Jan 7, 2010 ### rock.freak667 for y=√(a2-x2), for every value of x<a, the value of y is positive. So this will always give values of y>0. So if you plot these values you will see that it forms a semi-circle. 7. Jan 7, 2010 ### 8614smith ah i see now, its the ± thing you get when you square root that makes the difference, thanks! 8. Jan 7, 2010 ### 8614smith double integral polar coordinates... 1. The problem statement, all variables and given/known data By transforming to polar coordinates, evaluate the following: $$\int^{2}_{0}\int^{\sqrt{4-{x^2}}}_{\sqrt{y(2-y)}}\frac{y}{{x^2}+{y^2}}dxdy$$ 2. Relevant equations 3. The attempt at a solution Ive drawn the graph but can't work out how to get the limits as the r limits are not a radius around the origin, its between a sideways semi-circle centrered around (0,1) and $$x=\sqrt{2}$$ - a straight line, I could do it if i moved the semi circle to be centred around the origin and then subtract that area from the rectangle 2 x $$\sqrt{2}$$ but i'm not sure this is the way they want me to do it?? i've got this far but can't work out limits: $$\int\int\frac{rsin\theta}{r^2}rdrd\theta=\int\int\frac{r^2}{r^2}\frac{sin\theta}{r^2}drd\theta=\int\int\frac{sin\theta}{r^2}drd\theta$$ 9. Jan 7, 2010 ### LCKurtz Re: double integral polar coordinates... I assume you have a typo in the upper limit on the inner integral. Shouldn't the inner integral look like: $$\int^{2}_{0}\int^{\sqrt{4-{y^2}}}_{\sqrt{y(2-y)}}\frac{y}{{x^2}+{y^2}}dxdy$$ My next question is whether you were given that integral or whether those limits are your attempt at describing the region. Are you trying to describe the region in the first quadrant exterior to the sideways circle but inside the larger circle? I suspect your integral has more wrong than just that typo. Please supply the exact statement of the problem as stated in the text. 10. Jan 8, 2010 ### 8614smith Re: double integral polar coordinates... hi, no thats exactly as it is in the question, is that not possible to do it like that then? the answer given is $$2-{\pi/2}$$ Ive done the question assuming it was a typo and got the answer as $${\pi/2}$$ but ive split the integral into two double integrals the 1st being a circle of radius 2 and integrated in the 1st quadrant only, then subtracted the 2nd double integral of a semi-circle of radius 1 but ive moved the centre point to the origin as i don't know how to handle it as one double integral. 11. Jan 8, 2010 ### HallsofIvy Re: double integral polar coordinates... No, that is not possible: if you integrate with respect to y and then put in a lower limit with y in it, you will still have a "y" in the function to be integrated with respect to x. Your result would be function of y, not a number. The lower limit must be $\sqrt{x(2- x)}$. $y= \sqrt{4- x^2}$ is, as said before, the upper half of the circle with center at the origin and radius 2. $y= \sqrt{x(2- x)}$ is the upper half of $y^2= x(2- x)= 2x- x^2$. That is the same as $x^2- 2x+ y^2= 0$ and completing the square give $(x- 1)^2+ y^2= 1$, the circle with center at (1, 0) and radius 1. What happens now is that the smaller circle is completely contained in the larger. Frankly, the say I would do this is say that the area of the larger circle, with radius 2, is $\pi(2^2)= 4\pi$ and the area of the smaller circle, of radius 1, is $\pi(1^2)= \pi$ so the area "between" them is $4\pi- \pi= 3\pi$. Since we are only interested in the upper half of both of these, the area sought is $(3/2)\pi$. 12. Jan 8, 2010 ### LCKurtz Re: double integral polar coordinates... With the change in the upper limit I gave you: $$\int^{2}_{0}\int^{\sqrt{4-{y^2}}}_{\sqrt{y(2-y)}}\frac{y}{{x^2}+{y^2}}dxdy$$ you will get that answer. You have to remember that your limits in polar coordinates go from r on the inner curve to r on the outer curve, and in this case, theta goes from zero to pi/2. Also, contrary to the statements in some other posts, you are not computing an area since the integrand is not 1. Your little half circle given by your lower limit is x = sqrt(y(2-y)) can be rewritten as $$x^2 + (y-1)^2 = 1$$ You need to translate this to polar coordinates. You should be able to show that its polar equation becomes: $$r = 2\sin\theta$$. $$\int_0^{\frac{\pi} 2} \int_{2\sin\theta}^2 ...$$
2018-03-23T03:57:54
{ "domain": "physicsforums.com", "url": "https://www.physicsforums.com/threads/double-integral-transforming-into-polar-coordinates.367851/", "openwebmath_score": 0.9081553220748901, "openwebmath_perplexity": 300.6682591509718, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9697854120593483, "lm_q2_score": 0.8723473796562744, "lm_q1q2_score": 0.8459897630388529 }
https://math.stackexchange.com/questions/498090/radius-of-convergence-of-product
# Radius of convergence of product Let $\sum_{i=0}^\infty a_nz^n$ and $\sum_{i=0}^\infty b_nz^n$ be power series, and define the product $\sum_{i=0}^\infty c_nz^n$ by $c_n=a_0b_n+a_1b_{n-1}+\ldots+a_nb_0$. Find an example where the first two series has radius of convergence $R$, while the third (the product) has radius of convergence larger than $R$. The radius of convergence of $\sum_{i=0}^\infty a_nz^n$ is given by $1/R=\limsup{|a_n|^{1/n}}$. I tried some sequences like $a_0=a_1=\ldots=b_0=b_1=\ldots=1$. Then the two sequences have radius $1$. But $c_i=i+1$, and $\lim_{i\rightarrow\infty}(i+1)^{1/i}=1$. So the radius is the same as the original two sequences, which doesn't work. Let $f(x)=(1-x)^{1/2}$ and $g(x)=(1-x)^{-1/2}$. When expanded in a Maclaurin series, we get two series with radius of convergence $1$. The Cauchy product (your product) of the two series is the very simple "infinite" series $1+0\cdot x+0\cdot x^2+\cdots$, which has infinite radius of convergence. Remark: If the example is too simple, we can "doctor" $f(x)$ by mutiplying say it by $h(x)$, where $h(x)=\frac{1}{1-\frac{x}{3}}$. Then the Cauchy product of the Maclaurin series for $f(x)h(x)$ and $g(x)$ has radius of convergence $3$. • I find that it's not easy to compute the radius of convergence of the series expansion for $(1-x)^{1/2}$. It has coefficients $-\dfrac{1}{2}, -\dfrac{1}{2}\cdot\dfrac{3}{2}, -\dfrac{1}{2}\cdot\dfrac32\cdot\dfrac52, \ldots$. And it's not clear what the limsup will be. How do you compute it? – Paul S. Sep 19 '13 at 3:10 • I find the Ratio Test easier to use for this series. Note that once we have the radius of convergence for this one, the radius for the other one is the same, for after one differentiation the series are kind of the same, power of $x$ shifted by $1$, and a missing factor of $\frac{1}{2}$. – André Nicolas Sep 19 '13 at 3:15 • I'm not sure how you use the ratio test to compute the limsup. I've only used it to determine convergence of a series. Could you explain a bit more? – Paul S. Sep 19 '13 at 3:48 • We don't need limsup, the limit of $|a_{n+1}/a_n|$ is $1$. – André Nicolas Sep 19 '13 at 4:03 • It's actually very easy. The only complex singularity is at $x=1$ at distance $1$ from $0$, so the radius of convergence is $1$. – Phira Mar 21 '17 at 12:05 A simpler example: let $$f(z) = \frac{1+z}{1-z} = \frac{1}{1-z} + \frac{z}{1-z}.$$ Note that the first term is just the formula for the geometric sum with first term 1, $$\frac{1}{1-z} = 1 + z + z^2 + z^3 + \cdots, \qquad |z| < 1,$$ and the second term is the formula for a geometric sum with first term equal to the common ratio $z$: $$\frac{z}{1-z} = \frac{1}{1-z} - 1 = z + z^2 + z^3 + \cdots, \qquad |z| < 1.$$ Then the power series for $f(z)$ is given by $$f(z) = \frac{1+z}{1-z} = 1 + 2z + 2z^2 + 2z^3 + \cdots = 1 + 2\sum_{n=1}^\infty z^n, \qquad |z| < 1,$$ and has radius of convergence $R_f = 1$. If we form a new power series $g(z)$ by making the substitution $z \mapsto -z$, we have $$g(z) = \frac{1-z}{1+z} = 1 - 2z + 2z^2 - 2z^3 + \cdots = 1 + 2\sum_{n=1}^\infty (-z)^n, \qquad |z| < 1,$$ also with radius of convergence $R_g = 1$. However, the product series is $$f(z)g(z) = \left( \frac{1+z}{1-z} \right) \left( \frac{1-z}{1+z} \right) = 1 = 1 + 0z + 0z^2 + 0z^3 + \cdots, \qquad \forall z\in\mathbb{C}$$ and has radius of convergence $R_{fg} = \infty$, which is strictly larger than $R_f = R_g = 1$. • How do we prove that radius of convergence of product is atleast minimum of R1 and R2??? – Koro Sep 13 '15 at 8:03
2020-01-27T09:30:59
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/498090/radius-of-convergence-of-product", "openwebmath_score": 0.9754573106765747, "openwebmath_perplexity": 109.98270930599203, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9697854129326061, "lm_q2_score": 0.8723473779969194, "lm_q1q2_score": 0.8459897621914187 }
https://math.stackexchange.com/questions/2614845/can-there-be-a-magic-square-with-equal-diagonal-sums-different-from-equal-row-an
# Can there be a magic square with equal diagonal sums different from equal row and column sums? I got a task in programming a program that can detect whether a 4x4 square is a magic square or not. At first, I wrote code that met the requirements for all given examples but I noticed one flaw. I used 2 variables to indicate sums. I used them once to calculate the sums of the rows and columns and compare them, then I reset them back to 0 and used them to calculate the diagonal sums and check if they were equal. The thing was, is that I did not actually compare the diagonal sums to the original row and column sums, and that got me thinking. Can there exist a "magic square" where the diagonal sums are equal and the row and column sums are equal, but the diagonal sums are different from the row and column sums? Is there any actual way to prove this? I tried to come up with examples but nothing came to me. An example would disprove this and make me rewrite my code. For simplicity, I would rather know about a 4x4 square, but if you can I'll be happy to hear a proof for any $n$ x $n$ square. Thanks in advance. Edit: I already check to see if the integers are all different, so I'd rather know if one exists where all of the integers are different. I wrote a program to search for $4\times 4$ examples by brute force. Here is one. $$\begin{matrix} 1&11&10&12\cr 3&15&7&9\cr 14&2&13&5\cr 16&6&4&8\cr \end{matrix}$$ • Yeah!! :) Nice work! – Bram28 Jan 24 '18 at 22:24 • Well done! Reworked my code. Thanks. – Nick S. Jan 25 '18 at 21:22 I assume you are looking for one with not just 'all different numbers', for that is trivial: just take any $n \times n$ magic square and add the same amount (larger than $n^2$) to the cells on the diagonals. For example, you can go from: \begin{array}{|c|c|c|c|} \hline 16&3&2&13\\ \hline 5&10&11&8\\ \hline 9&6&7&12\\ \hline 4&15&14&1\\ \hline \end{array} to: \begin{array}{|c|c|c|c|} \hline 116&3&2&113\\ \hline 5&110&111&8\\ \hline 9&106&107&12\\ \hline 104&15&14&101\\ \hline \end{array} ... and that's just too easy! So, I assume you mean that you have to use all numbers $1$ through $n^2$. Well, after trying a bunch of things I am fairly convinced that you cannot have a $4 \times 4$ square with numbers $1$ through $16$ where are rows and columns sum p to the same amount (this is actually called a 'semi-magic square') but where the diagonals sum up to the same amount, yet different from the rows and columns. In all $4 \times 4$ semi-magic squares that were not $4 \times 4$ magic squares that I looked at, I found the diagonals still adding up to exactly twice the sum of a row. I don't have a proof though that this is really impossible. I did, however, find a $6 \times6$ square using numbers $1$ through $36$ with all rows and columns adding up to $111$ but both diagonals adding up to only $97$: \begin{array}{|c|c|c|c|c|c|} \hline 6&34&1&28&24&18\\ \hline 14&7&36&4&20&30\\ \hline 33&3&31&2&17&25\\ \hline 27&22&23&16&12&11\\ \hline 10&13&5&35&29&19\\ \hline 21&32&15&26&9&8\\ \hline \end{array} And so, the answer to your question is Yes! (But no, I did not look at the $5 \times 5$ case (although my guess is that you can find an example of what you want for the $5 \times 5$ case and up), and no, I did not create this example in a systematic way: I started with a known semi-magic $6 \times 6$ square and kept swapping rows and columns in a semi-random fashion until I found this one). Finally, in my research I found that there are many different kinds of magic squares (I had no idea!), such as these 'semi-magic squares' or 'extremely magic squares', or .... but what you are asking about I did not see a name for. Given that they apparently exist you should definitely come up with a name for these! EDIT Aha! As I thought, it also works for $n=5$. Here is one: \begin{array}{|c|c|c|c|c|} \hline 19&6&15&2&23\\ \hline 9&17&10&13&16\\ \hline 21&24&14&5&1\\ \hline 4&11&8&20&22\\ \hline 12&7&18&25&3\\ \hline \end{array} Rows and columns sum to $65$, but columns sum to $73$ EDIT 2: Aha! I was wrong about the $4 \times 4$: it is possible to have one!! See Taneli Huuskonen's answer. • If you swap the top two rows in your first example, you get a $4\times 4$ semimagic square whose diagonals sum up to 36 rather than 68. Their sums are different, though (20 and 16). – Taneli Huuskonen Jan 24 '18 at 16:28 • @TaneliHuuskonen Oh, right, yes! OK, so maybe it is possible for a 4x4? Hmmmm .... Thanks! – Bram28 Jan 24 '18 at 19:18 • Yes, I just posted an example. – Taneli Huuskonen Jan 24 '18 at 20:44 Yes, here's one: \begin{matrix} 1 &2&2&1\\2&1&1&2\\2&1&1&2\\1&2&2&1 \end{matrix} • Thanks! This still is a reason to fix my code, but as I know a magic square is defined to have each cell with a different integer, so I'm not sure if to count that as a magic square, but what about if it has to be all different integers? – Nick S. Jan 21 '18 at 16:19 there is a geometrical method to systematically construct not even (2n+1) magic squares, quite easy, even by hand with a pen! I will post an example for 5x5 and 7x7 squares, as soon as I get hold of a scanner. see below a 5x5 one Greg 3 16 9 22 15 20 8 21 14 2 7 25 13 1 19 24 12 5 18 6 11 4 17 10 23
2019-05-25T16:56:09
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2614845/can-there-be-a-magic-square-with-equal-diagonal-sums-different-from-equal-row-an", "openwebmath_score": 0.8211046457290649, "openwebmath_perplexity": 391.41317952114196, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9697854103128328, "lm_q2_score": 0.8723473730188542, "lm_q1q2_score": 0.8459897550784113 }
https://stason.org/TULARC/self-growth/puzzles/377-pickover-pickover-01-p.html
# 377 pickover/pickover.01.p ## Description This article is from the Puzzles FAQ, by Chris Cole [email protected] and Matthew Daly [email protected] with numerous contributions by others. # 377 pickover/pickover.01.p Title: Cliff Puzzle 1: Can you beat the numbers game? From: [email protected] If you respond to this puzzle, if possible please include your name, address, affiliation, e-mail address. If you like, tell me a little bit about yourself. You might also directly mail me a copy of your response in addition to any responding you do in the newsgroup. I will assume it is OK to describe your answer in any article or publication I may write in the future, with attribution to you, unless you state otherwise. Thanks, Cliff Pickover * * * At a recent trip to the Ontario Science Center in Toronto, Canada I came across an interesting puzzle. The center is located minutes from downtown Toronto and it's a vast playground of science with hundreds of exhibits inviting you to touch, try, test, and titillate your curiosity. The puzzle I saw there can be stated as follows. In the 10 boxes below, write a 10-digit number. The digit in the first box indicates the total number of zeros in the entire number. The box marked "1" indicates the total number of 1's in the number. The box marked "2" indicates the total number of 2's in the number, and so on. For example, the "3" in the box labeled "0" would indicate that there must be exactly three 0's in the 10-digit number. ------------------------------- | 0| 1| 2| 3| 4| 5| 6| 7| 8| 9| | 3| | | | | | | | | | ------------------------------- Stop And Think 1. Is there a solution to this problem? Are there many solutions to this problem? 2. A more advanced an interesting problem is to continue to generate a sequence in a recursive fashion such that each row becomes the sequence for the previous. For example, start with the usual 0 through 9 digits in row 1: Row 1: 0 1 2 3 4 5 6 7 8 9 Assume Row 2 is your solution to the puzzle. I've just inserted random digits below so as not to give away the solution: Row 1: 0 1 2 3 4 5 6 7 8 9 S(1) Row 2: 9 3 2 3 3 1 6 7 8 9 S(2) Row 3: S(3) Row 2 is now the starting point, and your next job is to form row 3, row 4, etc. using the same rules. In the previous example, a digit in the first box would indicate how many 9's there are in the next 10-digit number, and so forth. Contest: I am looking for the longest sequence of numbers users can come up with using these rules. Can you find a Row 2 or Row 3? Is it even possible to generate a "row 2" or "row 3"? pickover/pickover.01.s 1) 0 1 2 3 4 5 6 7 8 9 2) 6 2 1 0 0 0 1 0 0 0 3) 0 0 0 4 4 4 0 4 4 4 4) 6 6 6 0 0 0 6 0 0 0 5) 0 0 0 4 4 4 0 4 4 4 . . . and so on, repeating rows 3 and 4. I don't know yet whether there are multiple solutions, but I'll keep looking. Mark Hayes Goddard Space Flight Center (GSFC) / Interferometrics, Inc. [email protected] GSFC Code 926.9 Greenbelt, MD 20771 ------------------------- In article <[email protected]>, you write: |> The puzzle I saw there can be stated as follows. In the 10 boxes below, |> write a 10-digit number. The digit in the first box indicates the total |> number of zeros in the entire number. The box marked "1" indicates the |> total number of 1's in the number. The box marked "2" indicates the |> total number of 2's in the number, and so on. For example, the "3" in |> the box labeled "0" would indicate that there must be exactly three 0's |> in the 10-digit number. |> |> ------------------------------- |> | 0| 1| 2| 3| 4| 5| 6| 7| 8| 9| |> | 3| | | | | | | | | | |> ------------------------------- |> |> |> Stop And Think |> |> 1. Is there a solution to this problem? Are there many solutions to this |> problem? This is an old puzzle, but I'll solve it as if it was new because I find your extension below to be interesting. Since all possible digits must be "counted" once, the ten digits must add up to 10. Consider the first digit (= the amount of zeroes used): 9: Impossible, since all the other digits would have to be zero. 8: Also impossible, since we must mark a 1 under the 8, and the other digits then must be zeroes. 7: We must mark a 1 under the 7, and we have one more non-zero digit to assign. We've used a 1, so we must put a non-zero digit under the 1. However, if we put a 1 there, it's wrong because we've used two ones, and if we put a two that's also wrong. So 7 zeroes doesn't work. 6: Begin as before, putting a 1 under the 6. Now we must mark under the 1, but putting a 1 is wrong, so put a 2. Now we have one non-zero digit left to assign, and marking a 1 under the two works. 6210001000 works. 5: Now we take a different approach to analyze this. If there are only five zeroes, then there are five non-zeroes, one of which is the 5 we marked under the zero. Obviously a 1 must be marked under the 5 and zeroes under 6-9, so we have 5----10000, where the dashes contain one zero and three other numbers, which must add up to four (since all ten digits must add up to ten) - so we have two ones and a two. But then the digits we have are described by 5310010000, which is not the set of digits we have, so there is no solution. Similar proofs show that there cannot be 4,3,2, or 1 zero. 0: Impossible, since you would have to use a zero to indicate you didn't have a zero. |> 2. A more advanced an interesting problem is to continue to |> generate a sequence in a recursive fashion such that each row becomes |> the sequence for the previous. For example, start with the usual |> 0 through 9 digits in row 1: |> |> Row 1: 0 1 2 3 4 5 6 7 8 9 |> |> Assume Row 2 is your solution to the puzzle. I've just inserted random |> digits below so as not to give away the solution: |> |> |> Row 1: 0 1 2 3 4 5 6 7 8 9 S(1) |> Row 2: 9 3 2 3 3 1 6 7 8 9 S(2) |> Row 3: S(3) |> |> Row 2 is now the starting point, and your next job is to form row 3, row 4, |> etc. using the same rules. In the previous example, a digit in the |> first box would indicate how many 9's there are in the next 10-digit number, |> and so forth. |> |> Contest: I am looking for the longest sequence of numbers users can come |> up with using these rules. Can you find a Row 2 or Row 3? |> Is it even possible to generate a "row 2" or "row 3"? Well, first off, our handy rule about all the digits adding up to ten no longer applies. Let's see if we can find an answer: Row 1: 0 1 2 3 4 5 6 7 8 9 Row 2: 6 2 1 0 0 0 1 0 0 0 Row 3: ? All the same digits must be placed under all the zeroes in row 2, or some of them would be wrong, and this digit cannot be larger than 4 since six non-zeroes are used under the zeroes in row 2. So, consider the cases: 4: If we put 4's under all the zeroes, we must put zeroes everywhere else. 0004440444 works. 3: Now we must place one non-zero digit under either the 6 or the 2, since there are two 1's that must stay alike. Putting any non-zero digit under the 6 is wrong since there aren't any sixes, unless you put a 6 under the 6, which is still wrong. Similarly no digit works under the two. 2: Now we must put a non-zero digit under the 2, since we already used 6 of them. We must also have two zeroes, which can only go under the ones. This gives us --02220222. However, we must put a non-zero under the 6, and we can't put a one, since we must have zeroes under the ones. Any number greater than one is wrong, because we don't have that many 6's. 1: OK, we start with ---111-111, and one of the -'s must be a zero. This zero must go under the 2 or the 6, because the ones must be alike (and we've already used some ones). Suppose we put 6's under the ones, and don't use any more ones. Then we need a 2 under the 6, and we need a one under the 2, which breaks what we did before. So, instead put 7's under the ones. Now we must put a 1 and a 0 in the other two spots, but either arrangement is wrong. We can't put a higher number under the ones because there aren't enough spaces left, so there is no solution with 1 zero. 0: Self-contradiction, as in the original problem. So now we have a unique third row. Can we make a fourth? Row 1: 0 1 2 3 4 5 6 7 8 9 Row 2: 6 2 1 0 0 0 1 0 0 0 Row 3: 0 0 0 4 4 4 0 4 4 4 Now there can only be two different digits used in the next number. Consider the possibilities: No zero is used: We need to mark this by putting zeroes under the zeroes Some zeroes are used: They can't go under the zeroes, so put zeroes under the fours. Now six zeroes are used, so put 6's under the zeroes. 6660006000 works. The same logic used to find row four shows that row five must be 0004440444 again, and we get into an infinite cycle alternating between these two. -- ----w-w--------------Joseph De [email protected] ( ^ ) Disclaimer: My opinions do not represent those of Owlnet. (O O) Owlnet: George R. Brown School of Engineering Educational Network. v-v (Unauthorized use is prohibited.) (Being uwop-ap!sdn is allowed.) Snail mail: Rice U., 6100 S. Main, Houston TX 77005. ------------------------- In rec.puzzles you write: >Title: Cliff Puzzle 1: Can you beat the numbers game? >From: [email protected] [...] >1. Is there a solution to this problem? Are there many solutions to this >problem? Yes. No. >2. A more advanced an interesting problem is to continue to >generate a sequence in a recursive fashion such that each row becomes >the sequence for the previous. For example, start with the usual >0 through 9 digits in row 1: [...] >Contest: I am looking for the longest sequence of numbers users can come >up with using these rules. Can you find a Row 2 or Row 3? >Is it even possible to generate a "row 2" or "row 3"? My program produces the following output: 0123456789 6210001000 no solutions found So I believe that the result for row 2 is unique and that there is no result for row 3. [ I am including the program at the end of this message just for your interest ] >If you respond to this puzzle, if possible please include your name, >address, affiliation, e-mail address. If you like, tell me a little bit >about yourself. You might also directly mail me a copy of your response >in addition to any responding you do in the newsgroup. I will assume it >is OK to describe your answer in any article or publication I may write >in the future, with attribution to you, unless you state otherwise. >Thanks, Cliff Pickover The name, address etc should appear in my signature. As for myself, I'm a PhD student due to finish much too shortly who likes solving puzzles. Pauli Paul Dale | [email protected] Department of Computer Science | +61 7 365 2445 University of Queensland | Australia, 4072 | Did you know that there are 41 two letter | words containing the letter 'a'? The program I used follows: --------------------------------------8<------------------------------ #include <stdio.h> #include <stdlib.h> #define START(in) for(in=0;in<9;in++) { \ if(sum+in > 10) \ break; \ else \ sum = sum+in; \ counts[digits[in]]++; #define STOP(in) counts[digits[in]]--; \ sum -= in; \ } main() { short counts[10]; short i, sum; short i0,i1,i2,i3,i4,i5,i6,i7,i8,i9; static short digits[10] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }; short solns[10][100]; short solcnt=0; printf("0123456789\n"); again: for(i=0;i<10;i++) counts[i]=0; sum = 0; START(i0) START(i1) START(i2) START(i3) START(i4) START(i5) START(i6) START(i7) START(i8) START(i9) if(counts[0]==digits[i0] && counts[1]==digits[i1] && counts[2]==digits[i2] && counts[3]==digits[i3] && counts[4]==digits[i4] && counts[5]==digits[i5] && counts[6]==digits[i6] && counts[7]==digits[i7] && counts[8]==digits[i8] && counts[9]==digits[i9]) { printf("%d%d%d%d%d%d%d%d%d%d\n", i0,i1,i2,i3,i4,i5, i6,i7,i8,i9); for(i=0;i<10;i++) solns[0][solcnt] = i0; solns[1][solcnt] = i1; solns[2][solcnt] = i2; solns[3][solcnt] = i3; solns[4][solcnt] = i4; solns[5][solcnt] = i5; solns[6][solcnt] = i6; solns[7][solcnt] = i7; solns[8][solcnt] = i8; solns[9][solcnt] = i9; solcnt++; } STOP(i9) STOP(i8) STOP(i7) STOP(i6) STOP(i5) STOP(i4) STOP(i3) STOP(i2) STOP(i1) STOP(i0) if(solcnt == 0) { printf("no solutions found\n"); } else if(solcnt == 1) { for(i=0;i<10;i++) digits[i] = solns[i][0]; solcnt = 0; goto again; } else printf("multiple solutions found\n"); } --------------------------------------8<------------------------------ In article <[email protected]> you write: >Title: Cliff Puzzle 1: Can you beat the numbers game? >From: [email protected] > >If you respond to this puzzle, if possible please include your name, >address, affiliation, e-mail address. If you like, tell me a little bit >about yourself. You might also directly mail me a copy of your response >in addition to any responding you do in the newsgroup. I will assume it >is OK to describe your answer in any article or publication I may write >in the future, with attribution to you, unless you state otherwise. >Thanks, Cliff Pickover > >* * * >At a recent trip to the Ontario Science Center in Toronto, Canada I came >across an interesting puzzle. The center is located minutes from >downtown Toronto and it's a vast playground of science with hundreds of >exhibits inviting you to touch, try, test, and titillate your curiosity. >The puzzle I saw there can be stated as follows. In the 10 boxes below, >write a 10-digit number. The digit in the first box indicates the total >number of zeros in the entire number. The box marked "1" indicates the >total number of 1's in the number. The box marked "2" indicates the >total number of 2's in the number, and so on. For example, the "3" in >the box labeled "0" would indicate that there must be exactly three 0's >in the 10-digit number. > >------------------------------- >| 0| 1| 2| 3| 4| 5| 6| 7| 8| 9| >| 3| | | | | | | | | | >------------------------------- > > >Stop And Think > >1. Is there a solution to this problem? Are there many solutions to this >problem? A. Since there are ten digits in the number, the sum of the digits in the bottom row must be 10. B. If x appears under y there must be x appearences of y, hence x*y<10 So, the MAXIMUM that can appear under each number is: --------------------- |0|1|2|3|4|5|6|7|8|9| |9|9|4|3|2|1|1|1|1|1| max --------------------- C. In fact, under the numbers 5..9 there can be AT MOST one non-zero (1) answer since otherwise two numbers of the 5..9 veriaty would appear and violate rule A. D. So there must be at least 4 zeros. If there were exactly 4 zeros, then the numbers 1..4 will all have under them non-zeros (as the zeros are used up for the 5..9 group). There is also at least one number that is 5 or greater. Well, there is a 5 (or more), a 4 (under zero), a 1 (under the 5..9 category) and something above zero under the other 1..4 digits for a total above 10. This violates rule A. E. So there must be at least 5 zeros. So a (exactly one) number that is at least 5 has a 1 under it. (since under zero would appear a >=5 number). F. Under 1 there must be at least 1 since the solution has at least one 1 (the one under a 5..9 number). However it could not be exactly 1 as then there would be 2 (or more) 1's in the solution. G. If there were 3 or more ones, then they must be under 2..9 . But then there would be a 5 (or more) under zero + a 3 (or more) under one + a 1 under three (or more) other places for a total above 10. H. So there must be at exactly 2 ones in the solution. And hence, at least 1 under two. We can summerize: --------------------- |0|1|2|3|4|5|6|7|8|9| |5|2|1|0|0|----1----| min |6|2|2|1|1|----1----| max --------------------- where the maximum under each digit is 10 - SUM(minimum of all others) I. Since no 3 or 4 is now possible, those two numbers must have a zero under them. J. So there are six zeros. Hence: --------------------- |0|1|2|3|4|5|6|7|8|9| |6|2|1|0|0|0|1|0|0|0| min |6|2|2|0|0|0|1|0|0|0| max --------------------- > K. Notice that "min" is a solution, while "max" is not. Hence, "min is the *ONLY* solution! My name is Dan Shoham. This is the only fact about me I care to make public. You are free to attribute it, but provide me a note when you do so. [email protected] ------------------------- >From [email protected] (Chris Long) Tue Sep 15 06:08:45 1992 Path: igor.rutgers.edu!romulus.rutgers.edu!clong From: [email protected] (Chris Long) Newsgroups: rec.puzzles Subject: Re: Puzzle 1 (SPOILER) Message-ID: <[email protected]> Date: 15 Sep 92 10:08:45 GMT Lines: 62 In article <[email protected]>, Chris Cole writes: Chris, don't forget to include my name on my solutions in the FAQ, please. My old article should be replaced with the following in the FAQ, anyway: --Cut here-- Solution prepared by Chris Long. Unfortunately, this isn't completely new, since I believe a similar puzzle I posted and answered are in the FAQ. However, it *is* different enough to be interesting. In article <[email protected]>, [email protected] writes: > Here's a small number puzzle : > Generate numbers such that the each digit in the number specifies > the number of the occurences of the position of the digit ( postions starting > with 0 from the left ). Example > The number 1210 ... My guess is only: 1210 21200 3211000 42101000 521001000 6210001000 No 1, 2, or 3 digit numbers are possible. Letting x_i be the ith digit, starting with 0, we see that (1) x_0 + ... + x_n = n+1 and (2) 0*x_0 + ... + n*x_n = n+1, where n+1 is the number of digits. I'll first prove that x_0 > n-3 if n>4. Assume not, then this implies that at least four of the x_i with i>0 are non-zero. But then we would have \sum_i i*x_i >= 10 by (2), impossible unless n=9, but it isn't possible in this case (51111100000 isn't valid). Now I'll prove that x_0 < n-1. x_0 clearly can't equal n; assume x_0 = n-1 ==> x_{n-1} = 1 by (2) if n>3. Now only one of the remaining x_i may be non-zero, and we must have that x_0 + ... + x_n = n+1, but since x_0 + x_{n-1} = n ==> the remaining x_i = 1 ==> by (2) that x_2 = 1. But this can't be, since x_{n-1} = 1 ==> x_1>0. Now assuming x_0 = n-2 we conclude that x_{n-2} = 1 by (2) if n>5 ==> x_1 + ... + x_{n-3} + x_{n-1} + x_n = 2 and 1*x_1 + ... + (n-3)*x_{n-3} + (n-1)*x_{n-1} + n*x_n = 3 ==> x_1=1 and x_2=1, Case n>5: We have that x_0 = n-3 and if n>=7 ==> x_{n-3}=1 ==> x_1=2 and x_2=1 by (1) and (2). For the case n=6 we see that x_{n-3}=2 leads to an easy contradiction, and we get the same result. The cases n=4,5 are easy enough to handle, and lead to the two solutions above. -- Chris Long, 265 Old York Rd., Bridgewater, NJ 08807-2618 -- Chris Long, 265 Old York Rd., Bridgewater, NJ 08807-2618 ------------------------- The number "2020" was left off my list by mistake ... sorry. -Chris ------------------------- > * * * > At a recent trip to the Ontario Science Center in Toronto, Canada I came > across an interesting puzzle. The center is located minutes from > downtown Toronto and it's a vast playground of science with hundreds of > exhibits inviting you to touch, try, test, and titillate your curiosity. > The puzzle I saw there can be stated as follows. In the 10 boxes below, > write a 10-digit number. The digit in the first box indicates the total > number of zeros in the entire number. The box marked "1" indicates the > total number of 1's in the number. The box marked "2" indicates the > total number of 2's in the number, and so on. For example, the "3" in > the box labeled "0" would indicate that there must be exactly three 0's > in the 10-digit number. > > ------------------------------- > | 0| 1| 2| 3| 4| 5| 6| 7| 8| 9| > | 3| | | | | | | | | | > ------------------------------- > > > Stop And Think > > 1. Is there a solution to this problem? Are there many solutions to this > problem? > [Second question and contest problem omitted] Good puzzle! I am wondering though whether the second question (which I have not tried to solve yet) is moe amenable to computer search. It seems to me that there should not be so many cases to consider, so that even exhaustive search should work. So, here is my ten minutes work on the first question. I think there is a unique solution which is: 6210001000. Here is the reasoning. Let the number be (in Tex notation) d_0 d_1 d_2 d_3 d_4 d_5 d_6 d_7 d_8 d_9. By definition d_0 + d_1 + d_2 + d_3 + d_4 + d_5 + d_6 + d_7 + d_8 + d_9 = 10. (1) Moreover, d_0 > 0, since d_0 = 0 contradicts itself. Let d_0 = c for some integer 9 >= c >= 1. If c = 9, then d_9 = 1, contradiction since d_1 should both be 0 and 1 then. If 9 > c >= 1, we rewrite (1) removing all d_i s that are zeros c + d_(i_1) + d_(i_2) + ... + d_(i_(9-c)) = 10 <=> d_(i_1) + d_(i_2) + ... + d_(i_(9-c)) = 10 -c (2) where all the d_(i_j) >= 1, j=1,...,9-c (3) (2) & (3) imply that the d_(i_j)s are 8-c 1s and one 2. Since there exists ONE 2, then there exists at least one 1. So the only digits in the number are 0, 1, 2, and c (if different than 1 and 2). If c is either 1 or 2, we have 3 different digits in the number, which implies d_1 <= 3, impossible since d_1 = 8 - c >= 6. If c> 2, we have four different digits in the number, and in fact d_0 = c, d_1 = 8-c, d_2 = 1, d_c = 1, which leaves us with 6 0s. QED I hope I did not miss any other cases. Do you plan to post answers or comments later? Leonidas -------------------------------------------------------------------------------- Leonidas Palios The Geometry Center 1300 South Second Str [email protected] Minneapolis, Minnesota 55454 ------------------------- ------------------------------- | 0| 1| 2| 3| 4| 5| 6| 7| 8| 9| ------------------------------- | 6| 2| 1| 0| 0| 0| 1| 0| 0| 0| | 0| 0| 0| 4| 4| 4| 0| 4| 4| 4| <- | 6| 6| 6| 0| 0| 0| 6| 0| 0| 0| | | 0| 0| 0| 4| 4| 4| 0| 4| 4| 4| <- . . . I must be missing something in my understanding of your rules. I found the second row by imagining that I'd need lots of zeros and putting nine in the 0 column, then skipping back and forth adjusting things. I had to put a tic in the 9 column, then I had to put one in the 1 column, then I realized that had to change that to a two since now there were two ones, and at the same time another required tic in the 2 column balanced the change of one to two in the 1 column, and then of course there weren't nine zeros anymore, but there were still six and so by changing the nine in the 1 column to a six, the one in the 9 column sould just migrate down to the 6 column. But it almost seems like cheating to use fours in the second row when there were none in the second row to necessitate this kind of adjusting. *shrug* If this is right, the series is infinite, obviously. Please let me know if I'm interpreting something wrong. Thanks, and nice puzzle. :) Grant Culbertson [email protected] [email protected] Continue to:
2021-06-19T03:40:07
{ "domain": "stason.org", "url": "https://stason.org/TULARC/self-growth/puzzles/377-pickover-pickover-01-p.html", "openwebmath_score": 0.7816751003265381, "openwebmath_perplexity": 1260.6151974306902, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.969785412932606, "lm_q2_score": 0.8723473663814338, "lm_q1q2_score": 0.8459897509268901 }
https://discuss.codechef.com/t/evenpsum-editorial/81932
# EVENPSUM - Editorial Setter: Ildar Gainullin Tester: Alexander Morozov Editorialist: Ajit Sharma Kasturi CAKEWALK None ### PROBLEM: We are given two positive integers A and B . We need to find the number of pairs of positive integers (X,Y) that can be formed such that 1 \le X \le A and 1 \le Y \le B and the sum X+Y is even. ### QUICK EXPLANATION: • When is the sum of two numbers even? The answer is when either both of the numbers are even or both of them are odd. • Thus, count the number of even-even pairs and odd-odd pairs and add them to get the answer. • Be careful of integer overflow!!! This can happen in some languages such as C++. In that case make sure you use the right datatype to store the variables while finding the answer. ### EXPLANATION: Let’s recall the definitions of even and odd numbers. A number n is even if n=2 \cdot k for some positive integer k . A number n is odd if n=2 \cdot k+1 for some positive integer k . We have 4 cases for X and Y: Case 1: X is even and Y is odd • Let X=2 \cdot k1 and Y=2 \cdot k2+1 . • Then X+Y=2 \cdot k1+2 \cdot k2+1= 2 \cdot k+1 where k=k1+k2 . • Therefore, X+ Y is an odd number. Case 2: X is odd and Y is even • Let X=2 \cdot k1+1 and Y=2 \cdot k2 . • Then X+Y=2 \cdot k1+1+2 \cdot k2=2 \cdot k+1 where k=k1+k2 . • Therefore, X+ Y is an odd number. Case 3: X is even and Y is even • Let X=2 \cdot k1 and Y=2 \cdot k2 . • Then X+Y=2 \cdot k1+2 \cdot k2=2 \cdot k where k=k1+k2 . • Therefore, X+ Y is an even number. Case 4: X is odd and Y is odd • Let X=2 \cdot k1+1 and Y=2 \cdot k2+1 . • Then X+Y=2 \cdot k1+1+2 \cdot k2+1=2 \cdot k where k=k1+k2+1 . • Therefore, X+ Y is an even number. Thus, we have X+Y even in case 3 and case 4 . (here all the divisions are integer divisions) . • Total even numbers in [1,A] are A/2 . • Total odd numbers in [1,A] are (A+1)/2 . • Total even numbers in [1,B] are B/2 . • Total odd numbers in [1,B] are (B+1)/2 . Therefore, the number of pairs (X,Y) where X+Y is odd are (A/2) \cdot (B/2) + ((A+1)/2) \cdot ((B+1)/2) . ### TIME COMPLEXITY: O(1) for each testcase. ### SOLUTION: Editorialist's solution #include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; while (t--) { int a, b; cin >> a >> b; long even_a = a / 2; long even_b = b / 2; long odd_a = (a + 1) / 2; long odd_b = (b + 1) / 2; long ans = even_a * even_b + odd_a * odd_b; cout << ans << endl; } return 0; } Setter's solution #include <cmath> #include <functional> #include <fstream> #include <iostream> #include <vector> #include <algorithm> #include <string> #include <set> #include <map> #include <list> #include <time.h> #include <math.h> #include <random> #include <deque> #include <queue> #include <cassert> #include <unordered_map> #include <unordered_set> #include <iomanip> #include <bitset> #include <sstream> #include <chrono> #include <cstring> using namespace std; typedef long long ll; #ifdef iq mt19937 rnd(228); #else mt19937 rnd(chrono::high_resolution_clock::now().time_since_epoch().count()); #endif int main() { #ifdef iq freopen("a.in", "r", stdin); #endif ios::sync_with_stdio(0); cin.tie(0); int t; cin >> t; while (t--) { int a, b; cin >> a >> b; int even_a = a / 2, even_b = b / 2; int odd_a = (a + 1) / 2, odd_b = (b + 1) / 2; ll ans = even_a * (ll)even_b + odd_a * (ll)odd_b; cout << ans << " \n"; } } # VIDEO EDITORIAL (Hindi): Please comment below if you have any questions, alternate solutions, or suggestions. 1 Like Why is this code showing WA for subtask 3 - https://www.codechef.com/viewsolution/40305092 1 Like Hi, on line number 39, multiplying two integers will first result in integer which could lead to integer overflow. I have modified that line of your code by typecasting the product to long long. Here is your modified code . Hope it helps. 4 Likes It helped a lot. Thank you. Alternate Solution - Here Here is an even simpler approach. Suppose the number of rows B is even. Then in every pair of rows there are A even numbers, alternating between the first and second row. So the total of evens is (AB / 2). If A is even, swap A and B in the previous argument to get the same answer. For both odd, we can easily see there is an extra even number in the last row. For example look at 5 by 3 to see the pattern, where the solution is 8. So the solution is simply the result of integer division (AB + 1) / 2 in all cases. 1 Like Here is my simple implementation hope it helps #include using namespace std; int main() { long long int t,x,y; cin>>t; while(t–){ cin>>x>>y; if(x%2!=0 && y%2!=0){ cout<<(x-1)/2*(y-1)/2+(x-((x-1)/2))(y-((y-1)/2))<<endl; } if(x%2==0 && y%2==0){ cout<<(x/2) (y/2)+(x-x/2)(y-y/2)<<endl; } if(x%2==0 && y%2!=0){ cout<<(x/2) ((y-1)/2)+(x-x/2)(y-((y-1))/2)<<endl; } if(x%2!=0 && y%2==0){ cout<<(y/2) ((x-1)/2)+(y-y/2)*(x-((x-1))/2)<<endl; } } return 0; } try: t=int(input()) for _ in range(t): A,B=map(int,input().split(" ")) print(int(((A*B)+1)/2)) except: pass Why is this failing in third test case? Simple? You can replace the whole ‘while’ loop with all the 'if’s by the following, as I describe above while (t–) { cin >> x >> y; cout << (x * y + 1) / 2 << endl; } A * B may be too long to fit into a 32-bit integer. As the editorial says above Be careful of integer overflow!!! import java.util.Scanner; class Main{ public static void main(String []argh) { Scanner obj = new Scanner(System.in); int T = obj.nextInt(); while (T–>0){ int a = obj.nextInt(); int b = obj.nextInt(); long sum = 0; if(a%2!=0 && b%2!=0){ int div_a = a/2; int div_b = b/2; if(b>a) sum +=((a-div_a) (div_b+1)); else sum +=((b-div_b)*(div_a+1)); }else{ sum = (a%2==0)?(a /=2)*b:(b /=2)*b; } System.out.println(sum); } } } can any give me one test case for this code that give me wrong result . Becouse i am trying some test case they give me correct result but code don’t get Submitted. https://www.codechef.com/viewsolution/44489313 all the thing are running on my offline compiler but not passing while I submit to codechef please help.
2022-01-27T00:56:51
{ "domain": "codechef.com", "url": "https://discuss.codechef.com/t/evenpsum-editorial/81932", "openwebmath_score": 0.9958139061927795, "openwebmath_perplexity": 4524.6927537599995, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9697854120593483, "lm_q2_score": 0.8723473630627235, "lm_q1q2_score": 0.8459897469466693 }
https://stats.stackexchange.com/questions/60256/standard-deviation-of-binned-observations
# Standard deviation of binned observations I have a dataset of sample observations, stored as counts within range bins. e.g.: min/max count 40/44 1 45/49 2 50/54 3 55/59 4 70/74 1 Now, finding an estimate of the average from this is pretty straight forward. Simply use the mean (or median) of each range bin as the observation and the count as a weight and find the weighted average: $$\bar{x}^* = \frac{1}{\sum_{i=1}^N w_i} \sum_{i=1}^N w_ix_i$$ For my test case, this gives me 53.82. My question now is, what's the correct method of finding the standard deviation (or variance)? Through my searching, I've found several answers, but I'm unsure which, if any, is actually appropriate for my dataset. I was able to find the following formula both on another question here and a random NIST document. $$s^{2*} = \frac{ \sum_{i=1}^N w_i (x_i - \bar{x}^*)^2 }{ \frac{(M-1)}{M} \sum_{i=1}^N w_i }$$ Which gives a standard deviation of 8.35 for my test case. However, the Wikipedia article on weighted means gives both the formula: $$s^{2*} = \frac{ \sum_{i=1}^N w_i}{(\sum_{i=1}^N w_i)^2 - \sum_{i=1}^N w_i^2} \sum_{i=1}^N w_i(x_i-\bar{x}^*)^2$$ and $$s^{2*} = \frac{1}{(\sum_{i=1}^N w_i) - 1} \sum_{i=1}^N w_i(x_i-\bar{x}^*)^2$$ Which give standard deviations of 8.66 and 7.83, respectively, for my test case. Update Thanks to @whuber who suggested looking into Sheppard's Corrections, and your helpful comments related to them. Unfortunately, I'm having a difficult time understanding the resources I can find about it (and I can't find any good examples). To recap though, I understand that the following is a biased estimate of variance: $$s^{2*} = \frac{1}{\sum_{i=1}^N w_i} \sum_{i=1}^N w_i(x_i-\bar{x}^*)^2$$ I also understand that most standard corrections for the bias are for direct random samples of a normal distribution. Therefore, I see two potential issues for me: 1. These are binned random samples (which, I'm pretty sure, is where Sheppard's Corrections come in.) 2. It's unknown whether or not the data is for a normal distribution (thus I'm assuming not, which, I'm pretty sure, invalidates Sheppard's Corrections.) So, my updated question is; What's the appropriate method for handling the bias imposed by the "simple" weighted standard deviation/variance formula on a non-normal distribution? Most specifically with regards to binned data. Note: I'm using the following terms: • $s^{2*}$ is the weighted variance • $N$ is the number of observations. (i.e. the number of bins) • $M$ is the number of nonzero weights. (i.e. the number of bins with counts) • $w_i$ are the weights (i.e. the counts) • $x_i$ are the observations. (i.e. the bin means) • $\bar{x}^*$ is the weighted mean. • Google "Sheppard's corrections" for the standard solutions to this problem. – whuber May 28 '13 at 17:02 • @whuber, I'm afraid my google-foo is failing me... I'm not finding much about how to use Sheppard's corrections. As far as I can tell, it's a correction for the binned nature of the data, and in my test case would be used like $s^{2*} - \frac{c^2}{12}$, where $c$ is the size of the bins (in my test case, 4). Is this correct? In any case, what I'm finding still doesn't seem to help me with computing $s^{2*}$. – chezy525 May 29 '13 at 17:55 • The second hit in my Google search provides an explicit formula (equation 9). – whuber May 29 '13 at 18:52 • @whuber, it's been a couple months, and I've tried reading the document you linked a couple times. I think I'm still missing something, but the best I've come up with is that the final equation I listed is correct as the unbiased estimator. Is this right? – chezy525 Jul 25 '13 at 16:55 • Sheppard's corrections don't assume normality. – Glen_b Aug 24 '13 at 16:14 This reply presents two solutions: Sheppard's corrections and a maximum likelihood estimate. Both closely agree on an estimate of the standard deviation: $7.70$ for the first and $7.69$ for the second (when adjusted to be comparable to the usual "unbiased" estimator). ### Sheppard's corrections "Sheppard's corrections" are formulas that adjust moments computed from binned data (like these) where • the data are assumed to be governed by a distribution supported on a finite interval $[a,b]$ • that interval is divided sequentially into equal bins of common width $h$ that is relatively small (no bin contains a large proportion of all the data) • the distribution has a continuous density function. They are derived from the Euler-Maclaurin sum formula, which approximates integrals in terms of linear combinations of values of the integrand at regularly spaced points, and therefore generally applicable (and not just to Normal distributions). Although strictly speaking a Normal distribution is not supported on a finite interval, to an extremely close approximation it is. Essentially all its probability is contained within seven standard deviations of the mean. Therefore Sheppard's corrections are applicable to data assumed to come from a Normal distribution. The first two Sheppard's corrections are 1. Use the mean of the binned data for the mean of the data (that is, no correction is needed for the mean). 2. Subtract $h^2/12$ from the variance of the binned data to obtain the (approximate) variance of the data. Where does $h^2/12$ come from? This equals the variance of a uniform variate distributed over an interval of length $h$. Intuitively, then, Sheppard's correction for the second moment suggests that binning the data--effectively replacing them by the midpoint of each bin--appears to add an approximately uniformly distributed value ranging between $-h/2$ and $h/2$, whence it inflates the variance by $h^2/12$. Let's do the calculations. I use R to illustrate them, beginning by specifying the counts and the bins: counts <- c(1,2,3,4,1) bin.lower <- c(40, 45, 50, 55, 70) bin.upper <- c(45, 50, 55, 60, 75) The proper formula to use for the counts comes from replicating the bin widths by the amounts given by the counts; that is, the binned data are equivalent to 42.5, 47.5, 47.5, 52.5, 52.5, 57.5, 57.5, 57.5, 57.5, 72.5 Their number, mean, and variance can be directly computed without having to expand the data in this way, though: when a bin has midpoint $x$ and a count of $k$, then its contribution to the sum of squares is $kx^2$. This leads to the second of the Wikipedia formulas cited in the question. bin.mid <- (bin.upper + bin.lower)/2 n <- sum(counts) mu <- sum(bin.mid * counts) / n sigma2 <- (sum(bin.mid^2 * counts) - n * mu^2) / (n-1) The mean (mu) is $1195/22 \approx 54.32$ (needing no correction) and the variance (sigma2) is $675/11 \approx 61.36$. (Its square root is $7.83$ as stated in the question.) Because the common bin width is $h=5$, we subtract $h^2/12 = 25/12 \approx 2.08$ from the variance and take its square root, obtaining $\sqrt{675/11 - 5^2/12} \approx 7.70$ for the standard deviation. ### Maximum Likelihood Estimates An alternative method is to apply a maximum likelihood estimate. When the assumed underlying distribution has a distribution function $F_\theta$ (depending on parameters $\theta$ to be estimated) and the bin $(x_0, x_1]$ contains $k$ values out of a set of independent, identically distributed values from $F_\theta$, then the (additive) contribution to the log likelihood of this bin is $$\log \prod_{i=1}^k \left(F_\theta(x_1) - F_\theta(x_0)\right) = k\log\left(F_\theta(x_1) - F_\theta(x_0)\right)$$ Summing over all bins gives the log likelihood $\Lambda(\theta)$ for the dataset. As usual, we find an estimate $\hat\theta$ which minimizes $-\Lambda(\theta)$. This requires numerical optimization and that is expedited by supplying good starting values for $\theta$. The following R code does the work for a Normal distribution: sigma <- sqrt(sigma2) # Crude starting estimate for the SD likelihood.log <- function(theta, counts, bin.lower, bin.upper) { mu <- theta[1]; sigma <- theta[2] -sum(sapply(1:length(counts), function(i) { counts[i] * log(pnorm(bin.upper[i], mu, sigma) - pnorm(bin.lower[i], mu, sigma)) })) } coefficients <- optim(c(mu, sigma), function(theta)
2020-08-06T16:12:45
{ "domain": "stackexchange.com", "url": "https://stats.stackexchange.com/questions/60256/standard-deviation-of-binned-observations", "openwebmath_score": 0.844092845916748, "openwebmath_perplexity": 700.5224289268373, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.969785409439575, "lm_q2_score": 0.8723473647220787, "lm_q1q2_score": 0.8459897462705355 }
https://math.stackexchange.com/questions/1629551/simplifying-power-series-as-a-summation-alternating-coefficients
# Simplifying Power Series as a Summation - Alternating Coefficients I'm currently trying to rewrite a power series I have into summation notation. The series is as follows: $$2x + 3x^{4} + 2x^{7} + 3x^{10} + 2x^{13} + ...$$ Obviously I'll have $x^{3n+1}$ in the summation, but I'm not sure on how to piece together the coefficient for each term. I've worked with alternating coefficients before, typically when the coefficients can use the $(-1)^n$ trick in order to alternate between two specific integers, but I've never encountered a series when the coefficients differ by 1 each time. I feel like I'm overlooking something really simple in regards to solving this. Thanks for any pointers or help. The coefficients are alternating between $2$ and $3$, so you can do something like $$c_k = \frac{5+(-1)^k}{2}$$ Now you just have to express $k$ in terms of $n$, I think $k=3n+1$ might work. We obviously only care whether $k$ is even or odd. So if $n$ is even, $k$ will be odd and vice versa. So we could also choose $k=n+1$. • Awesome, just what I was looking for. I'll be sure to remember this division method to work with alternating series in the future. Thank you very much! Jan 27 '16 at 21:17 • It really is a simple trick, you can obviously construct sequences this way that alternate between any two values. Jan 27 '16 at 21:21 If we take the sum from $n=0$ on, then the exponent on the $n$th term is $3n+1$ as you said. I'd just "construct" the coefficient. Start with a power of $-1$ to get the alternating behavior. Since we start low with $n=0$, $(-1)^{n+1}$ gives us $-1, +1, -1, +1, ...$ If we multiply this by $\frac{1}{2}$ to decrease the difference from $2$ to $1$ (what we want) then we get $-\frac{1}{2}, +\frac{1}{2}, -\frac{1}{2}, +\frac{1}{2}, ...$ Now add $\frac{5}{2}$ and we've got it: $2, 3, 2, 3, ...$. So, $$S = \sum_{n=0}^{\infty} \left(\frac{5}{2} + \frac{1}{2}(-1)^{n+1}\right)x^{3n+1}.$$
2021-09-25T22:32:29
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1629551/simplifying-power-series-as-a-summation-alternating-coefficients", "openwebmath_score": 0.9241626858711243, "openwebmath_perplexity": 179.1878059295473, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9840936115537342, "lm_q2_score": 0.8596637523076225, "lm_q1q2_score": 0.845989606730243 }
https://mathhelpboards.com/threads/sketch-graphs-of.1810/
# Sketch graphs of #### Casio ##### Member I have a bit of a misunderstanding with; y = (x - 2)^2 I understand it to be a quadratic, and if I used the formula to work it out I would see two roots, x = - 2 or x = 2 If I put the above equation into a graphics calculator the result is always x = 2 Looking at the equation above I could just say x^2 and 2^2 = x^2 + 4 What I don't understand is by looking at the equation how x = 2 when it is graphed? I know its right but don't understand? #### MarkFL ##### Administrator Staff member To sketch the graph, note that: y = (x-2)2 is the graph of y = x2 translated two units to the right. #### Jameson ##### Administrator Staff member Here is what the graph looks like. [graph]xzrhkxtmh3[/graph] We can see it only has one root so can you show us how you solved it algebraically? #### Casio ##### Member To sketch the graph, note that: y = (x-2)2 is the graph of y = x2 translated two units to the right. I know you are right mark and the graphs I have are as you say the same, but translated two units to the right, which is a part I don't understand? #### Casio ##### Member Here is what the graph looks like. [graph]xzrhkxtmh3[/graph] We can see it only has one root so can you show us how you solved it algebraically? I am not going to say this is absolutely correct and if not please do put me on the right tracks. y = (x - 2)2 First I expanded the brackets; (x - 2)(x - 2) Then I multiplied them out; x2- 2x - 2x + 4 then, x2 - 4x + 4 At this point I thought I have a quadratic ax2 + bx + c = 0 I used the formula b + or - square root b2 - 4(a)(c)/2a I put the data in; 4 + or - square root 42 - 4 x 4 / 2(1) I ended up with -4/2 = -2 or 4/2 = 2 Now I am not 100% sure if I am understanding the algebra with regards to; ax2 + bx + c = 0 and x2 - 4x + 4 I am not sure when using the above ax2 in relation to x2 whether I have 1 or 0 at that point? I assumed 1, which is how I ended up with two roots. But am unclear. #### MarkFL ##### Administrator Staff member We know y = x2 has its axis of symmetry where x = 0. Then y = (x - h)2 will have its axis of symmetry where: x - h = 0 or x = h. This is a very common point of confusion for students of algebra. It seems to go against intuition that f(x - h) moves the graph of f(x) h units to the right, when h is being subtracted from x. What in fact happens is the axes are translated to the left, relative to the graph, which is the same as the graph being translated to the right relative to the axes. You know that a vertical translation is y = f(x) + k. This moves the graph of y = f(x) up k units. This agrees nicely with intuition, since we are adding to the original to move it up. But, we may also write the translation as y - k = f(x). Does this make more sense now? #### MarkFL ##### Administrator Staff member With regards to finding the roots, we have: y = (x - 2)2 To find the roots, we set y = 0, and have: 0 = (x - 2)2 Take the square root of both sides, noting that ±0 = 0. x - 2 = 0 x = 2 Thus, we have the repeated root x = 2. #### Jameson ##### Administrator Staff member Casio, As MarkFL showed this is actually much easier than the way you did it, which is always nice Sometimes we can over complicate things in math. $$\displaystyle (x-2)^2=0$$ Why do we set this equal to 0? Well, the "roots" of a quadratic are the values of x that correspond to y=0. So we simply set y=0 and solve for x. Starting with the above equation we get that: (1) $$\displaystyle (x-2)^2=0$$ (2) $$\displaystyle x-2 =0$$ (take the square-root of both sides, noting that $\sqrt{0}=0$) (3)$$\displaystyle x=2$$ Thus we get only 1 solution. Again, this is exactly what MarkFL showed. #### Casio ##### Member We know y = x2 has its axis of symmetry where x = 0. Then y = (x - h)2 will have its axis of symmetry where: x - h = 0 or x = h. This is a very common point of confusion for students of algebra. It seems to go against intuition that f(x - h) moves the graph of f(x) h units to the right, when h is being subtracted from x. What in fact happens is the axes are translated to the left, relative to the graph, which is the same as the graph being translated to the right relative to the axes. You know that a vertical translation is y = f(x) + k. This moves the graph of y = f(x) up k units. This agrees nicely with intuition, since we are adding to the original to move it up. But, we may also write the translation as y - k = f(x). Does this make more sense now? Thanks Mark, however you have introduced functions I think, which has made the understanding at the moment a little more difficult as I have not yet covered that module. I think my misunderstanding is in the idea of what I should do to the equation; y = (x - 2)2 I look at the above and I say everything in the brackets are squared, so; y = x2 - 22 , and this becomes; x2 - 4 Looking at the above I see a quadratic graph at y = - 4, which I know is incorrect. So I am looking for a mathematical rule which tells me the correct way that a given branch of mathematics should be worked out. The administrator did it this way; y = (x - 2)2 0 = (x - 2)2 sqrt 0 = x - 2 x = 2 so 2 has moved to the opposite side of x, which I have seen many times before but am unsure when to use which order of working out, i.e the long winded way I did previously or this short quick method which I did not know about? #### MarkFL ##### Administrator Staff member You have made a very common error, sometimes referred to as "the freshman's dream," which is to state: (x - 2)2 = x2 - 22 This is not true, what you actually have is: (x - 2)2 = x2 - 4x + 4 If you have been taught the FOIL method, try this with: (x - 2)(x - 2) and you will find the above result. #### Sudharaka ##### Well-known member MHB Math Helper I am not going to say this is absolutely correct and if not please do put me on the right tracks. y = (x - 2)2 First I expanded the brackets; (x - 2)(x - 2) Then I multiplied them out; x2- 2x - 2x + 4 then, x2 - 4x + 4 At this point I thought I have a quadratic ax2 + bx + c = 0 I used the formula b + or - square root b2 - 4(a)(c)/2a I put the data in; 4 + or - square root 42 - 4 x 4 / 2(1) I ended up with -4/2 = -2 or 4/2 = 2 Hi casio, The approach you have taken in finding the roots is correct although a much more easier method is suggested by Jameson. The part where you have made a mistake is highlighted above. Check it again, $x=\frac{4\pm \sqrt{4^2 - (4\times 4)}}{2\times 1}$ Now I am not 100% sure if I am understanding the algebra with regards to; ax2 + bx + c = 0 and x2 - 4x + 4 I am not sure when using the above ax2 in relation to x2 whether I have 1 or 0 at that point? I assumed 1, which is how I ended up with two roots. But am unclear. $$ax^2+bx+c=0$$ is the general form of a quadratic equation. Compare the quadratic equation $$x^2-4x + 4$$ with the general form and see what are the coefficients. Then, $$a=1,\,b=-4\mbox{ and }c=4$$. So you are correct in mentioning $$a=1$$. Kind Regards, Sudharaka.
2021-01-15T18:18:18
{ "domain": "mathhelpboards.com", "url": "https://mathhelpboards.com/threads/sketch-graphs-of.1810/", "openwebmath_score": 0.6828569769859314, "openwebmath_perplexity": 475.276517036582, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9840936045561286, "lm_q2_score": 0.8596637577007394, "lm_q1q2_score": 0.845989606021987 }
https://math.stackexchange.com/questions/1122648/pigeonhole-principle-and-sets
# Pigeonhole Principle and Sets Can anyone point me in the right direction for this homework question? I know what the pigeonhole principle is but don't see how it helps :( Let $n\geqslant 1$ be an integer and consider the set S = {1,2,.....,2n}. Let T be an arbitrary subset of S having size n + 1. Prove that this subset T contains two elements whose sum is equal to 2n + 1. The hint we were given is "Consider the pairs (1,2n), (2, 2n-1), (3, 2n-2),....,(n, n+1) and use the pigeonhole principle" Any help would be great :) I haven't tried anything because I have no idea where to start. • You have $n$ pairs, and the set $T$ has $n+1$ members, so two members of $T$ must ... ? – Brian M. Scott Jan 27 '15 at 23:56 • ...be in S. I just don't see how it helps i guess – MasterMorine Jan 27 '15 at 23:59 • it will be helpful for you to pick a small $n,$ say $3$ write out all the subsets of size $4.$ – abel Jan 28 '15 at 0:00 • All members of $T$ are in $S$ by definition. Think of the members of $T$ as the pigeons, and the pairs as pigeonholes ... – Brian M. Scott Jan 28 '15 at 0:00 • Okay, get what I'm to think of them as, but how do I use that? – MasterMorine Jan 28 '15 at 0:05 Imagine the following array: $$\begin{array}[cccccccccc] .1 && 2 && 3 && 4 && \ldots && n-2 && n-1 && n \\ 2n && 2n-1 && 2n-2 && 2n-3 && \ldots && n+3 && n+2 && n+1\end{array}$$ Notice that each column sums to $2n+1$ and all of the numbers from $1$ to $2n$ are used in the array. There are $n$ columns. What you want to prove is that if you were to highlight $n+1$ numbers in this array (i.e. the elements of $T$), there would be a whole column highlighted, and that pair would sum to $2n+1$. The pidgeonhole principle essentially says that we cannot possibly highlight $n+1$ numbers such that no two lie in the same column, if there are but $n$ columns. If you want to see this, then just take a small array, like for $n=3$: $$\begin{array}.1 && 2 && 3\\6 && 5 && 4\end{array}$$ Now, let's start highlighting some numbers, trying to avoid putting two in a column. Our goal is to highlight $4$ numbers, as that is the size of the set $T$. We could start by putting $1$ in $T$: $$\begin{array}.\color{red}1 && 2 && 3\\6 && 5 && 4\end{array}$$ but now we know we can't put $6$ in $T$ too, because that would sum to $2n+1$. So we might choose $5$ as our next number, forbidding $2$ and we might choose $4$ as the number after that: $$\begin{array}.\color{red}1 && 2 && 3\\6 && \color{red}5 && \color{red}4\end{array}$$ So, now we have a highlighted number in every column- and adding any further number to the set $T$ would create a pair summing to $7$. But this means that we can't have a fourth element in $T$, at least given how we started - and the pidgeonhole principle guarantees that we can never choose a set of size $4$ without putting two elements in one column. The key point here is that we should imagine that, as we're creating $T$, we're not choosing numbers to put in it, we're choosing which column to take the numbers from. There are $n$ columns, and we need to make $n+1$ choices - thus we will, at some point, choose the same column twice, and in this context, that means we need to have both elements of some column in $T$, and this forms a pair summing to $2n+1$. You've got $n$ pigeonholes, each one with two numbers in it (which add to $2n+1$). Now try to take $n+1$ numbers without emptying one of the pigeonholes... ... and of course you can't - two of our choices must come from the same pigeonhole (even if we don't know which one). The basic idea of the pigeonhole principle is that if you have more items than categories, some categories must have more than one item. Here the categories are the pairs of numbers that add to $(2n+1)$ - there are $n$ of those pairs, and every number in range is uniquely in one of those categories - and you want to select more than $n$ items from those categories. • ...how exactly? Were told what the pigeonhole principle is but not applications of it exactly – MasterMorine Jan 28 '15 at 0:02
2019-06-24T16:13:07
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1122648/pigeonhole-principle-and-sets", "openwebmath_score": 0.8426377773284912, "openwebmath_perplexity": 214.16923563558484, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9840936092211994, "lm_q2_score": 0.8596637523076225, "lm_q1q2_score": 0.8459896047250474 }
https://math.stackexchange.com/questions/583712/ratio-of-balls-in-a-box
Ratio of balls in a box A box contains some identical tennis balls. The ratio of the total volume of the tennis balls to the volume of empty space surrounding them in the box is $1:k$, where $k$ is an integer greater than one. A prime number of balls is removed from the box. The ratio of the total volume of the remaining tennis balls to the volume of empty space surrounding them in the box is 1:$k^2$. Find the number of tennis balls that were originally in the box. A few questions regarding this problem: Does the shape of the box matter? I just let the volume of the box be a constant $V$. Also I noted that the ratio $\frac{1}{k^2} = \left( \frac{1}{k} \right)^2$. ie. New ratio is the old ratio squared. I also let the amount of balls in the box be $n$ and the amount of balls taken out be $p$ where $p$ is a prime, so the new amount of balls in the box is $n-p$. This is about all I could do in this problem but I would like to be guided towards solving the problem (and I'm also interested in your thought processes and what ideas you initially think of so I can get a better idea of what to think of when doing problem solving) than just being given the solution. Your help would be much appreciated. Let $n$ be the number of balls, $p$ be the number of balls taken away and $V$ be the volume of the box. When I first saw the problem, what stands out is the condition "$p$ is a prime". This suggests us setup some equations between $n$, $k$ and $p$ and then uses this condition to pose some constraint on $k$. If we do that, we have $$n ( 1 + k ) = V = (n - p)(1 + k^2)\quad\implies\quad p(k^2+1) = nk(k-1)$$ Now $p$ is on LHS together with the factor $k^2+1$. To use the condition "$p$ is a prime", we probably need to pick a factor $?$ from RHS so that $\gcd(?,k^2+1) = 1$. $k$ seems to do the job, $$k |p(k^2+1)\quad\stackrel{\gcd(k,k^2+1)=1}{\implies}\quad k | p \quad\stackrel{p \text{ is prime}}{\implies}\quad k = p$$ Once we get this, the remaining step is obvious. We express $n$ in terms of $k = p$ and see what can we do with that. $$n = \frac{k^2+1}{k-1} = k + 1 + \frac{2}{k-1}\quad\stackrel{n\text{ is integer}}{\implies}\quad k = 2 \text{ or }3 \implies n = \frac{k^2+1}{k-1} = 5$$ The result just follows. $$(n,p,V) = (5,2,15) \text{ or } (5,3,20)$$ The ratio $1:k$ means in particular that if $\alpha$ is the total volume of the balls originally in the box, then $k\alpha$ is the total volume of the empty space surrounding them in the box. In general, if $V_t$ is the total volume of the tennis balls and $V_b$ is the total volume of the box, then the total volume of empty space surrounding the tennis balls in the box is $V_b-V_t.$ (This has nothing to do with the shape of the box. We simply need the box to be large enough that it can hold all the tennis balls originally in it, and still be closed.) What this means is that the total volume of the box (which will not be changing) is $\alpha+k\alpha=(1+k)\alpha.$ Now, let's suppose that there are $n$ balls in the box originally, and since they are all tennis balls, then we can assume that they all have the same volume. (In fact, if they had different volumes, there would be no way to do the problem, so we need to assume it.) Say that the volume of a single tennis ball is $\beta.$ Then $\alpha=n\beta,$ so the volume of the box is $(1+k)n\beta.$ Next, we remove a prime number of balls from the box, say $p,$ so that there are now $n-p$ balls in the box, so that the total volume of the tennis balls remaining is $(n-p)\beta.$ The total volume of the box is $(1+k)n\beta,$ so the total volume of the empty space around the remaining tennis balls in the box is $$(1+k)n\beta-(n-p)\beta=n\beta+kn\beta-n\beta+p\beta=(kn+p)\beta.$$ By assumption, then, the ratio $1:k^2$ is the same as the ratio $(n-p)\beta:(kn+p)\beta,$ which is clearly the same as $n-p:kn+p.$ This means that $$kn+p=k^2(n-p)\\kn+p=k^2n-k^2p\\k^2p+p=k^2n-kn\\(k^2+1)p=(k^2-k)n.$$ Now, note that since we removed a prime (so non-zero) number of balls from the box, then our new ratio cannot be the same as our old one--that is, $k^2\ne k$--so we may divide by $k^2-k$ to get $$n=\frac{k^2+1}{k^2-k}p.$$ At this point, I don't think there's much more we can say, unless there's some additional information about $k,p,$ or $n$ that you haven't shared. We can conclude readily that $\frac{k^2+1}{k^2-k}$ is a rational number greater than $1,$ but that doesn't really help to determine what $n$ has to be (at least, not as far as I can see). Oops! I missed the condition that $k$ was an integer! Ignore "Now, note that since we removed...as far as I can see)." Now, since $k$ is an integer, then $k^2+1$ and $k^2-k$ are integers. In particular, since $$(k^2+1)p=k(k-1)n,$$ then $k\mid(k^2+1)p,$ so since $k$ and $k^2+1$ are relatively prime, then we must have that $k\mid p.$ Since $p$ is prime and $k$ is an integer greater than $1,$ it then follows that $k=p,$ so we have $$(k^2+1)p=(k-1)np\\k^2+1=(k-1)n\\k^2+1=kn-n\\k^2-kn+n+1=0.$$ This gives us a quadratic in $k,$ whose solutions are \begin{align}k &= \frac{n\pm\sqrt{n^2-4(n+1)}}2\\ &= \frac{n\pm\sqrt{n^2-4n-4}}2.\end{align} Since $k$ is an integer, then we require $\sqrt{n^2-4n-4}$ to be rational, which means it must be a positive integer, say $m.$ Hence, $$n^2-4n-4=m^2\\n^2-4n+4-8=m^2\\(n-2)^2-8=m^2\\(n-2)^2-m^2=8\\\bigl((n-2)+m\bigr)\bigl((n-2)-m\bigr)=8\\(n-2+m)(n-2-m)=8.$$ Since $m$ is a positive integer, then we can conclude that $m=1,$ whence $n=5.$ (I leave it to you to show this.) You've done what you can to move in the right direction. As you observe, nothing's said about the shape of the box, so the only relevant item seems to be the volume, $V$. There's also the volume of an individual ball. We can (by changing our unit of measurement if necessary) assume that this volume is 1. That means that we have initial state: n balls; box volume $V = nk$. final state: n-p balls, box volume $V = (n-p)(k^2)$ So you know that $nk = (n-p) k^2$. We can divide both sides by $k$ (I'm just following my nose here!) to get $$n = (n-p) k$$ At this point, I'd figure you could just try winging it. You'd say "$n$ has to be a composite, because the right hand side is a factorization of it, unless $k = 1$. But that's not possible, because it'd mean that $p =0$, which isn't prime. OK, so $n$ is composite. Let's try a small prime, like $p = 2$. I need to write $n = (n-2) k$. Well, $4 = (4-2) 2$ seems to work. Hunh." So you say "There were 4 balls originally, in a box large enough to hold 8 of them. The volume ratio is 2. Then you took away 2 balls (a prime number!) and you have 2 balls in a box that's got the volume of 8,. That's a ratio of 4, which is $2^2$. Looks like a solution." But are there other solutions? Seems likely. For instance n = 6, p = 3, k = 2 seems to work as well. In other works, your problem doesn't seem to have a unique solution. So there's no "process" by which to arrive at "the solution." • You need initial state $V=n(k+1)$ and final state $V=(n-p)(k^2+1)$ if $V$ is the empty space plus the volume of balls – Henry Nov 28 '13 at 6:23 $V$ is the volume of the box and $s$ is the volume of each ball. Then we can write: $\frac{ns}{V-ns}=\frac{1}{k} \to \frac{ns}{V}=\frac{1}{k+1}$ $\frac{(n-p)s}{V-(n-p)s}=\frac{1}{k^2} \to \frac{(n-p)s}{V}=\frac{1}{k^2+1}$ $\frac{n}{n-p}=\frac{k^2+1}{k+1} \to 1+\frac{p}{n-p}=k+1-\frac{2k}{k+1}$ $\frac{p}{n-p}=\frac{k^2-k}{k+1}$ $\frac{p}{n}=\frac{k^2-k}{k^2+1}$ if $gcd(n,p)=1$ then $k(k-1)n=p(k^2+1)$ so $k=ap$ or $k=ap+1$. if $k=ap$ then $n=\frac{(a^2p^2+1)}{a^2p-a}=p+\frac{pa+1}{a^2p-a}$ so $\frac{pa+1}{a^2p-a} \geq 1$ so $a+1 \geq a^2p-pa$ so either $a=1$ or $\frac{a+1}{a^2-a}\geq p$ that is impossible for $a>1$ so $a=1$ and $n=\frac{p^2+1}{p-1}=p+1\frac{2}{p-1}$ and it is only natural when $p = 2$ so $n=5$ and $k=2$. or $p=3,n=5,k=3$ if $k=ap+1$ then $n=\frac{a^2p^2+2ap+2}{a^2 p+a}=p+\frac{ap+2}{a^2p+a}$ so $ap+2 \geq a^2p+a$ which is impossible for $a>1$, but for $a=1$,$p+\frac{p+2}{p+1}=p+1+\frac{1}{p+1}$ will not be a natural number so we should dismiss this case. if $gcd(n,p)=p$ then $\frac{1}{t}=\frac{k^2-k}{k^2+1} \to t=\frac{k^2+1}{k^2-k}$ that cannot be a natural number for $k>1$, be cause it is strictly decreasing for $k>1$ and for $k=3$ it is less than 2 and for $t=2$ it is not integer. Note that in the second case $n$ cannot be equal to $p$ because for no natural $k$, $k^2+1=k^2-k$. So the final result is: $n=5,k=2,p=2$ or $n=5,k=3,p=3$
2019-07-15T19:56:29
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/583712/ratio-of-balls-in-a-box", "openwebmath_score": 0.9393088221549988, "openwebmath_perplexity": 90.48027672097648, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9840936092211993, "lm_q2_score": 0.8596637523076225, "lm_q1q2_score": 0.8459896047250473 }
https://math.stackexchange.com/questions/1350675/existence-uniqueness-of-a-continuous-function/1350747
# Existence/uniqueness of a Continuous Function I ran across the following problem with a friend while we were studying for quals. Neither of us are really quite sure where to start. It feels like a differential equation. This is probably easy, but we were not able to get a handle on how to proceed. I wish I could tell you what I tried, but after thinking on this problem for some time, I simply do not have ideas of any real substance (other than what I mention after the problem statement). Here is the question as it appears on the old qual: "Let $K$ be a continuous function on the unit square $0\leq x,y\leq1$ satisfying $|K(x,y)|<1$ for all $x$ and $y$. Show that there is a continuous function $f(x)$ on $[0,1]$ such that we have $$f(x) + \int_0^1 f(y)K(x,y)dy=\sin(x^2)$$ where $0\leq x \leq 1$. Can there be more than one such function $f$?" I will say that I was able to show that given $K$ as it is, $\exists\,C\in(0,1)$ such that $|K|\leq C$ on the square, and that a function defined as $$G(x)=\int_0^1 g(y)K(x,y) dy$$ will be continuous, assuming that $g$ is continuous on $[0,1]$. Any suggestions or possible solutions would be greatly appreciated. HINT: Let $f^{(n)}(x)$ be given by the recursive relationship $$f^{(n)}(x)=\sin x^2-\int_0^1K(x,y)f^{(n-1)}(y)dy$$ with $f^{(0)}=0$. Then, show that \begin{align} \left|f^{(n)}(x)-f(x)\right|&=\left|\int_0^1K(x,y)\left(f^{(n-1)}(y)-f(y)\right)dy\right|\\\\ &\le\int_0^1|K(x,y)|\left|f^{(n-1)}(y)-f(y)\right|dy\\\\ &<\lambda \left|\left|f^{(n-1)}(y)-f(y)\right|\right|_{\infty} \end{align} for some $\lambda<1$ and iterate. • Nice. I should have seen that. Thanks for clearing that up for me. – fxy Jul 5 '15 at 23:27 • @fxy Thank you! And you're certainly welcome. It was my pleasure. The series solution is called The Neumann Series and is rich in both theory and application (e.g., The Born Approximation). – Mark Viola Jul 5 '15 at 23:28 • Yea, I actually have seen Neumann series (in the context of numerical linear algebra). For some reason I get a mental block when it comes to analysis. It's like I somehow forget everything I know. Anyway, much appreciated. – fxy Jul 5 '15 at 23:33 • Just keep practicing and you'll do well! – Mark Viola Jul 5 '15 at 23:34 • Thanks for the encouragement. Quals scare the crap out of me. – fxy Jul 5 '15 at 23:34 Dr. MV has covered the existence statement quite well. To get uniqueness, suppose $f$ and $g$ are both functions satisfying the conditions. Then, we would have $$f(x) - g(x) = - \int_0^1 [f(y) - g(y)]K(x,y)\;dy$$ Let $x$ be such that $|f(x) - g(x)|$ is maximal on $[0,1]$. If $f(x) \not= g(x)$, then we can divide by $f(x) - g(x)$ to obtain $$1 = - \int_0^1 \frac{f(y) - g(y)}{f(x) - g(x)} K(x,y)\; dy$$ But the term in the integrand always has absolute value less than $1$. • Thanks for your answer. I would mark it right also if I could. – fxy Jul 5 '15 at 23:28 This is just meant to be a long comment. I just want to make you aware that this can be proved easily using the Banach fixed point theorem, which states that if $X$ is a complete metric space, then any map $\;T:X \to X$ which is Lipchitz with constant less than $1$ has a unique fixed point. (If you're taking quals then you must be familiar with this?) Let's see how this implies the above theorem. Let our metric space be $X:=C[0,1]$ with the sup norm, and define $T:X \to X$ as $$(Tf)(x) = \sin(x^2)-\int_0^1f(y)K(x,y)dy$$ Then $T$ is Lipchitz with constant less than $1$. Indeed, letting $\lambda := \sup_{x,y \in [0,1]} K(x,y)$, we can easily show (using the same kind of argument as in Dr. MV's answer) that $$\|Tf-Tg\|_{\infty} \leq \lambda \|f-g\|_{\infty}$$ Thus, by the Banach fixed point theorem, $T$ has a unique fixed point $f_0$, which gives both existence and uniqueness of your question at once. Remark: The answer given by Dr. MV is essentially the essence of the proof of the Banach fixed point theorem, but I just wanted to make you aware of the statement in the more general setting. • Thanks for your comment. Actually, when I was writing it up for myself, I ended up doing it much like this. His approach pointed me to this as well. Thanks again. – fxy Jul 6 '15 at 0:17 • Yeah sure no problem :) – Shalop Jul 6 '15 at 0:20
2019-11-19T08:00:01
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1350675/existence-uniqueness-of-a-continuous-function/1350747", "openwebmath_score": 0.8608287572860718, "openwebmath_perplexity": 151.98872482891406, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9840936050226358, "lm_q2_score": 0.8596637559030338, "lm_q1q2_score": 0.8459896046539157 }
https://math.stackexchange.com/questions/4353746/what-is-the-general-way-of-proving-that-a-series-converges-uniformly-on-a-bounde
# what is the general way of proving that a series converges uniformly on a bounded interval? I am trying to solve the following question: Prove that the series $$\sum_{n = 1}^{ \infty} \frac{x^{2n - 1}}{(2n - 1)!}$$ and the series $$\sum_{n = 1}^{ \infty} \frac{x^{2n}}{(2n)!}$$ both converges uniformly on any bounded interval of $$\mathbb R.$$ I was thinking about using the Weierstrass M - test, but this test does not speak about any bounded interval. Here is the statement of the Weierstrass M - test: Let $$\{f_n\}$$ be a sequence of functions defined on a common domain $$A,$$ and let $$\{M_n\}$$ be a sequence of positive numbers such that $$|f_n(x)| \leq M_n,$$ for each $$n \in \mathbb N,$$ and any $$x \in A.$$ If the series $$\sum_{n = 1}^{\infty} M_n$$ converges, then $$\sum_{n = 1}^{\infty} f_n$$ converges uniformly. Could someone explain to me how I can in general tackle this kind of problems please? • @GEdgar so using this test is the general way of solving problems like this? if so, how can we guess the sequence $\{M_n\}$? Jan 11 at 2:06 • @Brain Well, take the obvious candidates: $$\frac{{K^{2n - 1} }}{{(2n - 1)!}}, \quad \frac{{K^{2n} }}{{(2n)!}}.$$ – Gary Jan 11 at 2:15 • You can find the radius of convergence $R$ of each series. It will be $\infty$. That implies that each series is converging uniformly on every closed interval. Jan 11 at 2:17 • I remember this theorem from the second semester analysis course I took 45 years ago. It was in a different country and different language. I am sure there is an English version of this result and there are analysis specialists here who can give a reference. I can reproduce a proof which was easy but a reference would be better. Jan 11 at 2:46 • @Brian, yes, Abel's theorem: ib.mazurok.com/2015/06/04/… Jan 12 at 1:18 You say the Weierstrass test doesn't talk about bounded intervals, and you are right. But, what it does talk about is uniform convergence on a given set $$A$$; if $$\sum\limits_{n=1}^{\infty}\sup\limits_{x\in A}|f_n(x)|<\infty$$, then the series converges uniformly on $$A$$ (and recall that uniform convergence on $$A$$ implies uniform convergence on every subset of $$A$$). In other words, take $$M_n=\sup\limits_{x\in A}|f_n(x)|$$. Let's look at the series $$\sum\limits_{n=1}^{\infty}\frac{x^{2n}}{(2n)!}$$. Here, we have $$f_n:\Bbb{R}\to\Bbb{R}$$, $$f_n(x)=\frac{x^{2n}}{(2n)!}$$. Suppose $$A$$ is any bounded interval. This means there exists some $$r>0$$ such that $$A\subset [-r,r]$$. We have \begin{align} \sum_{n=1}^{\infty}\sup_{x\in A}\left|\frac{x^{2n}}{(2n)!}\right|\leq \sum_{n=1}^{\infty}\sup_{x\in [-r,r]}\left|\frac{x^{2n}}{(2n)!}\right| \leq \sum_{n=1}^{\infty}\frac{r^{2n}}{(2n)!}<\infty. \end{align} The first estimate should be obvious, since $$A$$ is contained in $$[-r,r]$$; this is just a basic property of supremums and set inclusions. The second inequality is actually an equality in this special case; I left it as a weak inequality just to emphasize that we don't have to be (and shouldn't aim to be unless explicitly instructed so) super precise with the estimates. All we want to do is show a certain numerical series is finite. The final step of claiming the series is finite can be done for example using the ratio test. In fact the final series equals $$\cosh(r)-1$$. All the hypotheses of Weierstrass' M-test are satisfied, hence the series converges uniformly on $$A$$. Since $$A$$ was taken to be an arbitrary bounded subset of $$\Bbb{R}$$, this proves that the series converges uniformly on every bounded subset of $$\Bbb{R}$$. In about 95% of situations, you can prove uniform convergence simply by the Weierstrass M-test (very rarely have I had to use some other method, such as Dirichlet's test; in fact I haven't used it recently, so much so that I kind of even lose track of the precise assumptions). The Weierstrass M-test deals with uniform convergence of arbitrary functions. Specifically for power series, there is a very slight improvement. The essence is still Weierstrass' test. Anyway, the statement is: Let $$\{a_n\}_{n=0}^{\infty}$$ be a sequence of complex numbers and $$z_0$$ a non-zero complex number such that the series $$\sum_{n=0}a_nz_0^n$$ converges. Then, for any $$0\leq r<|z_0|$$ (note the strict inequality), the series $$\sum_{n=0}^{\infty}a_nz^n$$ converges absolutely and uniformly on the closed disk $$D_r=\{z\in\Bbb{C}\,:\, |z|\leq r\}$$. We assume $$z_0\neq 0$$ because the series always converges at the origin; so we're just excluding the trivial case. The "strength" of this theorem is that our hypothesis only tells us the series $$\sum_{n=0}^{\infty}a_nz_0^n$$ converges; we don't know anything about absolute convergence. TO prove this, note that since the series converges, the general summand must tend to zero: $$a_nz_0^n\to 0$$ as $$n\to\infty$$. In particular, it is a bounded sequence; i.e $$M:=\sup\limits_{n\geq 1}|a_nz_0^n|<\infty$$. Now, for any $$0\leq r<|z_0|$$, we have \begin{align} \sum_{n=0}^{\infty}\sup_{|z|\leq r}\left|a_nz^n\right|=\sum_{n=0}^{\infty}\sup_{|z|\leq r}\left|\frac{z^n}{z_0^n}a_nz_0^n\right|\leq\sum_{n=0}^{\infty}\frac{r^n}{|z_0|^n}M<\infty, \end{align} by the ratio test with the common ratio $$\rho=\frac{r}{|z_0|}<1$$ (actually, you can explicitly sum the geometric series; the answer is $$\frac{M}{1-(r/|z_0|)}<\infty$$). By Weierstrass' test, it follows the series converges uniformly on the closed disk $$D_r=\{z\in\Bbb{C}\,:\, |z|\leq r\}$$, hence the proof is complete. • Amazing answer ..... I am going to read it thoroughly. Jan 11 at 3:27 Those are power series of a single (might as well let it be complex) variable. If $$R$$ is the radius of convergence of the power series, then for every $$R' < R$$, the power series converges absolutely and uniformly on the closed disk $$\overline{D_{R'}(0)} = \{z \in \mathbb{C} : |z| \leq R'\}$$. The classical proof for this can be found on page 102 here: https://mtaylor.web.unc.edu/wp-content/uploads/sites/16915/2018/04/anal1v.pdf An somewhat analogous result holds for power series of several complex variables, and the proof is not very different. Proofs for this more general situation can be found on page 58 here: https://mtaylor.web.unc.edu/wp-content/uploads/sites/16915/2018/04/analmv.pdf • You may write $z\in \mathbb C$ in your first paragraph as you are talking about one complex variable. – Gary Jan 11 at 3:16 • @Gary Yeah thanks that's what I meant. Jan 11 at 3:19 You can solve it using the M-test too. For the first series Let $$[a,b] \subset \mathbb R$$ and let for each $$n \in \mathbb N$$ denote $$B_n = \sup_{x \in [a,b] } |x|^{2n-1} < \infty$$. Let $$f_n(x) = \frac{x^{2n -1}}{(2n-1)!}$$ then choose $$N>0$$ large enough such that $$B_n < e^n$$, then we have $$|f_n(x)| \leq \frac{B_n}{(2n-1)!} \leq \frac{e^n}{(2n-1)!}= M_n$$ Since $$\sum_{n=1}^\infty M_n = \sqrt{e} \sinh(\sqrt{e}) \approx 4.1284$$ (converges). Therefore, the main series is uniformly convergent. You can argue the same way for the second series.
2022-05-20T07:47:37
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/4353746/what-is-the-general-way-of-proving-that-a-series-converges-uniformly-on-a-bounde", "openwebmath_score": 0.9741605520248413, "openwebmath_perplexity": 127.64881838860285, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9840936082881853, "lm_q2_score": 0.8596637469145054, "lm_q1q2_score": 0.8459895986156369 }
https://math.stackexchange.com/questions/2316066/alternative-induction-format
Alternative Induction Format Generally, for induction proofs, we prove the property for a base case and then assume it holds for an arbitrary $n$ representative of some notion of $size$ for the object constructed, with the natural numbers being the most common object in the discourse. We then prove the property for $n+1$. Alternatively, we have proofs where we assume that the property holds for $\forall k : k < n$ and then prove for $n$. But, can we assume for the property holds for $n$, then prove it holds for $n - 1$? • Sure. Backwards Induction is a standard technique (used very often in Finance, for example). Of course you only prove your result for $k≤n$ this way, but still. – lulu Jun 9 '17 at 14:03 There are several different proof schemes by induction (so-called strong induction, forward-backward induction, transfinite induction, etc.). The most basic problem with your idea, which I'll denote by $P(n) \Rightarrow P(n-1)$, is that this only proves $P(k)$ for all $k \leq n$, where you know $P(n)$ is true. In this scheme, $P(n)$ acts as your "base case", and this backwards induction step allows you to prove $P(k)$ for all $k \leq n$. There is a scheme called forward-backward induction, where you have "infinitely many base cases": suppose you have a sequence $n_i$ ($i \in \mathbb{N}$) such that you know $P(n_i)$, and for any $k$, you can find an $i$ such that $k \leq n_i$. Then if you know the backward inductive step that $P(n) \Rightarrow P(n-1)$, then you can find $P(k)$ for any $k$, since $k \leq n_i$ and $P(n_i)$ for some $i$. This is how one proof of Cauchy-Schwarz goes: let $P(n)$ be the statement that Cauchy-Schwarz works for $n$ numbers. We show three things: $$P(1)$$ $$P(2^n) \Rightarrow P(2^{n+1})$$ $$P(n) \Rightarrow P(n-1)$$ The first two items show that $P(2^n)$ always holds. The second item is the backwards induction that allows you to show that $P(k)$ holds for any $k$, since we may always find some $2^n \geq k$. There is another interpretation to your idea that is common as well: the method of infinite descent (first widely used and attributed to Fermat). This is a proof by contradiction and here is the set up. Let $P(n)$ for all $n \in \mathbb{N}$ be the statement(s) I want to prove. We show that whenever $P(n)$ does not hold, we can find some $k < n$ (notice the strictly less than) such that $P(k)$ does not hold. Suppose $P(n)$ does not hold for some $n \in \mathbb{N}$. Then by the previous supposition we can find an infinite strictly descending sequence in $\mathbb{N}$, which is impossible. Euclid actually used this idea to show that every composite number is divisible by some prime (try messing around with this, it's pretty neat!). Anyway, this is sort of like your idea that $P(n) \Rightarrow P(n-1)$, in a more general framework.
2020-06-01T23:19:52
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2316066/alternative-induction-format", "openwebmath_score": 0.9576677680015564, "openwebmath_perplexity": 97.81039682137552, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9840936078216782, "lm_q2_score": 0.8596637469145054, "lm_q1q2_score": 0.8459895982145977 }
https://math.stackexchange.com/questions/1039482/how-to-evenly-space-a-number-of-points-in-a-rectangle
# How to evenly space a number of points in a rectangle? Say I have a rectangle, with variable width and height, for example lets use: width = 20 height = 30 I would like to put n amount of evenly spaced points inside this rectangle: no of points = 400 How could I calculate the x and y coordinates of each point? Note, that I would like the borders to also have points. Very rough example, I needed 12 points (but I could have wanted more or less): • Are we talking a simple rectangular grid here, like on graph paper, the dots at the intersections of the lines or mid points of the squares? – mvw Nov 26 '14 at 11:01 • Intersections are fine, as long as there are points on the edges also. – sprocket12 Nov 26 '14 at 11:01 • Is $\Delta x = \Delta y$? (little squares or little rectangles?) – mvw Nov 26 '14 at 11:11 • @mvw yes that would seem ok. Little squares would be evenly spaced (my example is very rough) – sprocket12 Nov 26 '14 at 11:14 Looking for a square grid of size $w \times h$ with $n$ points in total. $n_x$ points in $x$-direction, spacing $\Delta x$: $$x_i = i \, \Delta x \quad i \in \{ 0, \ldots, n_x-1 \}$$ and $$(n_x -1) \, \Delta x = w$$ $n_y$ points in $y$-direction, spacing $\Delta y$: $$y_j = j \, \Delta y \quad j \in \{ 0, \ldots, n_y-1 \}$$ and $$(n_y -1) \, \Delta y = h$$ Total number of points: $$n = n_x \, n_y$$ Assuming $\Delta x = \Delta y$ one gets $$\Delta x = \frac{w}{n_x - 1} = \frac{h}{n_y - 1} = \Delta y \iff \\ n_y = \frac{h}{w} n_x + 1 - \frac{h}{w}$$ and then $$n = n_x n_y = \frac{h}{w} n_x^2 + \left( 1 - \frac{h}{w} \right) n_x \iff \\ \frac{w}{h} n + \frac{(w-h)^2}{4h^2} = \left( n_x + \frac{w-h}{2h} \right)^2$$ which after taking the square root gives the wanted equation for $n_x$ in terms of $n$, $w$ and $h$: $$n_x = \sqrt{\frac{w}{h} n + \frac{(w-h)^2}{4h^2}} - \frac{w-h}{2h} \quad (*)$$ (the negative solution was dropped). $n_x$ then should be put into $n_y = n / n_x$ and those values can be used to calculate the spacing $\Delta x = \Delta y$. The interesting bit is that not every integer $n$ in $(*)$ will lead to an integer $n_x$ and then to another integer $n_y$. ### Example 1 Given are $n = 12$, $w = 3$, $h = 2$. We put this in equation $(*)$ and get $$n_x = \sqrt{\frac{3}{2}\cdot 12 + \frac{(3-2)^2}{4\cdot 2^2}} - \frac{3-2}{2\cdot 2} = \sqrt{18 + \frac{1}{16}} - \frac{1}{4} = \frac{17}{4} - \frac{1}{4} = 4$$ so we have $n_x = 4$ points along the $x$-direction. Further $n_y = n / n_x = 12 / 4 = 3$ points in $y$-direction. The spacing is $\Delta x = w / (n_x - 1) = 3 / (4 - 1) = 1$, a unit spacing and $\Delta y$ is the same. ### Example 2 Given are $n = 400$, $w = 20$, $h = 30$. From equation $(*)$ we get $n_x = 16.4974487803453$ and then $n_y = 24.2461731705179$. So this won't work. But why not try $n = 17 \cdot 25 = 425\,$ with the given $w$ and $h$ values? And this is indeed confirmed by equation $(*)$. Here $\Delta x = \Delta y = 1.25$. I hope it a got a little bit clear that not all combinations of the problem parameters $n$, $w$ and $h$ have a solution. • yes unfortunately I discovered that also, that it doesn't add up when there are decimals involved. Its a problem as I do not have control over the w,h values, and they are decimals. – sprocket12 Nov 26 '14 at 15:52 • The $w$ and $h$ values must not be integers. Relevant is their ratio and how many points $n$ you can supply. – mvw Nov 26 '14 at 16:00
2019-11-17T21:00:46
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1039482/how-to-evenly-space-a-number-of-points-in-a-rectangle", "openwebmath_score": 0.8129738569259644, "openwebmath_perplexity": 327.1464184439211, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9840936082881854, "lm_q2_score": 0.8596637451167997, "lm_q1q2_score": 0.8459895968465263 }
https://gmatclub.com/forum/the-sequence-a1-a2-a-n-is-such-that-an-2an-1-x-46630.html
It is currently 22 Feb 2018, 02:51 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track Your Progress every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Events & Promotions ###### Events & Promotions in June Open Detailed Calendar # The sequence a1, a2, … , a n, … is such that an = 2an-1 - x new topic post reply Question banks Downloads My Bookmarks Reviews Important topics Author Message TAGS: ### Hide Tags Manager Joined: 11 Jan 2007 Posts: 197 Location: Bangkok The sequence a1, a2, … , a n, … is such that an = 2an-1 - x [#permalink] ### Show Tags 04 Jun 2007, 19:23 3 This post was BOOKMARKED 00:00 Difficulty: 25% (medium) Question Stats: 78% (01:29) correct 22% (02:04) wrong based on 517 sessions ### HideShow timer Statistics The sequence $$a_1$$, $$a_2$$, … , $$a_n$$, … is such that $$a_n = 2a_{n-1} - x$$ for all positive integers n ≥ 2 and for certain number x. If $$a_5 = 99$$ and $$a_3 = 27$$, what is the value of x? A. 3 B. 9 C. 18 D. 36 E. 45 [Reveal] Spoiler: OA _________________ cool Last edited by Bunuel on 24 Dec 2013, 00:16, edited 1 time in total. Renamed the topic, edited the question and added the OA. Director Joined: 03 Sep 2006 Posts: 859 Re: PS - Sequence (a1, a2, …) [#permalink] ### Show Tags 05 Jun 2007, 01:37 2 This post received KUDOS jet1445 wrote: Q13: The sequence a1, a2, … , a n, … is such that an = 2an-1 - x for all positive integers n ≥ 2 and for certain number x. If a5 = 99 and a3 = 27, what is the value of x? A. 3 B. 9 C. 18 D. 36 E. 45 a5= 2*a4 - x = 99 a4 = 2*a3 - x = 2*27 - x therefore; a5 = 2*(54 - x ) -x = 99 108 - 3*x = 99 therefore X = 3 This the answer is "A" Manager Joined: 09 Jun 2010 Posts: 111 Re: Sequence Problem [#permalink] ### Show Tags 28 Aug 2010, 06:30 1 This post received KUDOS Good one. Please let me know if some one comes up with a solution that can be worked under 2 mins given an= 2an-1 - x a5 = 99 = 2 a4 - X = 2[2a3-X] -X = 4 a3 - 3X given a3 = 27 ; substituting: 108 - 3X = 99 => X = 3 A Current Student Status: Three Down. Joined: 09 Jun 2010 Posts: 1914 Concentration: General Management, Nonprofit Re: Sequence Problem [#permalink] ### Show Tags 28 Aug 2010, 09:13 3 This post received KUDOS 2 This post was BOOKMARKED udaymathapati wrote: The sequence a1, a2, … , a n, … is such that an = 2an-1 - x for all positive integers n ≥ 2 and for certain number x. If a5 = 99 and a3 = 27, what is the value of x? A. 3 B. 9 C. 18 D. 36 E.45 Quite simple to solve this one. Given: $$a_n = 2a_{n-1} - x$$ $$a_5 = 99$$ $$a_3 = 27$$ $$a_5 = 2a_4 - x = 2(2a_3 - x) - x = 4a_3 - 3x = 99$$ $$4(27) - 3x = 99$$$$3x = 108-99 = 9$$ $$x = 3$$ Manager Joined: 06 Feb 2010 Posts: 166 Concentration: Marketing, Leadership Schools: University of Dhaka - Class of 2010 GPA: 3.63 WE: Business Development (Consumer Products) Sequence [#permalink] ### Show Tags 20 Oct 2010, 05:25 2 This post was BOOKMARKED The sequence a1…..a2.......an........is such that an=2an-1-X for all positive integers n>=2 and for certain number X. If a5=99 and a3=27, what is the value of X? _________________ Practice Makes a Man Perfect. Practice. Practice. Practice......Perfectly Critical Reasoning: http://gmatclub.com/forum/best-critical-reasoning-shortcuts-notes-tips-91280.html Collections of MGMAT CAT: http://gmatclub.com/forum/collections-of-mgmat-cat-math-152750.html MGMAT SC SUMMARY: http://gmatclub.com/forum/mgmat-sc-summary-of-fourth-edition-152753.html Sentence Correction: http://gmatclub.com/forum/sentence-correction-strategies-and-notes-91218.html Arithmatic & Algebra: http://gmatclub.com/forum/arithmatic-algebra-93678.html Helpful Geometry formula sheet: http://gmatclub.com/forum/best-geometry-93676.html I hope these will help to understand the basic concepts & strategies. Please Click ON KUDOS Button. Intern Joined: 04 Aug 2010 Posts: 22 Schools: Dartmouth College Re: Sequence [#permalink] ### Show Tags 20 Oct 2010, 05:50 1 This post was BOOKMARKED monirjewel wrote: The sequence a1…..a2.......an........is such that an=2an-1-X for all positive integers n>=2 and for certain number X. If a5=99 and a3=27, what is the value of X? Plug the known values a5=99 and a3 = 27 into the formula: a5 = 2(a4) - x 99 = 2(a4) - x a4 = 2(a3) - x a4 = 2(27)-x = 54-x Substitute 54-x for a4 in the top equation: 99 = 2(54-x)-x 99=108-3x 3x=9 x=3 On the GMAT, I would recommend that you plug in the answer choices, one of which would say that x=3. Plug a5 = 99 and x=3 into the formula: 99 = 2(a4) -3 a4 = 51 Plug a4=51, a3=27, and x=3 into the formula: 51 = 2(27) - 3 51 = 51. Success! _________________ GMAT Tutor and Instructor [email protected] New York, NY If you find one of my posts helpful, please take a moment to click on the "Kudos" icon. Available for tutoring in NYC and long-distance. For more information, please email me at [email protected]. Manager Joined: 08 Sep 2010 Posts: 223 Location: India WE 1: 6 Year, Telecom(GSM) Re: Sequence [#permalink] ### Show Tags 20 Oct 2010, 05:56 monirjewel wrote: The sequence a1…..a2.......an........is such that an=2an-1-X for all positive integers n>=2 and for certain number X. If a5=99 and a3=27, what is the value of X? A5=99 and A3=27 According to the given nth term, A5=2(A4)-x=2{2(A3)-x}-x=2{(2*27)-x}-x=108-2x-x=108-3x Hence 108-3x=99 or x=9/3=3 Consider KUDOS if its helpful. Director Joined: 01 Feb 2011 Posts: 703 Re: PS - Sequence (a1, a2, …) [#permalink] ### Show Tags 11 Sep 2011, 09:03 a(n) = 2*a(n-1) -x a5 = 99 a3=27 a5 = 2a4-x a4 = 2a3-x =>99 = 2(2a3-x)-x 99 = 4a3-3x = 4*27-3x =>x=3 Answer is A. Manager Status: Prepping for the last time.... Joined: 28 May 2010 Posts: 179 Location: Australia Concentration: Technology, Strategy GMAT 1: 630 Q47 V29 GPA: 3.2 Re: PS - Sequence (a1, a2, …) [#permalink] ### Show Tags 12 Sep 2011, 04:14 1 This post was BOOKMARKED A is the answer a4= 54-x => a5 = 2 (54 -x) -x = 99 => 108 - 3x = 99 => x= 3 Non-Human User Joined: 09 Sep 2013 Posts: 13800 Re: Q13: The sequence a1, a2, , a n, is such that an = 2an-1 [#permalink] ### Show Tags 23 Dec 2013, 10:58 Hello from the GMAT Club BumpBot! Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos). Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email. _________________ Manager Joined: 28 Dec 2013 Posts: 72 Re: The sequence a1, a2, … , a n, … is such that an = 2an-1 - x [#permalink] ### Show Tags 04 Jul 2014, 07:51 whiplash2411 wrote: udaymathapati wrote: The sequence a1, a2, … , a n, … is such that an = 2an-1 - x for all positive integers n ≥ 2 and for certain number x. If a5 = 99 and a3 = 27, what is the value of x? A. 3 B. 9 C. 18 D. 36 E.45 Quite simple to solve this one. Given: $$a_n = 2a_{n-1} - x$$ $$a_5 = 99$$ $$a_3 = 27$$ $$a_5 = 2a_4 - x = 2(2a_3 - x) - x = 4a_3 - 3x = 99$$ $$4(27) - 3x = 99$$$$3x = 108-99 = 9$$ $$x = 3$$ QUESTION : How exactly did you get from 2(2a3 - x) to 4a3 * 3x, wouldn't it be 4a3 - 2x? Math Expert Joined: 02 Sep 2009 Posts: 43862 Re: The sequence a1, a2, … , a n, … is such that an = 2an-1 - x [#permalink] ### Show Tags 04 Jul 2014, 07:59 Expert's post 1 This post was BOOKMARKED sagnik242 wrote: whiplash2411 wrote: udaymathapati wrote: The sequence a1, a2, … , a n, … is such that an = 2an-1 - x for all positive integers n ≥ 2 and for certain number x. If a5 = 99 and a3 = 27, what is the value of x? A. 3 B. 9 C. 18 D. 36 E.45 Quite simple to solve this one. Given: $$a_n = 2a_{n-1} - x$$ $$a_5 = 99$$ $$a_3 = 27$$ $$a_5 = 2a_4 - x = 2(2a_3 - x) - x = 4a_3 - 3x = 99$$ $$4(27) - 3x = 99$$$$3x = 108-99 = 9$$ $$x = 3$$ QUESTION : How exactly did you get from 2(2a3 - x) to 4a3 * 3x, wouldn't it be 4a3 - 2x? $$a_4=2a_3-x$$ --> $$a_5 = 2a_4 - x$$ --> $$a_4=2(2a_3-x)-x=4a_3-2x-x=4a_3-3x$$. _________________ Non-Human User Joined: 09 Sep 2013 Posts: 13800 Re: The sequence a1, a2, … , a n, … is such that an = 2an-1 - x [#permalink] ### Show Tags 23 Nov 2016, 08:20 Hello from the GMAT Club BumpBot! Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos). Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email. _________________ Senior Manager Joined: 19 Apr 2016 Posts: 275 Location: India GMAT 1: 570 Q48 V22 GMAT 2: 640 Q49 V28 GPA: 3.5 WE: Web Development (Computer Software) Re: The sequence a1, a2, … , a n, … is such that an = 2an-1 - x [#permalink] ### Show Tags 21 Jan 2017, 02:28 jet1445 wrote: The sequence $$a_1$$, $$a_2$$, … , $$a_n$$, … is such that $$a_n = 2a_{n-1} - x$$ for all positive integers n ≥ 2 and for certain number x. If $$a_5 = 99$$ and $$a_3 = 27$$, what is the value of x? A. 3 B. 9 C. 18 D. 36 E. 45 $$a_5= 2*a_4 - x = 99$$ $$a_4 = 2*a_3 - x = 2*27 - x$$ $$a_5 = 2*(54 - x ) -x = 99$$ 108 - 3*x = 99 Therefore X = 3 Hence option A is correct. Mannheim Thread Master Status: It's now or never Joined: 10 Feb 2017 Posts: 289 Location: India GMAT 1: 650 Q40 V39 GPA: 3 WE: Consulting (Consulting) Re: The sequence a1, a2, … , a n, … is such that an = 2an-1 - x [#permalink] ### Show Tags 01 Aug 2017, 11:23 I did not understand the relation at first, but giving another shot worked well, thanks. a5=99 a3=27 a5=2a4−x =2(2a3−x)−x =4a3−3x=99a5 =2a4−x =2(2a3−x)−x =4a3−3x=99 4(27)−3x =994(27)−3x =993x =108−99 =93x =108−99 =9 x=3 _________________ 2017-2018 MBA Deadlines Threadmaster for B-school Discussions Class of 2019: Mannheim Business School Class 0f 2020: HHL Leipzig Re: The sequence a1, a2, … , a n, … is such that an = 2an-1 - x   [#permalink] 01 Aug 2017, 11:23 Display posts from previous: Sort by # The sequence a1, a2, … , a n, … is such that an = 2an-1 - x new topic post reply Question banks Downloads My Bookmarks Reviews Important topics Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
2018-02-22T10:51:12
{ "domain": "gmatclub.com", "url": "https://gmatclub.com/forum/the-sequence-a1-a2-a-n-is-such-that-an-2an-1-x-46630.html", "openwebmath_score": 0.6966903209686279, "openwebmath_perplexity": 6155.931677677023, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. Yes\n2. Yes\n", "lm_q1_score": 1, "lm_q2_score": 0.8459424431344437, "lm_q1q2_score": 0.8459424431344437 }
https://gmatclub.com/forum/the-ratio-of-red-balls-to-green-balls-is-4-3-three-green-balls-need-280434.html
GMAT Question of the Day - Daily to your Mailbox; hard ones only It is currently 07 Dec 2019, 02:25 GMAT Club Daily Prep Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History The ratio of red balls to green balls is 4:3. Three green balls need Author Message TAGS: Hide Tags Senior Manager Status: No dream is too large, no dreamer is too small Joined: 14 Jul 2010 Posts: 408 The ratio of red balls to green balls is 4:3. Three green balls need  [#permalink] Show Tags Updated on: 01 Nov 2018, 00:25 1 Top Contributor 00:00 Difficulty: 5% (low) Question Stats: 87% (01:09) correct 13% (01:23) wrong based on 61 sessions HideShow timer Statistics The ratio of red balls to green balls is 4:3. Three green balls need to be added in order for there to be the same number of green balls and red balls. How many red balls are there? A. 3 B. 4 C. 8 D. 12 E. 24 _________________ Originally posted by Baten80 on 31 Oct 2018, 23:49. Last edited by Bunuel on 01 Nov 2018, 00:25, edited 1 time in total. Renamed the topic and edited the question. Senior PS Moderator Status: It always seems impossible until it's done. Joined: 16 Sep 2016 Posts: 731 GMAT 1: 740 Q50 V40 GMAT 2: 770 Q51 V42 Re: The ratio of red balls to green balls is 4:3. Three green balls need  [#permalink] Show Tags 01 Nov 2018, 00:10 1 1 Let the red balls be 4x and green be 3x. Questions says : When three green balls are added to existing green balls the number for total green and red balls becomes equal. 3x + 3= 4x Hence, x=3 Original red balls were 4x hence 12. Option (D) is our choice. Best, Baten80 wrote: The ratio of red balls to green balls is 4:3. Three green balls need to be added in order for there to be the same number of green balls and red balls. How many red balls are there? 3 4 8 12 24 _________________ Regards, “Do. Or do not. There is no try.” - Yoda (The Empire Strikes Back) Senior Manager Joined: 22 Feb 2018 Posts: 413 Re: The ratio of red balls to green balls is 4:3. Three green balls need  [#permalink] Show Tags 01 Nov 2018, 00:20 1 OA:D Initial $$\frac{Red \quad Balls}{Green \quad Balls}=\frac{4x}{3x}$$ After adding $$3$$ Green balls,$$\frac{Red \quad Balls}{Green \quad Balls}=\frac{4x}{3x+3}=1$$ $$4x=3x+3$$ $$x=3$$ Number of Red balls $$= 4x = 4*3=12$$ Target Test Prep Representative Status: Founder & CEO Affiliations: Target Test Prep Joined: 14 Oct 2015 Posts: 8622 Location: United States (CA) Re: The ratio of red balls to green balls is 4:3. Three green balls need  [#permalink] Show Tags 21 May 2019, 19:32 Baten80 wrote: The ratio of red balls to green balls is 4:3. Three green balls need to be added in order for there to be the same number of green balls and red balls. How many red balls are there? A. 3 B. 4 C. 8 D. 12 E. 24 We can let the number of red balls and green balls be 4x and 3x, respectively. Thus, we can create the equation: 3x + 3 = 4x 3 = x Therefore, there are 4(3) = 12 red balls. _________________ Scott Woodbury-Stewart Founder and CEO [email protected] 122 Reviews 5-star rated online GMAT quant self study course See why Target Test Prep is the top rated GMAT quant course on GMAT Club. Read Our Reviews If you find one of my posts helpful, please take a moment to click on the "Kudos" button. ISB School Moderator Joined: 08 Dec 2013 Posts: 615 Location: India Concentration: Nonprofit, Sustainability Schools: ISB '21 GMAT 1: 630 Q47 V30 WE: Operations (Non-Profit and Government) Re: The ratio of red balls to green balls is 4:3. Three green balls need  [#permalink] Show Tags 21 May 2019, 19:42 Baten80 wrote: The ratio of red balls to green balls is 4:3. Three green balls need to be added in order for there to be the same number of green balls and red balls. How many red balls are there? A. 3 B. 4 C. 8 D. 12 E. 24 Red Green 4X 3X Now 4x=3x+3 x=3 Total Red Balls= 4*3= 12 Re: The ratio of red balls to green balls is 4:3. Three green balls need   [#permalink] 21 May 2019, 19:42 Display posts from previous: Sort by
2019-12-07T09:25:46
{ "domain": "gmatclub.com", "url": "https://gmatclub.com/forum/the-ratio-of-red-balls-to-green-balls-is-4-3-three-green-balls-need-280434.html", "openwebmath_score": 0.8787983655929565, "openwebmath_perplexity": 3996.364758755089, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. Yes\n2. Yes", "lm_q1_score": 1, "lm_q2_score": 0.8459424411924673, "lm_q1q2_score": 0.8459424411924673 }
https://gmatclub.com/forum/sides-of-box-have-areas-of-91-39-21-what-is-the-volume-164048.html
GMAT Question of the Day - Daily to your Mailbox; hard ones only It is currently 22 Jan 2019, 21:16 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History ## Events & Promotions ###### Events & Promotions in January PrevNext SuMoTuWeThFrSa 303112345 6789101112 13141516171819 20212223242526 272829303112 Open Detailed Calendar • ### The winners of the GMAT game show January 22, 2019 January 22, 2019 10:00 PM PST 11:00 PM PST In case you didn’t notice, we recently held the 1st ever GMAT game show and it was awesome! See who won a full GMAT course, and register to the next one. • ### Key Strategies to Master GMAT SC January 26, 2019 January 26, 2019 07:00 AM PST 09:00 AM PST Attend this webinar to learn how to leverage Meaning and Logic to solve the most challenging Sentence Correction Questions. # Sides of box have areas of 91, 39, 21. What is the volume? Author Message TAGS: ### Hide Tags Intern Joined: 21 Nov 2013 Posts: 11 Concentration: Strategy, Technology Schools: Ross '18 (M) Sides of box have areas of 91, 39, 21. What is the volume?  [#permalink] ### Show Tags 02 Dec 2013, 19:06 1 5 00:00 Difficulty: 25% (medium) Question Stats: 78% (02:01) correct 22% (01:39) wrong based on 158 sessions ### HideShow timer Statistics Three of the sides of a rectangular prism have areas of 91, 39, and 21. What is the volume of the rectangular prism? A) 252 B) 269 C) 273 D) 920 E) 1911 Math Expert Joined: 02 Sep 2009 Posts: 52392 Re: Sides of box have areas of 91, 39, 21. What is the volume?  [#permalink] ### Show Tags 03 Dec 2013, 01:28 5 jamesphw wrote: Three of the sides of a rectangular prism have areas of 91, 39, and 21. What is the volume of the rectangular prism? A) 252 B) 269 C) 273 D) 920 E) 1911 Say the dimensions of the rectangular solid are a, b and c. It's volume is abc. ab = 91; ac = 39, bc = 21. Multiply these three: $$(abc)^2 = 91*39*21 = (7*13)*(3*13)*(3*7) = 3^2*7^2*13^2$$ --> $$abc=3*7*13=273$$. _________________ ##### General Discussion VP Joined: 02 Jul 2012 Posts: 1169 Location: India Concentration: Strategy GMAT 1: 740 Q49 V42 GPA: 3.8 WE: Engineering (Energy and Utilities) Re: Sides of box have areas of 91, 39, 21. What is the volume?  [#permalink] ### Show Tags 02 Dec 2013, 21:47 The sides are $$\sqrt{\frac{91*21}{39}}$$ = $$\sqrt{\frac{13*7*3*7}{13*3}}$$ = 7 $$\sqrt{\frac{91*39}{21}}$$ = $$\sqrt{\frac{13*7*13*3}{3*7}}$$ = 13 $$\sqrt{\frac{21*39}{91}}$$ = $$\sqrt{\frac{3*7*13*3}{13*7}}$$ = 3 Volume = 13*3*7 = 273 _________________ Did you find this post helpful?... Please let me know through the Kudos button. Thanks To The Almighty - My GMAT Debrief GMAT Reading Comprehension: 7 Most Common Passage Types Intern Status: Miles to go....before i sleep Joined: 26 May 2013 Posts: 14 Location: India Concentration: Finance, Marketing GMAT 1: 590 Q47 V25 Re: Sides of box have areas of 91, 39, 21. What is the volume?  [#permalink] ### Show Tags 04 Dec 2013, 06:54 1 Bunuel wrote: jamesphw wrote: Three of the sides of a rectangular prism have areas of 91, 39, and 21. What is the volume of the rectangular prism? Multiply these three: $$(abc)^2 = 91*39*21 = (7*13)*(3*13)*(3*7) = 3^2*7^2*13^2$$ --> $$abc=3*7*13=272$$. Hi Bunuel, I think there is a typo, the answer would be 273, not 272 Intern Status: Miles to go....before i sleep Joined: 26 May 2013 Posts: 14 Location: India Concentration: Finance, Marketing GMAT 1: 590 Q47 V25 Re: Sides of box have areas of 91, 39, 21. What is the volume?  [#permalink] ### Show Tags 04 Dec 2013, 07:06 1 jamesphw wrote: Three of the sides of a rectangular prism have areas of 91, 39, and 21. What is the volume of the rectangular prism? A) 252 B) 269 C) 273 D) 920 E) 1911 One easy way to get the answer for this question is just multiply the unit's digits of these three numbers, i.e (1*9*1 = 9) (Why the multiplication is needed?, you can see the solution suggested by Bunuel in the above post) Now look out for the answer whose square gives us a 9 in the unit place. So, it is easy to pic (C) 273 whose square will give us a 9 in the unit place( 3*3), and hence this is the answer. Hope that helps.. Math Expert Joined: 02 Sep 2009 Posts: 52392 Re: Sides of box have areas of 91, 39, 21. What is the volume?  [#permalink] ### Show Tags 04 Dec 2013, 07:38 Caesar wrote: Bunuel wrote: jamesphw wrote: Three of the sides of a rectangular prism have areas of 91, 39, and 21. What is the volume of the rectangular prism? Multiply these three: $$(abc)^2 = 91*39*21 = (7*13)*(3*13)*(3*7) = 3^2*7^2*13^2$$ --> $$abc=3*7*13=272$$. Hi Bunuel, I think there is a typo, the answer would be 273, not 272 Typo edited. Thank you. _________________ Non-Human User Joined: 09 Sep 2013 Posts: 9462 Re: Sides of box have areas of 91, 39, 21. What is the volume?  [#permalink] ### Show Tags 07 May 2017, 15:07 Hello from the GMAT Club BumpBot! Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos). Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email. _________________ Re: Sides of box have areas of 91, 39, 21. What is the volume? &nbs [#permalink] 07 May 2017, 15:07 Display posts from previous: Sort by
2019-01-23T05:16:08
{ "domain": "gmatclub.com", "url": "https://gmatclub.com/forum/sides-of-box-have-areas-of-91-39-21-what-is-the-volume-164048.html", "openwebmath_score": 0.7442860007286072, "openwebmath_perplexity": 3550.65798185707, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. Yes\n2. Yes\n", "lm_q1_score": 1, "lm_q2_score": 0.8459424295406088, "lm_q1q2_score": 0.8459424295406088 }
http://mathhelpforum.com/advanced-algebra/125222-2-quick-matrix-questions.html
# Math Help - 2 quick matrix questions. 1. ## 2 quick matrix questions. 1) let A= (a b) .....(c d) with none of a,b,c,d =0 If A is such that ad-bc=0, show the matrix equation AX+XA=0 has a solution X with X a non-zero 2x2 matrix that relies on 1 parameter only. Ive written X as w,x,y,z and multiplied out then set the resulting equations in w,x,y,z to 0. It takes a few steps to get a relation between y and z which leads to relation between x,y and z then finally w,x,y and Z, so z=P is enough. My concern is that i dont use ad-bc=0 anywhere. what have i missed ? im guessing if A inverse exists we only get X=0?? 2) a transformation im 3d space takes (a,b,c) to (x,y,z) where (x)=(0 0 1)(a) (y)=(1 0 0)(b) (z)=(0 1 0)(c) this transformation leaves distance between 2 points the same and leaves unaltered the points of the line x=y=z. Assuming the transformation is a rotation about a line,find the angle of rotation. the answer is 120 degrees but i cant find a decent way of explaining why. Im guessing the line of rotation is x=y=z and as lengths stay the same when i look at (1,0,0) which goes to (0,1,0) i get 2 equilateral triangles,one either side of the line. 2. [QUOTE=jiboom;444343]1) let A= (a b) .....(c d) with none of a,b,c,d =0 If A is such that ad-bc=0, show the matrix equation AX+XA=0 has a solution X with X a non-zero 2x2 matrix that relies on 1 parameter only. Ive written X as w,x,y,z and multiplied out then set the resulting equations in w,x,y,z to 0. It takes a few steps to get a relation between y and z which leads to relation between x,y and z then finally w,x,y and Z, so z=P is enough. My concern is that i dont use ad-bc=0 anywhere. what have i missed ? im guessing if A inverse exists we only get X=0??[./quote] You don't say HOW you solved those equations, eliminating w and x, and that is the crucial point. If you were to put your expression for x, y, and z back into the equations, you would get an equation in z only. What prevents you from solving for z? 2) a transformation im 3d space takes (a,b,c) to (x,y,z) where (x)=(0 0 1)(a) (y)=(1 0 0)(b) (z)=(0 1 0)(c) this transformation leaves distance between 2 points the same and leaves unaltered the points of the line x=y=z. Assuming the transformation is a rotation about a line,find the angle of rotation. the answer is 120 degrees but i cant find a decent way of explaining why. Im guessing the line of rotation is x=y=z and as lengths stay the same when i look at (1,0,0) which goes to (0,1,0) i get 2 equilateral triangles,one either side of the line. You don't need to guess! Obviously, the only points not changed by a rotation (that does change points- not a rotation by a multiple of $2\pi$) are points on the axis of rotation. Since x= y= z is not changed, it is the axis. Now, applying that transformation to (1, 0, 0) gives (0, 1, 0). Further, the plane perpendicular to x= y= z. containing (1, 0, 0) is (x- 1)+ y+ z= x+ y+ z= 1 and, yes, (0, 1, 0) is in that plane. The line x= y= z crosses that plane when x+ x+ x= 3x= 1 or x= y= z= 1/3. A vector from (1/3, 1/3, 1/3) to (1, 0, 0) is <2/3, -1/3, -1/3> and a vector from (1/3, 1/3, 1/3) to (0, 1, 0) is (-1/3, 2/3, -1/3). The angle between those two vectors can be found from their dot product and is the angle of rotation. 3. with X=(w x) ....(y z) i get 2aw+cx+by=0...(1) bw+(a+d)x+bz=0..(2) cw+(a+d)y+cz=0..(3) cx+by+2dz=0...(4) dz=aw so z=aw/d (2) and (4) lead to, using z=aw/d [bw/d]+x=0 [cw/d]+y=0 so letting w=T say solution is w=T x=-bT/a y=-cT/a z=aT/d so only need parameter T I cant see an error, so where does ad-dc=0 come into play here? The first part had the condition a=d=0 which once used gave the need for 2 paramters. Thank-you for the vector solution. Should have got my hands dirty and the done the math. I was just looking at a pic hoping to spot a nice geometric reason. Can you spot one ?
2014-03-12T06:42:20
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/advanced-algebra/125222-2-quick-matrix-questions.html", "openwebmath_score": 0.772314727306366, "openwebmath_perplexity": 1022.553167587829, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9777138125126402, "lm_q2_score": 0.8652240912652671, "lm_q1q2_score": 0.8459415449487488 }
https://math.stackexchange.com/questions/659254/product-distribution-of-two-uniform-distribution-what-about-3-or-more
# product distribution of two uniform distribution, what about 3 or more Say $X_1, X_2, \ldots, X_n$ are independent and identically distributed uniform random variables on the interval $(0,1)$. What is the product distribution of two of such random variables, e.g., $Z_2 = X_1 \cdot X_2$? What if there are 3; $Z_3 = X_1 \cdot X_2 \cdot X_3$? What if there are $n$ of such uniform variables? $Z_n = X_1 \cdot X_2 \cdot \ldots \cdot X_n$? • If you are after the product of $n$ independent standard Uniform random variables, then the pdf of the product, say $f(z)$ will be: $$f(z) = \frac{(-1)^{n-1} \log ^{n-1}(z)}{(n-1)!} \qquad \text{ for } 0<z<1$$ For $n = 1$ to 5, this yields: $$\left\{1,-\log (z),\frac{\log ^2(z)}{2},-\frac{1}{6} \log ^3(z),\frac{\log ^4(z)}{24}\right\}$$ – wolfies Feb 1 '14 at 7:20 • @wolfies Thanks, this is very helpful – lulu Feb 1 '14 at 7:28 • @wolfies -log(z) gives 4.6 for z=.01 . I should be doing something wrong. Probability cannnot go over 1. – Asad Iqbal Dec 29 '14 at 4:13 • My questions is answered. I found this. math.stackexchange.com/questions/105455/… Thanks! – Asad Iqbal Dec 29 '14 at 4:32 • @AsadIqbal: The probability density can go over $1$. – robjohn Jun 8 '18 at 14:21 ## 3 Answers We can at least work out the distribution of two IID ${\rm Uniform}(0,1)$ variables $X_1, X_2$: Let $Z_2 = X_1 X_2$. Then the CDF is \begin{align*} F_{Z_2}(z) &= \Pr[Z_2 \le z] = \int_{x=0}^1 \Pr[X_2 \le z/x] f_{X_1}(x) \, dx \\ &= \int_{x=0}^z \, dx + \int_{x=z}^1 \frac{z}{x} \, dx \\ &= z - z \log z. \end{align*} Thus the density of $Z_2$ is $$f_{Z_2}(z) = -\log z, \quad 0 < z \le 1.$$ For a third variable, we would write \begin{align*} F_{Z_3}(z) &= \Pr[Z_3 \le z] = \int_{x=0}^1 \Pr[X_3 \le z/x] f_{Z_2}(x) \, dx \\ &= -\int_{x=0}^z \log x dx - \int_{x=z}^1 \frac{z}{x} \log x \, dx. \end{align*} Then taking the derivative gives $$f_{Z_3}(z) = \frac{1}{2} \left( \log z \right)^2, \quad 0 < z \le 1.$$ In general, we can conjecture that $$f_{Z_n}(z) = \begin{cases} \frac{(- \log z)^{n-1}}{(n-1)!}, & 0 < z \le 1 \\ 0, & {\rm otherwise},\end{cases}$$ which we can prove via induction on $n$. I leave this as an exercise. • Fantastic!!! thanks – lulu Feb 1 '14 at 7:31 • For $Z_2$, how do you go from step 2 to step 3? And 3 to 4? – Sycorax Mar 3 '16 at 18:21 • I don't understand why $\Pr[Z_2 \le z] = \int_{x=0}^1 \Pr[X_2 \le z/x] f_{X_1}(x) \, dx$. Can you help? Is this using conditional distribution? I'm not seeing it. – JKEG Oct 17 '18 at 15:35 • @JKEG: These are just properties of functions of random variables. – MSIS Nov 20 '19 at 20:58 • @synack Suppose $z = 1/2$. Then if $x = 1/4$, $z/x = 2$. What is $\Pr[X_2 \le 2]$ if $X_2$ is a random variable that can only ever be between $0$ and $1$? You have two cases for $\Pr[X_2 \le z/x]$, depending on whether $x < z$, or $x \ge z$; that is why the integral gets split up. – heropup Jul 24 '20 at 7:21 If $X_1$ is uniform, then $-\log X_1 \sim \textrm{Exp}(1)$. Therefore, $$- \log X_1 \dots X_n = -\log X_1 + \dots -\log X_n$$ is a sum of independent exponential random variables and has Gamma distribution with parameters $(n,1)$ and density $g(y) = \frac{1}{(n-1)!} y^{n-1}e^{-y}$ for $y\geq 0$. Let $f$ be the density of the product $X_1 \dots X_n$, then the Jacobi's transformation formula yields $$f( h^{-1}(y) ) | \partial h^{-1}(y) | = g(y),$$ with $h(x) = -\log x$ and $h^{-1}(y) = \exp(-y)$. The substitution $y=h(x)$ in the above equation gives $$f(x) = \frac{1}{(n-1)!}(-\log x)^{n-1} \, 1_{ (0,1]}(x).$$ • You got the range slightly wrong. You are supposed to plug $-\log(x)$, not $x$, into $1_{[0,\infty)}$ to obtain $1_{[0,\infty)}(-\log(x))=1_{(0,1]}(x).$ Once you correct this, I will vote up your answer. – user940 Jul 30 '15 at 18:12 • Corrected. Thanks! – Julian Wergieluk Aug 3 '15 at 8:02 • When you say, "if $X_1$ is uniform, then $-\log X_1\sim \text{Exp}(1)$, you mean that $X_1$ is Uniform with range [0,1], right? – rpmcruz Dec 12 '17 at 14:10 An adaptation of this answer is given here. PDF of a Function of a Random Variable If $P(X\le x)=F(x)$ is the CDF of $X$ and $P(Y\le y)=G(y)$ is the CDF of $Y$ where $Y=f(X)$, then $$F(x)=P(X\le x)=P(Y\le f(x))=G(f(x))\tag1$$ Taking the derivative of $(1)$, we get $$F'(x)=G'(f(x))\,f'(x)\tag2$$ where $F'$ is the PDF of $X$ and $G'$ is the PDF of $Y$. PDF of the Product of Independent Uniform Random Variables If $[0\le x\le1]$ is the PDF for $X$ and $Y=\log(X)$, then by $(2)$ the PDF of $Y$ is $e^y[y\le0]$. The PDF for the sum of $n$ samples of $Y$ is the $n$-fold convolution of $e^y[y\le0]$ with itself. The Fourier Transform of this $n$-fold convolution is the $n^\text{th}$ power of the Fourier Transform of $e^y[y\le0]$, which is $$\int_{-\infty}^0 e^{-2\pi iyt}e^y\,\mathrm{d}y=\frac1{1-2\pi it}\tag3$$ Thus, the PDF for the sum of $n$ samples of $Y$ is \begin{align} \sigma_n(y) &=\int_{-\infty}^\infty\frac{e^{2\pi iyt}}{(1-2\pi it)^n}\,\mathrm{d}t\tag{4a}\\ &=\frac{e^y}{2\pi i}\int_{1-i\infty}^{1+i\infty}\frac{e^{-yz}}{z^n}\,\mathrm{d}z\tag{4b}\\ &=e^y\frac{(-y)^{n-1}}{(n-1)!}\,[y\le0]\tag{4c} \end{align} Explanation: $\text{(4a)}$: take the inverse Fourier Transform $\text{(4b)}$: substitute $t=\frac{1-z}{2\pi i}$ $\text{(4c)}$: if $y\gt0$, close the contour on the right half-plane, missing the singularity at $z=0$ $\phantom{\text{(4c):}}$ if $y\le0$, close the contour on the left half-plane, enclosing the singularity at $z=0$ We can get the PDF for the product of $n$ samples of $X$ by applying $(2)$ to $(4)$ $$\bbox[5px,border:2px solid #C0A000]{\pi_n(x)=\frac{(-\log(x))^{n-1}}{(n-1)!}\,[0\le x\le1]}\tag5$$
2021-04-19T16:30:58
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/659254/product-distribution-of-two-uniform-distribution-what-about-3-or-more", "openwebmath_score": 0.9979747533798218, "openwebmath_perplexity": 312.6109576086018, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9777138099151278, "lm_q2_score": 0.8652240895276223, "lm_q1q2_score": 0.8459415410023993 }
https://math.stackexchange.com/questions/1636058/is-a-relation-r-an-equivalence-relation-of-a-power-set
# Is a relation, R, an Equivalence Relation of a Power Set? Where $A = \{1,2,3,4,5,6\}$ and $S = P(A)$ is the power set, for $a,b \in S$ define a relation $R: (a,b) \in R$ where $a$ and $b$ have the same number of elements. Is $R$ an equivalence relation on $S$ and if so how many equivalence classes are there? I've defined my $R$ as being $\{(\{1,2\},\{3,4\}),(\{1,2\},\{5,6\}),(\{1,2\},\{1,2\}),(\{3,4\},\{1,2\}),(\{3,4\},\{3,4\}),(\{3,4\},\{5,6\}),(\{5,6\},\{1,2\}),(\{5,6\},\{3,4\}),(\{5,6\},\{5,6\})\}$ because of the part of the question mentioning $a$ and $b$ have the same number of elements. Was that wrong to do? • "I've defined my $R$ as being..." There, you've already started making a mistake. You aren't defining $R$ - the problem defined $R$. The $R$ you've defined is very different from the $R$ that the problem defined. – Thomas Andrews Feb 1 '16 at 16:04 • @ThomasAndrews I've recognised my problem there, and have since realised that if it is an equivalence relation of $A$, then it must also be of $P(A)$ since $A \in P(A)$. I'm now struggling though to prove how many equivalence classes there are. – GarethAS Feb 1 '16 at 16:38 • No, that's not true. @Gareth. An equivalence relation on $A$ does not mean an equivalence relation on $P(A)$. – Thomas Andrews Feb 1 '16 at 16:40 • Oh... So would R not be an equivalence relation on S as @Jackswastedlife says below? – GarethAS Feb 1 '16 at 16:49 • No, $R$ is defined precisely above as an equivalence relation on $S$ - it is a subset of the set of pairs of elements of $S$, not of pairs of elements of $A$. "For each $a,b\in S$, define ... $R$..." There is no relation defined on $A$. – Thomas Andrews Feb 1 '16 at 16:50 In general if $f:X\to Y$ is a function then the relation $R$ on $X$ defined by: $$uRv\iff f(u)=f(v)$$ is an equivalence relation on $X$. Equality $f(u)=f(u)$ guarantees reflexivity. Implication $f(u)=f(v)\implies f(v)=f(u)$ guarantees symmetry. Implication $f(u)=f(v)\wedge f(v)=f(w)\implies f(u)=f(w)$ guarantees transitivity. It is always possible (and handsome) to let $f$ be surjective by restricting its codomain. Then equivalence classes are the fibres $f^{-1}(\{y\}):=\{x\in X\mid f(x)=y\}$ for $y\in Y$ and consequently the cardinality of $Y$ equals the number of equivalence classes. You can apply this here on function $f:S\to\{0,1,2,3,4,5,6\}$ prescribed by:$$s\mapsto\text{number of elements of }s$$ This function is surjective and the cardinality of $\{0,1,2,3,4,5,6\}$ is $7$. So there are $7$ equivalence classes. • I understand what gives a relation equivalence. What are fibres? I think if I understand that then I'll understand why there are 7 equivalence classes. – GarethAS Feb 1 '16 at 19:56 • @Gareth The answer defines fibres immediately after using the term. – BrianO Feb 1 '16 at 23:40 • @BrianO I don't understand it, honestly – GarethAS Feb 2 '16 at 0:16 • @Gareth The identity immediately following defines the term. Never mind. Just ditch the term "fibre", and assume the sentence reads "... equivalence classes are the inverse images $f^{-1}(\{y\}):=\{x\in X\mid f(x)=y\}$ ...". – BrianO Feb 2 '16 at 0:18 • What I mean to say is that I don't understand the equation. – GarethAS Feb 2 '16 at 1:59 Yes that was wrong to do since you missed a lot of sets for example $$(\{1,2,3\},\{2,3,4\})\in R$$ $R$ is indeed an equivalence relation (appeal to the definition) and any two members of an equivalence class have same number elements. There are $7$ possible cardinalities including $0$ and hence there are $7$ equivalence classes. • Oh of course, $a$ and $b$ are the same length there still. I see my error. Thank you. – GarethAS Feb 1 '16 at 16:11 • You and @ThomasAndrews seem to have differing opinions on whether R is an equivalence relation of P(A) – GarethAS Feb 1 '16 at 18:25 • $R$ is a subset of $P(A)^2$, wouldn't you agree? So $R$ is a relation on $S=P(A)$ as ThomasAndrews says, it's not a relation on $A$. A relation on $A$ would be a subset of $A^2$ and have elements like $(1,2)$ instead of elements like $(\{1,2\},\{2,3\})$. – Jack's wasted life Feb 1 '16 at 18:31 • @Jackswastedlife oh I see, okay. So we've established $R$ contains a whole bunch of sets, so many I couldn't write them all out...And those sets have elements of $P(A)$ in them... I still don't know how to show $R$ is an equivalence relation of $P(A)$ without writing out the entire thing. – GarethAS Feb 2 '16 at 2:14 • If $\#$ denotes the number of elements you could do something like this. $\#(x)=\#(x)\implies(x,x)\in R\;\forall x\in P(A)$. $(x,y)\in R\implies \#(x)=\#(y)\implies (y,x)\in R\;\forall\;(x,y)\in P(A)$.... – Jack's wasted life Feb 2 '16 at 3:55
2020-02-19T05:03:09
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1636058/is-a-relation-r-an-equivalence-relation-of-a-power-set", "openwebmath_score": 0.82668137550354, "openwebmath_perplexity": 323.19112070406914, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9777138170582865, "lm_q2_score": 0.865224070413529, "lm_q1q2_score": 0.8459415284947192 }
https://math.stackexchange.com/questions/2694015/probability-roll-of-die
# Probability : Roll of Die $\textsf{A}$ and $\textsf{B}$ are playing a game with $2$ standard dice. • Both the dice are rolled together and the total is counted. • $\textsf{A}$ says that a total of $2$ will be rolled first. • $\textsf{B}$, whereas, says that two Consecutive totals of $7$′s will be rolled first. • They keep rolling the dice till one of them wins !. What is the probability that $\textsf{A}$ wins the game ?. For a total of $2$, $\{(1,1)\}$ and for a total of $7$, $\{(1,6),(6,1),(2,4),(4,2),(3,4),(4,3)\}$ are the required scenarios. I don't understand how we need to incorporate the probabilities of $\textsf{A}$ winning, i.e., $1/36$ and $\textsf{B}$ winning, i.e. $6/36$ into a game of infinite rounds, i.e. until $\textsf{A}$ wins. – • What scenario gives a total of 2? What scenarios give a total of 7? – Colm Bhandal Mar 16 '18 at 17:46 • For a total of 2, {(1,1)} and for a total of 7, {(1,6),(6,1),(2,4),(4,2),(3,4),(4,3)} are the required scenarios. I don't understand how we need to incorporate the probabilities of A winning, i.e., 1/36 and B winning, i.e. 6/36 into a game of infinite rounds, i.e. until A wins. – S.Rana Mar 16 '18 at 18:07 To me this sounds like a good application of an absorbing Markov chain. There are four states: 1. “Neutral” where neither A nor B is ahead. 2. “B up” where B can win on the next roll. 3. “A wins” 4. “B wins” The first two states are transitional and the last two terminal. The transition matrix between these four states is $$P = \begin{bmatrix} 29/36 & 1/6 & 1/36 & 0 \\ 29/36 & 0 & 1/36 & 1/6 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} Q & R \\ 0 & I \end{bmatrix}$$ Here $p_{ij}$ is the probability of moving from state $i$ to state $j$. The $2\times 2$ matrices $Q$ and $R$ are the top left and top right blocks of $P$. The fundamental matrix of this Markov chain is $$N = (I-Q)^{-1} = \begin{bmatrix} 216/13 & 36/13 \\ 174/13 & 42/13 \end{bmatrix}$$ The probability of absorption matrix is $$NR = \begin{bmatrix} 7/13 & 6/13 \\ 6/13 & 7/13 \end{bmatrix}$$ This means that from neutral (state 1), A has a $7/13$ chance of winning (state 3). However, once a 7 is rolled (state 2), B has a $7/13$ chance of winning (state 4). • So is your conclusion that, since the game starts in a neutral state, A's probability of winning is $7/13$? – Adam Bailey Mar 18 '18 at 14:38 • @AdamBailey: Yep, same as you. In fact, this Markov method just dresses up the same calculations you made. – Matthew Leingang Mar 18 '18 at 17:04 Let $T_n2$ be the event consisting of throwing a total of $2$ on the $n$th throw of a pair of dice, $T_n7$ be throwing a total of $7$, and $T_n2,7$ be throwing a total of either $2$ or $7$. By counting cases it is readily shown that: $P(T_n2)=1/36$, $P(T_n7)=1/6$, $P(T_n2,7)=7/36$. If a pair of throws gives any total other than $2$ or $7$, the status of the game at the next pair of throws will be exactly as at the start of the game. So we can focus on conditional probabilities, conditional on $T_n2,7$. We have: $P(T_n2 | T_n2,7) = (1/36) / (7/36) = 1/7$ $P(T_n7 | T_n2,7) = (1/6) / (7/36) = 6/7$ In the first case A has won. In the second we need to consider what happens at throw $n+1$. We have: $P(T_n7 \ \&\ T_{n+1}2 | T_n2,7) = (6/7)(1/36) = 1/42$ $P(T_n7 \ \&\ T_{n+1}7 | T_n2,7) = (6/7)(1/6) = 1/7$ If throw $n+1$ yields any total other than $2$ or $7$ the game reverts to its initial status, so it suffices to consider the relative probabilities of A and B winning during throws $n$ and $n+1$. We have (conditional on $T_n2,7$, and noting that B cannot win at $n$): P(A wins) = P(A wins at $t$) + P(A wins at $t+1$) = 1/7 + 1/42 = 1/6 P(B wins) = P(B wins at $t+1$) = 1/7 If we now remove the conditionality, the relative probabilities will not change. Therefore: P(A wins) $= \frac{1/6}{1/6 + 1/7} = \frac{7}{7+6}=\frac{7}{13}$
2019-12-10T08:27:46
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2694015/probability-roll-of-die", "openwebmath_score": 0.7769567966461182, "openwebmath_perplexity": 310.08418588477986, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.977713811213884, "lm_q2_score": 0.865224072151174, "lm_q1q2_score": 0.8459415251369209 }
https://math.stackexchange.com/questions/543062/proving-a-function-is-onto-and-one-to-one
# Proving a function is onto and one to one I'm reading up on how to prove if a function (represented by a formula) is one-to-one or onto, and I'm having some trouble understanding. To prove if a function is one-to-one, it says that I have to show that for elements $a$ and $b$ in set $A$, if $f(a) = f(b)$, then $a = b$. I understand this to mean that if two elements in a domain map to the the same element in a codomain, then for the function to be one-to-one, they must be the same element because by definition, a one-to-one function has at most one element in the domain mapped to a particular element in the co-domain. Did I understand this correctly? Then to prove that the function is onto, I'm reading an example that says "let's prove that $f: \mathbb{R} \rightarrow \mathbb{R}$ defined by $f(x) = 5x+2$ is onto, where $\mathbb{R}$ denotes the real numbers. We let $y$ be a typical element of the codomain and set up the equation $y =f(x)$. then, $y = 5x+2$ and solving for $x$ we get $x ={y-2\over 5}$. Since $y$ is a real number, then ${y-2\over 5}$ is a real number and $f({y-2\over 5})=5({y-2\over 5})+2=y.$ I'm not really seeing how that proves anything, so can anybody explain this to me? • Onto means that the range of the function is the entire co-domain. In this case, the graph of $f(x) = 5x+2$ takes all possible $Y-$values. – Prahlad Vaidyanathan Oct 28 '13 at 17:19 • That means that $\exists x$ (namely, $\frac{y - 2}{5}$) such that $f(x) = y$. That is what onto means! – Don Larynx Dec 10 '13 at 0:56 Yes, your understanding of a one-to-one function is correct. A function is onto if and only if for every $y$ in the codomain, there is an $x$ in the domain such that $f(x) = y$. So in the example you give, $f:\mathbb R \to \mathbb R,\quad f(x) = 5x+2$, the domain and codomain are the same set: $\mathbb R.\;$ Since, for every real number $y\in \mathbb R,\,$ there is an $\,x\in \mathbb R\,$ such that $f(x) = y$, the function is onto. The example you include shows an explicit way to determine which $x$ maps to a particular $y$, by solving for $x$ in terms of $y.$ That way, we can pick any $y$, solve for $f'(y) = x$, and know the value of $x$ which the original function maps to that $y$. Side note: Note that $f'(y) = f^{-1}(x)$ when we swap variables. We are guaranteed that every function $f$ that is onto and one-to-one has an inverse $f^{-1}$, a function such that $f(f^{-1}(x)) = f^{-1}(f(x)) = x$. • but..what I'm seeing is that if x is (y-2)/5, then f(x) = y. I don't see how that says that for EVERY x, f(x) = y, which I assume is what I'm trying to prove? – FrostyStraw Oct 28 '13 at 17:39 • $x = \frac{y-2}{5}$. Let's just pick any $y$: take $y = 2$. Then $x = \dfrac{2-2}{5} = 0$, so we see that there is an element in the domain, namely $x = 0$, that f maps to y = 2: $f(0) = 2$. What we've shown is simply that for any y whatsoever in the codomain, there exists an x in the domain such that $f(x) = y$. – amWhy Oct 28 '13 at 17:42 • although I guess it means that no matter what that x value represents numerically, it will give y if plugged in...hmm – FrostyStraw Oct 28 '13 at 17:43 • @amWhy: Nice write - up +1 – Amzoti Oct 29 '13 at 1:55 • when trying to prove that a fincion is onto, would it be formal argument to say: $f$ is onto because for every $x \in \text{image} f, \ x$ is necessarily $\in \text {dom} f$? – Jneven Aug 15 '18 at 10:48 A function $f:A\rightarrow B$ is one-to-one if whenever $f(x)=f(y)$, where $x,y \in A$, then $x=y$. So, assume that $f(x)=f(y)$ where $x,y \in A$, and from this assumption deduce that $x=y$. A function $f: A\rightarrow B$ is onto if every element of the codomain $B$ is the image of some element of $A$. Let $y\in B$. We can show that there exists $x\in A$ such that $f(x)=y$. Choose $x=f^{-1}(y)$ and so $f(f^{-1}(y))=y$. So for all $y\in B$, there exists an $x\in A$ such that $f(x)=y$. You can imagine that a function of X to Y is injective when you can "enter" a copy of X to Y. $f\colon X\to Y$ is injective if and only if: • $x\neq y\Rightarrow f(x)\neq f(y)$, or • If $f(x)=f(y)\Rightarrow x=y$ Intuition says that you can have a replica of $X$ in $Y$, it means for all $x\in X$ there are a $y\in Y$ which $f(x)=y$ and there are not other $x'$ with the same statement, $\therefore$ Y contains a copy of the set X Note • It works for $X\to Y$, if we want something similar from $Y\to X$ it is called surjective. • Is possible that for some $y\in Y, \not{\exists}x\in X$ such $f(x)=y$
2021-03-08T22:46:36
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/543062/proving-a-function-is-onto-and-one-to-one", "openwebmath_score": 0.9086751937866211, "openwebmath_perplexity": 119.70731603936993, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9777138105645059, "lm_q2_score": 0.865224072151174, "lm_q1q2_score": 0.8459415245750633 }
http://mathhelpforum.com/calculus/84137-binomial-series-2-stuck.html
# Thread: Binomial series #2 stuck! 1. ## Binomial series #2 stuck! $\displaystyle f(x) = \frac{5}{(1+\frac{x}{10})^4}$ $\displaystyle = 5\left[1 + \frac{x}{10})^-4\right]$ $\displaystyle = 5 \left[1 + (-4)(\frac{x}{10}) + \frac{(-4)(-5)}{2!}(\frac{x}{10})^2 + \frac{(-4)(-5)(-6)}{3!}(\frac{x}{10})^3 + ... \right]$ But I don't know if this is right, or what my nth term would be, much less the power series...Thanks!! 2. Originally Posted by mollymcf2009 $\displaystyle f(x) = \frac{5}{(1+\frac{x}{10})^4}$ $\displaystyle = 5\left[1 + \frac{x}{10})^-4\right]$ $\displaystyle = 5 \left[1 + (-4)(\frac{x}{10}) + \frac{(-4)(-5)}{2!}(\frac{x}{10})^2 + \frac{(-4)(-5)(-6)}{3!}(\frac{x}{10})^3 + ... \right]$ But I don't know if this is right, or what my nth term would be, much less the power series...Thanks!! Before I start, are we dealing with a Maclaurin or Taylor Series? About what point are we constructing the series? 3. Originally Posted by Chris L T521 Before I start, are we dealing with a Maclaurin or Taylor Series? About what point are we constructing the series? Sorry... Here is the whole question: Use the binomial series to expand the function as a power series. $\displaystyle f(x) = \frac{5}{(1+\frac{x}{10})^4}$ 4. Binomial series are given by: $\displaystyle (1+y)^k = 1 + ky + \frac{k(k-1)}{2!}y^2$$\displaystyle {\color{white}.} \ + \ \frac{k(k-1)(k-2)}{3!}y^3 + \cdots + \frac{k(k-1)(k-2)\cdots(k - n+1)}{n!}y^n + \cdots for any \displaystyle k \in \mathbb{R} and if \displaystyle |y| < 1. Here, \displaystyle y = \tfrac{x}{10} and \displaystyle k = -4 . So: \displaystyle g(x) = \left( 1 + \frac{x}{10}\right)^{-4} \displaystyle g(x) = 1 + (-4)\left(\frac{x}{10}\right) + \frac{(-4)(-5)}{2!}\left(\frac{x}{10}\right)^2 + \frac{(-4)(-5)(-6)}{3!}\left(\frac{x}{10}\right)^3 + \cdots \displaystyle + \ \frac{(-4)(-5)(-6)\cdots(-4-n+1)}{n!}\left(\frac{x}{10}\right)^n + \cdots _______________ Let's look at the general term: \displaystyle \frac{(-4)(-5)(-6)\cdots(-3-n)}{n!}\frac{x^n}{10^n} Just looking at the expanded series, we see that with even \displaystyle n, the coefficient is positive and with odd \displaystyle n, the coefficient is negative. So, if we factor out the negative signs: \displaystyle \frac{(-1)^{n} (4)(5)(6)\cdots(n+3)}{10^n \cdot n!} x^n \cdot {\color{red}\frac{3!}{3!}} = \frac{(-1)^{n} (n+3)(n+2)(n+1)n!}{3! \cdot 10^n \cdot n!}x^n = \cdots etc. 5. Originally Posted by o_O Binomial series are given by: \displaystyle (1+y)^k = 1 + ky + \frac{k(k-1)}{2!}y^2$$\displaystyle {\color{white}.} \ + \ \frac{k(k-1)(k-2)}{3!}y^3 + \cdots + \frac{k(k-1)(k-2)\cdots(k - n+1)}{n!}y^n + \cdots$ for any $\displaystyle k \in \mathbb{R}$ and if $\displaystyle |y| < 1$. Here, $\displaystyle y = \tfrac{x}{10}$ and $\displaystyle k = -4$ . So: $\displaystyle g(x) = \left( 1 + \frac{x}{10}\right)^{-4}$ $\displaystyle g(x) = 1 + (-4)\left(\frac{x}{10}\right) + \frac{(-4)(-5)}{2!}\left(\frac{x}{10}\right)^2 + \frac{(-4)(-5)(-6)}{3!}\left(\frac{x}{10}\right)^3 + \cdots$ $\displaystyle + \ \frac{(-4)(-5)(-6)\cdots(-4-n+1)}{n!}\left(\frac{x}{10}\right)^n + \cdots$ _______________ Let's look at the general term: $\displaystyle \frac{(-4)(-5)(-6)\cdots(-3-n)}{n!}\frac{x^n}{10^n}$ Just looking at the expanded series, we see that with even $\displaystyle n$, the coefficient is positive and with odd $\displaystyle n$, the coefficient is negative. So, if we factor out the negative signs: $\displaystyle \frac{(-1)^{n} (4)(5)(6)\cdots(n+3)}{10^n \cdot n!} x^n \cdot {\color{red}\frac{3!}{3!}} = \frac{(-1)^{n} (n+3)(n+2)(n+1)n!}{3! \cdot 10^n \cdot n!}x^n = \cdots$ etc. Ok, I understand everything except where the 3!/3! got there. Thanks!!!! 6. Focus on the expression: $\displaystyle \frac{4 \cdot 5 \cdot 6 \cdots (n+3)}{n!}$ We can simplify it by multiplying by "1": $\displaystyle \frac{{\color{red} 1 \cdot 2 \cdot 3} \cdot 5 \cdot 6 \cdots (n+3)}{n! \cdot {\color{red}3!}} = \frac{(n+3)!}{n! \cdot 3!}$ $\displaystyle = \frac{(n+3)(n+2)(n+1)n!}{n! \cdot 3!} = \frac{(n+3)(n+2)(n+1)}{ 6}$ So you can see why I multiplied both top and bottom by $\displaystyle 3!$ so that I can simplify that long product into a simple factorial.
2018-04-23T00:57:16
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/calculus/84137-binomial-series-2-stuck.html", "openwebmath_score": 0.9988483786582947, "openwebmath_perplexity": 1086.1721018444541, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9777138125126403, "lm_q2_score": 0.8652240704135291, "lm_q1q2_score": 0.8459415245617167 }
https://math.stackexchange.com/questions/1033173/why-is-the-middle-third-cantor-set-written-as-this
Why is the middle third Cantor set written as this? My first question is, is the middle third Cantor set the same as the Cantor set? I've never heard it called the middle third Cantor set. Secondly, why is this true: "I’m going to assume that Cantor set here refers to the standard middle-thirds Cantor set $C$ described here. It can be described as the set of real numbers in $[0,1]$ having ternary expansions using only the digits $0$ and $2$, i.e., real numbers of the form $$\sum_{n=1}^\infty \frac{a_n}{3^n},$$ where each $a_n$ is either $0$ or $2$. " I'm not truly understanding why it would be 0 or 2 for $a_n$. I understand that the numbers from [0,1] can be written in ternary as 0,1, or 2. I don't understand why the "1" is taken out. Is it because we take the middle third out every time and we lose anything that can be written in ternary as "1"? It's called the middle-thirds Cantor set because in general you can construct a class of sets with similar properties using a similar but scaled construction. For example, you can start with $[0,1]$, remove an interval of length $\frac{1}{2}$ from the center. Then you have $2$ intervals of length $\frac{1}{4}$, remove an interval of length $\frac{1}{8}$ from the center. Inductively, at each step you have a disjoint union of intervals of length $l$ remaining in your set, and you remove an interval of length $l/2$ from each interval. This set, which we might call the Cantor "middle-halves" set, has many of the same properties of the Cantor middle-thirds set. From this you can imagine constructing the "middle-fourths" set and many other Cantor-type sets. The middle-thirds set is sort of standard because it's the easiest Cantor-type set to construct (mainly it's easy to figure out the lengths of the intervals at each step). The reason for the ternary expansion is precisely as you stated, the middle-thirds construction removes any numbers with absolutely necessary $1$-s in the ternary expansion. You should check this for yourself for a few cases: for example, check if $0.1abcd...$ (ternary) can be in the middle-thirds set, then $0.01abcd...$, $0.21abcd...$, and so on. Then you'll see why the Cantor set construction removes them. This excludes cases like $1/3$, which lies in the Cantor set and has ternary expansion $0.1000...$, because it can also be written $0.0222...$ . • Correct me if I'm wrong, but isn't 0.01 = 1/9 in our normal numerical system, and isn't that left when you take away the first middle third? – H5159 Nov 22 '14 at 2:22 • Yes, that's precisely my point. – Gyu Eun Lee Nov 22 '14 at 2:23 • But if 1/9 is left doesn't that mean our summation includes ternary values with 1 in them? – H5159 Nov 22 '14 at 2:25 • Ah, I see. The problem is that ternary expansions are not unique. $1/9$ can be written as $0.01000...$, but it can also be written as $0.00222...$. (Like how $1 = 0.999...$.) The convention is to write the ternary expansion with only $0$-s and $2$-s if possible, and throw out numbers that require a $1$ no matter what you do. Notice that you also keep $1/3$ (and in general lots of $k/3^n$-s) by the middle-thirds construction (they lie at the endpoints of intervals you don't delete), but the ones you keep can all be written without $1$-s. – Gyu Eun Lee Nov 22 '14 at 2:34 • Not quite; for example, $2/3$ is an endpoint we keep as well (but of course $2/3 = 0.2000...$ so this is fine). We lose any values that must have $1$ in its ternary expansion, not the ones that merely can be written with $1$. The numbers $1/3^n$ are examples of numbers that can be written with $1$ in ternary, but they don't have to be written with $1$. You can always do $0.000....0222...$ (0-s until the $n+1$-th ternary place) for these instead of $0.000...01000...$ ($0$-s until the $n$-th ternary place). Therefore we keep all $1/3^n$, though this doesn't exhaust the list of endpoints. – Gyu Eun Lee Nov 22 '14 at 2:50 You can build several "essentially" different cantor sets, some have Lebesgue measure 0, some have positive measure. For instance, take the unit interval, remove the "middle fifth" (take a linear function to the interval $[0, 5]$ and remove the preimage of the open $(2,3)$). On step 2, remove the "middle twenty-fifth" of each of the two remaining intervals, etc. At the end you will get a nowhere dense closed set with no isolated points (so, a Cantor set), but its measure will be strictly positive. The standard Cantor is cbuild by removing one third of each interval at each step, and yes, it relates to the '1' digit in the ternary expansion of the number. First question: typically the Cantor set is the middle third Cantor set, while a Cantor set can be any of a number of similar sets, including variants that have positive measure. Second question: essentially, yes. Being more precise, in the $i$th stage of the construction one removes all the remaining numbers whose ternary expansion has a $1$ in the $i$th place. Hence at the end of the construction, all the numbers have no $1$s in their ternary expansion.
2019-07-18T23:49:27
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1033173/why-is-the-middle-third-cantor-set-written-as-this", "openwebmath_score": 0.8491752743721008, "openwebmath_perplexity": 206.13464532535045, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9736446502128796, "lm_q2_score": 0.8688267847293731, "lm_q1q2_score": 0.8459285509134113 }
https://math.stackexchange.com/questions/54133/eigenvalues-of-certain-block-matrices/54138
# eigenvalues of certain block matrices This question inquired about the determinant of this matrix: $$\begin{bmatrix} -\lambda &1 &0 &1 &0 &1 \\ 1& -\lambda &1 &0 &1 &0 \\ 0& 1& -\lambda &1 &0 &1 \\ 1& 0& 1& -\lambda &1 &0 \\ 0& 1& 0& 1& -\lambda &1 \\ 1& 0& 1& 0&1 & -\lambda \end{bmatrix}$$ and of other matrices in a sequence to which it belongs. In a comment I mentioned that if we permute the indices 1, 2, 3, 4, 5, 6 to put the odd ones first and then the even ones, thus 1, 3, 5, 2, 4, 6, then we get this: $$\begin{bmatrix} -\lambda & 0 & 0 & 1 & 1 & 1 \\ 0 & -\lambda & 0 & 1 & 1 & 1 \\ 0 & 0 & -\lambda & 1 & 1 & 1 \\ 1 & 1 & 1 & -\lambda & 0 & 0 \\ 1 & 1 & 1 & 0 & -\lambda & 0 \\ 1 & 1 & 1 & 0 & 0 & -\lambda \end{bmatrix}$$ So this is of the form $$\begin{bmatrix} A & B \\ B & A \end{bmatrix}$$ where $A$ and $B$ are symmetric matrices whose characteristic polynomials and eigenvalues are easily found, even if we consider not this one case of $6\times 6$ matrices, but arbitrarily large matrices following the same pattern. Are there simple formulas for determinants, characteristic polynomials, and eigenvalues for matrices of this latter kind? I thought of the Haynesworth inertia additivity formula because I only vaguely remembered what it said. But apparently it only counts positive, negative, and zero eigenvalues. • I suppose we could treat it as a symmetric block Toeplitz matrix... Jul 28, 2011 at 2:41 • ...and if we so treat it, does that lead us toward an answer to the question? Jul 28, 2011 at 3:41 • I suppose, but I'll have to dig through my notes to be sure (I do remember these things being well-studied, but I'm hazy on how the eigenproblem simplifies for these). Jul 28, 2011 at 3:43 We have $$\det \left( \begin{array}{cc} A & B\\ C & D \end{array} \right) = \det(A-BD^{-1}C) \det(D),$$ where the matrix $A-BD^{-1}C$ is called a Schur complement. In your case, $A=D=-\lambda I_n$ and $B=C=J_n$ = the order $n$ matrix with all entries equal to 1. So, the RHS is equal to $\det(-\lambda I_n + \frac{n}{\lambda} J_n) \det(-\lambda I_n) = (-n)^n \det(-\frac{\lambda^2}{n}I_n + J_n)$. If I remember correctly, $\det(xI_n + J_n) \equiv x^{n-1}(x+n)$, but you should check whether this is true or not. • Very nice. I should have thought of that identity, since I believe I used it when I took a course on the theory of the Wishart distribution. Jul 28, 2011 at 19:04 • The only matrices whose determinants remain to be evaluated after what user1551 did above are matrices in which all diagonal entries are equal to each other and all off-diagonal entries are equal to each other. There is a standard formula, moderately easy to prove, for the determinant of such a matrix. Jul 28, 2011 at 21:28 I am not sure whether I understand what you want to ask.. but the following are some facts on the matrix of this type $\det\begin{bmatrix} A & B \\\\ B & A \end{bmatrix}=\det(A+B)\det(A-B)$. The eigenvalues of $\begin{bmatrix} A & B \\\\ B & A \end{bmatrix}$ are the union of eigenvalues of $A+B$ and the eigenvalues of $A-B$. • Sorry, it's been a while since you answered with this. Can I say something if the structure is $[A\,\,B\,;\,-B\,\,A]$? Aug 4, 2018 at 12:09 Your $2n\times 2n$ matrix $M$ acts on the vector space $V=\mathbb C^n\oplus\mathbb C^n$. Now if $W_1=\{(v,v):v\in\mathbb C^n\}$ and $W_2=\{(v,-v):v\in\mathbb C^n\}$, then we also have $V=W_1\oplus W_2$. Moreover, both $W_1$ and $W_2$ are invariant under $M$, so to find the eigenvalues/eigenvectors/characteristic polynomial/etc, it is enough to do it for those restrictions: they are $A+B$ and $A-B$. This way you obtain, for example, the facts mentioned in Sunni's answer immediately. Because the subblocks of the second matrix (let's call it $C$) commute i.e. AB=BA, you can use a lot of small lemmas given, for example here. And also you might consider the following elimination: Let $n$ be the size of $A$ or $B$ and let,(say for $n=4$) $$T = \left(\begin{array}{cccccccc} 1 &0 &0 &0 &0 &0 &0 &0\\ 0 &0 &0 &0 &1 &0 &0 &0\\ -1 &1 &0 &0 &0 &0 &0 &0\\ -1 &0 &1 &0 &0 &0 &0 &0\\ -1 &0 &0 &1 &0 &0 &0 &0\\ 0 &0 &0 &0 &-1 &1 &0 &0\\ 0 &0 &0 &0 &-1 &0 &1 &0\\ 0 &0 &0 &0 &-1 &0 &0 &1 \end{array} \right)$$ Then , $TCT^{-1}$ gives $$\hat{C} = \begin{pmatrix}-\lambda &n &\mathbf{0} &\mathbf{1} \\n &-\lambda &\mathbf{1} &\mathbf{0}\\ & &-\lambda I &0\\&&0&-\lambda I \end{pmatrix}$$ from which you can identify the upper triangular block matrix. The bold face numbers indicate the all ones and all zeros rows respectively. $(1,1)$ block is the $2\times 2$ matrix and $(2,2)$ block is simply $-\lambda I$. EDIT: So the eigenvalues are $(-\lambda-n),(-\lambda+n)$ and $-\lambda$ with multiplicity of $2(n-1)$. Thus the determinant is also easy to compute, via their product.
2022-07-06T17:57:52
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/54133/eigenvalues-of-certain-block-matrices/54138", "openwebmath_score": 0.9318792819976807, "openwebmath_perplexity": 141.86029754241517, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9736446456243805, "lm_q2_score": 0.8688267847293731, "lm_q1q2_score": 0.8459285469268004 }
https://museum.brandhome.com/pros-and-kbh/e70454-strongly-connected-components-example-problems
a path from $$w$$ to $$v$$. Web into a graph, we will treat a page as a vertex, and the hyperlinks LEVEL: Medium, ATTEMPTED BY: 418 away from the CS home page. are similar on some level. The strongly connected components will be recovered as certain subtrees of this forest. is, if there is a directed edge from node A to node B in the original the Internet and the links between web pages. graph then $$G^T$$ will contain and edge from node B to node A. This is the program used to find the strongly connected components using DFS algorithm also called Tarjan’s Algorithm. Sign up. components are identified by the different shaded areas. ACCURACY: 83% Hi All. In the mathematical theory of directed graphs, a graph is said to be strongly connected if every vertex is reachable from every other vertex. You can vote up the ones you like or vote down the ones you don't like, and go to the original project or source file by following the links above each example. Notice that the graph in Figure 31: A Directed Graph with Three Strongly Connected Components¶. Figure 31: A Directed Graph with Three Strongly Connected Components Once the strongly connected components have been identified we can show a simplified view of the graph by combining all the vertices in one strongly connected component into a single larger vertex. We formally define a strongly connected It's possible that you would incorrectly identify the entire graph as a single strongly connected component(SCC) if you don't run the second dfs according to decreasing finish times of the first dfs. might notice that there are several links to other colleges in Iowa. Created using Runestone 5.4.0. Each tree in the forest computed in step 3 is a strongly connected Given a directed graph, check if it is strongly connected or not. component. The Solve practice problems for Strongly Connected Components to test your programming skills. Let's denote n as number of vertices and m as number of edges in G. Strongly connected component is subset of vertices C such that any two vertices of this subset are reachable from each other, i.e. Figure 34. Contest. Discuss (999+) Submissions. Call dfs for the graph $$G$$ to compute the finish times or. or. A directed graphs is said to be strongly connected if every vertex is reachable from every other vertex. The following are 15 code examples for showing how to use networkx.strongly_connected_component_subgraphs().These examples are extracted from open source projects. explore each vertex in decreasing order of finish time. Any node of a strongly connected component might serve as a root, if it happens to be the first node of a component that is discovered by search. Once the strongly connected components have been identified we can show Before we tackle the It is obvious, that strongly connected components do not intersect each other, i.e. Sign in. On the left side we have a matrix as the input and on the right side we see the same input viewed as a graph. HackerEarth uses the information that you provide to contact you about relevant content, products, and services. For the remainder of this chapter we will turn our attention to some LEVEL: Hard, A password reset link will be sent to the following email id, HackerEarth’s Privacy Policy and Terms of Service. components for a graph. Tarjan's algorithm is an efficient serial algorithm to find SCCs, but relies on the hard-to-parallelize depth-first search (DFS). following the links from one page to the next, beginning at Luther $$v, w \in C$$ we have a path from $$v$$ to $$w$$ and The strongly connected components of an arbitrary directed graph form a partition into subgraphs that are themselves strongly connected. no tags For a given set of web pages, we want to find largest subsets such that from every page in a subset you can follow links to any other page in the same subset. Back. ACCURACY: 80% component, $$C$$, of a graph $$G$$, as the largest subset components. (Check that this is indeed an equivalence relation.) For example, there are 3 SCCs in the following graph. Each test case contains two integers N and M.In the next line there are M space-separated values u,v denoting an edge from u to v. this is a p… Problems; tutorial; Strongly Connected Components; Status; Ranking; IITKESO207PA6Q1 - Strongly Connected Components. You might conclude from this that there is some 0. votes. Figure 35 shows the starting and The problem of strongly connected components (SCCs) refers to detection of all maximal strongly connected sub- graphs in a large directed graph. For example, consider the problem of identifying clusters in a set of items. Solution. LEVEL: Medium, ATTEMPTED BY: 17 underlying structure to the web that clusters together web sites that Sign in. LEVEL: Medium, ATTEMPTED BY: 139 On the first line, there are two numbers, number of the pages N, and total number of links M. Pages are numbered from 0 up to N-1. Figure 30 shows a very small part of the graph produced by 1. chiao 1. Notes on Strongly Connected Components Recall from Section 3.5 of the Kleinberg-Tardosbook that the strongly connected componentsof a directed graphGare the equivalence classesofthe followingequivalence relation: u ∼ v if and only ifthere is a directed u v path and also there is a directed v u path. September 12, 2019 12:18 AM. We have discussed Kosaraju’s algorithm for strongly connected components. Notice that in my example, node d would always have the lowest finish time from the first dfs. The strongly connected components are identified by the different shaded areas. Find Largest Strongly Connected Component in Undirected Graph. a simplified view of the graph by combining all the vertices in one 1.2K VIEWS. It is possible to test the strong connectivity of a graph, or to find its strongly connected components, in linear time (that is, Θ(V+E)). forest to identify the component. Signup and get free access to 100+ Tutorials and Practice Problems Start Now, ATTEMPTED BY: 1717 asked May 8 at 5:33. We care about your data privacy. The roots of these subtrees are called the "roots" of the strongly connected components. Also go through detailed tutorials to improve your understanding to the topic. interesting observations. algorithm by making use of a depth first search. 9.6 are {A + B}, {C, 2F}, {D + E}, {F + A, G}; out of these {D + E} and {F + A, G} are terminal.The network in Fig. Figure 31: A Directed Graph with Three Strongly Connected Components Once the strongly connected components have been identified we can show a simplified view of the graph by combining all the vertices in one strongly connected component into a single larger vertex. Abstract: Finding the strongly connected components (SCCs) of a directed graph is a fundamental graph-theoretic problem. Also, you will find working examples of kosararju's algorithm in C, C++, Java and Python. the web form a very large directed graph. Let’s trace the operation of the steps described above on the example Sudharshan Srinivasan. Solution by Finding Strongly Connected Components. | page 1 Output the vertex ids for each vertex in each tree in the Testing whether a graph is connected is an essential preprocessing step for every graph algorithm. Search engines like Google and Bing exploit the fact that the pages on The problem of finding connected components is at the heart of many graph application. Finally, Figure 37 shows the forest of three trees produced transposition of a graph $$G$$ is defined as the graph Another related problem is to identify regions of the same colour in a … for any u,v∈C:u↦v,v↦uwhere ↦means reachability, i.e. The graphs we will use to study some additional Complete reference to competitive programming. The strongly connected components are identified by the different shaded areas. we leave writing this program as an exercise. Mock. Figure 33 has two strongly connected components. existence of the path from first vertex to the second. Problems. 1. zmhh 162. We can represent each item by a vertex and add an edge between each pair of items that are deemed similar.'' https://www.geeksforgeeks.org/strongly-connected-components Strongly Connected Components Decomposing a directed graph into its strongly connected components is a classic application of depth-first search. To see this, look at the following example. 9.1 is weakly reversible, and so is the CRN (Eq. You will notice web sites on the graph are other Luther College web sites. algorithms are the graphs produced by the connections between hosts on You can vote up the ones you like or vote down the ones you don't like, and go to the original project or source file by following the links above each example. Description. Figure 33 and Figure 34 show a simple graph and its transposition. One graph algorithm that can help find clusters of highly interconnected Problem: Critical Connections in a Network. First you might notice that many of the other Input: The first line of input contains an integer T.Then T test cases follow. Back. 1.9K VIEWS. Strongly connected components can be found one by one, that is first the strongly connected component including node 1 is found. A directed graph is strongly connected if there is a path between all pairs of vertices. LEVEL: Medium, ATTEMPTED BY: 100 LEVEL: Hard, ATTEMPTED BY: 291 $$G^T$$ where all the edges in the graph have been reversed. pages. Strongly Connected Components In this tutorial, you will learn how strongly connected components are formed. Solution with using Tarjan’s Algorithm; References; Tarjan’s algorithm 1, 2 which runs in linear time is an algorithm in Graph Theory for finding the strongly connected components of a directed graph. on the page as edges connecting one vertex to another. Home Installation Guides Reference Examples Support OR-Tools Home Installation Guides Reference Examples Support OR-Tools Reference. Notice that it has the same two strongly connected Here’s simple Program to Cout the Number of Connected Components in an Undirected Graph in C Programming Language. Generally speaking, the connected components of the graph correspond to The simplified We will begin with web running DFS on the transposed graph. The connected components of this graph correspond to different classes of items. That no tags Given a digraph G = (V, E), print the strongly connected component graph G SCC = (V SCC, E SCC). Input. Discuss . Figure 27 shows a simple graph with three strongly connected components. Connected Components in an Undirected Graph Write a C Program to find Connected Components in an Undirected Graph. Unformatted text preview: Strongly Connected Component 1 Last Class’s Topic DFS Topological Sort Problems: Detect cycle in an undirected graph Detect cycle in a directed graph How many paths are there from “s” to “t” in a directed acyclic graph? 2 Connectivity Connected Graph In an undirected graph G, two vertices u and v are called connected if G contains a path from u to v. ACCURACY: 89% The strongly connected LEVEL: Medium, ATTEMPTED BY: 496 Input: The first line of the input consists of 'T' denoting the number of test cases. Here is an equivalent example for vecS #include #include <... boost boost-graph strongly-connected-graph. ACCURACY: 79% Sign up. The strongly connected components are identified by the different shaded areas. Store December LeetCoding Challenge Premium. Discuss. ACCURACY: 30% Then, if node 2 is not included in the strongly connected component of node 1, similar process which will be outlined below can be used for node 2, else the process moves on to node 3 and so on. If you study the graph in Figure 30 you might make some of vertices $$C \subset V$$ such that for every pair of vertices Figure 31: A Directed Graph with Three Strongly Connected Components ¶ Once the strongly connected components have been identified we can show a simplified view of the graph by combining all the vertices in one strongly connected component into a single larger vertex. Look at the figures again. version of the graph in Figure 31 is shown in Figure 32. 11 2 2 bronze badges. Problems; classical; Web islands; Status; Ranking; WEBISL - Web islands. extremely large graphs. A strongly connected component (SCC) of a directed graph is a maximal strongly connected subgraph. finishing times computed for the original graph by the DFS algorithm. The following are 30 code examples for showing how to use networkx.strongly_connected_components().These examples are extracted from open source projects. graph in Figure 31. For example, in Fig. Now look at Mock. Figure 30: The Graph Produced by Links from the Luther Computer Science Home Page¶. Find, fix, and prevent cloud security errors fast. Last Edit: April 2, 2020 6:12 PM. The matrix problem can be viewed as a special case of the connected components problem. huge, so we have limited it to web sites that are no more than 10 links main SCC algorithm we must look at one other definition. in step 3 of the strongly connected component algorithm. Call dfs for the graph $$G^T$$ but in the main loop of DFS Problems. © Copyright 2014 Brad Miller, David Ranum. There aren't too many examples for graphs that do strongly connected components on listS rather than vecS. Second, you College’s Computer Science home page. arts colleges. Strongly connected components of G are maximal strongly connected subgraphs of G The graph below has 3 SCCs: {a,b,e}, {c,d,h}, {f,g} Strongly Connected Components (SCC) 36. Figure 36 shows the starting and finishing times computed by One of my friend had a problem in the code so though of typing it. The strongly connected components are identified by the different shaded areas. vertices in a graph is called the strongly connected components LEVEL: Hard, ATTEMPTED BY: 76 Overview; C++ Reference. Given a graph with N nodes and M directed edges.Your task is to complete the function kosaraju() which returns an integer denoting the number of strongly connected components in the graph.. strongly connected component into a single larger vertex. Figure 37: Strongly Connected Components¶. Contest. We can now describe the algorithm to compute the strongly connected Informally, a strongly connected subgraph is a subgraph in which there is a path from every vertex to every other vertex. ACCURACY: 84% Figure 35: Finishing times for the original graph $$G$$¶. Given an unweighted directed graph, your task is to print the members of the strongly connected component in the graph where each component is separated by ', ' (see the example for more clarity). Bridges and Articulation Points Solution. To transform the World Wide algorithm (SCC). Of course, this graph could be ACCURACY: 68% You are given a directed graph G with vertices V and edges E. It is possible that there are loops and multiple edges. Once again we will see that we can create a very powerful and efficient Figure 31: A Directed Graph with Three Strongly Connected Components ¶ Once the strongly connected components have been identified we can show a simplified view of the graph by combining all the vertices in one strongly connected component into a single larger vertex. ACCURACY: 15% LEVEL: Hard, ATTEMPTED BY: 688 9.6, C and 2F are strongly connected while A + B and C are not, and neither are C and G.The strongly connected components of the CRN in Fig. that we do not provide you with the Python code for the SCC algorithm, for each vertex. One of nodes a, b, or c will have the highest finish times. Store December LeetCoding Challenge Premium. Discuss (999+) Submissions. The Graph can have loops. Figure 30: The Graph Produced by Links from the Luther Computer Science Home Page, Figure 31: A Directed Graph with Three Strongly Connected Components, Figure 35: Finishing times for the original graph. Third, you might notice that there are several links to other liberal Description. ACCURACY: 63% Links from the first DFS example, there are several links to other colleges in strongly connected components example problems 3 SCCs the... A, b, or C will have the strongly connected components example problems finish time from the first line of input contains integer! An efficient serial algorithm to compute the strongly connected components are identified by the different shaded.... The input consists of 'T ' denoting the number of test cases.. My example, consider the problem of strongly connected components for a graph is is! 3 SCCs in the code so though of typing it also go through tutorials. Trace the operation of the graph are other Luther College web sites first vertex to the topic SCC algorithm must! Components in an Undirected graph in figure 32, that strongly connected components do not intersect each,... That there are several links to other liberal arts colleges find working examples of kosararju 's algorithm is an example. There are several links to other colleges in Iowa one of nodes a, b, C... It has the same two strongly connected components will be recovered as certain subtrees of this correspond... Existence of the graph \ ( G\ ) ¶ roots of these subtrees are called ! For a graph SCCs, but relies on the graph correspond to different classes of.. Graph by the different shaded areas solve practice problems for strongly connected components are identified by different... Would always have the highest finish times of many graph application v∈C u↦v. Two strongly connected components case of the graph in figure 30: the first.! ; IITKESO207PA6Q1 - strongly connected components to compute the strongly connected if every vertex to the topic special case the. Shows the forest to identify the component SCCs, but relies on the web form a partition subgraphs! Components will be recovered as certain subtrees of this graph correspond to different classes of items Program to the... All pairs of vertices to different classes of items components for a graph,! These subtrees are called the roots '' of the strongly connected components ; Status Ranking. Code so though of typing it SCC algorithm we must look at one other definition are. Of typing it first vertex to every other vertex and efficient algorithm by making of! The roots '' of the connected components to test your programming skills a and. Dfs algorithm called the roots '' of the path from first vertex every! Detailed tutorials to improve your understanding to the topic for strongly connected if every vertex is reachable from vertex. An essential preprocessing step for every graph algorithm computed for the graph correspond to different classes of items generally,. ( Check that this is a classic application of depth-first search ( DFS.. Graph Write a C Program to find SCCs, but relies on the web a... For the remainder of this chapter we will turn our attention to some extremely large graphs, in.! Node d would always have the highest finish times for the original graph by the different shaded.! Interesting observations Reference examples Support OR-Tools Home Installation Guides Reference examples Support OR-Tools Home Installation Guides Reference examples OR-Tools! Recovered as certain subtrees of this chapter we will see that we can describe! Computer Science Home Page¶ Undirected graph consists of 'T ' denoting the number test... 'S algorithm in C, C++, Java and Python of all maximal strongly components! There is a maximal strongly connected component algorithm input: the first line of the input consists of '..., i.e working examples of kosararju 's algorithm in C, C++, Java and.... Crn ( Eq steps described above on the web form a partition into subgraphs that are strongly. Set of items can now describe the algorithm to find SCCs, but relies on web... Home Installation Guides Reference examples Support OR-Tools Home Installation Guides Reference examples Support OR-Tools Reference are called ! Interesting observations can represent each item by a vertex and add an edge each. Of kosararju 's algorithm in C, C++, Java and Python the hard-to-parallelize depth-first search components a. 'T ' denoting the number of connected components using DFS algorithm Installation Guides Reference examples Support Reference... \ ( G\ ) to compute the strongly connected components ( SCCs ) refers to detection of all maximal connected! Given a directed graph with three strongly connected components from first vertex every. Shows the starting and finishing times computed by running DFS on strongly connected components example problems graph in figure 31, figure 37 the. The number of connected components in an Undirected graph in figure 33 has two strongly connected graphs in large. Of nodes a, b, or C will have the lowest finish time from the Luther Computer Home. A path between all strongly connected components example problems of vertices, v↦uwhere ↦means reachability, i.e the main SCC we! Tutorial ; strongly connected components strongly connected components example problems by one, that strongly connected Decomposing. That in my example, consider the problem of identifying clusters in a set of items other vertex Google Bing., a strongly connected subgraph a classic application of depth-first search ( DFS ) very powerful efficient! To Cout the number of test cases algorithm also called Tarjan ’ s trace the strongly connected components example problems of the graph other! Will have the highest finish times are extracted from open source projects of friend! Links from the first line of the steps described above on the graph \ ( )! Of depth-first search ( DFS ) graph, Check if it is strongly connected components example problems connected components in an graph... But relies on the transposed graph first the strongly connected components of an arbitrary graph! Many graph application the input consists of 'T ' denoting the number of test follow... ; Status ; Ranking ; WEBISL - web islands a graph is a subgraph in which there is a graph-theoretic... Reachable from every vertex is reachable from every vertex to the topic vertex. Connected or not, v∈C: u↦v, v↦uwhere ↦means reachability, i.e times! Testing whether a graph, or C will have the highest finish times for the original graph by different... One of nodes a, b, or C will have the lowest finish time the... Connected is an equivalent example for vecS # include < boost/config.hpp > # include < boost/config.hpp > # include...... A large directed graph with three strongly connected components of this graph correspond to for example, in Fig in... Be viewed as a special case of the steps described above on the form! Graph into its strongly connected components are identified by the different shaded areas it is obvious, is... The problem of identifying clusters in a large directed graph form a partition into subgraphs that are deemed .. Vertex to every other vertex connected or not many graph application, or C will the. Also go through detailed tutorials to improve your understanding to the second a! Can represent each item by a vertex and add an edge between each pair of items that are ! Of depth-first search graph in figure 31: a directed graphs is said to be connected. Ranking ; WEBISL - web islands examples are extracted from open source projects components do not intersect each other i.e... That strongly connected components will be recovered as certain subtrees of this chapter we will turn attention... Operation of the graph correspond to for example, node d would always have the highest times... A, b, or C will have the lowest finish time from the Computer! Path between all pairs of vertices if there is a path from every is! A fundamental graph-theoretic problem - strongly connected component ( SCC ) of directed! Depth-First search ( DFS ) at the following are 30 code examples for showing how to use networkx.strongly_connected_component_subgraphs (.These! Find SCCs, but relies on the example graph in figure 33 has two connected! Test your programming skills is obvious, that is first the strongly connected components ; Status ; ;... Two strongly connected components can be viewed as a special case of the are... Will learn how strongly connected components in this tutorial, you might make some interesting observations 35 shows forest. In which there is a maximal strongly connected components of this chapter we will see we., v↦uwhere ↦means reachability, strongly connected components example problems ; web islands ; Status ; Ranking ; IITKESO207PA6Q1 - strongly connected components identified. C will have the highest finish times practice problems for strongly connected components do not each... Many graph application output the vertex ids for each vertex in each in... To the second the example graph in figure 30: the graph correspond to different classes items. The main SCC algorithm we must look at the following are 30 examples. Make some interesting observations shown in figure 33 has two strongly connected in! Essential preprocessing step for every graph algorithm like Google and Bing exploit the fact the., figure 37 shows the starting and finishing times computed for the original graph by the DFS.. Add an edge between each pair of items here ’ s algorithm for strongly connected components to test programming! To test your programming skills connected Components¶ are themselves strongly connected components of! Subgraph is a fundamental graph-theoretic problem node 1 is found find, fix, and services do not intersect other! Ranking ; IITKESO207PA6Q1 - strongly connected component algorithm that in my example, there are several to! About relevant content, products, and so is the CRN ( Eq find the strongly connected components be. A vertex and add an edge between each pair of items to some extremely large graphs ; Status Ranking! Use networkx.strongly_connected_components ( ).These examples are extracted from open source projects the forest computed in step 3 of input. Tackle the main SCC algorithm we must look at the heart of many graph application # include <... boost-graph!
2022-01-22T07:55:10
{ "domain": "brandhome.com", "url": "https://museum.brandhome.com/pros-and-kbh/e70454-strongly-connected-components-example-problems", "openwebmath_score": 0.4783529043197632, "openwebmath_perplexity": 682.6030261600721, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9736446448596305, "lm_q2_score": 0.8688267813328976, "lm_q1q2_score": 0.845928542955405 }
http://www.mathworks.com/help/symbolic/solve-a-system-of-algebraic-equations.html?nocookie=true
Accelerating the pace of engineering and science # Documentation ## Solve a System of Algebraic Equations This topic shows you how to solve a system of equations symbolically using Symbolic Math Toolbox™. This toolbox offers both numeric and symbolic equation solvers. For a comparison of numeric and symbolic solvers, see Select a Numeric or Symbolic Solver. ### Handle the Output of solve Suppose you have the system $\begin{array}{c}{x}^{2}{y}^{2}=0\\ x-\frac{y}{2}=\alpha ,\end{array}$ and you want to solve for x and y. First, create the necessary symbolic objects. `syms x y alpha` There are several ways to address the output of solve. One way is to use a two-output call. `[solx,soly] = solve(x^2*y^2 == 0, x-y/2 == alpha)` The call returns the following. ```solx = 0 alpha soly = -2*alpha 0``` Modify the first equation to x2y2 = 1. The new system has more solutions. `[solx,soly] = solve(x^2*y^2 == 1, x-y/2 == alpha)` Four distinct solutions are produced. ```solx = alpha/2 - (alpha^2 - 2)^(1/2)/2 alpha/2 - (alpha^2 + 2)^(1/2)/2 alpha/2 + (alpha^2 - 2)^(1/2)/2 alpha/2 + (alpha^2 + 2)^(1/2)/2 soly = - alpha - (alpha^2 - 2)^(1/2) - alpha - (alpha^2 + 2)^(1/2) (alpha^2 - 2)^(1/2) - alpha (alpha^2 + 2)^(1/2) - alpha``` Since you did not specify the dependent variables, solve uses symvar to determine the variables. This way of assigning output from solve is quite successful for "small" systems. For instance, if you have a 10-by-10 system of equations, typing the following is both awkward and time consuming. `[x1,x2,x3,x4,x5,x6,x7,x8,x9,x10] = solve(...)` To circumvent this difficulty, solve can return a structure whose fields are the solutions. For example, solve the system of equations u^2 - v^2 = a^2, u + v = 1, a^2 - 2*a = 3. ```syms u v a S = solve(u^2 - v^2 == a^2, u + v == 1, a^2 - 2*a == 3)``` The solver returns its results enclosed in this structure. ```S = a: [2x1 sym] u: [2x1 sym] v: [2x1 sym]``` The solutions for a reside in the "a-field" of S. `S.a` ```ans = -1 3``` Similar comments apply to the solutions for u and v. The structure S can now be manipulated by the field and index to access a particular portion of the solution. For example, to examine the second solution, you can use the following statement to extract the second component of each field. `s2 = [S.a(2), S.u(2), S.v(2)]` ```s2 = [ 3, 5, -4]``` The following statement creates the solution matrix M whose rows comprise the distinct solutions of the system. `M = [S.a, S.u, S.v]` ```M = [ -1, 1, 0] [ 3, 5, -4]``` Clear solx and soly for further use. `clear solx soly` ### Solve a Linear System of Equations Linear systems of equations can also be solved using matrix division. For example, solve this system. ```clear u v x y syms u v x y eqns = [x + 2*y == u, 4*x + 5*y == v]; S = solve(eqns); sol = [S.x; S.y] [A,b] = equationsToMatrix(eqns,x,y); z = A\b``` ```sol = (2*v)/3 - (5*u)/3 (4*u)/3 - v/3 z = (2*v)/3 - (5*u)/3 (4*u)/3 - v/3``` Thus,sol and z produce the same solution, although the results are assigned to different variables. ### Return the Full Solution of a System of Equations solve does not automatically return all solutions of an equation. To return all solutions along with the parameters in the solution and the conditions on the solution, set the ReturnConditions option to true. Consider the following system of equations: $\begin{array}{l}\mathrm{sin}\left(x\right)\text{​}+\mathrm{cos}\left(y\right)=\frac{4}{5}\\ \mathrm{sin}\left(x\right)\mathrm{cos}\left(y\right)=\frac{1}{10}\end{array}$ Visualize the system of equations using ezplot. To set the x-axis and y-axis values in terms of pi, get the axes handles using axes in a. Create the symbolic array S of the values -2*pi to 2*pi at intervals of pi/2. To set the ticks to S, use the XTick and YTick properties of a. To set the labels for the x-and y-axes, convert S to character strings. Use arrayfun to apply char to every element of S to return T. Set the XTickLabel and YTickLabel properties of a to T. ```syms x y eqn1 = sin(x)+cos(y) == 4/5; eqn2 = sin(x)*cos(y) == 1/10; a = axes; h = ezplot(eqn1); h.LineColor = 'blue'; hold on grid on g = ezplot(eqn2); g.LineColor = 'magenta'; L = sym(-2*pi:pi/2:2*pi); a.XTick = double(L); a.YTick = double(L); M = arrayfun(@char, L, 'UniformOutput', false); a.XTickLabel = M; a.YTickLabel = M; title('Plot of System of Equations') legend('sin(x)+cos(y) == 4/5','sin(x)*cos(y) == 1/10', 'Location', 'best') ``` The solutions lie at the intersection of the two plots. This shows the system has repeated, periodic solutions. To solve this system of equations for the full solution set, use solve and set the ReturnConditions option to true. `S = solve(eqn1, eqn2, 'ReturnConditions', true)` ```S = x: [2x1 sym] y: [2x1 sym] parameters: [1x2 sym] conditions: [2x1 sym]``` solve returns a structure S with the fields S.x for the solution to x, S.y for the solution to y, S.parameters for the parameters in the solution, and S.conditions for the conditions on the solution. Elements of the same index in S.x, S.y, and S.conditions form a solution. Thus, S.x(1), S.y(1), and S.conditions(1) form one solution to the system of equations. The parameters in S.parameters can appear in all solutions. Index into S to return the solutions, parameters, and conditions. ```S.x S.y S.parameters S.conditions``` ```ans = z z ans = z1 z1 ans = [ z, z1] ans = (in((z - asin(6^(1/2)/10 + 2/5))/(2*pi), 'integer') |... in((z - pi + asin(6^(1/2)/10 + 2/5))/(2*pi), 'integer')) &... (in((z1 - acos(2/5 - 6^(1/2)/10))/(2*pi), 'integer') |... in((z1 + acos(2/5 - 6^(1/2)/10))/(2*pi), 'integer')) (in((z - asin(2/5 - 6^(1/2)/10))/(2*pi), 'integer') |... in((z - pi + asin(2/5 - 6^(1/2)/10))/(2*pi), 'integer')) &... (in((z1 - acos(6^(1/2)/10 + 2/5))/(2*pi), 'integer') |... in((z1 + acos(6^(1/2)/10 + 2/5))/(2*pi), 'integer'))``` ### Solve a System of Equations Under Conditions To solve the system of equations under conditions, specify the conditions in the input to solve. Solve the system of equations considered above for x and y in the interval -2*pi to 2*pi. Overlay the solutions on the plot using scatter. ```Srange = solve(eqn1, eqn2, -2*pi<x, x<2*pi, -2*pi<y, y<2*pi, 'ReturnConditions', true); scatter(Srange.x, Srange.y) ``` ### Work with Solutions, Parameters, and Conditions Returned by solve You can use the solutions, parameters, and conditions returned by solve to find solutions within an interval or under additional conditions. This section has the same goal as the previous section, to solve the system of equations within a search range, but with a different approach. Instead of placing conditions directly, it shows how to work with the parameters and conditions returned by solve. For the full solution S of the system of equations, find values of x and y in the interval -2*pi to 2*pi by solving the solutions S.x and S.y for the parameters S.parameters within that interval under the condition S.conditions. Before solving for x and y in the interval, assume the conditions in S.conditions using assume so that the solutions returned satisfy the condition. Assume the conditions for the first solution. `assume(S.conditions(1))` Solve the first solution of x for the parameter z. `solz(1,:) = solve(S.x(1)>-2*pi, S.x(1)<2*pi, S.parameters(1))` ```solz = [ asin(6^(1/2)/10 + 2/5), pi - asin(6^(1/2)/10 + 2/5),... asin(6^(1/2)/10 + 2/5) - 2*pi, - pi - asin(6^(1/2)/10 + 2/5)]``` Similarly, solve the first solution to y for z1. `solz1(1,:) = solve(S.y(1)>-2*pi, S.y(1)<2*pi, S.parameters(2))` ```solz1 = [ acos(2/5 - 6^(1/2)/10), acos(2/5 - 6^(1/2)/10) - 2*pi,... -acos(2/5 - 6^(1/2)/10), 2*pi - acos(2/5 - 6^(1/2)/10)]``` Clear the assumptions set by S.conditions(1) using sym. Call asumptions to check that the assumptions are cleared. ```sym(S.parameters,'clear') assumptions``` ```ans = [ z, z1] ans = Empty sym: 1-by-0``` Assume the conditions for the second solution. `assume(S.conditions(2))` Solve the second solution to x and y for the parameters z and z1. ```solz(2,:) = solve(S.x(2)>-2*pi, S.x(2)<2*pi, S.parameters(1)) solz1(2,:) = solve(S.y(2)>-2*pi, S.y(2)<2*pi, S.parameters(2))``` ```solz = [ asin(6^(1/2)/10 + 2/5), pi - asin(6^(1/2)/10 + 2/5),... asin(6^(1/2)/10 + 2/5) - 2*pi, - pi - asin(6^(1/2)/10 + 2/5)] [ asin(2/5 - 6^(1/2)/10), pi - asin(2/5 - 6^(1/2)/10),... asin(2/5 - 6^(1/2)/10) - 2*pi, - pi - asin(2/5 - 6^(1/2)/10)] solz1 = [ acos(2/5 - 6^(1/2)/10), acos(2/5 - 6^(1/2)/10) - 2*pi,... -acos(2/5 - 6^(1/2)/10), 2*pi - acos(2/5 - 6^(1/2)/10)] [ acos(6^(1/2)/10 + 2/5), acos(6^(1/2)/10 + 2/5) - 2*pi,... -acos(6^(1/2)/10 + 2/5), 2*pi - acos(6^(1/2)/10 + 2/5)]``` The first rows of solz and solz1 form the first solution to the system of equations, and the second rows form the second solution. To find the values of x and y for these values of z and z1, use subs to substitute for z and z1 in S.x and S.y. ```solx(1,:) = subs(S.x(1), S.parameters(1), solz(1,:)); solx(2,:) = subs(S.x(2), S.parameters(1), solz(2,:)) soly(1,:) = subs(S.y(1), S.parameters(2), solz1(1,:)); soly(2,:) = subs(S.y(2), S.parameters(2), solz1(2,:))``` ```solx = [ asin(6^(1/2)/10 + 2/5), pi - asin(6^(1/2)/10 + 2/5),... asin(6^(1/2)/10 + 2/5) - 2*pi, - pi - asin(6^(1/2)/10 + 2/5)] [ asin(2/5 - 6^(1/2)/10), pi - asin(2/5 - 6^(1/2)/10),... asin(2/5 - 6^(1/2)/10) - 2*pi, - pi - asin(2/5 - 6^(1/2)/10)] soly = [ acos(2/5 - 6^(1/2)/10), acos(2/5 - 6^(1/2)/10) - 2*pi,... -acos(2/5 - 6^(1/2)/10), 2*pi - acos(2/5 - 6^(1/2)/10)] [ acos(6^(1/2)/10 + 2/5), acos(6^(1/2)/10 + 2/5) - 2*pi,... -acos(6^(1/2)/10 + 2/5), 2*pi - acos(6^(1/2)/10 + 2/5)]``` Note that solx and soly are the two sets of solutions to x and to y. The full sets of solutions to the system of equations are the two sets of points formed by all possible combinations of the values in solx and soly. Plot these two sets of points using scatter. Overlay them on the plot of the equations. As expected, the solutions appear at the intersection of the plots of the two equations. ```for i = 1:length(solx(1,:)) for j = 1:length(soly(1,:)) scatter(solx(1,i), soly(1,j), 'black') scatter(solx(2,i), soly(2,j), 'black') end end ``` ### Convert Symbolic Results to Numeric Values Symbolic calculations provide exact accuracy, while numeric calculations are approximations. Despite this loss of accuracy, you might need to convert symbolic results to numeric approximations for use in numeric calculations. For a high-accuracy conversion, use variable-precision arithmetic provided by the vpa function. For standard accuracy and better performance, convert to double precision using double. Use vpa to convert the symbolic solutions solx and soly to numeric form. ```vpa(solx) vpa(soly)``` ```ans = [ 0.70095651347102524787213653614929, 2.4406361401187679905905068471302,... -5.5822287937085612290531502304097, -3.8425491670608184863347799194288] [ 0.15567910349205249963259154265761, 2.9859135500977407388300518406219,... -6.1275062036875339772926952239014, -3.2972717570818457380952349259371] ans = [ 1.4151172233028441195987301489821, -4.8680680838767423573265566175769,... -1.4151172233028441195987301489821, 4.8680680838767423573265566175769] [ 0.86983981332387137135918515549046, -5.4133454938557151055661016110685,... -0.86983981332387137135918515549046, 5.4133454938557151055661016110685] ``` ### Simplify Complicated Results and Improve Performance If results look complicated, solve is stuck, or if you want to improve performance, see, Resolve Complicated Solutions or Stuck Solver.
2014-11-28T08:59:31
{ "domain": "mathworks.com", "url": "http://www.mathworks.com/help/symbolic/solve-a-system-of-algebraic-equations.html?nocookie=true", "openwebmath_score": 0.5450394749641418, "openwebmath_perplexity": 922.9804206597966, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9736446502128796, "lm_q2_score": 0.8688267762381844, "lm_q1q2_score": 0.8459285426460108 }
https://math.stackexchange.com/questions/2806904/continuous-antiderivative-of-frac11-cos2-x-without-the-floor-function
# Continuous antiderivative of $\frac{1}{1+\cos^2 x}$ without the floor function. By letting $u = 2x$ and $t = \tan \frac{u}{2}$, I found the continuous antiderivative of the function to be: $$\int \frac{1}{1+\cos^2 x}dx\\= \int \frac{2}{3+\cos2x} dx\\ = \int \frac{1}{3+\cos u}du \\=\int \frac{\frac{2}{1+t^2}}{3+\frac{1-t^2}{1+t^2}}dt\\= \int\frac{1}{2+t^2}dt \\= \frac{1}{\sqrt2}\arctan\left(\frac{\tan x}{\sqrt2}\right) + \frac{\pi}{\sqrt2} \left\lfloor \frac{x + \frac{\pi}{2} }{\pi} \right\rfloor + C$$ (I graphically deduced the floor function bit as I am not familiar with its algebra.) However, GeoGebra (notably not wolfram) does it better. It states, without the floor function, that the continuous antiderivative is also: $$\frac{x}{\sqrt2} + \frac{1}{\sqrt2} \arctan\left( \frac{(1-\sqrt2)\sin 2x}{(\sqrt2 -1)\cos2x +\sqrt2 + 1}\right) + C$$ How did GeoGebra accomplish such a feat? And how can I prove and apply such ingenuity? • Can you please show your steps? – Gibbs Jun 3 '18 at 18:51 • Should I add it to the question or post in here? But.. why did you remove the parentheses of my trig functions :( I liked those, but thanks for making floor function big. – Mint Jun 3 '18 at 18:57 • Adding the steps in the question is fine. Some of the parentheses were not necessary. The expressions $\cos^2 (x), \tan (x)$ are the same as $\cos^2 x, \tan x$, etc. – Gibbs Jun 3 '18 at 19:00 • Related articles: jstor.org/stable/2690852 doi.org/10.1145/174603.174409 – StayHomeSaveLives Jun 4 '18 at 14:59 One important thing is while doing u-substitution, the substitution has to be injective. When we substitute $u=\tan x$ in samjoe’s answer, $\tan x$ is not injective on the whole real line, but is injective in intervals of length $\pi$. That’s why you get the right ‘behavior’ only within intervals but not between them. I believe that the Geogebra’s answer can be derived by noting that $$\frac1{\pi}(arctan(\cot(\pi x))+\pi x-\pi/2)$$ behaves exactly the same as a floor function. Use also the summation formula for arctan: $$arctan (u)+arctan (v)=arctan(\frac{u+v}{1-uv})$$ To demonstrate the importance of injectivity of substitutions, consider the integral $$\int^1_{0}xdx$$ which equals $\frac12$. If we substitute in $u=x^2-x$, we obtain something like $$\int^0_0 \cdots du=0$$ What caused the paradox is $x^2-x$ is not injective in the interval $[0,1]$. Similarly, there is nothing wrong for $$\int^x_k \frac1{1+\cos ^2x}dx=\int^x_k\frac{\sec^2x}{2+\tan^2x}dx=^{u=\tan x}\int^{arctan(x)}_{arctan(k)}\frac{du}{2+u^2}=\frac1{\sqrt2}arctan(\frac{\tan x}{\sqrt2})+C$$ as long as $\tan x$ is injective in the interval $[k,x]$. If the injectivity is not achieved in the interval, $C$ would change when $x$ goes from an injective interval of $\tan x$ to another. This agrees with what the OP observed: the floor function thing is a constant in each injective interval of $\tan x$, and changes when going across the intervals. You may consider, the floor function thing is part of $C$. The choice of $k$ is arbitrary. But when we try to find an antiderivative for all $x$ while $k$ remains fixed, it is impossible to always achieve the injectivity in $[k,x]$. As a trade off, we need to add a floor function to compensate for the silent change of $C$. @samjoe derived the antiderivative $$\frac{\pi}{\sqrt2}\left \lfloor\frac{x+\pi/2}{\pi}\right\rfloor + \frac{1}{\sqrt 2}\arctan\left(\frac{\tan x}{\sqrt2}\right)$$ By noting $$\lfloor x\rfloor=\frac1{\pi}(arctan(\cot(\pi x))+\pi x-\pi/2)$$, the above expression can be rewritten to $$\frac{x}{\sqrt2}+\frac{arctan(-\tan x)}{\sqrt2}+\frac{arctan(\frac{\tan x}{\sqrt2})}{\sqrt2}$$ $$=\frac{x}{\sqrt2}+\frac1{\sqrt2}({arctan(-\tan x)}+arctan(\frac{\tan x}{\sqrt2}))$$ By the summation formula stated above $$=\frac{x}{\sqrt2}+\frac1{\sqrt2}arctan(\frac{-\tan x+\frac{\tan x}{\sqrt2}}{1+\frac{\tan^2x}{\sqrt2}})$$ $$=\frac{x}{\sqrt2}+\frac1{\sqrt2}arctan(\frac{\tan x(1-\sqrt2)}{\sqrt2+\tan^2x+1-1})$$ $$=\frac{x}{\sqrt2}+\frac1{\sqrt2}arctan(\frac{\tan x(1-\sqrt2)}{\sqrt2+\sec^2x-1})$$ $$=\frac{x}{\sqrt2}+\frac1{\sqrt2}arctan(\frac{(1-\sqrt2)\sin x\cos x}{(\sqrt2-1)\cos^2 x+1})$$ $$=\frac{x}{\sqrt2}+\frac1{\sqrt2}arctan(\frac{(1-\sqrt2)2\sin x\cos x}{(\sqrt2-1)(2\cos^2 x)+2})$$ $$=\frac{x}{\sqrt2}+\frac1{\sqrt2}arctan(\frac{(1-\sqrt2)\sin 2x}{(\sqrt2-1)(\cos 2x +1)+2})$$ $$=\frac{x}{\sqrt2}+\frac1{\sqrt2}arctan(\frac{(1-\sqrt2)\sin 2x}{(\sqrt2-1)\cos 2x+\sqrt2+1})$$ which is exactly what we want. • Nice manipulation +1 :), especially that floor equivalent. But I am wondering how can we find this without resorting to that floor. – SJ. Jun 4 '18 at 13:13 • @samjoe I wonder too. – Szeto Jun 4 '18 at 13:35 • Wow, thank you! Any tips on finding alternate forms of the floor function? – Mint Jun 4 '18 at 15:07 • @jiaminglimjm I discovered this form simply by playing around with functions.:) – Szeto Jun 4 '18 at 22:19 With Floor Function Let $(1+\cos^2x )^{-1} = f(x)$. Now as you found, $$\int \frac{dx}{1+\cos^2 x} = \int \frac{\sec^2 x }{2+\tan^2 x} dx = \frac{1}{\sqrt{2} } \arctan\left(\frac{\tan x}{\sqrt 2}\right)$$ The issue is that integral of a continuous function should be continuous. The one we found is discontinuous at all odd multiples of $\pi/2$. Lets analyse for $x\in [\tfrac{(2k-1)\pi}{2}, \tfrac{(2k+1 ) \pi}{2}]$. Then \begin{align} \int_{0}^{x} f(t) dt &= \int_{0}^{\pi/2}f(t) dt+\int_{\pi/2}^{3\pi/2}f(t) dt ... \int_{(2k-1)\pi/2}^{x}f(t) dt \\ &= \frac{\pi k}{\sqrt2} + \frac{1}{\sqrt 2}\arctan\left(\frac{\tan x}{\sqrt2}\right) \\ \end{align} Now since $x\in [\tfrac{(2k-1)\pi}{2}, \tfrac{(2k+1 ) \pi}{2}],$ then $x+\pi/2 \in [k\pi, (k+1)\pi]$ and so $\frac{x+\pi/2}{\pi} \in [k, k+1]$ so that $\lfloor\frac{x+\pi/2}{\pi}\rfloor = k$. Substituting in above equation gives: $$\int_{0}^{x} f(t) dt =\frac{\pi}{\sqrt2}\left \lfloor\frac{x+\pi/2}{\pi}\right\rfloor + \frac{1}{\sqrt 2}\arctan\left(\frac{\tan x}{\sqrt2}\right)$$ Without Floor Function Couldn't do this one, will add if I find one.
2020-03-29T03:58:19
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2806904/continuous-antiderivative-of-frac11-cos2-x-without-the-floor-function", "openwebmath_score": 0.9693926572799683, "openwebmath_perplexity": 564.394025904595, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9736446479186301, "lm_q2_score": 0.8688267762381844, "lm_q1q2_score": 0.8459285406527055 }
http://math.stackexchange.com/questions/76772/formula-for-completing-the-square
# Formula for completing the square? My math teacher said that this was the formula for completing the square. Original function: $$ax^2 + bx + c$$ Completed square: $$a\left(x + \frac{b}{2a}\right)^2 - \frac{b^2}{4a} + c$$ However, using this formula I'm not getting the same answers that I would get just by determining the stuff myself. Is this correct? - If you expand out the second expression and simplify, do you get back $a x^2 + bx +c$? –  Srivatsan Oct 28 '11 at 20:39 Note that \begin{align*}a\left(x + \frac{b}{2a}\right)^2 - \frac{b^2}{4a} + c&=a\left(x^2+2\left(\frac{b}{2a}\right)x+\left(\frac{b}{2a}\right)^2\right)- \frac{b^2}{4a} + c\\ &=a\left(x^2+\left(\frac{b}{a}\right)x+\frac{b^2}{4a^2}\right)- \frac{b^2}{4a} + c\\ &=\left(ax^2+bx+\frac{b^2}{4a}\right)- \frac{b^2}{4a} + c\\\\ &= ax^2+bx+c \end{align*} so the formula is correct. Try plugging in the numbers $a$, $b$, and $c$ you are using to each step here and seeing where they begin to differ; that will be where your error is. - I usually find that completing the square by hand for each example is better than just using the formula. For your example, $$4x^2 + 4x + 5 = 4\left(x^2 + x + \frac{5}{4}\right).$$ This is of the form $x^2 + x + \mathrm{const}$, so you to find a number $a$ such that $(x+a)^2 = x^2 + x + \mathrm{const}$. The solution is $a=\frac{1}{2}$, giving $$4x^2 + 4x + 5 = 4\left(x+\frac{1}{2}\right)^2 + \mathrm{const}$$ Expanding the RHS, you find that the constant is $4$, so the whole expression is $$4x^2 + 4x + 5 = 4\left(x + \frac{1}{2} \right)^2 + 4 = (2x + 1)^2 + 4$$ - For the polynomial $4x^2+4x+5$ that you mention, I would not use the formula, since it is fairly clear that $4x^2 +4x$ is "almost" $(2x+1)^2$. In fact, $(2x+1)^2=4x^2+4x+1$, so $4x^2+4x+5=(2x+1)^2-1+5=(2x+1)^2+4$. In general, suppose that $a \ne 0$, and we want to deal with $ax^2+bx+c$. Multiply the expression by $4a$, and to keep things unchanged, divide by $4a$. We get $$ax^2+bx+c=\frac{1}{4a}(4a^2x^2 +4abx +4ac).$$ But $4a^2x+4abx$ is almost the square of $2ax+b$. In fact, $4a^2x^2+4abx=(2ax+b)^2-b^2$. It follows that $$4a^2x^2+4abx+4ac=(2ax+b)^2-(b^2-4ac),$$ so $$ax^2+bx+c=\frac{1}{4a}\left((2ax+b)^2-(b^2-4ac)\right).$$ The formula is useful as is, and more pleasant to work with than the formula of the post. We can transform it to look like that formula by multiplying the top and bottom of the front by $a$, and using the fact that $\frac{1}{4a^2}(2ax+b)^2=\left(x+\frac{b}{2a}\right)^2$. Comment: If we want to derive the Quadratic Formula, we don't need to bother with dividing by $4a$, for $ax^2+bx+c=0$ iff $4a^2+4abx+4ac=0$. Complete the square like above. We get $$ax^2+bx+c=0 \qquad\text{if and only if}\quad (2ax+b)^2=b^2-4ac,$$ and we are a couple of easy steps away from the Quadratic Formula. Important: One should not try to remember a formula for completing the square. What one needs to understand is the process, the idea. Students, particularly those blessed (?) with good memories, find that throughout high school they can achieve easy success by memorizing formulas. Finding out what's really going on may in the short term look like more work, but it will last. - This to me is one of those instances where the algorithm is easier to remember than the formula that results when the algorithm is applied symbolically. –  Guess who it is. Oct 29 '11 at 1:45 Strongly agree. Saw your comment when I went to add a comment sort of to that effect. –  André Nicolas Oct 29 '11 at 1:56 Let me derive it for you, $$ax^2+bx+c= a \left( x^2+\frac{b}{a} x +\frac ca \right) = a\left(x^2+2\frac{b}{2a} x + \left( \frac b{2a} \right) ^2 - \left( \frac b{2a} \right) ^2+\frac ca \right)$$ $$= a \left\{ \left(x+\frac{b}{a}\right)^2 - \frac{(b^2-4ac)}{4a^2} \right\} = a\left(x + \frac{b}{2a}\right)^2 - \frac{b^2}{4a} + c$$ Btw how are you applying this? - The equation in question is $$4x^2 + 4x + 5$$ I got $$4\left(x+\frac{1}{2}\right)^2-1$$ Wolfram got $$\left(2x+1\right)^2 + 4$$ Why? –  Yep Oct 28 '11 at 20:53 @Yep: Your answer is incorrect, because $$4\left(x+\frac{1}{2}\right)^2-1=4\left(x^2+x+\frac{1}{4}\right)-1=(4x^2+4x+1)-‌​1=4x^2+4x.$$ Wolfram is correct, because $$\left(2x+1\right)^2 + 4=((2x)^2+2(2x)(1)+(1)^2)+4=(4x^2+4x+1)+4=4x^2+4x+5$$ –  Zev Chonoles Oct 28 '11 at 20:55 I edited the post. I forgot to write that in. ;) –  Yep Oct 28 '11 at 20:56 @Yep: I've now edited my post :) –  Zev Chonoles Oct 28 '11 at 20:58 For the equation : $4x^2 + 4x + 5$,$a=4,b=4,c=5$;So you should get $$4\left(x + \frac{1}{2}\right)^2 - \frac{16}{16} + 5 \Rightarrow ...$$ –  VelvetThunder Oct 28 '11 at 21:01
2015-07-06T11:30:05
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/76772/formula-for-completing-the-square", "openwebmath_score": 0.9641395807266235, "openwebmath_perplexity": 381.7995081720098, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9736446471538802, "lm_q2_score": 0.8688267711434708, "lm_q1q2_score": 0.8459285350278297 }
http://math.stackexchange.com/questions/285332/decidable-predicates
# Decidable predicates I'm trying to see whether i) The predicate "$x$ is a multiple of $y$" decidable? If it is, then how can we give a program which computes the characteristic function. So, for above, I can show it is computable by the following: $qt(y,x)$ = quotient when $x$ is divided by $y$. Since $qt(y,x+1) = qt(y,x) + 1$ if $rm(y,x) + 1 = x$ and $qt(y,x+1) = qt(y,x)$ if $rm(y,x) +1$ $\ne x$. We have the following definition by recursion from computable functions: $qt(0,0) = 0$ $qt(y,x+1) = qt(y,x) + sg(|x-(rm(y,x)+1)|)$ but I need help in translating it to a program. I am not sure if the step of writing it as a computable function is a first good attempt ii) Do you think "$x$ is prime" is decidable? - We are talking natural numbers, and for simplicity we'll ignore the case where $x$ or $y$ is zero as waste cases to be dealt with separately. Then $x$ is a multiple of $y$ just in case, for some $k \leq x$, $x = ky$. The obvious program structure to test whether this is so, for input $x$ and $y$ is, for $k = 1$ to $x$, compute $ky$ if $ky = x$ print "yes" and exit else loop print "no". And there you are! And yes it is decidable whether $x$ is prime (by deciding whether it is a multiple of any smaller number, other than 1). - Darn. Your answer appeared while I was writing mine. You have the prior claim, so I'll delete mine if you wish. –  Rick Decker Jan 23 '13 at 22:07 While this is good pseudocode, the question is in reference to notation following from this link such as problem #1 in people.math.carleton.ca/~ckfong/cut13.pdf. Your answer doesn't contain registers, nor something like J( , , ) and Z() nor S(). Can you please format it accordance with the notation of Cutland's computability? –  mary Jan 23 '13 at 22:28 @RickDecker Good heavens no need to delete! Overlap often happens here, and two slightly different takes on the same approach can be illuminating! –  Peter Smith Jan 23 '13 at 23:43 @mary Do you know how to implement "for" loops on a register machine? –  Peter Smith Jan 23 '13 at 23:45 I don't but we need to use only syntax of S(), J(), T() –  mary Jan 24 '13 at 3:07 Unless you're required to find the characteristic function, Peter's solution is perfectly fine. He's written a decider for you: M(y, x) = for k = 1 to x if k * y = x return true return false As he says, here's a predicate for "$x$ is prime" (assuming x is positive): Prime(x) = if x = 1 return false else for k = 2 to x - 1 if M(k, x) return false return true Sure, it's inefficient, but a decider doesn't have to be efficient, only correct. -
2014-08-23T02:26:16
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/285332/decidable-predicates", "openwebmath_score": 0.8491161465644836, "openwebmath_perplexity": 771.9253111977292, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9736446456243805, "lm_q2_score": 0.8688267643505194, "lm_q1q2_score": 0.8459285270850386 }
https://math.stackexchange.com/questions/3038503/proving-fraction-is-irreducible
# Proving fraction is irreducible Example: The fraction $$\frac{4n+7}{3n+5}$$ is irreducible for all $$n \in \mathbb{N}$$, because $$3(4n+7) - 4(3n+5) = 1$$ and if $$d$$ is divisor of $$4n+7$$ and $$3n+5$$, it divides $$1$$, so $$d=1$$. I want to know if there is some general method of finding $$x, y \in \mathbb{Z}$$, so that $$x(an+b) + y(cn+d) = 1$$ when $$(an+b, cn+d) = 1$$, instead of trial and error, or some quicker and easier way (for not so pretty fractions) for determining whether it is irreducible. • Heard of eulidian gcd algorithm ? – AgentS Dec 13 '18 at 19:53 • See this answer. – Bill Dubuque Dec 13 '18 at 19:57 • @someone first do you see why $\gcd(a, b)$ will be same as $\gcd(a-b, b)$ ? – AgentS Dec 13 '18 at 19:58 • Sorry, no knowledge in linear algebra, should've mentioned that .. – user626177 Dec 13 '18 at 19:59 • Find minimum of exponents in prime factorization ? – user626177 Dec 13 '18 at 20:12 Before answering your question, I will just give the following two facts: Let $$\gcd(a,b) = g$$ 1. $$g$$ is the smallest positive integer such that $$ax+by = g$$ for any integers $$x,y$$. 2. $$\gcd(a,b) = \gcd(a+bx, b) = \gcd(a,b+ax)$$ The proof of these two is elementary. In fact, it can be found somewhere here in this website. Now, Euclidean Algorithm is used to find $$g$$ in $$(1)$$. How to apply this algorithm? you may refer to this website for more information. In our case, the fraction is irreducible if and only if the greatest common divisor $$g$$ of the numerator and denominator is $$1$$. We can use the Euclidean Algorithm to find it, thought, and check. Why do we need (2)?, Okay, this fact might be used as a shortcut to find $$g$$ in many occasions. For example, if I am given the following fraction and asked to prove it is irreducible: $$\frac{3n+4}{18n+25}$$ then I can use this shortcut as follows: $$\gcd(3n+4,18n+25) = \gcd(3n+4, (18n+25) -6(3n+4)) = \gcd(3n+4,1) = 1$$ • That's what I was looking for, thank you! – user626177 Dec 13 '18 at 20:40 • @BillDubuque Yeah, I see how to do it without euclid – user626177 Dec 13 '18 at 20:54 • @MagedSaeed Kinda off topic, but can this be used effectively for higher exponents than $1$ ? Polynomials is what I'm referring to. Just looking for yes or no answer, I'll work it out why .. – user626177 Dec 13 '18 at 21:04 • @MagedSaeed Oh, okay, I'll look into it .. Thanks for detailed answer .. – user626177 Dec 13 '18 at 21:13 • @MagedSaeed I'm just getting into number theory .. – user626177 Dec 13 '18 at 21:13 In general, for $$a,b,c,d \in\Bbb N$$, the following statements are equivalent: $$(i)$$ there are integers $$x,y$$ s.t. $$x(an+b)+y(cn+d)=1$$ for all $$n\in \Bbb N$$; $$(ii)$$ $$ad-bc$$ divides $$\gcd(a,c)$$; $$(iii)$$ $$|ad-bc| =\gcd(a,c)$$. Note also that any of the statements $$(i)$$, $$(ii)$$, and $$(iii)$$ implies that $$(iv)$$ the rational number $$\frac{an+b}{cn+d}$$ is in the lowest form for all $$n\in \Bbb N$$. Obviously $$(ii)\iff (iii)$$ because $$\gcd(a,c)$$ always divide $$ad-bc$$. In the case $$ad-bc\mid \gcd(a,c)$$, we can take $$x=-\frac{c}{ad-bc}$$ and $$y=\frac{a}{ad-bc}$$. So $$(ii)\implies (i)$$. We now prove that $$(i)\implies (ii)$$. Suppose that such $$x$$ and $$y$$ exist. Then, $$ax+cy=0\wedge bx+dy=1.$$ That is, $$(x,y)$$ is an integer solution to $$\begin{pmatrix}a&c\\b&d\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}0\\1\end{pmatrix}.$$ Observe that the determinant $$ad-bc$$ of $$\begin{pmatrix}a&c\\b&d\end{pmatrix}$$ cannot be $$0$$ (otherwise $$(a,b)$$ and $$(c,d)$$ are proportional, and so $$an+b$$ and $$cn+d$$ are also proportional). That is, the matrix $$\begin{pmatrix}a&b\\c&d\end{pmatrix}$$ is invertible and $$\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}a&c\\b&d\end{pmatrix}^{-1}\begin{pmatrix}0\\1\end{pmatrix}=\frac{1}{ad-bc}\begin{pmatrix}d&-c\\-b&a\end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix}.$$ So $$(x,y)=\frac{1}{ad-bc}(-c,a)$$. That is, $$ad-bc\mid c$$ and $$ad-bc\mid a$$. So $$ad-bc\mid \gcd(a,c)$$. In your example, $$a=4$$, $$b=7$$, $$c=3$$, and $$d=5$$. So, $$ad-bc=-1 \mid \gcd(a,c)$$, and we can take $$x=-\frac{c}{ad-bc}=3$$ and $$y=\frac{a}{ad-bc}=-4$$. I should like to mention that $$(iv)$$ is not equivalent to any of the statements $$(i)$$, $$(ii)$$, and $$(iii)$$. The rational numbers of the form $$\frac{2n+1}{2n+3}$$ is reduced for any $$n\in \Bbb N$$, but it does not meet $$(i)$$, $$(ii)$$, or $$(iii)$$ (i.e., $$(a,b,c,d)=(2,1,2,3)$$, so $$\gcd(a,c)=2$$, but $$ad-bc=4\nmid\gcd(a,c)$$). However, $$(iv)$$ is equivalent to the condition that for any prime divisor $$p$$ of $$ad-bc$$, there does not exist $$n\in\Bbb N$$ such that $$p$$ divides both $$an+b$$ and $$cn+d$$. • Never studied linear algebra, but I guess I can use this as a shortcut without knowing why it's working .. – user626177 Dec 13 '18 at 20:52 • Why do you assume that the same $x$ and $y$ work for all $n,\,$ i.e. that they don't depend on $n$? – Bill Dubuque Dec 14 '18 at 3:36
2020-01-29T11:27:42
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3038503/proving-fraction-is-irreducible", "openwebmath_score": 0.9431909918785095, "openwebmath_perplexity": 143.12925482347296, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9736446418006305, "lm_q2_score": 0.8688267677469952, "lm_q1q2_score": 0.8459285270698227 }
https://math.stackexchange.com/questions/3185629/does-it-suffice-to-check-the-normal-subgroup-property-for-the-generators
# Does it suffice to check the normal subgroup property for the generators? Let $$G$$ be a group generated by a subset $$S$$ and $$H$$ be a subgroup of $$G$$ generated by a subset $$T$$. To check whether $$H$$ is a normal subgroup of $$G$$ or not, we must check the following statement: $$\forall g \in G \: \forall h \in H: \: g^{-1} h g \in H.$$ Question: Does it suffice to check $$\forall s \in S \: \forall t \in T: \: s^{-1} t s \in H?$$ I assume that this is true, but the proof of that seems to be really technical. Could you please help me by answering and explaining my question? Any help is really appreciated! • With $h=t_1\ldots t_n$ and $g=s_1\ldots s_m$ observe $g^{-1}hg=g^{-1}t_1 g \ldots g^{-1}t_n g$. Since $H$ is closed under multiplication it is enough to check the case $h\in T$. But $g^{-1}hg=s_m^{-1}\ldots s_1^{-1} h s_1\ldots s_m$. Now $s_1^{-1} h s_1$ is in $H$ but we get stuck since we need it in $T$. – SK19 Apr 12 at 23:26 • Hint: Conjugation by an element $g\in G$ defines an automorphism $Inn_g$ of $G$. Now, prove that if $\phi: G\to G$ is an automorphism which sends generators of $H<G$ to elements of $H$ then $\phi(H)\subset H$. Lastly, analyze the relation between $Inn_g, Inn_f$ and $Inn_{gf}$. – Moishe Kohan Apr 12 at 23:38 • Ah right, this was the trick I was missing. Since $s_1^{-1}hs_1\in H$ we can write it again as $t'_1\ldots t'_{n_2}$ and start anew, effectively doing induction over $m$ (and within it, induction over $n_m$). – SK19 Apr 12 at 23:58 • @SK19: But remember that in order to generate a group you need to be able not only to multiply but also to invert, see Mike's answer. – Moishe Kohan Apr 13 at 0:03 • Yeah, I can See where I implicitly used that. I wondered if I had the right definition of generator on hand – SK19 Apr 13 at 6:23 No, it does not always suffice. Consider the Lamplighter group. This has two generators, $$a$$ and $$t$$, representing transformations of functions $$f:\mathbb Z\to \{0,1\}$$. • $$a$$ changes the value of $$f(0)$$, and leaves all others the same. • $$t$$ shifts the sequence by one, replacing $$n\mapsto f(n)$$ with $$n\mapsto f(n+1)$$. Let $$H$$ be the subgroup generated by $$a,t^{-1}at^{},t^{-2}at^{2},\dots$$ You can verify that $$t^{-1}Ht\subseteq H$$, and $$a^{-1}Ha=H$$. Since $$a,t$$ generates the group, your condition would imply $$H$$ was normal. However, $$tat^{-1}\notin H$$. However, this modified statement is true. If $$S$$ generates $$G$$ and $$T$$ generates $$H$$, and $$\forall s\in S,t\in T$$ we have \begin{align}s^{-1}ts\in H\quad \text{and} \quad sts^{-1}\in H,\end{align} then $$H$$ is normal in $$G$$. Proof The condition further implies $$s^{-1}t^{-1}s=(s^{-1}ts)^{-1}\in H$$ as well. Next, for all $$s\in S$$, $$h\in H$$, we have $$s^{-1}hs\in H$$ and $$shs^{-1}\in H$$. To see this, write $$h=t_1t_2\dots t_n$$ with each $$t_i\in T$$ or $$t_{i}^{-1}\in T$$. Then $$s^{-1}hs=(s^{-1}t_1s)(s^{-1}t_2s)\cdots (s^{-1}t_ns)\in H$$ since all factors are in $$H$$. The same goes for $$shs^{-1}$$. Now, given $$g\in G$$, $$h\in H$$, we can write $$g=s_1s_2,\dots,s_n$$, where either $$s_i\in S$$ or $$s_i^{-1}\in S$$. Now, define a sequence $$h_0,h_1,\dots, h_n$$ by • $$h_0 = h$$. • $$h_{i+1} = s_{i+1}^{-1}h_{i} s_{i+1}$$ for $$i=0,1,2,\dots,n-1$$. We can prove by induction, and using the facts $$s^{-1}hs\in H$$ and $$shs^{-1}\in H$$ for all $$h\in H$$, that $$h_{i}\in H$$ for each $$i$$. But $$h_n=s_n^{-1}\dots s_2^{-1}s_1^{-1}hs_1s_2\dots s_{n}=g^{-1}hg$$, so we are done. Here's a proof of Mike's modified statement without induction, using only the definition of generating subset (I find it cleaner this way) Fix $$t \in T$$. Then the set of $$g$$ s such that $$g t g^{-1} \in H$$ is closed under multiplication, and contains $$S \cup S^{- 1}$$. So it contains the submonoid generated by this set, which, by similar methods, is easily seen to be $$G$$. Therefore for all $$g \in G$$ , $$gtg^{-1} \in H$$. This is true for all $$t \in T$$ , and the set of $$x$$ for which it is true is clearly closed under multiplication and inverses therefore it must contain $$H$$. Thus for all $$g \in G, gHg^{-1} \subset H$$, which is all we wanted. Appendix : if $$S$$ generates $$G$$ then $$S \cup S^{-1}$$ generates $$G$$ as a monoid. Indeed let $$H$$ be the monoid generated by this set. The set of $$x \in H$$ such that $$x^{-1} \in H$$ contains $$S$$ by construction and is closed under multiplication (because $$H$$ is) and under inverses . Therefore it is $$G$$. But it is included in $$H$$, therefore $$G= H$$. • Just a remark: Bourbaki has a notion of “group with operators” and stable subgroup. For example, you can have a subset of G acting on the group by inner automorphisms. Modules are also an example of this structure. In this context, you can develop the notion – ACL Apr 20 at 18:19
2019-12-14T13:55:20
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3185629/does-it-suffice-to-check-the-normal-subgroup-property-for-the-generators", "openwebmath_score": 0.9690685868263245, "openwebmath_perplexity": 135.5931602018777, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9658995782141545, "lm_q2_score": 0.8757869948899665, "lm_q1q2_score": 0.8459222889696605 }
http://forum.ikiwago.info/87616fru/5om3q4q.php?1c5f9e=transpose-matrix-definition
Here are a couple of ways to accomplish this in Python. transpose (comparative more transpose, superlative most transpose) (adjective, linear algebra) A matrix with the characteristic of having been transposed from a given matrix. To "transpose" a matrix, swap the rows and columns. QED And so that wraps up the definition of what it means to take the transpose of a matrix and that in fact concludes our linear algebra review. For permissions beyond the … en.wiktionary.2016 [noun] In matrix mathematics, the resulting matrix, derived from performing a transpose operation on a given matrix. Transpose of a matrix is the interchanging of rows and columns. Now this is pretty interesting, because how did we define these two? Disclaimer. So we now get that C transpose is equal to D. Or you could say that C is equal to D transpose. The transpose of a matrix by Duane Q. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License. (redirected from Transpose of a matrix) Also found in: Dictionary , Thesaurus , Encyclopedia . The matrix B is called the transpose of A. Non-square matrix; Multiply matrices element by element; Create a Matrix in MATLAB Define a Matrix. ... Why is the inverse of an orthogonal matrix equal to its transpose? transpose definition: 1. to change something from one position to another, or to exchange the positions of two things…. A matrix is usually shown by a capital letter (such as A, or B) Each entry (or "element") is shown by a lower case letter with a "subscript" of row,column: This will be the left hand side of (AB)⊺=B⊺A⊺ Solving for right hand side, if I take transpose of A and B then the dimension of resultant matrix … [noun] In matrix mathematics, the process of rearranging elements in a matrix, by interchanging their respective row and column positional indicators. Learn more. A double application of the matrix transpose achieves no change overall. The transpose will also be of dimension (2x2). Recommended: Please solve it on “ PRACTICE ” first, before moving on to the solution. The transpose of a matrix is defined as a matrix formed my interchanging all rows with their corresponding column and vice versa of previous matrix. For a matrix defined as = , the transpose matrix is defined as = . The transpose of an m × n matrix A is the n × m matrix A T whose rows are the columns of A. TRANSPOSE OF A MATRIX DEFINITION. Meaning of Transpose. All content on this website, including dictionary, thesaurus, literature, geography, and other reference data is for informational purposes only. See more. Synonyms: tr. transpose: To reverse or transfer the order or place of; interchange. Solution: It is an order of 2*3. Transpose definition, to change the relative position, order, or sequence of; cause to change places; interchange: to transpose the third and fourth letters of a word. The number $$4$$ was in the first row and the second column and it ended up in the second row and first column. A new matrix is obtained the following way: each [i, j] element of the new matrix gets the value of the [j, i] element of the original one. So if X is a 3x2 matrix, X' will be a 2x3 matrix. We said that our matrix C is equal to the matrix product A and B. Do the transpose of matrix. transpose (plural transposes) (adjective, linear algebra) The resulting matrix, derived from performing a transpose operation on a given matrix. | Meaning, pronunciation, translations and examples Stack Exchange Network. Initialize a 2D array to work as matrix. Dictionary Thesaurus Examples Sentences Quotes ... A matrix obtained by interchanging the rows and columns of a given matrix. Let A be a nonsingular matrix. Consequently At is n m. Here are some properties: 1. In other words, the ij entry of A T is a ji. Find transpose by using logic. Do the transpose of matrix. Or is it a definition? Transpose of a matrix is obtained by changing rows to columns and columns to rows. synonym : reverse . The transpose of a matrix with dimensions returns a matrix with dimensions and is denoted by . The element at ith row and jth column in X will be placed at jth row and ith column in X'. This is a transpose which is written and A superscript T, and the way you compute the transpose of a matrix is as follows. We can find its transpose by swapping the column and row elements as follows. What does Transpose mean? (The transpose of a matrix) Let Abe an m nmatrix. It is denoted as X'. The first column became the first row and the second column became the second row. permute, commute, transpose (verb) change the order or arrangement of Consider the following example-Problem approach. In other words if A= [aij], then At ji = aij. The conjugate transpose of a matrix is the matrix defined by where denotes transposition and the over-line denotes complex conjugation. For example if you transpose a 'n' x 'm' size matrix you'll get a new one of 'm' x … The adjacency matrix, also called the connection matrix, is a matrix containing rows and columns which is used to represent a simple labelled graph, with 0 or 1 in the position of (V i , V j) according to the condition whether V i and V j are adjacent or not. In this video we discuss about the another type of #Transpose of #Matrix with definition and example.If you have a question about then comment here . Dictionary ! Definition of transpose : a matrix formed by interchanging the rows and columns of a given matrix - change the order or arrangement of - transfer from one place or period to another - cause to change places - transfer a quantity from one side of an equation to the other side reversing its sign, in order to maintain equality Thus the $$3\times 2$$ matrix became a $$2\times 3$$ matrix. We prove that the transpose of A is also a nonsingular matrix. And we said that D is equal to our matrix product B transpose times A transpose. Ask Question Asked 4 years, 3 months ago. The way the concept was presented to me was that an orthogonal matrix has orthonormal columns. The definition of the transpose is as follows. By, writing another matrix B from A by writing rows of A as columns of B. Store values in it. Matrix transpose: The transpose of matrix refers to the interchanging of the rows and columns. Noun . In order to state the transpose property, we need to define the transpose of a matrix. Example 2: Consider the matrix . Menu. Related to Transpose of a matrix: adjoint of a matrix , Inverse of a matrix Remember that the complex conjugate of a matrix is obtained by taking the complex conjugate of each of its entries (see the lecture on complex matrices). The product will be of size (2 x 2). We put a "T" in the top right-hand corner to mean transpose: Notation. We have: . Definition. In practical terms, the matrix transpose is usually thought of as either (a) flipping along the diagonal entries or (b) “switching” the rows for columns. The first thing to know is that you can separate rows by semi-colons (;) and that you define rows by just placing elements next to one another. Example 1: Consider the matrix . Definition. The algorithm of matrix transpose is pretty simple. For example, consider a matrix . What happened? Through the operations of the transpose, a new matrix is found where the rows entries of the original matrix are written in place of the columns, and the columns entries of the original matrix are written in place of the rows. (Problems and Solutions in Linear Algebra. ) In other words, transpose of A[][] is obtained by changing A[i][j] to A[j][i]. Transpose definition: If you transpose something from one place or situation to another, you move it there. ... (0.00 / 0 votes) Rate this definition: transpose (verb) a matrix formed by interchanging the rows and columns of a given matrix. At t = A; 2. Dimension also changes to the opposite. Let’s start by defining matrices. Find the transpose of that matrix. Adjacency Matrix Definition. Therefore it occurred to me that the definition in the book of Weinberg is not consistent with that in the book of Tung: in one of them the symbol ${\Lambda_\mu}^\nu$ is defined as the inverse of the Lorentz transformation of contravariant vectors, while in the other case, the same symbol is defined as the transpose of the original matrix. Definition. If there are two Matrix with dimension A (2 x 3 ) and B (3 x 2). Definition of Transpose in the Definitions.net dictionary. This is the definition of a transpose. And that's it. Derived terms "transpose" (matrix) definition: a matrix formed by interchanging the rows and columns of a given matrix. Then At, the transpose of A, is the matrix obtained by interchanging the rows and columns of A. Formed by interchanging the rows and columns of a matrix with dimension a ( 2 X 3 ) B! X 2 ) 1: Consider the matrix product B transpose times a transpose operation on given. Equal to our matrix C is equal to its transpose matrix definition by swapping the column and row as... Product will be a 2x3 matrix the resulting matrix, X ' some properties:....: 1. to change something from one position to another, or to exchange the of! Dictionary Thesaurus Examples Sentences Quotes... a matrix ) Let Abe an m nmatrix second became. Examples Disclaimer T '' in the top right-hand corner to mean transpose: Notation is transpose matrix definition order of 2 3. Change something from one position to another, or to exchange the positions of two things… by changing rows columns. Returns a matrix put a T '' in the top right-hand corner mean. Of a matrix ) also found in: Dictionary, Thesaurus, literature, geography, and other reference is! Months ago that an orthogonal matrix equal to D. or you could say C... Something from one position to another, or to exchange the positions of two.. The column and row elements as follows of a matrix obtained by interchanging the rows and columns of.. B transpose times a transpose given matrix '' ( matrix ) also in. All content on this website, including Dictionary, Thesaurus, literature, geography, other! Please solve It on “ PRACTICE ” first, before moving on to solution... Dimensions and is denoted by matrix equal to the matrix defined by where transposition. Q. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License we put a T '' in top... By Duane Q. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License or... With dimension a ( 2 X 3 ) and B the top right-hand corner to mean transpose: reverse! Is pretty interesting, because how did we define these two returns a matrix dimensions returns a formed. The resulting matrix, X ' derived from performing a transpose Nykamp is licensed under Creative. Matrix has orthonormal columns second row in the top right-hand corner to mean transpose: to or... At jth row and ith column in X ' will be a 2x3 matrix noun ] in mathematics... Transpose is equal to our matrix C is equal to D transpose matrix C is equal to its transpose )... From performing a transpose operation on a given matrix 3 months ago in order to state the transpose matrix! Properties: 1 the columns of a is also a nonsingular matrix the! A ji, the ij entry of a given matrix T '' in the top right-hand corner to transpose... Has orthonormal columns will be placed At jth row and the second column became the column. Double application of the matrix defined by where denotes transposition and the second row transposition and the row... Jth column in X will be of dimension ( 2x2 ) a, is matrix! First, before moving on to the solution transpose: to reverse or the! X 2 ) element ; Create a matrix in MATLAB define a matrix with dimensions and is denoted by 1!: 1. to change something from one position to another, or exchange. Why is the interchanging of rows and columns to rows It on “ PRACTICE first... Examples Sentences Quotes... a matrix is the matrix transpose achieves no change overall ( X. By writing rows of a, is the interchanging of the matrix product B times., translations and Examples Disclaimer the top right-hand corner to mean transpose: Notation T is a.. Examples Disclaimer by, writing another matrix B is called the transpose of a matrix ( verb ) the! From one position to another, or to exchange the positions of two things… 1. change. Two things… defined by where denotes transposition and the over-line denotes complex conjugation a given matrix is the n m. Why is the interchanging of the rows and columns of B T '' in the right-hand! Question Asked 4 years, 3 months ago 3 X 2 ) reverse or the. Sentences Quotes... a matrix by Duane Q. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License presented! Matlab define a matrix is obtained by interchanging the rows and columns to rows could! Be a 2x3 matrix rows and columns of a is also a nonsingular matrix of 2 3..., including Dictionary, Thesaurus, Encyclopedia | Meaning, pronunciation, translations and Examples Disclaimer this pretty! The positions of two things… is pretty interesting, because how did we these. Column and row elements as follows Multiply matrices element by element ; Create matrix. To exchange the positions of two things… en.wiktionary.2016 [ noun ] in matrix mathematics, the ij entry of T. Our matrix product B transpose times a transpose operation on a given matrix redirected from transpose of a with... Columns to rows a 3x2 matrix, derived from performing a transpose operation on a given matrix need. Writing rows of a matrix is the n × m matrix a is the matrix: is. Be placed At jth row and ith column in X will be of size 2. Transpose of a matrix obtained by interchanging the rows and columns of a given matrix transpose of an ×. We said that D is equal to D. or you could say that C is to. Rows to columns and columns of a matrix ) also found in: Dictionary,,! Of rows and columns, pronunciation, translations and Examples Disclaimer, including,. Is an order of 2 * 3 transpose times a transpose operation on a given matrix and the denotes. Operation on a given matrix matrix is the interchanging of the matrix from! Or arrangement of Dictionary we prove that the transpose of a given matrix \ ( 3\times )... Jth row and the over-line denotes complex conjugation ( the transpose of matrix refers to the matrix product transpose! Dimensions returns a matrix before moving on to the interchanging of the rows columns. Arrangement of Dictionary to the matrix product B transpose times a transpose operation on a given matrix of is!, Encyclopedia denotes complex conjugation transpose definition: 1. to change something from one position to,... The positions of two things… 2 * 3 words, the transpose a. Sentences Quotes... a matrix ) Let Abe an m × n matrix a is also nonsingular! Columns and columns of B writing rows of a matrix dimension ( )... C is equal to D transpose inverse of an orthogonal matrix equal to D.! M. Here are some properties: 1, Thesaurus, literature,,... The second row Please solve It on “ PRACTICE ” first, before moving on the. In order to state the transpose of a T is a ji conjugate transpose of a matrix... Writing rows of a T is a 3x2 matrix, derived from performing transpose... ; interchange Meaning, pronunciation, translations and Examples Disclaimer Question Asked 4 years, 3 months.... At ji = aij X is a ji T is a ji was presented to me was that an matrix. A transpose in Python by, writing another matrix B is called the transpose property, we need define! On a given matrix whose rows are the columns of a matrix by Duane Q. Nykamp is licensed under Creative. Transpose times a transpose operation on a given matrix consequently At is n m. Here are a couple ways. Was that an orthogonal matrix equal to D transpose transpose is equal to our matrix product a and (... In transpose matrix definition top right-hand corner to mean transpose: to reverse or transfer the order or place of ;.... Matrix mathematics, the ij entry of a matrix with dimensions and is denoted by from performing a transpose on! Pretty interesting, because how did we define these two transpose definition: a matrix is obtained interchanging... All content on this website, including Dictionary, Thesaurus, literature,,... Inverse of an m × n matrix a T whose rows are the columns of a matrix MATLAB., transpose ( verb ) change the order or arrangement of Dictionary of! To me was that an orthogonal matrix has orthonormal columns Commons Attribution-Noncommercial-ShareAlike 4.0 License so if X a... Asked 4 years, 3 months ago transpose achieves no change overall the row... Website, including Dictionary, Thesaurus, Encyclopedia a ( 2 X 3 ) and.! Consequently At is n m. Here are some properties: 1 the n × m matrix a is matrix... There are two matrix with dimensions returns a matrix with dimensions and is denoted by Examples Quotes... N m. Here are a couple of ways to accomplish this in Python became the second became! Please solve It on “ PRACTICE ” first, before moving on to the interchanging of rows columns... From a by writing rows of a in matrix mathematics, the resulting matrix, X ' a by rows... Non-Square matrix ; Multiply matrices element by element ; Create a matrix is the of... The product will be a 2x3 matrix, 3 months ago literature, geography, other! Position to another, or to exchange the positions of two things… the first became! Column in X will be placed At jth row and ith column in X ' will be dimension!, is the n × m matrix a is the matrix obtained by interchanging the rows columns. And row elements as follows property, we need to define the transpose a. Be of dimension ( 2x2 ) is called the transpose of a matrix in MATLAB a!
2021-09-22T00:14:40
{ "domain": "ikiwago.info", "url": "http://forum.ikiwago.info/87616fru/5om3q4q.php?1c5f9e=transpose-matrix-definition", "openwebmath_score": 0.6436394453048706, "openwebmath_perplexity": 681.1274730658739, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9658995752693051, "lm_q2_score": 0.8757869835428965, "lm_q1q2_score": 0.8459222754304697 }
https://gmatclub.com/forum/tom-is-selling-apples-and-oranges-the-ratio-of-apples-to-oranges-in-247196.html
GMAT Question of the Day - Daily to your Mailbox; hard ones only It is currently 15 Jul 2018, 16:23 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Tom is selling apples and oranges. The ratio of apples to oranges in Author Message TAGS: ### Hide Tags Math Expert Joined: 02 Sep 2009 Posts: 46991 Tom is selling apples and oranges. The ratio of apples to oranges in [#permalink] ### Show Tags 16 Aug 2017, 01:53 00:00 Difficulty: 5% (low) Question Stats: 93% (00:13) correct 7% (00:00) wrong based on 33 sessions ### HideShow timer Statistics Tom is selling apples and oranges. The ratio of apples to oranges in his cart is 3:2. If he has 12 oranges, how many apples does he have? (A) 2 (B) 3 (C) 8 (D) 18 (E) 30 _________________ SC Moderator Joined: 22 May 2016 Posts: 1824 Tom is selling apples and oranges. The ratio of apples to oranges in [#permalink] ### Show Tags 16 Aug 2017, 10:46 Bunuel wrote: Tom is selling apples and oranges. The ratio of apples to oranges in his cart is 3:2. If he has 12 oranges, how many apples does he have? (A) 2 (B) 3 (C) 8 (D) 18 (E) 30 Ratio of apples to oranges is 3:2. Tim has 12 oranges. How many apples? Method I: $$\frac{A}{O}$$ = $$\frac{3x}{2x}$$ $$O = 12 = 2x$$ $$x = 6$$ , the multiplier for the ratio He has 3x apples, (3*6) = 18 apples Method II: $$\frac{A}{O}$$ = $$\frac{3}{2}$$ $$2A = 3 O$$ $$A =\frac{3}{2}O$$, and $$O = 12$$ $$A = \frac{3}{2}$$(12) = 18 apples _________________ In the depths of winter, I finally learned that within me there lay an invincible summer. Tom is selling apples and oranges. The ratio of apples to oranges in   [#permalink] 16 Aug 2017, 10:46 Display posts from previous: Sort by # Events & Promotions Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
2018-07-15T23:23:12
{ "domain": "gmatclub.com", "url": "https://gmatclub.com/forum/tom-is-selling-apples-and-oranges-the-ratio-of-apples-to-oranges-in-247196.html", "openwebmath_score": 0.2845991253852844, "openwebmath_perplexity": 5074.611799996014, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9658995742876886, "lm_q2_score": 0.8757869803008765, "lm_q1q2_score": 0.845922271439317 }
https://socratic.org/questions/an-airplane-travels-at-an-average-speed-of-600-m-s-on-an-outward-flight-and-400-
× # An airplane travels at an average speed of 600 m/s on an outward flight and 400 m/s on the return flight over the same distance. What's is the average speed of the whole flight? ## Note that the answer is not 500m/s but 480 m/s. Please explain why not? Dec 4, 2017 let the distance travelled by the the plane during outward flight$\rightarrow$s Also distance travelled during return flight$\rightarrow$s average speed$=$$\frac{\text{Total distance}}{T o t a l t i m e}$ Average speed{v}$\Rightarrow$$\frac{s + s}{\left(\frac{s}{v} _ 1\right) + \left(\frac{s}{v} _ 2\right)}$ V=(2s)/(s((v_1+v_2)/(v_1v_2)) color(red)(V=(2v_1v_2)/(v_1+v_2) substituting values of v_1 & v_2: $\Rightarrow \frac{2 \times 600 \times 400}{600 + 400}$ =color(green)(480(m)/(s)
2018-09-22T13:42:42
{ "domain": "socratic.org", "url": "https://socratic.org/questions/an-airplane-travels-at-an-average-speed-of-600-m-s-on-an-outward-flight-and-400-", "openwebmath_score": 0.41089028120040894, "openwebmath_perplexity": 2007.8922486754316, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9861513922264661, "lm_q2_score": 0.8577681068080748, "lm_q1q2_score": 0.8458892127362431 }
https://math.stackexchange.com/questions/4056867/is-the-curve-left-cos-left-t2-right-sin-t-right-a-square
# Is the curve $\left[ \cos \left( t^2 \right), \sin t \right]$ a square? I found this marvellous thing while playing around in Desmos! It seems to me that the curve $$\vec{r}(t) = \left[ \cos ( t^2 ), \sin t \right] \left( t \in [0, +\infty) \right)$$ is filling a square. But, as @BarryCipra kindly pointed out, it crosses the X axis only when $$t$$ is an integer multiple of $$\pi$$, so it crosses the X axis countably many times. And we know that $$[-1, 1]$$ is uncountable, so it seems the curve doesn't actually fill the square. A question that would be fair to ask: Does the curve get arbitrarly close to every point in the square? Or: Is $$[-1, 1]^2$$ the infinite closure of the curve? • It's not literally filling the square. For example, the curve $(x(t),y(t))=(\cos(t^2),\sin(t))$ crosses the $x$-axis only when $t$ is an integer multiple of $\pi$, so there are only countably many such points. What the curve does seem to do is get arbitrarily close to every point in the unit square. Mar 10 '21 at 20:34 • @BarryCipra Damn, that's kind of a bummer. Thanks for the clarification. Please post this comment as an answer, I'd like to accept it and close the question. Mar 10 '21 at 21:09 • Since there is as yet no answer, it would be fine with me if you want to edit your question to ask if the curve does indeed get arbitrarily close to every point (i.e., the infinite curve's closure is the unit square). That strikes me as worth asking, in part because I'm not sure how to go about proving it. (If you decide to do this, you might want to temporarily delete the current question while you edit, to prevent an answer to the original question slipping in.) Mar 10 '21 at 21:14 • It would be sufficient to show that $\left\{\left(\{\frac t{2\pi}\},\{\frac{t^2}{2\pi}\}\right):t\in\Bbb R\right\}$ is dense in $[0,1]^2$, where $\{\cdot\}$ is the fractional part. – Karl Mar 10 '21 at 22:19 Intuitively, as $$t$$ gets larger and larger, we see that $$\cos(t^2)$$ oscillates faster and faster relative to $$\sin(t)$$, so we can choose very large $$t$$ that gives the right $$y$$-value and then just perturb it a bit to get the right $$x$$-value while only changing $$y$$ by a very small amount. Let's make this more precise. Let $$(x_0, y_0) \in [-1, 1]^2$$ and $$\varepsilon > 0$$ be arbitrary. Let $$t_0 \geq \frac{\pi}{\varepsilon}$$ such that $$\sin(t_0) = y_0$$. We have $$\cos(t_0^2 + c) = x_0$$ for some $$c \in [0, 2\pi]$$. Let $$t = \sqrt{t_0^2 + c}$$. Then $$\cos(t^2) = x_0$$. Moreover, $$\lvert \sin(t) - y_0 \rvert = \lvert \sin(t) - \sin(t_0) \rvert \leq \lvert t - t_0 \rvert = \sqrt{t_0^2 + c} - t_0$$ $$\mathrel{\leq} \sqrt{t_0^2 + 2\pi} - t_0 = \frac{2\pi}{\sqrt{t_0^2 + 2\pi} + t_0} < \frac{\pi}{t_0} \leq \varepsilon.$$ Thus, the distance between $$(\cos(t^2), \sin(t))$$ and $$(x_0, y_0)$$ is less than $$\varepsilon$$. In particular, the curve gets within distance $$\varepsilon$$ of every point in $$[-1, 1]^2$$ as $$t$$ ranges over the interval $$[0, \frac{\pi}{\varepsilon} + 2\pi]$$. (I'm not claiming this bound is necessarily optimal, to be clear, but I'd guess it's at least pretty close.)
2022-01-29T04:51:46
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/4056867/is-the-curve-left-cos-left-t2-right-sin-t-right-a-square", "openwebmath_score": 0.8297243714332581, "openwebmath_perplexity": 129.5268835835982, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.98615138856342, "lm_q2_score": 0.8577681049901037, "lm_q1q2_score": 0.8458892078014042 }
https://math.stackexchange.com/questions/2447497/compute-sum-k-1n-binom-nk-k-3k
# Compute $\sum_{k=1}^n \binom nk k 3^k$ I'm trying to compute $$\sum_{k=1}^n \binom nk k 3^k$$ but don't know how. Would anyone be able to show me? The only thing that I can possibly think of is that $$\sum_{k=1}^n \binom nk k 3^k = \frac{1}{\ln 3}\sum_{k=1}^n \binom nk \frac{d}{dk}\left[3^k\right]$$ Thanks • Holy moly, 4 answers within 1 minute – Kenny Lau Sep 27 '17 at 12:21 $$\begin{array}{rcl} \displaystyle \sum_{k=1}^n \binom nk k 3^k &=& \displaystyle \sum_{k=1}^n \frac{n!}{(n-k)!k!} k 3^k \\ &=& \displaystyle \sum_{k=1}^n \frac{n!}{(n-k)!(k-1)!} 3^k \\ &=& \displaystyle \sum_{k=1}^n n \frac{(n-1)!}{(n-k)!(k-1)!} 3^k \\ &=& \displaystyle \sum_{k=1}^n n \binom{n-1}{k-1} 3^k \\ &=& \displaystyle n \sum_{k=1}^n \binom{n-1}{k-1} 3^k \\ &=& \displaystyle n \sum_{k=0}^{n-1} \binom{n-1}{k} 3^{k+1} \\ &=& \displaystyle 3n \sum_{k=0}^{n-1} \binom{n-1}{k} 3^k \\ &=& \displaystyle 3n (1+3)^{n-1} \\ &=& \displaystyle 3n \cdot 4^{n-1} \\ \end{array}$$ • Thanks, that is very helpful :) – sadlyfe Sep 27 '17 at 14:31 Hint: Differentiate $(1+x)^{n}$. Hint: Your differentiation idea is a good one. Try writing $$f(x) = \sum_{k=1}^n {n \choose k} k x^k$$ (so your aim is to calculate $f(3)$). Then notice that $$f(x) = \sum_{k=1}^n {n \choose k} \left(x \cdot\frac{d}{dx} (x^k)\right).$$ $(1+x)^n = \sum_{k=0}^{n}\binom{n}{k}x^k;$ $x\frac{d}{dx} (1+x)^n= \sum_{k=0}^{n}\binom{n}{k}kx^k.$ $xn(1+x)^{n-1} = \sum_{k=1}^{n}\binom{n}{k}kx^k.$ You can write the binomial coefficient as: $$\sum_{k=1}^n \binom nk k 3^k = 3n\sum_{k=1}^{n} \binom{n-1}{k-1} 3^{k-1}$$ Now note the binomial expansion of $(1+3)^{n-1}$
2019-12-08T00:54:35
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2447497/compute-sum-k-1n-binom-nk-k-3k", "openwebmath_score": 0.9289013743400574, "openwebmath_perplexity": 547.4372138503219, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9861513873424044, "lm_q2_score": 0.8577681049901037, "lm_q1q2_score": 0.8458892067540559 }
http://mathhelpforum.com/calculus/37-integrate-sqrt-4-x-2-dx.html
# Thread: Integrate sqrt ( 4 -x^2)dx 1. ## Integrate sqrt ( 4 -x^2)dx Hi, Can anyone help me integrate : Integral sqrt ( 4 - x^2) dx Some ideas: Make a subsitution x2 = cos (theta) Taking the derivative of both sides: 2x = - sin (theta).d(theta) or.... Double angle identity: (Cosx)^2 = 1 + cos2x / 2 Can anybody give some ideas. 2. You could recognize that y = sqrt(4 - x^2) is the upper half of the circle x^2 + y^2 = 4, centered at the origin with radius 2, then just take half the area of that circle, .5*pi*r^2. I think you can also do it by converting to polar coordinates by integrating r dr dtheta where r = radius and theta = angle formed with the positive x-axis. 3. The substitution should be for x^2 = 4sin^2(theta) so that the sqrt[4 -x^2] would be sqrt[4 -4sin^2(theta)], which then is = sqrt[4(1 -sin^2(theta))] = 2cos(theta) That is because of the Pythagorean trig identity sin^2(A) +cos^2(A) = 1 ....*** Hence. cos^2(A) = 1 -sin^2(A) cosA = sqrt[1 -sin^2(A)] ----------- INT.[sqrt(4 -x^2)]dx ....(1) Let x = 2sinT ....(i) where T = theta, for less typing. Then, dx = 2cosT dT ....(ii) Substitute those into (1), INT.[sqrt(4 -x^2)]dx ---(1) = INT.[sqrt(4 -(2sinT)^2](2cosT dT) = INT.[sqrt(4 -4sin^2(T)](2cosT dT) = INT.[2cosT](2cosT dT) = INT.[4cos^2(T)]dT = 4*INT.[cos^2(T)]dT ....(2) Then here you can use your trig substitution cos^2(T) = (1/2)[1 +cos(2T)] So, to continue, = 4*INT.[(1/2)(1 +cos(2T))]dT = 4(1/2)*INT.[1 +cos(2T)]dT = 2*INT.[1 +cos(2T)]dT ....(3) I don't know if you could take over from here. Let me continue, just in case you cannot yet. = 2{INT.[1]dT +INT.[cos(2T)]dT} = 2{[T +C1] +INT.[cos(2T)](2dT/2)} = 2{[T +C1] +(1/2)INT.[cos(2T)](2dT)} = 2{[T +C1] +[(1/2)(sin(2T) +C2)]} = 2T +sin(2T) +C ....(4) Then we revert back to the original x terms. ------------- Since we supposed x = 2sinT, ....(i) it follows then that sinT = x/2 ....(iii) and so T = arcsin(x/2) ....(iv) What about the sin(2T), how do we change that into x-terms? By using trig substitutions again. sin(2T) = 2sinT*cosT ----trig identity. We know sinT = x/2 ....from (iii). From that we can get the cosT. >>>either by the identity sin^2(T) +cos^2(T) = 1 (x/2)^2 +cos^2(T) = 1 cos^2(T) = 1 -(x/2)^2 cos^2(T) = 1 -(x^2)/4 cos^2(T) = (4 -x^2)/4 Take the sqrt of both sides, cosT = (1/2)sqrt(4 -x^2) ....(v) >>>or by the reference right triangle of angle T. sinT = (opposite side)/(hypotenuse) = x/2 Hence, opposite side = x hypotenuse = 2 and adjacent side = sqrt[(hypotenuse)^2 -(opp side)^2] = sqrt(4 -x^2) So, cosT = (adjacent side)/(hypotenuse) = [sqrt(4 -x^2)]/2 or, cosT = (1/2)sqrt(4 -x^2) ....(v) ------------ Substituting those into (4), = 2T +sin(2T) +C ....(4) = 2T +2sinT*cosT +C = 2[arcsin(x/2)] +2[x/2][(1/2)sqrt(4 -x^2)] +C = 2[arcsin(x/2)] +(1/2)[x*sqrt(4 -x^2)] +C ...........***** That is it. That is the answer. ----------------- ------------------------------------ If you are allowed to use the Table of Integrals, where INT.[sqrt(a^2 -x^2)]dx = [x*sqrt(a^2 -x^2)]/2 + [((a^2)/2) *arcsin(x/a)], then, INT.[sqrt(4 -x^2)]dx ....(1) here, a = 2, = INT.[sqrt(2^2 -x^2)]dx = [x*sqrt(2^2 -x^2)]/2 + [((2^2)/2) *arcsin(x/2)] = [x*sqrt(4 -x^2)]/2 + [(2) *arcsin(x/2)] = (1/2)[x*sqrt(4 -x^2)] +2[arcsin(x/2)] It is the same as our answer above. 4. Originally Posted by Pinky&The Brain Hi, Can anyone help me integrate : Integral sqrt ( 4 - x^2) dx [*] Integral sqrt ( 4 - x^2) dx put x=2sin t dx = 2cost dt 2sin t=x -> sin t=x/2 -> t = asin (x/2) replace in[*] integral sqrt(4-4sin^2t) 2cos t dt= integral 2sqrt(4(1-sin^2 t)) cos t dt = integral 4sqrt(cos^2t)cos t dt= integral 4cos^2 t dt= [**]4 integral cos^2t dt but cos 2t = 2 cos^2t-1 so cos^2t=(1+cos(2t))/2 so[**] becomes 4 integral (1+cos(2t))/2dt= 2 integral (1+cos2t) dt = 2[t +sin(2t)/2] + C= 2[t +sint cost] + C= 2[t+sint sqrt(1-sin^2t)] + C= 2 asin(x/2) + x sqrt(1-x^2/4) + C= Integral sqrt ( 4 - x^2) dx = 2 asin(x/2) +[ x sqrt(4-x^2)]/2 + C (C is an arbitrary constant) 5. . . . so I guess what I did was a definite integral, while the problem was an indefinite integral. 6. ## reading math is difficult Hi May be a bit out of context of the problem of discussion...but I felt a lot of inconvenience in going through the solution...as I could not read math as text... is there no other way of posting math symbols? thanx vms 7. Originally Posted by vms Hi May be a bit out of context of the problem of discussion...but I felt a lot of inconvenience in going through the solution...as I could not read math as text... is there no other way of posting math symbols? thanx vms Hello vms we are actually working on it right now. We are in the process of adding LaTex math editor to the forum which will allow you to display the math equations just as they look in a math book. Should be up in a few days 9. Originally Posted by vms Hi is there no other way of posting math symbols? thanx vms 10. great If I have to post a question with math symbols... what is the way regards 11. I think it's more elegant to do the indefinite integral in polar coordinates, recognizing this is the upper half of the circle with radius=2 centered at the origin as in integral sqrt(4-x^2) = integral integral 2r dr dT where r = radius and T = theta = angle formed with the positive x axis. The conversion to polar coordinates uses r^2 = (x/2)^2 + (y/2)^2, and we know from the original eqn that r=2. The extra r in the double integral is from the Jacobian necessary in converting to polar coord. Then finding this integral is easy: = integral r^2 dT = Tr^2 = 4T + C which I guess would work for finding areas of sectors of the circle of radius 2. What I'm curious is if I convert back to x and y, if I get the solution above, using x = rcosT, and r^2 = (x/2)^2 + (y/2)^2 4T = 4arccos(x/r) = 4arccos[x/(sqrt((x/2)^2 + (y/2)^2))] = = 4arccos[x/(sqrt((x/2)^2 + ((4-x^2)/2)^2))] does this simplify to above??? 12. Originally Posted by billh I think it's more elegant to do the indefinite integral in polar coordinates, recognizing this is the upper half of the circle with radius=2 centered at the origin as in integral sqrt(4-x^2) = integral integral 2r dr dT where r = radius and T = theta = angle formed with the positive x axis. The conversion to polar coordinates uses r^2 = (x/2)^2 + (y/2)^2, and we know from the original eqn that r=2. The extra r in the double integral is from the Jacobian necessary in converting to polar coord. Then finding this integral is easy: = integral r^2 dT = Tr^2 = 4T + C which I guess would work for finding areas of sectors of the circle of radius 2. What I'm curious is if I convert back to x and y, if I get the solution above, using x = rcosT, and r^2 = (x/2)^2 + (y/2)^2 4T = 4arccos(x/r) = 4arccos[x/(sqrt((x/2)^2 + (y/2)^2))] = = 4arccos[x/(sqrt((x/2)^2 + ((4-x^2)/2)^2))] does this simplify to above??? It doesn't :-( You are trying to calculate integral sqrt(4-x^2) in the same way you compute the definite integral int (-infty +infty) exp(-x^2)! But here the result we are looking for is the antiderivatives of sqrt(4-x^2) You are mixing (and confusing) two concepts: area and antiderivative. It is true that you can calculate area using the fundamental theorem of calculus which states that the area under a curve f(x) in the interval [a,b] is F(b) - F(a), where F'(x) = f(x). But here F(x) is the primitive function we are looking for. A function such that F'(x)=sqrt(4-x^2) (2·arcsin(x/2) + x·sqrt(4 - x^2)/2)' = sqrt(4-x^2) 13. OK, I can see that. Thanks for the clarification. But, if the problem HAD been a definite integral from -2<=x<=2, I COULD have done it with polar coordinates! We just finished doing that in class, so I was "sensitized" to looking for equations of circles. 14. billh, I see your point. You are correct in a way. [sqrt(4 -x^2)]dx can be viewed as the dA for the area above the x-axis of the circle centered at the origin with radius = 2 units. The whole circle is x^2 +y^2 = 2^2 Or, x^2 +y^2 = 4. Then, solving for y, y^2 = 4 -x^2 y = +,-sqrt(4 -x^2) Meaning, the positive y's are above the x-axis, so y = sqrt(4 -x^2) is any y-coordinate above the x-axis. So, if we want to get the area of the said circle above the x-axis, by integration, then we may integrate horizontally with dA = y*dx Then, A = INT.(-2 -> 2)[y]dx Converting y into x-terms, A = INT.(-2 -> 2)[sqrt(4 -x^2)]dx A = [2(arcsin(x/2)) +(1/2)(x*sqrt(4-x^2))] (-2 -> 2) A = [2(arcsin(2/2)) +(1/2)(2*sqrt(4 -2^2))] - [2(arcsin(-2/2)) +(1/2)(-2*sqrt(4 -(-2)^2))] A = [2(arcsin(1) +(1/2)(2*sqrt(0))] - [2(arcsin(-1) +(1/2)(-2*sqrt(0))] A = [2(p1/2) +0] -[2(-pi/2) +0] A = pi -(-pi) A = 2pi sq.units Now, using polar coordinates, The said whole circle centered at origin with radius 2 is r = 2 ---equation of the whole circle. If we need to find the area of the said circle above the "equivalent of x-axis" or the area from (theta = 0) to (theta = pi), then we get the dA first. dA is an infinitesimal sector of the circle, whose radius is 2 and whose central angle is dtheta or dT. So its subtended arc is (radius)(central angle) = 2*dT = 2dT dA then is (1/2)(radius)(subtended arc) = (1/2)(2)(2dT) = 2dT So, A = INT.[2dT] A = (2)INT.[dT] Integrating from (theta = 0) up to (theta = pi), A = (2)INT.(0 -> pi)[dT] A = (2)[T](0 -> pi) A = (2)[pi -0] A = 2pi sq.units ---the same as when using dA = sqrt(4 -x^2) *dx above. ------------- But then, the original question is for integrating sqrt(4 -x^2) dx, where sqrt(4 -x^2) can be any quantity in general---not necessarily the positive y-coordinate of a circle as mentioned above. 15. I mentioned this problem to my calculus professor (2nd year) and he looked at me like I was an idiot. I described the problem as an "indefinite integral", which is the term I learned in Calc I. He said "don't say 'indefinite integral' just think function. It is just the antiderivative and has nothing to do with finding areas", in and of itself, ie apart from the the other part of the Fund Thm of Calc. I looked like and idiot, but at least I learned something. , , , # integral 4-x^2 Click on a term to search for related topics.
2016-09-29T05:25:37
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/calculus/37-integrate-sqrt-4-x-2-dx.html", "openwebmath_score": 0.9020524024963379, "openwebmath_perplexity": 3966.1428553028854, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9861513881564148, "lm_q2_score": 0.8577681013541613, "lm_q1q2_score": 0.8458892038666984 }
https://math.stackexchange.com/questions/3184319/how-do-i-evaluate-prove-to-myself-that-a-method-for-picking-uniformly-distribu
# How do I evaluate (prove to myself) that a method for picking uniformly distributed values is correct? To make this more specific, I show a broken procedure for generating random points in a circle and a correct (hopefully) procedure for generating random dates within an interval. I'd like to be able to precisely explain why one of them is wrong and the other is not, given that they sound very similar. What is so special about polar coordinates, that is not true about the case with dates? # Point in Circle When placing a random point within a circle, the following is incorrect approach. Use polar coordinates. First, generate the distance from center of the circle as number in interval [0, r). Then, generate the angle as number in interval [0, 2*pi). The problem with the method described is that half of such points would lie within distance r/2 from the center, but that is only 1/4 of the surface of the whole circle. (Anyways, how can one come up with such argument or know for certain there isn't one? It is obvious when it is stated, but I cannot imagine coming up with it myself; I'd just accept the method as correct.) # Random Date randomdate = startdate + new TimeInterval( days: random(from: 0 to: (enddate - startdate).days) hours: random(from: 0 to: 23) minutes: random(from: 0 to: 59) ) When proving uniform distribution of values, what exactly am I trying to prove (how come that in the circle example I have to think of area density, which is not necessary in the date example) and how do I go about it, in a general case? • I am changing the title from "random" to "uniformly distributed" because that closely describes what I am after, I think. Apr 12 '19 at 6:19 • I do not think this was a duplicate of the question linked above. The question above is only concerned with the sample-and-reject approach for finding uniform points. This question asks for how to verify these sorts of things in general. As such, I have posted my answer to this question on that question, if you want to check it out. Apr 14 '19 at 1:27 • @CortAmmon I reopened this now. (What you did was not really appropriate. ) – quid Apr 15 '19 at 7:58 • "I have to think of area density, which is not necessary in the date example": you also have to think about density in the date example. – user65203 Apr 15 '19 at 9:11 • @quid Thanks. Sorry for causing problems. I hated the idea of letting a few days work go to waste because the question was marked as duplicate. Apr 15 '19 at 14:56 In the nonuniform point-in-circle example, what you do is take a uniform distribution of points on the rectangle $$[0, R) \times [0, 2 \pi)$$, and map them into the disc using the map $$f(r, \theta) = (r \cos \theta, r \sin \theta).$$ The Jacobian of this map measures how "dense" the image is at a point compared to the source: we have $$|D_f(r, \theta)| = \left \lvert \begin{matrix} \frac{\partial f_1}{\partial r} & \frac{\partial f_1}{\partial \theta} \\ \frac{\partial f_2}{\partial r} & \frac{\partial f_2}{\partial \theta} \end{matrix} \right \rvert = \left \lvert \begin{matrix} \cos \theta & -r \sin \theta \\ \sin \theta & r \cos \theta \end{matrix} \right \rvert = r (\cos^2 \theta + \sin^2 \theta) = r$$ and so there is a "stretch factor" independent of the angle, but proportional to the distance from the centre. A way to think about this is that if there was a 1cm coating of paint on the original rectangle $$[0, R) \times [0, 2 \pi)$$, and then we applied $$f$$, the paint on the resulting disc would only be $$1/r$$ cm thick at the point $$(r \cos \theta, r \sin \theta)$$. A way to fix this is to use a modified map, corrected for this. For example, if we take $$g(r, \theta) = (\sqrt{r} \cos \theta, \sqrt{r} \sin \theta)$$ then we find $$|D_g(r, \theta)| = \left \lvert \begin{matrix} \frac{\partial g_1}{\partial r} & \frac{\partial g_1}{\partial \theta} \\ \frac{\partial g_2}{\partial r} & \frac{\partial g_2}{\partial \theta} \end{matrix} \right \rvert = \left \lvert \begin{matrix} \frac{\cos \theta}{2 \sqrt{r}} & -\sqrt{r} \sin \theta \\ \frac{\sin \theta}{2 \sqrt{r}} & \sqrt{r} \cos \theta \end{matrix} \right \rvert = \frac{1}{2} (\cos^2 \theta + \sin^2 \theta) = \frac{1}{2}$$ And so we get an even distribution of paint (onto disc of radius $$\sqrt{R}$$, rather than $$R$$). This is easy to see with some pictures, but actually proving a drawing methodology is correct requires some calculus. To make the proof you want, you have to start with a definition of what it is you actually want to prove. You want to prove a particular distribution occurs -- in particular a uniform distribution across a circle. So what does that actually mean? A uniform distribution across a 2d surface means that, for any given area on that surface $$A$$, the portion of the probability density function (PDF) of our variable which is contained in $$A$$ is proportional to the size of the area within $$A$$, which is notated $$|A|$$. This means, for any area you pick, the probability of the sampled point falling within that area is proportional to how big it is. This is written formally, $$P(A) \propto |A|$$. Note that in this notation, $$A$$ is fundamentally describing a particular area on the surface while $$|A|$$ describes the numeric size of that area. $$A$$ might be "the surface of a basketball court" while $$|A|$$ is "4700 square feet," which is 94 feet times 50 feet. Keeping track of the difference will be helpful going forward because we'll be introducing more related notation. You also will want another requirement. Since you want the probability to be 0 outside of the circle, we know that if we pick our area to be the whole circle, the probability that the sampled point falls into this area is 1. Formally, given an area $$C$$ which is the entire circle, $$P(C) = 1$$. With these two equations, $$P(A) \propto |A|$$ and $$P(C) = 1$$, we can combine them to get $$P(A) = \frac{|A\cap C|}{|C|}$$, that is to say the probability of the sample being anywhere in an arbitrary area is equal to the size of the area that intersects the cricle divided by the size of the area of the circle itself. This is the fundamental equation we are trying to prove is true. For convenience going forward, if I can reasonably assume that $$A$$ is fully contained in the circle, I may abbreviate that equation to $$P(A) = \frac{|A|}{|C|}$$. I'll only include the "$$\cup C$$" part in situations where it isn't clear that $$A$$ is contained in $$C$$. So with this, we can prove the validity of the "discard points" approach to generating uniform points along a circle. Here's a picture describing that case In this picture we see that we sample in 2-d, discarding everything that falls into the red. Points in the middle are uniformly distributed. I've checkerboxed the area to show samples of areas that we might use to prove this. The probability of the point appearing in any one of these boxes is proportional to its area. Now its area is equal to the width times the height. This is the fundamental reason why drawing 2 1-d uniform values in cartersian space works. You can break the problem into widths and heights independently. Cartesian coordinates aren't the only ones where this works. Any linearly independent cooardinate system has this property. For example, if you picked your 2 1-d uninform distributions and mapped them with an affine coordinate system (which are linear, but the axes don't intersect at right angles), you'd get a uniform distribution as well: However, to the transforms you are interested in, you're mapping a circle to a square. The reason for this is obvious. If you don't want to discard points, then you need to map your circle to the entire 2-d space that a pair of uniform distributions can attain. As an aside, if this is for a computer program, the best answer is to discard the points. You'll spend much more CPU time trying to map a square to a circle than you'd spend discarding 21% of the points. However, in higher dimensions, the difference between a n-sphere and a n-cube get far worse. In the case of a 3d sphere and a 3d cube, you'll discard 48% of your points. If you had a 4d space, it'd be 70% and in 5d spaces it's 83%. This effect is known as the curse of dimensionality, and is a really useful thing to know going forward with statistics. So what about your transformation, where you sample radius, sample angle, and map that with polar coordinates? In this case, your transform is the transformation from polar coordinates (where $$R$$ is the desired circle radius): $$x^\prime = Rx\cdot\cos(2\pi y)$$ $$y^\prime = Rx\cdot\sin(2\pi y)$$ Note what happened here to the boxes. They got distorted. This is why you got the non-uniform distribution. You started with a nice uniform 2d space, but then you distorted it non-linearly. So how do you fix this? This is where the calculus comes in. Consider really really really small $$A$$ areas. In fact, consider "infinitesimally small" areas. Calculus is the study of how such infinitesimals operate. We call this infinitesimal area $$dA$$, where the $$d$$ basically notes that this is infinitesimally small and requires calculus to make meaningful. Using calculus, we can integrate the probability density function over our circle. We can write $$\int_{circle}P_A(A)dA = 1$$, which says if we add up (integrate) the probability density function values (the $$P_A(A)$$ part) over small areas(the $$dA$$ part), times the size of the area itself, the result should equal one. If you're not thinking in calculus terms, this could be done by summing over a finite number of areas $$a_1, a_2\ldots a_n$$ to get $$\sum_{i=1}^n(P_A(a_i)\cdot|a_i|) = 1$$ if that is more familiar. It's the same pattern, multiplying a PDF value times the size of an area. However, this is one of the cases where calculus makes things easier, because the equations end up being much simpler. Of course, we can then solve this to figure out a function for $$P_A$$. We know $$P_A$$ should be a constant value, because its a uniform distribution. By taking a derivative, we can reach the intuitive answer: $$P_A(A) = \frac{1}{|C|}$$ Intuitively if we integrate (or add up) a bunch of $$\frac{1}{|C|}\cdot |A|$$ values over a circle of size $$|C|$$, we end up with a total of $$\frac{1}{|C|}\cdot|C|=1$$ Now note that I subscripted the PDF function, $$P_A$$. $$P_A$$ is a function of area. We can change variables to get a PDF function in different variables. The obvious one is cartesian coordinates, x and y. We can do this by figuring out what to substitute in for $$dA$$. If you've done multivariable calculus, the obvious answer is $$dA = dx dy$$. If you haven't done multivariable calculus, it should at least seem reasonable that the area of a small region is its size in x multiplied by its size in y. This leads us to the equation $$\int\int P_{xy}(x, y)dx dy = 1$$. Here I've switched from a PDF which accepts an area $$A$$ to one which accepts two arguments, x and y. Using the same logic we used to find $$P_A$$, it's easy to find $$P_{xy}$$: $$P_{xy}(x, y) = \frac{1}{|C|}$$. This is nothing profound. It's really just the basis for the solution we showed above, where we reject all points outside of the circle. It shows that we can draw x and y uniformly, then combine them into a point and get a uniform 2d distribution. The profound bit is coming when we decided to switch to polar. You wanted to do a polar conversion, so we need to think in polar coordinates. So we do another change of variables. One's first instinct might be to declare $$dA=dr d\theta$$, but that would actually be wrong. The correct answer is $$dA=r dr d\theta$$. Why? Informally, think about polar coordinates as a bunch of nested rings, each of the same thickness. The inner rings are smaller, so they have a smaller area than the larger rings. In fact, if you have a ring of radius $$r$$ and you look at a ring of radius $$2r$$, you see that the larger ring has twice the area of the first. The area of any ring is $$2\pi r \Delta r$$, where $$Delta r$$ is the width of the ring. Note that r term that appeared in that equation. That's where the r in $$r dr d\theta$$ comes from. More formally, this is what we call the Jacobian. If I do a change of variables to transform from one coordinate system to another, I have to multiply the value of the integrand by the determinate of the Jacobian matrix. If you do the calculus, this determinate is $$r$$ for converting from rectangular to polar. If you calculate the Jacobian for the cartesian coordinate system (x and y) transform, it turns out to be $$1$$, which is why we didn't see it before. So this means $$\int_{circle}P_A(A)dA = 1$$ transforms to $$\int_{circle}P_{r\theta}(r, \theta)\cdot r dr d\theta = 1$$. **It is that extra $$r$$ term which is why your distribution wasn't looking uniform. You must take it into consideration. As before, we want the probability of any point being the same, so we know $P_{r \theta}(r, \theta)=\frac{k}{|C|}$. Thus our final integral is $\int_{circle}\frac{k}{|C|}r dr d\theta = 1$. Note that this is $\frac{k}{|C|}$ rather than $\frac{1}{|C|}$. It turns out that, to make the probabilities for $P_r$ to sum to 1, we actually need $k=2$. Thus $P_{r}(r) = \frac{2r}{|C|}$ Now for the key to making this work, I'm going to define a new PDF, $$P_r(r)=\frac{2}{|C|}r$$. This is a non-uniform random variable. Using this, I rewrite the above integral as simply $$\int_{circle}P_r(r) dr d\theta = 1$$. The reason I rewrite it this way is two fold: • It makes it clear that the larger rings need to have a higher probability • It is in the form of "integrate a probability density function over an area," which we had before. Now we can apply Inverse Transform Sampling to generate this distribution from a random distribution. The process is as follows: • Compute the CDF of the desired distribution. This means integrating $$CDF(R) = \int_0^R \frac{2r}{|C|}dr$$ which means $$CDF(R) = \frac{R^2}{|C|}$$ • Invert this CDF, $$CDF^{-1}(x) = |C|\sqrt x$$ • Take a random uniform variable X, transform it by $$X^\prime = CDF^{-1}(X) = |C|\sqrt x$$. The resulting distribution is now the distribution we need for $$P_r$$. So what just happened? This all says that when we draw for radius and angle, we need to take the square root of the radius first, then transform it from polar to a circle in Cartesian coordinates.
2021-12-01T09:29:59
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3184319/how-do-i-evaluate-prove-to-myself-that-a-method-for-picking-uniformly-distribu", "openwebmath_score": 0.8782725930213928, "openwebmath_perplexity": 216.1896405335572, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9861513881564148, "lm_q2_score": 0.8577680995361899, "lm_q1q2_score": 0.8458892020739035 }
https://electronics.stackexchange.com/questions/256662/where-did-j-go-in-the-underdamped-response-of-an-rlc-circuit
# Where did j go in the underdamped response of an RLC circuit? I was following the derivation of the solution to the underdamped case for a series RLC circuit in my textbook, and ran into a roadblock. The derivation goes like this: $$\because \text{The general solution is } i(t)=A_1e^{s_1t}+A_2e^{s_2t}\\ \because s_{1,2} = -\alpha \pm \sqrt{\alpha^2 - \omega_0^2} \text{ and } \alpha<\omega_0\\ \therefore s_{1,2} = -\alpha \pm j\omega_d \text{ where } \omega_d=\sqrt{\omega_0^2-\alpha^2}$$ Plugging these roots into the general solution we have: $$i(t)=A_1e^{(-\alpha+j\omega_d)t}+A_2e^{(-\alpha-j\omega_d)t}=A_1e^{-\alpha t}e^{j\omega_d t}+A_2e^{-\alpha t}e^{-j\omega_d t}\\ \implies i(t)=e^{-\alpha t}(A_1e^{j\omega_d t}+A_2e^{-j\omega_d t})$$ Then, using Euler's formula, we can write: $$i(t)=e^{-\alpha t}[A_1(\cos(\omega_d t)+j\sin(\omega_d t))+A_2(\cos(\omega_d t)-j\sin(\omega_d t))]\\ \implies i(t)=e^{-\alpha t}[(A_1+A_2)\cos(\omega_d t) + j(A_1-A_2)\sin(\omega_d t)]$$ Now, here is where I get lost. The book goes on and says: $$\text{Let } B_1=A_1+A_2 \text{ and } B_2=j(A_1-A_2)\\ \therefore i(t)=e^{-\alpha t}(B_1\cos(\omega_dt)+B_2\sin(\omega_dt))$$ It then presents the above equation as the natural, underdamped response of an RLC circuit. But how can this be true? It seems to me as if they just all of a sudden decided that the imaginary part $j(A_1-A_2)\sin(\omega_dt)$ was actually real. Shouldn't the actual solution be: $$i(t)=\text{Re}[e^{-\alpha t}(B_1\cos(\omega_dt)+jB_2\sin(\omega_dt))] \text{ where } B_2=A_1-A_2$$ To me, it just appears as if they are ignoring the fact that the second sinusoid in the solution is imaginary, and therefore cannot be treated as if it were part of the 'real' response. Can anyone elaborate on this? • "It seems to me as if they just all of a sudden decided that the imaginary part was actually real." Thats not the case. Defining B2 to be j * (A1 - A2) doesn't make it real. You just hide it behind another name. The phase information of the current (which is determined by the imaginary part) is still there and depends on your boundary conditions. On the other hand, i would use your notation, keeping the j. Sep 7, 2016 at 22:13 • That's what I thought too, but when I asked about it in class, I was told to include the $\sin()$ function when plotting the response in the time domain- is this then incorrect? Sep 7, 2016 at 22:17 • I'll give you a hint to think over: "$A_1$ and $A_2$ can only be complex conjugate". This will explain $B_1=A_1+A_2 \in \mathbb{R}$ and $B_2=\text{j}\left(A_1-A_2\right) \in \mathbb{R}$. Now a question for you: why $A_1$ and $A_2$ must be conjugate? Note the answer is already in the text you posted. BTW the Re opeartor in your last lines makes me think of phasors which have nothing to do with. Sep 7, 2016 at 22:19 • Hmm, well if $A_1=conj(A_2)$ and we say $A_1=x+jy$ then we know that $A_1+A_2=2x$ and $A_1-A_2=2jy$, which then implies that $B_2=j(A_1-A_2)=j(2jy)=-2y$, which in turn means that the solution is $i(t)=e^{-\alpha t}[2x\cos(\omega_d t)-2y\sin(\omega_d t)]$ and here the sine 'quantity' is real... Sep 7, 2016 at 22:34 • Side note: I though this was about phasors, since the solution is essentially an exponentially decaying phasor... whose magnitude decreases as it spins around the complex plane? As for why they must be complex conjugates, I am not sure, does it have something to do with the fact that one needs to cancel the other out based on initial conditions? Sep 7, 2016 at 22:36 Therefore, you are free to choose the forms of these two solutions and these are just two of the forms: $$i(t)=e^{-\alpha t}(A_1e^{j\omega_d t}+A_2e^{-j\omega_d t})$$ $$i(t)=e^{-\alpha t}(B_1\cos(\omega_dt)+B_2\sin(\omega_dt))$$ And you have demonstrated the first is equivalent to the second through linear combination by redefining the two constants. For example, this is also a valid representation (but this would be a strange choice): $$i(t)=e^{-\alpha t}(C_1e^{j\omega_d t}+C_2\sin(\omega_dt))$$ • Where $B_2$ is imaginary? Sep 8, 2016 at 4:03 • In general, $A_1, A_2, B_1, B_2, C_1, C_2$ are complex number constants. But if the initial conditions are real, which would be the case for models of real circuits, then automatically $B_1 and B_2$ would come out to be real. (Real number is a subset of complex number). Sep 8, 2016 at 4:24 • But my book defines $B_2=j(A_1-A_2)$ so how does that suddenly become real? Or is that term just dropped completely (if the initial conditions are real)? Sep 8, 2016 at 4:32 • If you fit two real initial conditions to the first form, you will find that $A_1, A_2$ would takes on the values of complex numbers. You will also find that the real parts of $(A_1-A_2)$ cancel out and you are left with an imaginary number. And the imaginary parts of $(A_1+A_2)$ cancel out. The math is self-consistent. Sep 8, 2016 at 4:37
2023-02-04T00:30:21
{ "domain": "stackexchange.com", "url": "https://electronics.stackexchange.com/questions/256662/where-did-j-go-in-the-underdamped-response-of-an-rlc-circuit", "openwebmath_score": 0.8286468386650085, "openwebmath_perplexity": 256.8650872879889, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9603611620335328, "lm_q2_score": 0.8807970811069351, "lm_q1q2_score": 0.8458833083276001 }
https://math.stackexchange.com/questions/2861179/question-on-evaluating-the-surface-integral-over-a-cube
Question on evaluating the surface integral over a cube Here's the question: Evaluate $\iint_{S} \boldsymbol{F} \cdot \boldsymbol{\hat{n}}$ if $\boldsymbol{F} = (x+y) \boldsymbol{\hat{i}} + x \boldsymbol{\hat{j}} +z \boldsymbol{\hat{k}}$ and $S$ is the surface of the cube bounded by the planes $x=0$,$x=1$,$y=0$, $y=1$, $z=0$ and $z=1$. Here's my attempt: Suppose the faces whose equations are $x=0$,$x=1$,$y=0$, $y=1$, $z=0$ and $z=1$ are respectively named $S_1$, $S_2$ and so on respectively and let $\boldsymbol{\hat{n}}$ denote the unit vector normal to them. Now on $S_1$, $\boldsymbol{F} = y \boldsymbol{\hat{j}} +z \boldsymbol{\hat{k}}$, $\boldsymbol{\hat{n}}=\boldsymbol{\hat{i}}$. Therefore $\iint_{S_1} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \int_{0}^{1} \int_{0}^{1} y \mathrm{d}y \mathrm{d}z = \frac{1}{2}$. Similarly we have $\iint_{S_2} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \frac{3}{2}$, $\iint_{S_3} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \frac{1}{2}$, $\iint_{S_4} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \frac{1}{2}$, $\iint_{S_5} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = 0$ and $\iint_{S_6} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = 1$. Hence overall we have $\iint_{S} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = 4$. But the answer on the textbook seems to be $2$. I checked everything up and there doesn't seem to be any error on my part but I was wondering how the answer doesn't match up. • Check the signs on your normal vectors. How do the normal vectors for $S_1$ and $S_2$ differ? – Michael Burr Jul 24 '18 at 10:14 In your case, the outward pointing normal vector for $S_1$ is $\langle -1,0,0\rangle$, which changes the sign of your answer. The outward pointing normal vector for $S_2$ remains $\langle 1,0,0\rangle$, so that answer doesn't change. There are two more vectors which need to swap signs, and after that, you'll get $$\left(-\frac{1}{2}\right)+\frac{3}{2}+\left(-\frac{1}{2}\right)+\frac{1}{2}+(-0)+1=2.$$ $$div\boldsymbol{F} = 2\implies \iint_{S} \boldsymbol{F} \cdot \boldsymbol{\hat{n}}\, dS=\iiint_{V} div\boldsymbol{F} \,dV=2 \iiint_{V} \,dV=2$$
2019-09-23T09:17:54
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2861179/question-on-evaluating-the-surface-integral-over-a-cube", "openwebmath_score": 0.9426947236061096, "openwebmath_perplexity": 92.58954247999154, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9905874104123901, "lm_q2_score": 0.8539127510928476, "lm_q1q2_score": 0.8458752208231837 }
http://mathhelpforum.com/math-challenge-problems/81198-missing-intercept-print.html
# The Missing Intercept • March 29th 2009, 03:26 AM Soroban The Missing Intercept The Missing Intercept I posted this puzzle some time ago, but no one provided a satisfactory answer. Given: . $\begin{Bmatrix}x \:=\:\dfrac{1-t^2}{1+t^2} \\ \\[-3mm] y \:=\:\dfrac{2t}{1+t^2} \end{Bmatrix}$ We have the parametric equations of a unit circle. Verification Square: . $\begin{Bmatrix}x^2 \:=\:\dfrac{(1-t^2)^2}{(1+t^2)^2} & [1]\\ \\[-3mm] y^2 \:=\:\dfrac{(2t)^2}{(1+t^2)^2} & [2]\end{Bmatrix}$ Add [1] and [2]: . $x^2+y^2\:=\:\frac{1 - 2t^2 + t^4}{(1+t^2)^2} + \frac{4t^2}{(1+t^2)^2}$ $= \;\frac{1 + 2t^2+t^4}{(1+t^2)^2} \:=\:\frac{(1+t^2)^2}{(1+t^2)^2} \;=\;1$ Hence, a unit circle: . $x^2+y^2\:=\:1$ To find the $y$-intercepts, let $x = 0.$ $x = 0\!:\;\;\frac{1-t^2}{1+t^2}\:=\:0 \quad\Rightarrow\quad t \:=\:\pm1$ Hence, the $y$-intercepts are: . $(0,1),\;(0,\text{-}1)$ To find the $x$-intercepts, let $y = 0.$ $y = 0\!:\;\;\frac{2t}{1+t^2} \:=\:0 \quad\Rightarrow\quad t \:=\:0$ Hence, the $x$-intercept is: . $(1,0)$ ? Where is the other $x$-intercept? • March 29th 2009, 04:16 AM running-gag Hi I don't know which answers have been given previously so I am trying (Happy) Let E the set defined by $\begin{Bmatrix}x \:=\:\dfrac{1-t^2}{1+t^2} \\ \\[-3mm] y \:=\:\dfrac{2t}{1+t^2} \end{Bmatrix}$ for t real By showing that $x^2+y^2=1$ you are proving that E is included in the unit circle, but not that it is equal to the unit circle. And it is not equal since the point (-1,0) is not inside E (there is no value of t such that x=-1 and y=0). Only if you are allowed to consider infinite values of t, you can find this point. • March 29th 2009, 06:15 AM CaptainBlack Quote: Originally Posted by Soroban The Missing Intercept I posted this puzzle some time ago, but no one provided a satisfactory answer. Given: . $\begin{Bmatrix}x \:=\:\dfrac{1-t^2}{1+t^2} \\ \\[-3mm] y \:=\:\dfrac{2t}{1+t^2} \end{Bmatrix}$ We have the parametric equations of a unit circle. Verification Square: . $\begin{Bmatrix}x^2 \:=\:\dfrac{(1-t^2)^2}{(1+t^2)^2} & [1]\\ \\[-3mm] y^2 \:=\:\dfrac{(2t)^2}{(1+t^2)^2} & [2]\end{Bmatrix}$ Add [1] and [2]: . $x^2+y^2\:=\:\frac{1 - 2t^2 + t^4}{(1+t^2)^2} + \frac{4t^2}{(1+t^2)^2}$ $= \;\frac{1 + 2t^2+t^4}{(1+t^2)^2} \:=\:\frac{(1+t^2)^2}{(1+t^2)^2} \;=\;1$ Hence, a unit circle: . $x^2+y^2\:=\:1$ To find the $y$-intercepts, let $x = 0.$ $x = 0\!:\;\;\frac{1-t^2}{1+t^2}\:=\:0 \quad\Rightarrow\quad t \:=\:\pm1$ Hence, the $y$-intercepts are: . $(0,1),\;(0,\text{-}1)$ To find the $x$-intercepts, let $y = 0.$ $y = 0\!:\;\;\frac{2t}{1+t^2} \:=\:0 \quad\Rightarrow\quad t \:=\:0$ Hence, the $x$-intercept is: . $(1,0)$ ? Where is the other $x$-intercept? Not much of a puzzle it's at $t=\pm\infty$ (or rather it's the limit point as $t \to \infty$ ) and there the point is $(-1,0)$. So the curve never reaches the second intercept and the curve is the unit circle with a hole at $(-1,0).$ CB • March 29th 2009, 11:19 AM Soroban Thank you, running-gag and The Cap'n! Those are the answers I was hoping for.
2015-05-23T07:03:32
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/math-challenge-problems/81198-missing-intercept-print.html", "openwebmath_score": 0.953953206539154, "openwebmath_perplexity": 1368.9386977295842, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9905874099955753, "lm_q2_score": 0.8539127473751341, "lm_q1q2_score": 0.84587521678454 }
https://stats.stackexchange.com/questions/330700/posterior-distribution-after-observing-only-difference-in-gaussians
# Posterior distribution after observing only difference in Gaussians Suppose I have two independent random deviates $A$ and $B$ sampled from Gaussian (Normal) distributions with means $\mu_a$ and $\mu_b$ and standard deviations $\sigma_a$ and $\sigma_b$. I can't observe $A$ or $B$ directly, but see only their difference $C = A - B$. Given that I observe $C=c$, what's $Pr\{A = a | C=c\}$ ? Seems like a job for Bayes' rule, and its easy to write down $$Pr\{A=a | c\} = {Pr\{C | A\} Pr\{A\} \over Pr\{C\}}$$ From the assumptions above $Pr\{A\}$ ~ Normal($\mu_a, \sigma_a^2$), and $$Pr\{C | A\}$$ $$= Pr\{C=A-B | A\}$$ $$= Pr\{B = A-C | A\}$$, which is also ~ Normal($\mu_b, \sigma_b^2$) (we've conditioned on $A$, so we just want the probability that $B$ equals some value) ...however this leads to a dense thicket of algebra I can't seem to climb out of. Any handy tricks or references I should examine? Based on simulations the solution is Gaussian, but it's some complex function of $\mu_a, \sigma_a, \mu_b, and \sigma_b$, which I can't seem to derive. Thanks! • Isn't $A$ independent of $C$ and therefore $P(A) = P(A|C)$? – Vivek Subramanian Feb 27 '18 at 0:01 • 1. $\mathbb{P}(A=a|C=c)=0$ since $A$ is a continuous random variable, you need to phrase this in terms of densities. 2. Are the means and standard deviations of $A$ and $B$ known? If they are you just phrase the problem in terms the joint distribution of A and B, transform it to the joint distribution of $C$ and $A$ (this is just a linear transformation), then find the conditional of $A$ given $C$. All of this is doable from the properties of the multivariate normal distribution. – aleshing Feb 27 '18 at 0:35 • @Vivek, since C = A - B, then given a value for C we have some information about what A must have been, so I don't think they're independent (and from simulations they do not seem to be) – user2225493 Feb 27 '18 at 1:55 • @marmle, thanks I've been sloppily using Pr{A=a} to denote probability density, but perhaps p(a)_A would be more clear, apologies – user2225493 Feb 27 '18 at 2:51 Given the specified distributions for $A$ and $B$, you have the initial joint distribution: $$\begin{bmatrix} A \\ B \end{bmatrix} \sim \text{N} \Bigg( \begin{bmatrix} \mu_A \\ \mu_B \end{bmatrix} , \begin{bmatrix} \sigma_A^2 & 0 \\ 0 & \sigma_B^2 \end{bmatrix} \Bigg).$$ Applying the appropriate linear transformation gives the joint distribution of interest: $$\begin{bmatrix} A \\ C \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} A \\ B \end{bmatrix} \sim \text{N} \Bigg( \begin{bmatrix} \mu_A \\ \mu_A - \mu_B \end{bmatrix} , \begin{bmatrix} \sigma_A^2 & \sigma_A^2 \\ \sigma_A^2 & \sigma_A^2 + \sigma_B^2 \end{bmatrix} \Bigg).$$ (As you can see, the random variables $A$ and $C$ are not independent.) Using the standard rules for the conditional distribution of a multivariate normal distribution, we have the conditional distribution $A|C \sim \text{N} (\mu_*(C), \sigma_*^2)$ where: $$\begin{matrix} \mu_*(C) \equiv \mu_A + \frac{\sigma_A^2}{\sigma_A^2 + \sigma_B^2}(C - \mu_A+\mu_B) & & \sigma_*^2 \equiv \frac{\sigma_A^2 \sigma_B^2}{\sigma_A^2 + \sigma_B^2} \end{matrix}.$$ So as you can see, observing $C$ allows you an imperfect glimpse into $A$. If $\sigma_A \gg \sigma_B$ then you get a good predictor of $A$ and if $\sigma_A \ll \sigma_B$ then you get a poor predictor of $A$. • Nice! I was stuck because I was using the transformation matrix $[1 -1]^T$ which gives just the marginal of $C$ rather than the one you used which gives the joint on $A$ and $C$. Makes it easy to find the conditional. – Vivek Subramanian Feb 27 '18 at 5:30 • This matches my simulation results almost perfectly, thank you! – user2225493 Feb 27 '18 at 15:04 Given $A \sim \textsf{N}(\mu_a,\sigma_a^2)$ and $B \sim \textsf{N}(\mu_b,\sigma_b^2)$ where $A$ and $B$ are independent. Let $C = A - B$. The joint distribution of $(A,C)$ is bivariate normal: $$\begin{bmatrix} A \\ C \end{bmatrix} \sim \textsf{N}\left(\begin{bmatrix}\mu_a \\ \mu_a - \mu_b\end{bmatrix}, \begin{bmatrix}\sigma_a^2 & \sigma_a^2 \\ \sigma_a^2 & \sigma_a^2 + \sigma_b^2 \end{bmatrix}\right) .$$ Bayes rule says $$p(A|C) = \frac{p(A,C)}{P(C)}.$$ Therefore, the distribution for $A$ given $C$ is $$A|C \sim \textsf{N}(m,s^2) ,$$ where $$s^2 = \left(\frac{1}{\sigma_a^2} + \frac{1}{\sigma_b^2}\right)^{-1}$$ and $$m = s^2\left(\frac{\mu_a}{\sigma_a^2} + \frac{C + \mu_b}{\sigma_b^2}\right) .$$
2019-10-20T22:25:36
{ "domain": "stackexchange.com", "url": "https://stats.stackexchange.com/questions/330700/posterior-distribution-after-observing-only-difference-in-gaussians", "openwebmath_score": 0.9201036691665649, "openwebmath_perplexity": 267.57870449731695, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.975576914206421, "lm_q2_score": 0.8670357735451835, "lm_q1q2_score": 0.8458600844617873 }
https://stats.stackexchange.com/questions/489075/what-is-the-mean-absolute-difference-between-values-in-a-normal-distribution
# What is the mean absolute difference between values in a normal distribution? I understand that variance is mean of squared differences and that standard deviation is square root of the mean. What, however, is the average difference between values in a normal distribution (without considering the sign, of course, since if we consider the sign, it would be 0)? • In my opinion, it is still zero. Case of very large mean, the absolute value transform is not material and the expected difference remains zero. Instead, consider moving the mean to zero. This implies we have a Truncated Normal distribution (see en.wikipedia.org/wiki/…) which is truncated at the mean (now zero). As this, per Wikipedia, is "a mean preserving contraction" again no effect, the answer remains zero. Sep 25, 2020 at 5:18 • @AJKOER …what? I think you've probably misread the question (and also the Wikipedia article you reference). In particular, you seem to be considering the difference of the absolute values of two i.i.d. normal random variables, whereas the OP is clearly asking about their absolute difference (i.e. $|X-Y|$, not $|X|-|Y|$). Also, as Wikipedia clearly says, "truncation is a mean-preserving contraction combined with a mean-changing rigid shift" (emphasis mine), and thus is not mean-preserving as a whole. Sep 25, 2020 at 17:56 Assume that $$X, Y\sim N(\mu,\sigma^2)$$ are iid. Then their difference is $$X-Y\sim N(0,2\sigma^2)$$. As you write, the expectation of this difference is zero. And the absolute value of this difference $$|X-Y|$$ follows a folded normal distribution. Its mean can be found by plugging the mean $$0$$ and variance $$2\sigma^2$$ of $$X-Y$$ into the formula at the Wikipedia page: $$\sqrt{2}\sigma\sqrt{\frac{2}{\pi}} = \frac{2\sigma}{\sqrt{\pi}}.$$ A quick simulation in R is consistent with this: > nn <- 1e6 > sigma <- 2 > set.seed(1) > XX <- rnorm(nn,0,sigma) > YY <- rnorm(nn,0,sigma) > mean(abs(XX-YY)) [1] 2.257667 > sqrt(2)*sigma*sqrt(2/pi) [1] 2.256758 • Is there an immense gratitude button anywhere on the internet? Sep 25, 2020 at 5:20 • Yes, it's the little checkmark you already clicked - thank you! Sep 25, 2020 at 5:21 • It's the formula for the mean "$\mu_Y$" in the sidebar at the Wikipedia page, where we substitute $\mu=0$ for the mean of $X-Y$ and use $\sqrt{2}\sigma$ in the place of $\sigma$ for the standard deviation of $X-Y$. Sep 25, 2020 at 5:35 • This is a general result on sums of independent normal variables: if $X\sim N(\mu_X, \sigma^2_X)$ and $Y\sim N(\mu_Y, \sigma^2_Y)$ are independent, then $-Y\sim N(-\mu_Y,\sigma^2_Y)$ and $X-Y=X+(-Y)\sim N(\mu_X-\mu_Y, \sigma^2_X+\sigma^2_Y)$. Sep 25, 2020 at 17:37 • @RiteshSingh You can start a bounty for the question, then assign it to Stephan Kolassa if you want. Sep 25, 2020 at 21:38
2022-05-27T05:51:06
{ "domain": "stackexchange.com", "url": "https://stats.stackexchange.com/questions/489075/what-is-the-mean-absolute-difference-between-values-in-a-normal-distribution", "openwebmath_score": 0.9369877576828003, "openwebmath_perplexity": 553.9681503375823, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9755769085257165, "lm_q2_score": 0.8670357718273068, "lm_q1q2_score": 0.8458600778604926 }
https://stats.stackexchange.com/questions/151096/independence-of-sample-mean-and-sample-range-of-normal-distribution/151101
# Independence of Sample mean and Sample range of Normal Distribution Let $X_1,\dots,X_n$ be i.i.d. random variables with $X_1 \sim N(\mu,\sigma^2)$. Let $\bar X =\sum_{i=1}^n X_i/n$ and $R = X_{(n)}-X_{(1)}$, where $X_{(i)}$ is the $i$ the order statistic. Show that $\bar X$ and $R$ are independently distributed. I know that sample mean and sample variance of normal distribution are independent. But this result state that sample mean and sample range of normal distribution are also independent. I know that $\bar X \sim N(\mu,\sigma^2/n)$. • for $n=2$, you can express the sample range $R$ as a function of the sample variance $S^2$, which proves it in that case. May 6 '15 at 17:25 This is evidently a self-study question, so I do not intend to deprive you of the satisfaction of developing your own answer. Moreover, I'm sure there are many possible solutions. But for guidance, consider these observations: 1. When a random variable $X$ is independent of other random variables $Y_1, \ldots, Y_m$, then $X$ is independent of any function of them, $f(Y_1, \ldots, Y_m)$. (See Functions of Independent Random Variables for more about this.) 2. Because the $X_i$ are jointly Normal, $X_1 + \cdots + X_n$ is independent of all the differences $Y_{ij} = X_i - X_j$, since their covariances are zero. Because the range can be expressed as $$X_{(n)} - X_{(1)} = \max_{i,j}(|X_i - X_j|) = \max_{i,j}(|Y_{ij}|)$$ you can exploit (1) and (2) to finish the proof. For more intuition, a quick simulation might be of some help. The following shows the marginal and joint distribution of the mean and range in the case $n=3$, using $10,000$ independent datasets. The joint distribution clearly is not bivariate Normal, so the temptation to prove independence by means of a zero correlation--although a good idea--is bound to fail. However, a close analysis of these results ought to suggest that the conditional distribution of the range does not vary with the mean. (The appearance of some variation at the right and left is due to the paucity of outcomes with such extreme means.) Here is the R code that produced these figures. It is easily modified to vary $n$, vary the simulation size, and to analyze the simulation results more extensively. n <- 3; n.sim <- 1e4 sim <- apply(matrix(rnorm(n * n.sim), n), 2, function(y) c(mean(y), diff(range(y)))) par(mfrow=c(1,3)) hist(sim[1,], xlab="Mean", main="Histogram of Means") hist(sim[2,], xlab="Range", main="Histogram of Ranges") plot(sim[1,], sim[2,], pch=16, col="#00000020", xlab="Mean", ylab="Range") This exercise is an immediate application of Basu's theorem. To begin with, first note that $\bar{X}$ and $R$ are independent if and only if $\sigma^{-1}\bar{X}$ and $\sigma^{-1}R$ are independent, so we may assume $X_1, \ldots, X_n \text{ i.i.d.} \sim N(\mu, 1)$. It is well-known that $\bar{X}$ is a sufficient and complete statistic for $\mu$, so according to Basu's theorem, to show that $\bar{X}$ and $R$ are independent, it remains to show $R$ is ancillary, i.e., $R$'s distribution is independent of $\mu$. This is easily seen by noticing $$R = X_{(n)} - X_{(1)}= (X_{(n)} - \mu) - (X_{(1)} - \mu).$$ Clearly, the distribution of $(X_{(1)} - \mu, \ldots, X_{(n)} - \mu)$ is identical to the distribution of $(Z_{(1)}, \ldots, Z_{(n)})$, where $Z_1, \ldots, Z_n \text{ i.i.d.} \sim N(0, 1)$, which is distribution-constant. Consequently, the distribution of $R$ does not depend on $\mu$. This completes the proof. A well-known property of the normal distribution is that the joint distribution of the differences $$X_i-\overline X$$, $$i=1,2,\ldots,n-1$$ is independent of the distribution of sample mean $$\overline X$$. This is also apparent from the independence of $$\overline X=\frac{1}{n}\sum\limits_{i=1}^n X_i$$ with the sample variance $$\frac{1}{n-1}\sum\limits_{i=1}^n (X_i-\overline X)^2$$, giving us $$\operatorname{Cov}(X_i-\overline X,\overline X)=0$$ for each $$i$$. The joint normality of $$\overline X$$ and $$X_i-\overline X$$ then implies their independence. Since the sample range $$R=\max(X_i-\overline X)-\min(X_i-\overline X)$$, $$i=1,2,\ldots,n$$ is a measurable function of $$X_i-\overline X$$ for each $$i$$, we can justify that $$\overline X$$ is independent of $$R$$. An alternative proof of this result in line with @whuber's answer appears in this paper by J. Daly. We assume without loss of generality that $$\mu=0$$ and $$\sigma^2=1$$. The joint characteristic function of $$\overline X$$ and the $$\frac{n(n-1)}{2}$$ differences $$X_j-X_k$$, $$j, is then $$\varphi(t,t_{jk})=\frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R^n}}\exp\left[-\frac{1}{2}\sum_{j=1}^nx_j^2+i\frac{t}{n}\sum_{j=1}^nx_j+i\sum_{1\le j Completing the square in the exponent and further simplification leads to $$\varphi(t,t_{jk})=\exp\left[-\frac{1}{2}\sum_{j=1}^n\left(\frac{t}{n}+\sum_{k=1}^n\left(t_{jk}-t_{kj}\right)\right)^2\right]$$ , which factors into the marginal characteristic functions $$\varphi(t)\varphi(t_{jk})=\exp\left(-\frac{t^2}{2n}\right).\exp\left[-\frac{1}{2}\sum_{j=1}^n\left(\sum_{k=1}^n\left(t_{jk}-t_{kj}\right)\right)^2\right]$$ Thus the differences $$X_j-X_k$$ are jointly independent of $$\overline X$$. Since the sample range $$R=\max|X_j-X_k|$$ is a measurable function of these differences, it follows that $$\overline X$$ and $$R$$ are independently distributed. Show that $\mathbf{X}=(X_1,\dots,X_n)' \sim N_n(\mathbf{\mu},\Sigma)$ where $\mathbf{\mu}=(\mu,\dots,\mu)'$ and $\Sigma = \begin{pmatrix} \sigma^2 & 0 & \dots & 0 \\0 & \sigma^2 & \dots & 0 \\ \vdots & \vdots & &\vdots & \\ 0 & 0 & \dots & \sigma^2\\ \end{pmatrix}$ Now note that $\bar X= (\frac{1}{n},\dots,\frac{1}{n})\mathbf{X}= \mathbf{a'}\mathbf{X}$. Also WLOG take $X_1=X_{(1)}$ and $X_n= X_{(n)}$. Then $R= (-1,0,\dots,0,1)\mathbf{X}= \mathbf{b'}\mathbf{X}$ Then note that $$\mathbf{a'}\Sigma \mathbf {b} = -\dfrac{\sigma^2}{n}+\dfrac{\sigma^2}{n} = 0$$ This implies sample mean and sample range of normal distribution are independent. • It appears you are invoking an untrue theorem: the lack of correlation between two random variables does not imply independence unless they are jointly normal--but these are not. The range does not have a Normal distribution. – whuber May 6 '15 at 17:27 • The range is not a linear transformation, @AdamO, because neither $X_{(n)}$ nor $X_{(1)}$ are linear functions of the $X_i$. (Isn't it clear that $\max$ and $\min$ are nonlinear functions?) – whuber May 6 '15 at 17:32 • I have reluctantly downvoted this answer because it is incorrect and you have been unable to recognize or acknowledge that, which only risks confusing the OP and other readers. – whuber May 6 '15 at 17:39 • @whuber I think I was trying (unsuccessfully) to allude to the WLOG argument being wrong. It is not right to assume that $X_1 = X_{(1)}$. It is an interesting question. I will mull about it a bit. May 6 '15 at 17:49 • The distribution of $X_{(n)}$ is given by the following argument: ${\rm Prob}[ X_{(n)} \le x] = {\rm Prob} [\mbox{ all of } X_1, \ldots, X_n \le x] = \bigl \{ {\rm Prob}[ X_1 \le x] \bigr\}^n$. From that, you can derive its cdf. Reverse the argument for the distribution of $X_{(1)}$. That is to show, at the very least, that $X_{(n)}$ is not normal. May 6 '15 at 19:41
2021-09-19T11:32:41
{ "domain": "stackexchange.com", "url": "https://stats.stackexchange.com/questions/151096/independence-of-sample-mean-and-sample-range-of-normal-distribution/151101", "openwebmath_score": 0.9204487204551697, "openwebmath_perplexity": 190.12299156170542, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9755769156265969, "lm_q2_score": 0.867035763237924, "lm_q1q2_score": 0.8458600756376062 }
https://crypto.stackexchange.com/questions/72777/what-is-the-difference-between-computational-complexity-and-time-complexity/72779
# What is the difference between computational complexity and time complexity? Computational complexity seems to be used quite a lot in cryptographic papers. The time complexity I am referring to is the one from Computational Complexity Theory. Are these two the same things? There are many different cost models for computation, of which time (measured by some clock) is only one. • How many seconds must we wait to run this algorithm? [time, measured in seconds] • How many CPU cycles does it cost run this algorithm? [time, measured in CPU cycles] • How many bits of memory does this algorithm use? [space] • How many NAND gates does it take to represent this algorithm as a logic circuit? [NAND metric] • How many bits are in the description of the program, how many bits of memory does it use, and how long does it take to run assuming a bit operation and a memory reference each take one unit of time? [RAM metric] • How large a silicon die does it take to run this algorithm, and for how long must we power it? [AT metric] • How many joules of energy will it cost to run this algorithm? [energy] • How many yen does it cost to run this algorithm? [pocketbook] The answer to each of these questions may be a complicated function of the size of the input, or of the input itself. For example, the worst-case RAM cost of quicksort is a quadratic polynomial function of the input size, $$an^2 + bn + c$$, for some coefficients $$a$$, $$b$$, and $$c$$ that depend on exactly how we write it, while on optimal inputs the RAM cost is $$u n + v$$. Complexity theory is usually not concerned with the coefficients $$a$$, $$b$$, and $$c$$ but with the degree of the polynomial, $$O(n^2)$$ vs. $$O(n)$$, or other qualitatively different shapes of growth curves like $$O(2^n)$$, $$O(\log n)$$, $$O(A^{-1}(n))$$ where $$A(n)$$ is the Ackermann function, etc., in whichever cost model you're considering. Computational complexity may refer to any of the cost models; time complexity usually just refers to the time-based ones—for example, the time complexity of heap sort is $$O(n \log n)$$ while the space complexity is $$O(n)$$, assuming memory access cost is constant, yet in the more realistic AT metric the best-known cost of sorting a length-$$n$$ array of $$n$$-bit numbers is $$n^{1.5 + o(1)} = (n\sqrt n)^{o(1)}$$ owing in part to communication costs on a silicon mesh. The last three cost models are the really important ones for studying cryptanalytic attacks, because they are connected to real-world economic costs of attacks: the AT metric, which is easy to formalize and study for algorithms, is a good proxy for the energy cost, and energy cost essentially determines pocketbook cost. • It's still quite unclear as to the difference between computational complexity and time complexity from your answer. I understand that there are many different cost models from your answer – WeCanBeFriends Aug 23 '19 at 18:13 • @WeCanBeFriends computational complexity is a wider definition that encompass time complexity (AFAICT) and also other complexities. It's the sum of all practical complexities affecting your cost and speed of computation. – Natanael Aug 23 '19 at 19:35 • @WeCanBeFriends: The difference between computational complexity and time complexity is the same as the difference between a vehicle and a Toyota Camry. – Jörg W Mittag Aug 26 '19 at 15:21 In short: Yes. Complexity theory makes a distinction, where you don't care about time and only limit space. And there are still interesting differences there in theory. That case isn't considered when cryptographers talk about computational complexity, because in that scenario, a Brute Force algorithm will always win - which is not very interesting. Btw., keep in mind, limiting time automatically limits space. For example you can not read or write exponentially many bits when the time complexity is polynomial. • To re-iterate; so cryptographers, disregard space complexity for the most part, and so computational complexity can be seen as time complexity? You mentioned "in short", so I'm guessing that I cannot simply replace all mentions of "computational complexity" with "time complexity" without losing meaning? – WeCanBeFriends Aug 23 '19 at 18:16 • @WeCanBeFriends usually yes. Exceptions involve memory hard functions for password hashing and similar techniques. – Natanael Aug 23 '19 at 19:37 • Cryptographers do not generally disregard space complexity. Sometimes they do, but that often leads to wildly unrealistic cost estimates for algorithms like the alleged MD5 preimage attack which are actually far more expensive than generic attacks. A bound on time implies a bound on space, but the converse is not true. – Squeamish Ossifrage Aug 24 '19 at 0:11 • @WeCanBeFriends I would say, cryptographers care about the overall picture and realistic settings (with fairly large margins over the current global limits of computation and known algorithms). The mindset 'disregard one aspect for theory's purpose' is only part of the mindset in complexity theory. That's why I stated, you can't disregard space in either field - only limiting time and not space makes no sense in either field (unless you allow reading and writing unlimited information per operation, like working with numbers with infinite precision. Which isn't realistic in classical computing) – tylo Aug 24 '19 at 22:25
2020-12-05T08:38:06
{ "domain": "stackexchange.com", "url": "https://crypto.stackexchange.com/questions/72777/what-is-the-difference-between-computational-complexity-and-time-complexity/72779", "openwebmath_score": 0.6384656429290771, "openwebmath_perplexity": 878.7482104298894, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9755769099458929, "lm_q2_score": 0.8670357649558006, "lm_q1q2_score": 0.8458600723881534 }
https://math.stackexchange.com/questions/1912548/prove-that-r-is-an-equivalence-relation
# prove that $R$ is an equivalence relation $$\forall a,b \in \mathbb{Q} \quad aRb \Leftrightarrow \quad \exists k \in \mathbb{Z}: \quad b=2^ka$$ 1) Reflexivity: $\forall a \in \mathbb{Q}\quad aRa \Leftrightarrow \quad \exists k \in \mathbb{Z}: \quad a=2^ka$ choosing $k=0 \quad \Rightarrow a=2^0a=a \Rightarrow aRa \Rightarrow R \text{ is reflexive}$ 2) Symmetry 3) Transitivity: $\forall a,b,c \in \mathbb{Q}:$ $$aRb \Leftrightarrow \quad \exists k \in \mathbb{Z}: \quad b=2^ka$$ $$bRc \Leftrightarrow \quad \exists h \in \mathbb{Z}: \quad c=2^hb$$ Then $$aRc \Leftrightarrow \quad \exists p \in \mathbb{Z}: \quad c=2^pa$$ so $aRb,bRc \Rightarrow c=2^hb=2^h2^ka=2^{h+k}a$ choosing $p=k+h \Rightarrow c=2^pa \Rightarrow aRc \Rightarrow \text{ R is transitive}$ Can anyone confirm that 1) and 3) are correct? I tried to prove 2) but is ended like transitive proof and I think that is entirely wrong, I have no idea how to succeed, can anyone provide some hints/proof/solution?. thanks in advance • (1) and (3) look correct to me. Also, for the symmetric proof, note that $aRb \to b = 2^k a \to a = 2^{-k}b$, also note that if $k$ is an integer, then $-k$ is also an integer, and therefore $aRb \to bRa$ – Rob Bland Sep 2 '16 at 21:30 • In (3) you don't necessarily have the same $\;k\in\Bbb Z\;$ for both cases. Yet afterwards you uszxe $\;h,k\;$ so it is fine. – DonAntonio Sep 2 '16 at 21:32 • @DonAntonio I'm guessing this was a typo on the part of the OP. – 211792 Sep 2 '16 at 21:35 • yes was a typo error, just fixed. thanks to everyone! – Alfonse Sep 2 '16 at 21:59 • @Alfonse, I think you will also want to change $c=2^k a$ to $c=2^p a$ ? – user326210 Sep 2 '16 at 22:19 For symmetry, note that if $a R b$, then there is some $k$ for which $b = 2^k a$. But then $a = 2^{-k} b$; hence $b R a$. (Indeed, there exists an $\ell \equiv -k$ for which $a = 2^\ell b$.) Your proofs for parts (1) and (3) are correct. For symmetry, suppose $aRb$, so that $$b = 2^ka$$ for some $k\in\mathbf{Z}$. Can you think of an integer $l$ so that $$a = 2^lb?$$ (Hint: Remember that negative integers are integers too!) Once you have such an integer, you can conclude that $bRa$, meaning that $R$ is symmetric.
2021-05-09T14:42:09
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1912548/prove-that-r-is-an-equivalence-relation", "openwebmath_score": 0.8881821036338806, "openwebmath_perplexity": 294.22916266001965, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9755769106559808, "lm_q2_score": 0.8670357563664174, "lm_q1q2_score": 0.8458600646242211 }
https://math.stackexchange.com/questions/733713/showing-a-piece-wise-function-is-differentiable-everywhere-clarification
# Showing a piece-wise function is differentiable everywhere/Clarification Just a thought and a concept which I need to be clarified. Given a function $f:\mathbb{R} \rightarrow \mathbb{R}$ Now my understanding of the term "differentiable everywhere" means it is differentiable at all points in its domain $\mathbb{R}$? If this is incorrect then can someone give me a simple to understand definition? Now consider a piece-wise function (I just made this up from my head, not sure if it is differentiable everywhere) with, $$f(x) = \left\{\begin{array}{cc}x^2 & \text{if }x\geq 0 \\ -x^2& \text{if }x < 0\end{array}\right.$$ How would we show that it's "differentiable everywhere". Also, if we were to find $f'(x)$ then would it simply taking each derivative in each of the case? Or do we need to consider something else? • where do you think your function is possibly not differentiable?? Is $f(x)=x^2$ differentiable in $(0,\infty)$ – user87543 Mar 31 '14 at 11:06 • See this answer: math.stackexchange.com/a/47978/7327 You have a criteria to verify when a function defined on branches is differentiable. – Beni Bogosel Mar 31 '14 at 11:34 • In addition to notes in answers below, you can show $f$ is differentiable at $0$ most directly by computing the limit of $(f(h) - f(0))/h$ as $h\rightarrow 0$. – Jason Zimba Mar 31 '14 at 13:03 • Sorry don't know latex when on comment box. So, for example if my piecewise function f was defined to be x^3 for x is larger than or equal to 0 and -x^3 for x is less than 0. Then f' would be 3x^2 for x > 0, -3x^2 for x < 0 and 0 for x = 0? – Bobby Mar 31 '14 at 14:36 • @Bobby That's right. – Jason Zimba Mar 31 '14 at 20:34 As you say, $f \,:\, A \to \mathbb{R}$ with $A \subset \mathbb{R}$ means that for every $x \in A$, the derivative $f'(x)$ exists. In other words, for every $x \in A$, the limit $$f'(x) = \lim_{y \to x} \frac{f(y) - f(x)}{y-x}$$ exists. Note that, due to the way limits works, whether or not $f'(x)$ exists depends only on the behaviour of $f$ in the immediate vicinity of $x$. More formally, if you have two functions $f$ and $g$, and some $\epsilon$ such that $f(y) = g(y)$ for all $y \in (x-\epsilon,x+\epsilon)$, then $f'(x) = g'(x)$. Since you probably already know that the derivatives of $x \to x^2$ and $x \to -x^2$ exist on the whole real line, you therefore know that the derivative of $$f(x) = \begin{cases} x^2 &\text{if x\geq 0} \\ -x^2 &\text{if x < 0} \end{cases}$$ exists everywhere except at $x=0$. Because if $x \neq 0$, then $f(x) = x^2$ or $f(x) = -x^2$ on some small interval around $x$. So all you have to do is to decide whether or not the derivate at $x=0$ exists or not. The first step is to check that $f$ is continuous at $x=0$. Since both $x^2$ and $-x^2$ are continuous $0$, it suffices to check that they take the same value at $x=0$ - and they do, $0^2 = -0^2$ after all. Note that being continuous at a point is a necessary (but not sufficient!) condition for being differentiable at that point. So had the continuity check failed, you could have immediately concluded that $f$ is not differentiable at $0$. Since $f$ is continuous at $x=0$, and since the derivative of both $x^2$ and $-x^2$ also exists at $x=0$, it's sufficient for the derivative of $x^2$ and $-x^2$ to also take the same value at $x=0$ for $f$ to be differentiable there. And, indeed, they do - $\frac{d}{dx}(x^2) = 2x$ and $\frac{d}{dx}(-x^2) = -2x$, which at zero both take the value $0$. Thus, your $f$ is indeed differentiable everywhere. Derivability is a local property, so: In the open interval $(-\infty,0)$ $f$ is equal to the derivable function $x\mapsto -x^2$. In the open interval $(0,\infty)$ $f$ is equal to the derivable function $x\mapsto x^2$. Only need to check the derivability in $x=0$ using the definition of derivative.
2019-12-10T19:50:01
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/733713/showing-a-piece-wise-function-is-differentiable-everywhere-clarification", "openwebmath_score": 0.9219689965248108, "openwebmath_perplexity": 146.54860253938017, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9755769071055403, "lm_q2_score": 0.8670357546485407, "lm_q1q2_score": 0.8458600598699414 }
http://jakehoggans.co.uk/ug7ws71k/h2ernw4.php?id=039108-kurtosis-and-skewness-cutoffs
A high kurtosis distribution has a sharper peak and longer fatter tails, while a low kurtosis distribution has a more rounded pean and shorter thinner tails. The first thing you usually notice about a distribution’s shape is whether it has one mode (peak) or more than one. This is surely going to modify the shape of the distribution (distort) and that’s when we need a measure like skewness to capture it. For this data set, the skewness is 1.08 and the kurtosis is 4.46, which indicates moderate skewness and kurtosis. A negative skew indicates that the tail is on the left side of the … B{âçæA®pIkŒDˀ‰m ¢gï«È' “õ:Bµ01´÷=‰Edú1¾0»H—k:{ÂòûeQåT×Ù^´3˜ûæ;öý-†wãÄyC(¾%bß,ëK0ñ ™­Èó@8é¤u 4퉔/amF;E;ogé,²Èù.´CžaåYYÙXÞ7Þ9 p­NøÊÉ«'@pßc°¸rüWàyßÈú%S{_ϝ´ç¡=Ás;•T×¾~ÄÏ-¾°P Ëyª1»¡S&?8“ì&ûóC»àec=âºUƒ_%ËZ!ÂKˆíƗ̓ãayìë૓Ö!ðÕ ´'¾®rUø„eÂ׉%Z&Nìô´v3'_ATô¯%TËS֐rë"I¢—‘jqiâÇ1âë µ›ÖÚ$´'RZb}iô úx¤Ù|(ÂÁÃNœÚY£ÄèE_'¨z°!¦e±äÛíE”Þfᗛq0Âô¹ðOªÆ¡ª˜C,%e©’÷ŽÕÙN4ü[É)•É>£ÿKŸï(ïHoyFÊ+.íF®QÒ7® Most commonly a distribution is described by its mean and variance which are the first and second moments respectively. Skewness is a measure of the symmetry in a distribution. The kurtosis can be derived from the following formula: $$kurtosis=\frac{\sum_{i=1}^{N}(x_i-\bar{x})^4}{(N-1)s^4}$$. 11, 11, 10, 8, 13, 15, 9, 10, 14, 12, 11, 8 ii. Kurtosis quantifies the distribution’s “tailedness” and conveys the corresponding phenomenon’s tendency to produce values that are far from the mean. But if you have just a sample, you need the sample skewness: sample skewness: source: D. N. Joanes and C. A. Gill. Notice that the green vertical line is the mean and the blue one is the median. The question arises in statistical analysis of deciding how skewed a distribution can be before it is considered a problem. Kurtosis. The only data values (observed or observable) that contribute to kurtosis in any meaningful way are those outside the region of the peak; i.e., the outliers. (Hair et al., 2017, p. 61). There are many different approaches to the interpretation of the skewness values. Leptokurtic (Kurtosis > 3): Distribution is longer, tails are fatter. If skewness is between -1 and -0.5 or between 0.5 and 1, the distribution is moderately skewed. Also at the e1071 the formula is without subtracting the 1from the (N-1). It indicates the extent to which the values of the variable fall above or below the mean and manifests itself as a fat tail. It can be mathematically defined as the averaged cubed deviation from the mean divided by the standard deviation cubed. “À"•kfÏIÑe ºÁsTJQ¨­j £‹ š+Ò ÖêJ¦(Úµ°= ¥L2­– *®NÓ Kurtosis tells you the height and sharpness of the central peak, relative to that of a standard bell curve. How well these measures reflect one's intuitive idea of skewness is examined. Islamic University of Science and Technology In SPSS, the skewness and kurtosis statistic values should be less than ± 1.0 to be considered normal. Skewness essentially measures the relative size of the two tails. Strictly Necessary Cookie should be enabled at all times so that we can save your preferences for cookie settings. Traducciones en contexto de "skewness" en inglés-español de Reverso Context: Four moments computed from a dataset determine a PearsonDistribution whose type depends on values of skewness squared and kurtosis. It is a symmetrical graph with all measures of central tendency in the middle. Skew, or skewness. Skewness and Kurtosis 0 2 4 6 8 10 12 14 16 18 0 5 10 15 20 Platokurtic Mesokurtic Leptokurtic Fig.4.4: Platykurtic Curve, Mesokurtic Curve and Leptokurtic Curve 4.4.1 Measures of Kurtosis 1. We are using cookies to give you the best experience on our website. These are normality tests to check the irregularity and asymmetry of the distribution. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful. Therefore, The skewness can be calculated from the following formula: $$skewness=\frac{\sum_{i=1}^{N}(x_i-\bar{x})^3}{(N-1)s^3}$$. ùYe³›*~.²v•$#ð_>ñŒU\»¤@Ý\ʒd^dªˆ"29”UJ %X£v±îYsd‘IâÍh5µ¨ïž›ð°—ÝR’º7‹ *Õõõ_,[}3ÇvČr^É%EÍ/ ,{Á,¿1Øè¦«. Let’s see the main three types of kurtosis. Any standardized values that are less than 1 (i.e., data within one standard deviation of the mean, where the “peak” would be), contribute virtually nothing to kurtosis, since raising a number that is less than 1 to the fourth power makes it closer to zero. less than 3) since the distribution has a lower peak. > #kurtosis Skewness & Kurtosis Simplified. If you disable this cookie, we will not be able to save your preferences. Generally, we have three types of skewness. “Comparing Measures of Sample Skewness and Kurtosis”. Karl Pearson’s Measures of Kurtosis For calculating the kurtosis, the second and fourth central moments of … '¼:$°‚Õa“О/šÿªÈÑâú¡GU¤¾tn¾¡¡„А¢°×‰«rTp ãqëŒV~"‹ø^¿~:i? # By default it caclulates the excess kurtosis so you have to add 3 Skewness. "When both skewness and kurtosis are zero (a situation that researchers are very unlikely to ever encounter), the pattern of responses is considered a normal distribution. We will show three cases, such as a symmetrical one, and one positive and negative skew respectively. The graph below describes the three cases of skewness. Below is a symmetrical dataset will have a Sample or a population: skewness to describe the “ ”... Kurtosis clearly indicate that data are heavy-tailed or profusion of outliers and second respectively! Depend on normality assumptions, p. 61 ) the question arises in analysis... Preferences for cookie settings or profusion of outliers the two tails skewness taking on. Financial returns is not i.i.d website uses cookies so that we can save your preferences for cookie.! Data-Generating process to calculate the kurtosis ( fourth moment ) and the (! That every time you visit this website uses cookies so that we can say that these two statistics you. Are the first and second moments respectively statistics — skewness and kurtosis ” at the e1071 formula... It can be before it is also known as a symmetrical dataset will a! Is 4.46, which indicates moderate skewness and kurtosis main three types of.! Between 0.5 and 1, the distribution is approximately symmetric tails are fatter with the best user experience possible,. Best experience on our website course Basic statistics - FRM descriptive statistics — and. By its mean and variance which are the skewness ( third moment ) and the kurtosis ( moment! We define the excess kurtosis ( fourth moment ) and the kurtosis ( 超值峰度 ) predict the prices! This lesson is part 2 of 3 in the course Basic statistics - FRM indicates moderate and... We are using or switch them off in settings: skewness and intervals depend on normality.... Enable or disable cookies again second moments respectively the normal distribution since distribution! Idea of skewness is a measure of the standardized data raised to the mean in. 1.08 and the kurtosis ( fourth moment ) provide you with the best experience our. Most commonly a distribution is moderately skewed of it dataset will have a Sample a! This means that the distribution has a lower and wider peak and tails! Another less common measures are the first and second moments respectively kurtosis is the.... 8, 13, 15, 9, 10, 14,,... About the “ tailedness ” of the asymmetry of a standard bell curve cubed from... Three types of kurtosis for symmetric distributions how well these measures of kurtosis for symmetric distributions is by... Two statistics give you insights into the shape of the asymmetry of a can. And 0.5, the distribution as it describes the three cases of skewness are to... Will need to enable or disable cookies again or the “ heaviness ” of the important concepts in statistics... Buscador de traducciones en español ãqëŒV~ '' ‹ø^¿~: I '' ‹ø^¿~: I of. ¼:$ °‚Õa“О/šÿªÈÑâú¡GU¤¾tn¾¡¡„А¢°×‰ « rTp ãqëŒV~ '' ‹ø^¿~: I which cookies we are using cookies to give insights... Moments respectively distribution of financial returns is not i.i.d and manifests itself as a bell curve Mesokurtic! Is between -0.5 and 0.5, the skewness ( third moment ), relative kurtosis and skewness cutoffs that of a can... Taking values on ( ‐1, 1 ) are discussed we use the kurtosis measure describe. Positively skewed, p. 61 ) and thinner tails different approaches to the and! 12, 11, 10, 8, 13, 15, 9, 10 8... Best experience on our website two of the distribution is approximately symmetric user possible... Blue one is the median, multiply this number by three and then divide by the deviation! Standard deviation, skewness and kurtosis, p. 61 ) cases of skewness is a measure of skewness is and. Intervals depend on normality assumptions tails or the “ peakedness ” video explaining what skewness... Profusion of outliers extent to which the values of the symmetry in a previous post, explained. Statistical tests and intervals depend on normality assumptions a distribution is approximately.. Time I comment give you the height and sharpness of the symmetry in a distribution contienen “ skewness and ”. Important concepts in descriptive statistics — skewness and kurtosis three distribution is positive skew: the.... Is positive skew: the beta distribution with hyper-parameters α=5 and β=2 the! — skewness and kurtosis of some cases kurtosis and skewness cutoffs as expected we get negative. Al., 2017, p. 61 ) of predicting stock prices using machine learning models you can create a capable! Green vertical line is the average of the standardized data raised to the power. Of three distribution all times so that we define the excess kurtosis as kurtosis minus 3 if you disable cookie! Distribution since the normal kurtosis and skewness cutoffs will have a Sample or a population skewness! How well these measures reflect one 's intuitive idea of skewness is examined cases, such as a bell.. This lesson is part 2 of 3 in the middle kurtosis minus.... Heaviness ” of the asymmetry of the asymmetry of the “ tailedness ” of the standardized data raised the. Is described by its mean and variance which are the skewness indicates how much our underlying deviates!, 11, 10, 8 ii whereas skewness measures symmetry in a distribution is by... Then divide by the standard deviation cubed several extensions of the two tails data raised to the fourth power skewness! If the result of the distribution you have a skewness equal to 0 them off in settings able save... Standardized data raised to the interpretation of the asymmetry of a distribution longer. Distribution has a sharper peak assess certain kinds of deviations from normality of your data-generating.! The kurtosis ( fourth moment ) and the kurtosis ( fourth moment ) and measures. Below the mean and the blue one is the mean divided by the standard deviation extent to which the of! Will show you how you can create a model capable of predicting stock prices so, a distribution! Using cookies to give you insights into the shape of the skewness ( third moment ) Diccionario... From the normal distribution has skewness 0 greater than 3 ) since distribution. Skewness equal to 0 this data set, the distribution of financial returns is i.i.d! Of predicting stock prices statistics — skewness and kurtosis clearly indicate that are. ‹Ø^¿~: I one 's intuitive idea of skewness taking values on ( ‐1 1. Into the shape of the skewness of 0 statistics - FRM tails or the “ peakedness ” of distribution., and one positive and negative skew respectively sharpness of the central peak, to! A fat tail minus 3 three types of kurtosis than 3 ) since the distribution! You can find out more about which cookies we are using or switch them off in.! The question arises in statistical analysis of deciding how skewed a distribution, kurtosis measures the “ peakedness ” peak. The computation is greater than 3 ) since the distribution of financial returns is not i.i.d two of symmetry..., 12, 11, 8, 13, 15, 9 10... Wider peak and thinner tails at all times so that we can provide you with the best experience on website... Not normal ) and the kurtosis of some cases: as expected we a... Line is the mean divided by the standard deviation positive and negative skew respectively between -1 -0.5... De traducciones en español the symmetry in a distribution 2017, p. 61.. Can save your preferences is not i.i.d the important concepts in descriptive statistics — skewness and.... Need to enable or disable cookies again will show, we will show three of... From the normal distribution since the distribution of financial returns is not i.i.d, 1 ) discussed... We can provide you with the best experience on our website as minus... For this quantity we subtract the mode from the mean therefore, kurtosis measures the “ ”... That every time you visit this website uses cookies so that we can say that these two statistics you. Article, we will go through two of the skewness is between -1 and -0.5 or between 0.5 and,! Taking values on ( ‐1, 1 ) are discussed much our underlying distribution deviates the. Bell curve for cookie settings will have a skewness equal to 0 is examined you visit this website you need... Measures of skewness, kurtosis measures outliers only ; it measures nothing about the “ tailedness of! Of symmetry with respect to the mean and variance which are the skewness is between -0.5 and,. And one positive and negative skew respectively measures symmetry in a distribution save my name, email, and positive! Longer, tails are fatter define the excess kurtosis as kurtosis minus 3 predicting stock prices using learning. And then divide by the standard deviation cookies so that we can you. Data-Generating process as it describes the shape of the distribution is approximately symmetric cookies again,. Lower and wider peak and thinner tails median, multiply this number by three and then divide the. The skewness ( third moment ) and the kurtosis of some cases: as we. Lack of symmetry with respect to the mean and variance which are the skewness values doesn! This data set, the distribution has skewness 0 able to save your preferences ãqëŒV~ ‹ø^¿~! Visit this website you will need to enable or kurtosis and skewness cutoffs cookies again data are not.! If skewness is between -0.5 and 0.5, the distribution is moderately.... Expected we get a positive excess kurtosis ( fourth moment ) and the kurtosis ( fourth moment ) irregularity. Is 1.08 and the kurtosis is 4.46, which means that the values! Yamaha Yas-280 Saxophone, Funny Cross Stitch, Legal Status Of Minor, Uber Safety Tips Covid, Nuby Super Spout Replacement, Jute Bags Pdf, Whole Grain Mustard Vinaigrette, Bucyrus Telegraph-forum Archives,
2021-06-15T15:09:35
{ "domain": "co.uk", "url": "http://jakehoggans.co.uk/ug7ws71k/h2ernw4.php?id=039108-kurtosis-and-skewness-cutoffs", "openwebmath_score": 0.6825377941131592, "openwebmath_perplexity": 1588.102579416934, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9755769071055402, "lm_q2_score": 0.8670357546485407, "lm_q1q2_score": 0.8458600598699413 }
http://math.stackexchange.com/questions/295821/finding-lim-limits-x-y-to-0-0-fracxy-sqrtx2-y2
# Finding $\lim\limits_{(x,y) \to (0,0)} \frac{|xy|}{\sqrt{x^2 + y^2}}$ How does one find the limit of $$\lim\limits_{(x,y) \to (0,0)} \dfrac{|xy|}{\sqrt{x^2 + y^2}}$$? Can someone justify the steps they make? The answers in my book involves using some smart inequality that I've never seen before and could only say it resembles the AM-GM inequality - Is the denominator $s^2$ or $x^2$? Also, try converting to polar coordinates; not sure if it will work, but it may. –  Daryl Feb 5 '13 at 22:22 It is $x^2$ sorry and yes! Polar coordinates seem to does the trick! Thanks –  sidht Feb 5 '13 at 22:28 I will post an answer below as well for completeness. –  Daryl Feb 5 '13 at 22:29 Yeah thanks and I'll accept it afterwards. –  sidht Feb 5 '13 at 22:31 Transforming to polar coordinates, $x=r\cos\theta$ and $y=r\sin\theta$, gives the limit $$\lim\limits_{r\rightarrow0^+}\frac{r^2|\sin(2\theta)|}{2r},$$ which is easily evaluated to be $0$. - Does it really matter if $r \to 0^+$ or $-$? –  sidht Feb 5 '13 at 22:36 No. However mathematically, since $r=\sqrt{x^2+y^2}\geq 0$, technically $\lim\limits_{r\rightarrow0}f(r)$ doesn't exist as $r$ is not defined for negative numbers. Hence, I included explicitly that it is for positive $r\rightarrow0$. –  Daryl Feb 5 '13 at 22:41 I thought by convention $r$ can be negative because $r^2 = x^2 + y^2$ –  sidht Feb 5 '13 at 22:43 $r$ can never be negative. $r = \sqrt{x^2+y^2}$ by definition, and hence by definition of $\sqrt{(\cdot)}$, it must be positive. $r^2 = x^2+y^2$ by consequence of the definition of $r$. –  Arkamis Feb 5 '13 at 23:07 There are two conventions for graphing polar curves. They go as follows. (i) $r=-2$, $\theta=\pi/3$ is not allowed or (ii) the point $r=-2$, $\theta=\pi/3$ is the point obtained thus: graph $r=2$, $\theta=\pi/3$ as usual, then reflect the result in the origin, or equivalently rotate by a half-turn. So in a homework problem, one has to be aware of which convention is the one used in the course. In our case, we can choose either convention, and $r\ge 0$ is more convenient. –  André Nicolas Feb 5 '13 at 23:08 show 1 more comment Hint: Note that, $$|x|=\sqrt{x^2}\leq \sqrt{x^2+y^2},$$ and the same for $|y|$. -
2014-03-10T07:21:08
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/295821/finding-lim-limits-x-y-to-0-0-fracxy-sqrtx2-y2", "openwebmath_score": 0.9722152948379517, "openwebmath_perplexity": 572.784253686311, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9755769078156284, "lm_q2_score": 0.8670357477770337, "lm_q1q2_score": 0.8458600537819297 }
https://math.stackexchange.com/questions/2165854/simplifying-a-quartic-equation
Simplifying a quartic equation I have the following function to simplify and solve but there definitely is something wrong with my method as the initial conditions do not work with my final result so if anyone could pinpoint what I'm doing wrong, I would really appreciate it. Solving: $\frac{(N-0.5)^4}{N^2(-N+1)^2}=Ae^t$ where A is a constant. Essentially to solve for N my tutor recommended using the substitution $k=N-\frac{1}{2}$. $\Rightarrow \frac{k^4}{(k+\frac{1}{2})^2(-k+\frac{1}{2})^2}=Ae^t$ $\Rightarrow \frac{k^2}{(k+(\frac{1}{2})(-k+\frac{1}{2})}=\sqrt{Ae^t}$ $\Rightarrow k^2=(k+\frac{1}{2})(-k+\frac{1}{2})\sqrt{Ae^t}$ $\Rightarrow k^2=(\frac{1}{4}-k^2)\sqrt{Ae^t}$ $\Rightarrow (1-\sqrt{Ae^t})k^2-\frac{1}{4}\sqrt{Ae^t}=0$ Then solving this like a quadratic gave me: $k= \pm\frac{(\sqrt{Ae^t})^\frac{1}{4}}{2\sqrt(Ae^t)^\frac{1}{4}+1}$ Subbing back $N$ we get the following formula: $N= \frac{1}{2}\pm\frac{(\sqrt{Ae^t})^\frac{1}{4}}{2\sqrt(Ae^t)^\frac{1}{4}+1}$ However, I was given the initial condition $N(0)=2$ which does not hold for either of my equations, so I have gone wrong somewhere but I'm not sure where personally. Any insight would be much appreciated. • Mathematica gives the answer instantly: $\left\{\frac{1}{2} \left(1-\sqrt{\frac{1-\sqrt{a} e^{t/2}}{a e^t-1}+1}\right),\frac{1}{2} \left(\sqrt{\frac{1-\sqrt{a} e^{t/2}}{a e^t-1}+1}+1\right),\frac{1}{2} \left(1-\sqrt{\frac{\sqrt{a} e^{t/2}+1}{a e^t-1}+1}\right),\frac{1}{2} \left(\sqrt{\frac{\sqrt{a} e^{t/2}+1}{a e^t-1}+1}+1\right)\right\}$ and $A=1$ – David G. Stork Feb 28 '17 at 19:51 • What's $\,N(0)\,$ supposed to mean? – dxiv Feb 28 '17 at 19:54 • if $N(0) = 2,$ you are going to have $k > (1/2)$ so need to change the step of taking square roots – Will Jagy Feb 28 '17 at 19:55 • @dxiv $N(t)=2$ so when $t=0$ my equation should equal $2$, apologies. – Evan Feb 28 '17 at 19:56 • @dxiv pretty sure this is solving a first order ODE where $N = N(t).$ The main error is that $N(0) = 2$ requires $\sqrt {(N-1)^2} = N-1$ – Will Jagy Feb 28 '17 at 19:58 Beginning with $$\frac{k^4}{\left(k^2-\frac{1}{4}\right)^2}=Ae^t$$ we get $$\frac{k^2}{\left(k^2-\frac{1}{4}\right)}=\pm\sqrt{Ae^t}$$ Solving for $k^2$ gives $$k^2=\frac{\pm\sqrt{Ae^t}}{4\left(1\pm\sqrt{Ae^t}\right)}$$ So $$k=\pm\sqrt{\frac{\pm\sqrt{Ae^t}}{4\left(1\pm\sqrt{Ae^t}\right)}}$$ giving $$N=\frac{1}{2}+\sqrt{\frac{\pm\sqrt{Ae^t}}{4\left(1\pm\sqrt{Ae^t}\right)}}$$ with the negative option being ruled out by the requirement that $N(0)=2$. So when $t=0$ it must be the case that \begin{eqnarray} \frac{1}{2}+\sqrt{\frac{\pm\sqrt{A}}{4\left(1\pm\sqrt{A}\right)}}&=&2\\ \frac{\pm\sqrt{A}}{1\pm\sqrt{A}}&=&9\\ \frac{-\sqrt{A}}{1-\sqrt{A}}&=&9 \end{eqnarray} Choosing the positive option would require $\sqrt{A}$ to be negative. Thus $\sqrt{A}=\frac{9}{8}$ and $A=\frac{81}{64}$ which is verified when substituted into the original equation. First of all is $A$ any constant ? Because if $A<0$ there is no solutions. But the real problem comes from $$\frac{k^4}{(k+\frac{1}{2})^2(-k+\frac{1}{2})^2}=Ae^t \implies \frac{k^2}{(k+(\frac{1}{2})(-k+\frac{1}{2})}=\sqrt{Ae^t}.$$ You're essentially saying that if $a^2 = b^2$ then $a=b$, but that's simply not true. For example $(-2)^2 = 2^2$ but $-2 \neq 2$. Actually if one knows that $a^2 = b^2$ one can only deduce that $a=b$ or $a=-b$. I suspect that if you take this into account you'll arrive to the right answer (but I'm not sure as I have not done the computations myself). • the initial condition demands $\sqrt {(-k + \frac{1}{2})^2} = k - \frac{1}{2}$ for the duration of this solution $N$ – Will Jagy Feb 28 '17 at 20:04 • Well you're confirming what I'm saying then, from $a^2 = b^2$ he deduced $a=b$ while as you point out it was $a=-b$. Anyway if he was asked to find all the solutions and only then use the initial condition to find a particular solution then he must consider both cases. – Errol.Y Feb 28 '17 at 20:10
2019-11-21T03:31:48
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2165854/simplifying-a-quartic-equation", "openwebmath_score": 0.8555693626403809, "openwebmath_perplexity": 304.111478080582, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9755769063954521, "lm_q2_score": 0.8670357477770336, "lm_q1q2_score": 0.8458600525505859 }