http://math.stackexchange.com/questions/61102/how-do-i-find-the-inductive-definition-of-the-set-defined-as-2n3m1n-m-in-m
# How do I find the inductive definition of the set defined as $\{2n+3m+1|n,m\in\mathbb N\}$? $\lbrace 2n+3m+1:n,m\in N\rbrace$ is the set of all positive integers except for $0$ and $2$. I need to know how to write its inductive definition. This is part of the introduction on learning how to develop recursive functions using lambda calculus. I can do some of them but on others, such as this one, I get lost. How do you handle the multiple variables. Please explain how you got your answer as well since an answer doesn't do me much good if I don't know how to get it. Here are two of the ones I know how to do. $\lbrace 3n+2: n\in \mathbb N\rbrace$ Top Down: $n = 2; n - 3 \in S$ Bottom up: $2 \in S$; if $n \in S$, then $(n + 3) \in S$ Rule of Inference: $2 \in S$; if $n \in S$, then $(n+3) \in S$ $\lbrace(n,2n+1): n\in \mathbb N\rbrace$ Top Down: $(n,m)=(0,1);(n-1,m-2) \in S$ Bottom up: $(0,1) \in S$; if $(n,m) \in S$, then $(n+1,m+2) \in S$ - So, which of the three things "Top Down", "Bottom up", "Rule of Inference" is the inductive definition for $\lbrace3n+2:n\in N\rbrace$? –  Gerry Myerson Sep 1 '11 at 4:55 By the way, I edited a little TeX into the first line of your question. You can see how I did it, and then edit the rest yourself, if you like the way it looks. –  Gerry Myerson Sep 1 '11 at 4:57 @Gerry They are all inductive definitions. They all mean the same thing it's just different ways of writing it. Top down is the most important method because later in the class when I have to write programs that check to see if something is in a set, I would have to use Top down because of the way programming works. –  C Dawg Sep 1 '11 at 5:08 A solution could be found by starting with $1$. Then, if $n\in S$, you require $n+f(n)\in S$ too, where $f(n)$ is 2 for $n=1$ and $1$ otherwise. One way to do this is to make use of the step function $\frac{x}{|x|}$, which is $-1$ for negative input and $+1$ for positive input. Inserting a horizontal shift so that the discontinuity is at $2$ rather than $0$, scaling down by 2 (since we want a span that's $1$ unit long, not $2$), inverting vertically since we want the larger jump to happen earlier, and shifting up by the proper amount ($\frac{3}{2}$)... $$f(n) = \frac{3}{2}-\frac{n-2}{2|n-2|}$$ (Edited: I had it completely backwards the first time!) So that means you can have: $1\in S$; if $n\in S$, then $n+\frac{3}{2}-\frac{n-2}{2|n-2|}\in S$. EDIT: For moving down... You just need a different step function: one with its discontinuity between $3$ and $4$. Also, it's negated from the one used for moving up. $$g(n)= -\frac{3}{2}+\frac{n-3.5}{2|n-3.5|}$$ And then: $1\in S$; if $n\in S$, then $n-\frac{3}{2}+\frac{n-3.5}{2|n-3.5|}\in S$. - Of course, $f$ could just be defined piecewise with 'if's and 'then's, but I don't think that's what you are after. –  alex.jordan Sep 1 '11 at 6:12 How would I invert that for top down form? For top down I would need: n - f(n) where f(n) = -2 when n = 3 and -1 otherwise. –  C Dawg Sep 1 '11 at 22:54 How's this: 1 and 4 are in $S$; if $n$ is in $S$, then $n+2$ is in $S$. - The problem is that I need one definition that works for every number in the set. A small sample of the set is (1,3,4,5,6,7,8,9,10...). With n+2 starting at 1, I would only get the odd numbers and miss out on all the even numbers that are in the set. The problem is that it's every natural number except 0 and 2 and I just can't find a rule that will only skip those two numbers. 
I've been trying with top down and I've been working around the idea of something like n=1; ((n-2)+(n-3)) ∈ N and numerous variations of it but my definitions either skip numbers or don't skip the 0 and 2. –  C Dawg Sep 1 '11 at 5:49 @C Dawg: Gerry’s definition works for every number in the set, because he specified that both $1$ and $4$ are in $S$; $n+2$ gives you the odd natural numbers starting at $1$ and the even natural numbers greater than $2$ starting at $4$. –  Brian M. Scott Sep 1 '11 at 6:00 I think this answer is great. The OP might be objecting because he wants something that simultaneously indexes the set; i.e., has a single seed. @C Dawg: Is that what you are looking for? –  alex.jordan Sep 1 '11 at 6:22 The most straightforward bottom up definition description of $S=\lbrace 2n+3m+1:n,m\in N\rbrace$ can be read directly from the expression $2n+3m+1$. The base case is clearly $n=m=0$, giving $1 \in S$. The term $2n$ tells you that if $k \in S$, then $k + 2 \in S$, and similarly, the $3m$ term tells you that if $k \in S$, then $k + 3 \in S$: these rules correspond to incrementing $n$ and $m$, respectively, by $1$. In short: • $1 \in S$; • if $k \in S$, then $k+2 \in S$; and • if $k \in S$, then $k+3 \in S$. This is not the most efficient recursive description of $S$, but it is the one that most directly matches the definition that you’ve been given. After you prove that $S$ is in fact the set of all positive integers except $2$, you can give a simpler recursive description $-$ Gerry Myerson’s, for instance. I’m not familiar with your top down style of description, but if I’ve extrapolated correctly from your first example, the top down version of the bottom up description that I just gave is: • $n=1$; • $n-2 \in S$; and • $n-3 \in S$. (Your top down version for $\lbrace(n,2n+1): n\in N\rbrace$ must be incomplete; I’m guessing that the rest of it should be $(n-1,m-2) \in S$.) -
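A quick way to sanity-check the bottom-up rules above is to generate both versions of $S$ by brute force and compare them. The Julia sketch below does that; the cutoff `N` and the variable names are my own choices for illustration.

```julia
# Compare S = {2n+3m+1 : n,m ∈ ℕ} with the closure of {1} under k ↦ k+2 and k ↦ k+3,
# restricted to values ≤ N.
N = 60

direct = Set(2n + 3m + 1 for n in 0:N for m in 0:N if 2n + 3m + 1 <= N)

closure = Set([1])
for pass in 1:N                     # enough passes to reach everything ≤ N
    for k in collect(closure)
        k + 2 <= N && push!(closure, k + 2)
        k + 3 <= N && push!(closure, k + 3)
    end
end

println(direct == closure)              # true
println(sort(collect(direct))[1:8])     # [1, 3, 4, 5, 6, 7, 8, 9]: every positive integer except 2
```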
2015-08-31T13:27:35
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/61102/how-do-i-find-the-inductive-definition-of-the-set-defined-as-2n3m1n-m-in-m", "openwebmath_score": 0.8952412009239197, "openwebmath_perplexity": 363.3428264540885, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9669140239948331, "lm_q2_score": 0.8740772466456689, "lm_q1q2_score": 0.8451575478364879 }
https://math.stackexchange.com/questions/3398442/if-people-sharing-the-same-birthday-raise-their-hand-how-many-hands-do-you-expe
# If people sharing the same birthday raise their hand, how many hands do you expect to see raised? The following question is taken from an interview book assuming that no calculator is provided. Question: There are $$25$$ people at a party. One person asks everybody to announce their birthday, and for anyone who has the same birthday as someone to raise a hand. How many hands do you expect to see raised? For example, if John, Jon, Stephen and Mark all have the same birthday, January $$15$$, but nobody else at the party has a matching birthday, the count of hands is four. My attempt: Let $$X$$ be the number of hands raised. Then for every $$2\leq x\leq 25,$$ we have $$P(X=x) = \frac{\binom{25}{x}}{365^{x-1}}.$$ It follows that $$E(X) = \sum_{x=2}^{25}xP(X=x) = \sum_{x=2}^{25}x\frac{\binom{25}{x}}{365^{x-1}}.$$ I have no idea how to compute the series in closed form. Answer given in the book is $$1.59$$. EDITED: from wolfram alpha, my sum is $$1.697$$, which is different from the answer given. It would be good if someone can point out the flaw in my sum. • It's a finite series, so the punch line will eventually be to use a calculator, right? Why not just do that here? Oct 18 '19 at 3:21 • Imagine this is an interview question where I am not provided any calculator. How should I proceed? Oct 18 '19 at 3:24 • Is it clear that such a method should exist? It very well may, though I don't see it immediately. But do you have a reason to believe this should be doable for this problem? Oct 18 '19 at 3:28 • I suppose to get a good estimate, you can discard all but the first 2 or 3 terms, since they'll be tiny. Oct 18 '19 at 3:29 • @AaronMontgomery Actually I am not sure. That is why I seek help from the MSE community. Oct 18 '19 at 3:29 The probability that a given person raises his hand is the probability that someone else at the party has the same birthday as he does: $$1-\left({364\over365}\right)^{24}$$ The expected number of hands raised is $$25-25\left({364\over365}\right)^{24}\approx1.593,$$ by linearity of expectation. • This is actually how the book solves it too. But I am wondering whether my approach is correct... Oct 18 '19 at 3:28 • I think this is as clean as the answer will possibly get. Oct 18 '19 at 3:32 • @Idonknow It doesn't look right to me. For one thing, when you look at the probability that $4$ people raise their hands, you only seem to be considering that they all have the same birthday, not that there are two pairs of people with the same birthday. Oct 18 '19 at 3:34 • @saulspatz You are talking about my approach? Oct 18 '19 at 3:37 • Since $24 \ll 365$, a simpler approximation would be $P(\text{raise hand}) = 24/365$ which gives the expected number as $25 \times 24 / 365 \approx 25 \times 24 / 360 = 25 \times 2 / 30 = 5/3 = 1.67$ Oct 18 '19 at 3:39 In one of the comments you ask how to approximate this when this is given as an interview question and you don't have your calculator. Well, the birthday problem is well-known, and it is indeed well known that with 23 people the chance of there being two people sharing a birthday is about $$50$$%. So, with 25 people there should be a chance of a little over $$50$$% that two or more people share a birthday, meaning that the expected number of hands is definitely above $$1$$. On the other hand, the probability of there being three or more is a good bit lower. So, eyeballing that, you'd end up with something above $$1$$ though still well below $$2$$.
Personally I would have guessed in the $$1.2$$ or $$1.3$$ neighborhood, so I am a bit surprised it's close to $$1.6$$, but I bet my answer would've satisfied the interviewer. :) Apparently the flaw in your formula is that while estimating the probability of a group of $$x$$ people with the same birthday, you neglect to account for the fact that in order for the size of the group to be exactly $$x,$$ nobody else among the $$25$$ people can have the same birthday as these $$x$$ people. The probability that nobody else has the same birthday is $$\left(\frac{364}{365}\right)^{25-x},$$ so a more accurate sum is $$E(X) = \sum_{x=2}^{25}x\frac{\binom{25}{x}364^{25-x}}{365^{24}}.$$ What was surprising to me is that according to Wolfram Alpha, this gives exactly the correct answer: https://www.wolframalpha.com/input/?i=sum+x+%2825+choose+x%29364%5E%2825-x%29%2F365%5E24+for+x+%3D+2+to+25 The reason this is surprising is that one would think you have to account for cases where there is a group of $$3$$ and a group of $$2,$$ or three groups of $$2,$$ and so forth. But I think the derivation of this summand is not literally $$xP(X=x),$$ but $$xE(\text{number of cliques of size x}),$$ where $$\begin{multline}E(\text{number of cliques of size x}) =\\ (\text{number of subsets of size x})P(\text{a given subset of size x is a clique}) \end{multline}$$ where a "clique" is a subset who all have the same birthday which they do not share with anyone else. To evaluate that sum (i.e., the correct one) without a calculator, I think the easiest method is to convert it back to the original problem and then observe that that problem is solved by evaluating $$25\left(1 - \left(\frac{364}{365}\right)^{24}\right),$$ so you now have a much simpler calculation. As a first approximation, $$\left(\frac{364}{365}\right)^{24} = \left(\frac{365-1}{365}\right)^{24} \approx \frac{365-24}{365} = 1 - \frac{24}{365}.$$ Then since $$\frac{24}{365} \approx \frac{24}{360} = \frac1{15},$$ we're looking for something slightly less than $$\frac1{15} = 0.0666\ldots.$$ The number we want is at least $$1\%$$ smaller but not $$2\%$$ smaller. So let's say it's $$0.066$$ to keep the number of significant digits small. So now we have $$25(1 - (1 - 0.066)) = 25(0.066) = 1.65.$$ This is a little high. The fault is mainly in the first approximation. • Wow, nice! In this case, how should I define the random variable that will answer the question? I suppose that my random variable defined in my post is incorrect? Oct 18 '19 at 14:20 • I once wrote a Java applet to calculate $P(X=x).$ It's quite computationally intensive, not the sort of thing you would ever want to do by hand. Oct 18 '19 at 14:23
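For readers who want to verify the two formulas numerically, here is a short Julia sketch; the function name `hands_raised`, the sample size, and the loop bounds are my own choices, not from the original posts.

```julia
# The linearity-of-expectation answer and the corrected "clique" sum from this thread
# agree exactly; a Monte Carlo run gives an independent sanity check.
using Statistics

n, d = 25, 365
E_linearity = n * (1 - (1 - 1/d)^(n - 1))
E_cliques   = sum(x * binomial(n, x) * ((d - 1)/d)^(n - x) * (1/d)^(x - 1) for x in 2:n)
println((E_linearity, E_cliques))        # both ≈ 1.5926

# simulate one party: a hand is raised by anyone whose birthday occurs more than once
hands_raised(n, d) = (b = rand(1:d, n); count(x -> count(==(x), b) > 1, b))
println(mean(hands_raised(n, d) for _ in 1:10^5))   # ≈ 1.59
```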
2021-10-21T17:24:08
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3398442/if-people-sharing-the-same-birthday-raise-their-hand-how-many-hands-do-you-expe", "openwebmath_score": 0.7590380907058716, "openwebmath_perplexity": 160.36395389246172, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713883126863, "lm_q2_score": 0.8577681122619883, "lm_q1q2_score": 0.8451343788187214 }
https://math.stackexchange.com/questions/637750/probability-of-getting-two-pair-in-poker/637752
# Probability of getting two pair in poker I was looking at this website http://www.cwu.edu/~glasbys/POKER.HTM and I read the explanation for how to calculate the probability of getting a full house. To me, the logic basically looked like you figure out the number of possible ranks and multiply by the number of ways to choose the cards from that given rank. In other words, for a full house $P=$ $$\frac{{13\choose1}{4\choose3}{12\choose1}{4\choose2}}{52\choose5}$$ Following this logic, I tried to calculate the probability of getting two pair. My (incorrect) logic was that there are 13 possible ranks for the first pair and $4\choose2$ ways to choose two cards from that rank, 12 possible ranks for the second pair and $4\choose2$ ways to choose two cards from that rank, and 11 possible ranks for last card and $4\choose1$ ways to choose a card from that rank. So I tried $P=$ $$\frac{{13\choose1}{4\choose2}{12\choose1}{4\choose2}{11\choose1}{4\choose1}}{52\choose5}$$ Obviously my solution was incorrect. I read explanation and the correct answer is $P=$ $$\frac{{13\choose2}{4\choose2}{4\choose2}{11\choose1}{4\choose1}}{52\choose5}$$ I'm still a bit fuzzy on where I went wrong though. Can anyone help me understand this problem a little better? Thank you very much for your help. You have to choose the two card values you want as your pairs simultaneously. Remember--multiplying the numbers ${13\choose1}{4\choose2}{12\choose1}{4\choose2}{11\choose1}{4\choose1}$ assumes an $order$, i.e. you are counting, say, QQKK2 as different from KKQQ2. This is why you have to do ${13\choose2}{4\choose2}{4\choose2}{11\choose1}{4\choose1}$. It makes the counting not sensitive to which pair you choose first. • I think the part that is tripping me up is why my solution for the two pair solution assumes an order, but the full house solution does not. In other words, why doesn't the full house solution count say, QQQKK differently from KKQQQ? – Curt Jan 14, 2014 at 6:05 • Because the formula given above for the full house doesn't count things of the form KKQQQ. The ${4\choose3}$ comes before the ${4\choose2}$! The formula used assumes an order of "triple first then pair." You could just as easily switch the order and get the exact same answer. Jan 14, 2014 at 6:09 • It doesn't count it differently, but one is a three of a kind, the other is a pair. Kings over aces (KKKAA) is different than aces over kings (AAAKK). See my answer. – John Jan 14, 2014 at 6:11 • This comment from @John is the right answer. The symmetry of the pairs makes necessary to divide by two to avoid counting AAKK and then KKAA (note that ${13\choose 2} = \frac{1}{2} {13\choose 1}{12\choose 1}$) Oct 15, 2018 at 11:14 • @Anjan It's necessary to pick the values the come in pairs separately, because in your answer, you are not distinguishing between KK22Q, KK2QQ, and K22QQ. You could multiply your answer by $3\choose2$ to choose which two of the three card values will be paired, and that will also give the right answer. Jun 22, 2020 at 17:01 First choose the two (different) values of the cards that will be pairs: $13 \choose 2$. For each of these values, pick two suits from the four suits available: ${4 \choose 2}{4 \choose 2}$. Then, since this is only two pair and not more, choose the value of the other card, and its suit: ${11 \choose 1}{4 \choose 1}$. Finally, divide by the total number of combinations of all hands: $52 \choose 5$. 
And there it is: $$P = \frac{{13\choose2}{4\choose2}{4\choose2}{11\choose1}{4\choose1}}{52\choose5}$$ The difference between this solution and that for the full house is that there is more "symmetry" for the two pair: both pairs are groups of two. With the full house, one is a group of three, and the other is a group of two. Aces over kings is distinct from kings over aces. Here, you choose the card for the three of a kind, then pick the three suits: ${13 \choose 1}{4 \choose 3}$. Then, you choose the card for the pair, and pick the two suits: ${12 \choose 1}{4 \choose 2}$. • Is this result for Texas Hold'em (where a player uses the best five-card poker hand out of seven cards)? Nov 15, 2015 at 21:02 • I did a manual count of this using a computer script to run through all subsets of {1,2,3,...,52} associating cards with congruence classes mod 13. It gave me the result 147992 which is different from your answer of 123552. I've checked the script and am pretty sure it's right: It correctly counts all possible hands of cards, 52C5, and it correctly tests any hand for whether it contains two pair. Apr 1, 2017 at 4:40 • @Addem Sorry I missed this for over a year. Your answer has a prime factorization of $2^3 \cdot 13 \cdot 1423$. I can't resolve the discrepancy between your answer and mine, but at the same time your answer seems a bit removed from a combinatorial counting argument. – John Aug 31, 2018 at 18:09 Another way is for the first to choose three values out of 13: $${13 \choose 3}$$, then choose 2 values out of 3 for pair: $${3 \choose 2}$$, for each pair choose 2 suits out of 4: $${4 \choose 2}$$ - twice, and finally choose one suit for the 5-th card, which is not in any of pairs: $${4 \choose 1}$$. Resulting formula: $${13 \choose 3}*{3 \choose 2}*{4 \choose 2}*{4 \choose 2}*{4 \choose 1}=123552$$ No difference with previous answers because of $${13 \choose 3}*{3 \choose 2} = {13 \choose 2}*{11 \choose 1}$$ I find permutation more intuitive to follow for this kind of problems. For people like me: We have five slots to fill: - - - - - . The first slot can take all 52 cards. The second slot can take only three cards so that they can make a pair. Similarly, the third and fourth slots can take 48 and 3 cards, respectively. The last and final slot can take any of remaining 44 cars. Therefore: 52 * 3 * 48 * 3 * 44 = 988416. Please note, this is order dependent. In other words, this is the count of x x y y z. However, we should count all the possibilities (i.e., z x y x y). Therefore, we multiply 988416 with 5! and divide by 2! (order between two xs) * 2! (order between two ys) and 2! (order between the pair of xs and ys). The total count is 14826240. This is the numerator. The denominator is 52*51*50*49*48 = 311875200. The probability is 0.0475390156062425. Note that if you want to count how many different hands can be dealt, then you have to divide 14826240 by 5! to compute the combination. • I came up with this method on my own while trying to do a problem for Combinatorics, you're the only person I found online talking about this method so thanks for that! But I don't understand the part where you divide by 2!*2!*2!, I feel like it has to do with the implicit separation of the cards into different cases, but I just don't really follow it. Would you mean explaining a bit more? And when I think about the combination equivalent I get even more confused. The pattern is x x y y z, but the two x's and two y's are distinct cards in the pack. Jan 13, 2021 at 7:12 • When we multiply with 5! 
we consider all the possible permutations. However, there are some permutations we already considered. If we don't discard them we just over-count the permutations. So the first 2! discards the already considered permutations between the first two cards. The second 2! discards the already considered permutations between the third and fourth cards. The third 2! discards already considered permutations between the first and second pair. Jan 15, 2021 at 20:50 • Ah, that helps. Thanks! Jan 19, 2021 at 1:57 When making a tree, you have 13 choices for the first kind, 6 ways to get it, 12 for the second kind, 6 ways to do that, and 44 cards that are not the first two kinds: 13x6x12x6x44 The problem with this reasoning is that you will have 10s and 4s on one branch and on another you will have 4s and then 10s--the SAME hand. In other words, you have doubled your hands. So logic says, divide 13x12 by 2 to take away all the double answers that will show up on your tree. So....it is (13x12/2)x6x6x44 yielding 123552. For the probability, you need to still divide that by: 52C5 But this verifies WHY your second answer is correct and the first one is wrong: 13C2 x 4C2 x 4C2 x 44C1 / 52C5 Riffing off prony's answer, which I think is a little confusing. Here are the possibilities for each card: • Card 1: 52 cards • Card 2: 3, since it must match the rank of Card 1 • Card 3: 48, since it must not match the rank of Card 1 • Card 4: 3, since it must match the rank of Card 3 • Card 5: 44, since it must not match the rank of any other card This will give us all orderings of the form XXYYZ. We then notice three ways in which we are double counting: 1. Cards 1 and 2 can be interchanged (XX). (2!) 2. Cards 3 and 4 can be interchanged (YY). (2!) 3. Cards 1 and 2 can collectively be interchanged with Cards 3 and 4 (XX with YY). (2!) So we have $$(52 \times 3 \times 48 \times 3 \times 44)/(2! \, 2! \, 2!)$$ distinct, unordered hands. Dividing this by the number of combinations $${52 \choose 5}$$ yields our answer.
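All three counting routes in this thread can be checked mechanically; here is a small Julia sketch (variable names are mine).

```julia
# Three routes to the number of two-pair hands; they all give 123552.
n_pairs_first  = binomial(13, 2) * binomial(4, 2)^2 * binomial(11, 1) * binomial(4, 1)
n_values_first = binomial(13, 3) * binomial(3, 2) * binomial(4, 2)^2 * binomial(4, 1)
n_slot_by_slot = (52 * 3 * 48 * 3 * 44) ÷ (2 * 2 * 2)      # ordered slots, divided by the 2!·2!·2! symmetry

println((n_pairs_first, n_values_first, n_slot_by_slot))   # (123552, 123552, 123552)
println(n_pairs_first / binomial(52, 5))                   # ≈ 0.047539
```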
2022-06-28T22:35:50
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/637750/probability-of-getting-two-pair-in-poker/637752", "openwebmath_score": 0.7247796058654785, "openwebmath_perplexity": 459.93703472349847, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713870152409, "lm_q2_score": 0.8577681122619883, "lm_q1q2_score": 0.8451343777058141 }
https://math.stackexchange.com/questions/2734148/what-are-different-ways-to-compute-int-0-infty-frac-cos-xa2x2dx
# What are different ways to compute $\int_{0}^{+\infty}\frac{\cos x}{a^2+x^2}dx$? I am interested to compute the following integral $$I=\int_{0}^{+\infty}\frac{\cos x}{a^2+x^2}dx$$ where $a\in\mathbb{R}^+$. Let me explain my first idea. As the integrand is an even function of $x$ then $$2I=\int_{-\infty}^{+\infty}\frac{\cos x}{a^2+x^2}dx=\lim_{R\to+\infty}\int_{-R}^{R}\frac{\cos x}{a^2+x^2}dx:=\lim_{R\to+\infty}J$$ So, I first focus on computing the $J$ integral by first modifying it as follows \begin{align*} J&=\int_{-R}^{R}\frac{\cos x}{a^2+x^2}dx=\int_{-R}^{R}\frac{\cos x}{a^2+x^2}dx+i\int_{-R}^{R}\frac{\sin x}{a^2+x^2}dx \\ &= \int_{-R}^{R}\frac{(\cos x+i\sin x)}{a^2+x^2}dx = \int_{-R}^{R}\frac{\exp(ix)}{a^2+x^2}dx \end{align*} Then, I use the well-known techniques of complex variable theory. First, I replace the real variable $x$ in $J$ with a complex variable $z$ and consider a contour integral over $C=C_1\cup C_2$ $$K:=\int_{C}\frac{\exp(iz)}{a^2+z^2}dz$$ Then, according to the Cauchy's integral theorem and the Residue theorem, I get \begin{align*} K=J+\int_{C_2}\frac{\exp(iz)}{a^2+z^2}dz &= \int_{C_3}\frac{\exp(iz)}{a^2+z^2}dz=\int_{C_3}\frac{\exp(iz)}{(z+ia)(z-ia)}dz \\ &=2\pi i \frac{\exp(i^2a)}{2ia}=\frac{\pi}{a}\exp(-a) \end{align*} Next, taking the limit $R\to+\infty$ from the above relation, we obtain $$2I+\lim_{R\to+\infty}\int_{C_2}\frac{\exp(iz)}{a^2+z^2}dz=\frac{\pi}{a}\exp(-a)$$ but, we can show that $$\lim_{R\to+\infty}\int_{C_2}\frac{\exp(iz)}{a^2+z^2}dz=0$$ and then we can obtain the final result $$I=\frac{\pi}{2a}\exp(-a)$$ First, please check my steps to see the final result is correct or not. Second, is there any other way to compute $I$? • a beautiful way: math.stackexchange.com/a/2480371/515527 – Zacky Apr 12 '18 at 16:28 • This is the most beautifully explained question I've seen on the math.stackexchange. $(+1)$ if I did not reach my daily voting limit... – Mr Pie Apr 12 '18 at 16:28 • – Zacky Apr 12 '18 at 16:33 • @zacky: Thanks for the link. How did you search the site to find the links? I wasn't able to find anything! – H. R. Apr 12 '18 at 16:34 • I posted a week ago a pretty similar integral: math.stackexchange.com/questions/2723369/… And if you look at the linked questions(on the right) you find some of them, altough I searched alot of time last week to find similar integrals since I also was interested. – Zacky Apr 12 '18 at 16:39 A rather exotic approach leading to a known functional equation, which has an exponential solution. Consider a function ($a>0$): $$f(a)=a \int_{-\infty}^\infty \frac{\cos x}{a^2+x^2} dx=\int_{-\infty}^\infty \frac{\cos a x}{1+x^2} dx=\pi e^{-a}$$ Let's square it and change the dummy variable: $$f^2(a)=\int_{-\infty}^\infty \int_{-\infty}^\infty \frac{\cos a x \cos a y}{(1+x^2)(1+y^2)} dx dy=$$ $$=\frac{1}{2} \int_{-\infty}^\infty \int_{-\infty}^\infty \frac{\cos a (x-y)+ \cos a (x+y)}{(1+x^2)(1+y^2)} dx dy$$ Due to the infinite limits, we can easily make substitutions in the form $x \pm y=t$, which will lead to the following expression under the integral: $$\frac{\cos a t}{y^2+1} \left(\frac{1}{(y+t)^2+1} +\frac{1}{(y-t)^2+1} \right)$$ We will do partial fraction decomposition to integrate w.r.t. $y$. 
$$\frac{1}{((y+t)^2+1)(y^2+1)}=\frac{1}{t (4+t^2)} \left(\frac{2y+3t}{(y+t)^2+1}-\frac{2y-t}{y^2+1} \right)$$ $$\frac{1}{((y-t)^2+1)(y^2+1)}=\frac{1}{t (4+t^2)} \left(\frac{-2y+3t}{(y-t)^2+1}-\frac{-2y-t}{y^2+1} \right)$$ Let's consider separately the 'problematic' integrals, but with finite limits: $$\int_{-L}^L \frac{2y dy}{(y+t)^2+1}=\int_{-L-t}^{L+t} \frac{2u du}{u^2+1}-2t\int_{-L-t}^{L+t} \frac{du}{u^2+1}$$ The first integral vanishes, the second after taking the limit $L \to \infty$, gives us $-2 \pi t$. In the same way we find the other integral with $(y-t)^2$. So the two 'problematic' integrals give us: $$-2 \pi \int_{-\infty}^\infty \frac{\cos at ~dt}{4+t^2}$$ Grouping the other terms we get: $$\frac{3t}{(y+t)^2+1}+\frac{3t}{(y-t)^2+1}+\frac{2t}{y^2+1}$$ After integration w.r.t. $y$ and adding all the results, we obtain: $$f^2(a)=2\pi \int_{-\infty}^\infty \frac{\cos at ~dt}{4+t^2}=\pi f(2a)$$ The functional equation: $$f^2 (a)=\pi f(2a)$$ has a general solution: $$f(a)=\pi e^{c a}$$ We should have $c<0$, as can be seen by considering the original integral definition and taking the limit $a \to \infty$. I'm not sure how to prove $c=-1$, but it should be possible. • (+1) Thanks for the attention and your different approach. :) – H. R. Apr 13 '18 at 19:40 • @H.R., thank you. You could also check out this answer math.stackexchange.com/a/1841104/269624. That's the method I planned to use initially, but this user already has a great solution this way. Which is why I made up another one – Yuriy S Apr 13 '18 at 20:30 For any $a>0$, $$I(a)=\int_{0}^{+\infty}\frac{\cos(x)}{x^2+a^2}\,dx = \frac{1}{a}\int_{0}^{+\infty}\frac{\cos(ax)}{1+x^2}\,dx = \frac{J(a)}{a}$$ and the Laplace transform of $J(a)$ is given by $$\int_{0}^{+\infty}J(a) e^{-sa}\,da = \int_{0}^{+\infty}\int_{0}^{+\infty}\frac{\cos(ax)e^{-sa}}{1+x^2}\,dx\,da$$ or, by invoking Fubini's theorem and integration by parts: $$\int_{0}^{+\infty}\frac{s}{(1+x^2)(s^2+x^2)}\,dx =\frac{\pi}{2(1+s)}$$ by partial fraction decomposition. $\mathcal{L}^{-1}$ then gives $J(a)=\frac{\pi}{2}e^{-a}$ and $I(a)=\frac{\pi}{2a}e^{-a}$ as wanted. Another way is to prove $$\int_\mathbb{R}\dfrac{a}{\pi}\dfrac{e^{ikx}}{a^2+x^2}=\exp -a|k|$$for $a>0$, by noting we're just trying to compute the characteristic function of a Cauchy distribution. The inversion theorem implies we need only check this characteristic function gives the right pdf. To prove $$\int_\mathbb{R}\exp (-ikx-a|k|)dk=\dfrac{2a}{a^2+x^2},$$write the left-hand side as the sum of integrals either side of $k=0$. The left-hand side is then $$\dfrac{1}{a+ix}+\dfrac{1}{a-ix},$$as required.
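As a numerical cross-check of the closed form $I=\frac{\pi}{2a}e^{-a}$, here is a crude quadrature sketch in Julia; the truncation point `L` and step `h` are arbitrary choices of mine and only give a few digits of agreement.

```julia
# Crude check of ∫₀^∞ cos(x)/(a²+x²) dx against (π/2a)·e^{-a} using a truncated trapezoid rule.
function I_numeric(a; L = 400.0, h = 1e-3)
    f(x) = cos(x) / (a^2 + x^2)
    xs = 0:h:L
    h * (sum(f, xs) - (f(0.0) + f(L)) / 2)     # composite trapezoid on [0, L]
end

for a in (0.5, 1.0, 2.0)
    println((a, I_numeric(a), pi / (2a) * exp(-a)))   # the two columns agree to several digits
end
```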
2019-06-24T11:16:45
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2734148/what-are-different-ways-to-compute-int-0-infty-frac-cos-xa2x2dx", "openwebmath_score": 0.9765316843986511, "openwebmath_perplexity": 261.87126933998155, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.985271386582759, "lm_q2_score": 0.8577681122619885, "lm_q1q2_score": 0.8451343773348451 }
http://math.stackexchange.com/questions/310392/is-my-solution-to-the-system-of-equations-correct
# Is my solution to the system of equations correct? If I'm told that $T(\vec x)=A\vec x=\vec b$ and $A=\left[\begin{matrix}1&-3&2\\ 3&-8&8\\ 0&1&2\\ 1&0&8\\\end{matrix}\right]$ and that $\vec b=\left[\begin{matrix}1\\6\\3\\10\end{matrix}\right]$. I need to find some vector $\vec x$ that whose image under $T$ is $\vec b$. So normally to do this we set up an augmented matrix like this and solve: $$A=\left[\begin{matrix}1&-3&2&1\\ 3&-8&8&6\\ 0&1&2&3\\ 1&0&8&10\\\end{matrix}\right]$$ I was able to row reduce this to the matrix: $$A=\left[\begin{matrix}1&0&8&10\\ 0&1&2&3\\ 0&0&0&0\\ 0&0&0&0\\\end{matrix}\right]$$ So would that mean that I found the vector $\vec x$ to be $\vec x=\left[\begin{matrix}10\\3\\0\end{matrix}\right]$ as one solution? This would mean that there must be infinite solutions since there is one free variable right? - Best to present the solution in the form $$x_3 = \alpha \implies x_1 = 10 - 8 \alpha \implies x_2 = 3 - 2\alpha$$ $$\implies \vec x = \begin{pmatrix} \\ \\ 10 - 8 \alpha \\ \\ 3 - 2\alpha \\ \\ \alpha \\ \\ \end{pmatrix}$$ And yes, this indeed means that since there is a free variable we are denoting $\alpha$, there are infinitely many possible values for $\alpha$, hence an infinite number of solutions to the system. However, since the values of $x_1, x_2$ depend on $x_3 = \alpha$, we do have that for any fixed $\alpha$, the other variables are then determined. So there are some constraints on the solutions. - Your answers tend to always make sense :-) Clear and straight to the point. Thanks! – TheHopefulActuary Feb 22 '13 at 0:04 Thanks Kyle, you're welcome! – amWhy Feb 22 '13 at 0:08 Always have trouble to make matrix here.+ – Babak S. Feb 22 '13 at 5:57 The row reduced matrix gives the following: $x_1+8 x_3 = 10$, $x_2+2x_3 = 3$, which can be written as $x_1 = 10-8 x_3$, $x_2 = 3-2 x_3$. So you can choose $x_3$ arbitrarily and then compute the $x_1, x_2$ that satisfies the equations. The solutions are then given by $\begin{bmatrix}10\\3\\0\end{bmatrix}+ x_3 \begin{bmatrix}-8\\-2\\0\end{bmatrix}$, $x_3 \in \mathbb{R}$. -
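The accepted parametric form can be verified directly; a short Julia check follows (the sample values of α are arbitrary).

```julia
# Verify that x = [10 - 8α, 3 - 2α, α] solves A*x = b for every α.
A = [1 -3 2; 3 -8 8; 0 1 2; 1 0 8]
b = [1, 6, 3, 10]

for α in (-2, 0, 1, 5)
    x = [10 - 8α, 3 - 2α, α]
    println((α, A * x == b))    # true in every case
end
```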
2015-10-13T23:45:46
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/310392/is-my-solution-to-the-system-of-equations-correct", "openwebmath_score": 0.9752787351608276, "openwebmath_perplexity": 160.00589126732538, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713861502773, "lm_q2_score": 0.8577681068080749, "lm_q1q2_score": 0.8451343715902911 }
http://math.stackexchange.com/questions/167107/tell-if-mathbb-z-6-odot-is-a-semigroup-and-if-the-identity-element-belong
# Tell if $(\mathbb Z_6, \odot)$ is a semigroup and if the identity element belongs to it Let the operation $\odot$ be defined in $\mathbb Z_6$ as follows: $$a \odot b = a +4b+2$$ check if $(\mathbb Z_6, \odot)$ is a semigroup and if the identity element belongs to it. This is the way I have solved this exercise: Let $x,y,z \in \mathbb Z_6$ then in order for $(\mathbb Z_6, \odot)$ to be a semigroup, the following condition must be met: $$(x\odot y)\odot z = x\odot (y\odot z)$$ Considering only the first part of the equation: \begin{aligned} (x\odot y)\odot z &= (x+4y+2)\odot z \\ &= (x+4y+2)+4z+2 \\ &=x+4y+4z+4 \end{aligned} now considering the second part of the equation: \begin{aligned} x\odot (y\odot z) &= x \odot (y+4z+2) \\ &= x+4(y+4z+2)+2 \\ &= x+4y+16z+10 \\ &= x+4y+4z+4 \end{aligned} So I conclude stating that $(\mathbb Z_6, \odot)$ is a semigroup. When it comes to verifying the presence of the identity element within the semigroup, some confusion arises: $$x \odot 1_{\mathbb Z_6} = x+4\cdot 1_{\mathbb Z_6} + 2 \neq x$$ and also $$1_{\mathbb Z_6} \odot x = 1_{\mathbb Z_6} +4x+2 \neq x$$ so the identity element does not belong to $(\mathbb Z_6, \odot)$. Is my solution right or am I wrong? - $x + 4 \cdot 1_{\mathbb{Z}_6} + 2 = x$ holds, since $4 + 2 = 0$. – Cocopuffs Jul 5 '12 at 16:22 @Cocopuffs I am afraid I don't understand what you mean. – haunted85 Jul 5 '12 at 16:24 @haunted85 We're working modulo $6$, so $4 \cdot 1 + 2 = 0$. – Cocopuffs Jul 5 '12 at 16:27 I suppose addition and multiplication are interpreted modulo 6. (Otherwise it would not be a binary operation on $\mathbb Z_6$.) I guess you have to find out whether the given semigroup has an identity. This means: Is there an element $e$ such that $a\odot e=e\odot a=a$ for all elements $a$? $a\odot e=a$ means $$a+4e+2=a\\4e+2=0.$$ We can easily check that this is fulfilled by $e\in\{1,4\}$. So this semigroup has two right identities. Since there can be only one identity element, there cannot be a left identity. But we can check this anyway. Existence of a left identity $e$ would mean that for each $a$ we have $e\odot a=a$, i.e. $$e+4a+2=a\\e=4+3a.$$ The expression $4+3a$ has various values for various $a$'s (namely the values $1$ and $4$), so there is no element $e$ fulfilling this for each $a\in\mathbb Z_6$. - how have you come up with $e \in \{1,4\}$? Have you just solved the equation treating $e$ as an unknown quantity or is it a deduction you made by thinking? Is there a way to know for sure if $1$ and $4$ are all and only right identities? – haunted85 Jul 5 '12 at 16:41 @haunted85 Since we're working modulo $6$, you can easily check all possibilities as there are only six of them ($0,1,2,3,4,5$). However, with enough experience working with modular arithmetic the solutions to an equation modulo some small $n$ are usually obvious. – Alex Becker Jul 5 '12 at 16:50 Recall that $\rm\:e\:$ is a right identity (or neutral) element for $\bigodot$ if $\rm\,x\bigodot e = x,\,$ for all $\rm\,x,\,$ and similar for a left identity element. You need to check if these equations have solutions for $\rm\,e.\,$ Note that the identity element for this operation need have no relationship to identity elements for other operations (such as the two-sided identity $1$ for multiplication in $\mathbb Z_6$). -
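Since $\mathbb Z_6$ has only six elements, everything claimed above can be checked exhaustively. A brute-force Julia sketch (the infix definition of ⊙ and the variable names are mine):

```julia
# Brute-force check in ℤ₆: associativity of a ⊙ b = a + 4b + 2 (mod 6),
# plus a search for right and left identities.
⊙(a, b) = mod(a + 4b + 2, 6)
Z6 = 0:5

assoc     = all(((a ⊙ b) ⊙ c) == (a ⊙ (b ⊙ c)) for a in Z6 for b in Z6 for c in Z6)
right_ids = [e for e in Z6 if all(a ⊙ e == a for a in Z6)]
left_ids  = [e for e in Z6 if all(e ⊙ a == a for a in Z6)]

println(assoc)       # true: (ℤ₆, ⊙) is a semigroup
println(right_ids)   # [1, 4]: two right identities
println(left_ids)    # Int64[]: no left identity, hence no two-sided identity
```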
2016-07-01T06:30:58
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/167107/tell-if-mathbb-z-6-odot-is-a-semigroup-and-if-the-identity-element-belong", "openwebmath_score": 0.9996961355209351, "openwebmath_perplexity": 280.3382725973513, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713878802045, "lm_q2_score": 0.8577681031721325, "lm_q1q2_score": 0.8451343694917774 }
https://math.stackexchange.com/questions/2634314/two-equivalent-definitions-riemann-integral
# Two equivalent definitions Riemann integral Definition 7.1 in Apostol's book gives the following definition for the Riemann integral. Let $P=\{x_0,x_1,\ldots,x_n\}$ be a partition of $[a,b]$ and $t_k$ any point in $[x_{k-1},x_k]$. Denote $S(P,f)=\sum_{k=1}^n f(t_k)(x_k-x_{k-1})$. Definition 1: We say that $f$ is Riemann integrable on $[a,b]$ if there exists a real number $A$ such that, for all $\epsilon>0$, there exists a partition $P_\epsilon$ of $[a,b]$ such that, for each finer partition $P$ and for each arbitrary choice $t_k\in [x_{k-1},x_k]$, it holds $|S(P,f)-A|<\epsilon$. One of the intuitions I have to better understand the Riemann integral could be summarized in the following definition: Definition 2: We say that $f$ is Riemann integrable on $[a,b]$ if there exists a real number $A$ and a sequence of partitions $\{P_n\}$ with mesh tending to $0$, such that, for each arbitrary choice $t_k\in [x_{k-1},x_k]$ in $P_n$, it holds $\lim_{n\rightarrow\infty} S(P_n,f)=A$. I have never seen a proof of the equivalence of both definitions. Maybe they are not equivalent. • – GEdgar Feb 3 '18 at 21:39 The definitions are equivalent. We have the Riemann criterion whereby a function $f$ is integrable over $[a,b]$ according to Definition 1 if and only if for every $\epsilon > 0$ there exists a partition $P$ such that upper and lower sums satisfy $U(P,f) - L(P,f) < \epsilon$. Suppose $f$ satisfies Definition 2. Given $\epsilon > 0$ there exists a positive integer $N$ such that for all $n \geqslant N$ and any choice of tags $\{t_j\}$ we have $|S(P_n,f, \{t_j\}) - A| < \epsilon/4$. In particular, we have $$\tag{*} A - \epsilon/4 < S(P_N,f,\{t_j\}) < A + \epsilon/4.$$ Consider any subinterval $I_j = [x_{j-1},x_j]$ of $P_N$. Let $m_j = \inf_{x \in I_j} f(x)$ and $M_j = \sup_{x \in I_j}f(x)$. There exist points $\alpha_j, \beta_j \in I_j$ such that $$m_j \leqslant f(\alpha_j) < m_j + \frac{\epsilon}{4(b-a)}, \\ M_j - \frac{\epsilon}{4(b-a)} < f(\beta_j) \leqslant M_j.$$ Multiplying by $(x_j - x_{j-1})$, summing over $j$ and using (*) we get $$A - \epsilon/4<S(P_N,f, \{\alpha_j\})< L(P_N,f) + \epsilon/4, \\ U(P_N,f) - \epsilon/4 < S(P_N,f, \{\beta_j\})< A + \epsilon/4$$ This implies $A- \epsilon/2 < L(P_N,f)$ and $U(P_N,f) < A + \epsilon/2$. and, hence, $$U(P_N,f) - L(P_N,f) < \epsilon.$$ Since we are able to find a partition where the Riemann criterion is satisfied, a function that is integrable under Definition 2 is also integrable under Definition 1. The converse is easy to prove. There are two possible interpretations of $\lim_{n \to \infty} S(P_n,f) = A$ in Definition 2. (2a) For every $\epsilon > 0$ there exists a positive integer $N(\epsilon)$ such that if $n \geqslant N(\epsilon)$, then $|S(P_nf, \{t_j\})-A| < \epsilon$ holds for every choice of tags $\{t_j\}$. (2b) For every $\epsilon > 0$ and each choice of tags $\{t_j\}$, there exists a positive integer $N(\epsilon, \{t_j\})$ such that if $n \geqslant N(\epsilon, \{t_j\})$, then $|S(P_nf, \{t_j\})-A| < \epsilon$ holds. To be precise, Definition 1 is equivalent to Definition 2a. It was shown above that (2a) implies (1). The converse follows because if a function is Riemann integrable under Definition 1, there is an equivalence to the statement that for any $\epsilon > 0$ there exists $\delta > 0$ such that $\|P\| < \delta$ implies that $|S(P,f, \{t_j\}) - A| < \epsilon$ for any choice of tags. • Why is the number $N$ the same for all tags $\{t_j\}$? A priori, the rapidness of convergence depends on the tag. 
– user39756 Feb 5 '18 at 19:01 • @user39756: That's the way I'm reading the statement and that is what I'm proving. You won't find this definition 2 in any book that I've seen so I'm not sure how convoluted it should be. Normally the alternative definition to integrability in terms of partition refinement is that, for every $\epsilon> 0$ there is a $\delta >0$ such that for any partition $P$ with mesh $\|P\| < \delta$ we have $|S(P,f,T) - I| < \epsilon$ for any choice of tags. Even here books will chose one definition over another and never prove the equivalence. – RRL Feb 5 '18 at 19:12 • Where did you get definition 2? – RRL Feb 5 '18 at 19:13 • Under Definition 1 which I know is standard we have $U(P,f) - L(P,f) < \epsilon$ for any further refinement of the partition. All Riemann sums regardless of tags are squeezed in between upper and lower sums so all converge at the same rate. Riemann integration is "convergence of a net". I would doubt the equivalence of another definition without this property. – RRL Feb 5 '18 at 19:17 • I think the argument would work with minor changes. We have $A$ and the partitions $\{P_n\}$. For each $\epsilon>0$ and each sequence of tags $\{t_j\}$ for each $P_n$, $n\geq1$, there exists a number $N=N(\epsilon,\{t_j\})$ such that, for all $n\geq N$, $A-\epsilon/4<S(P_n,f,\{t_j\})<A+\epsilon/4$. Take $\alpha_j$ and $\beta_j$ as you did. For the sequence of tags $\{\alpha_j\}$, there is a number $N_1(\epsilon,\{\alpha_j\})$; for the sequence of tags $\{\beta_j\}$, there is a number $N_2(\epsilon,\{\alpha_j\})$. Take $N_0=\max\{N_1,N_2\}$. Then $U(P_{N_0},f)-L(P_{N_0},f)<\epsilon$. – user39756 Feb 5 '18 at 19:28
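The squeeze used in the argument above, namely that every tagged Riemann sum lies between the lower and upper sums and that the two converge together, is easy to see numerically. A small Julia illustration for $f(x)=x^2$ on $[0,1]$ (the choice of $f$ and of uniform partitions is mine):

```julia
# For f(x) = x² on [0,1]: lower sum ≤ any tagged Riemann sum ≤ upper sum,
# and the gap (upper - lower) shrinks as the partition is refined.
f(x) = x^2
for n in (10, 100, 1000)
    x = range(0, 1, length = n + 1)
    Δ = step(x)
    lower = sum(f(x[k])     * Δ for k in 1:n)             # f is increasing: infimum at the left endpoint
    upper = sum(f(x[k + 1]) * Δ for k in 1:n)             # supremum at the right endpoint
    S     = sum(f(x[k] + rand() * Δ) * Δ for k in 1:n)    # arbitrary tags
    println((n, lower, S, upper, upper - lower))          # all three approach 1/3; the gap is 1/n
end
```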
2019-09-17T11:08:31
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2634314/two-equivalent-definitions-riemann-integral", "openwebmath_score": 0.9599655866622925, "openwebmath_perplexity": 74.27353249495317, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713878802045, "lm_q2_score": 0.8577681031721325, "lm_q1q2_score": 0.8451343694917774 }
https://math.stackexchange.com/questions/2934410/limit-of-infinite-sequence-from-partial-sum
# Limit of infinite sequence from partial sum I think there was a rule in Calculus that mentions this, but I am not sure. If I need to find $$\lim_{n \to \infty} a_n$$ and I am only given the nth partial sum: $$S_n =\sum_{k=1}^{n} a_k = f(n)$$ To find $$\lim_{n \to \infty} a_n$$ I just have to find $$\lim_{n \to \infty} f(n)$$ correct? • Hint: $a_n=f(n)-f(n-1)$ – lulu Sep 28 '18 at 14:14 • @lulu so $\lim_{n \to \infty} f(n)-f(n-1)$ I have to find? – glockm15 Sep 28 '18 at 14:18 • Yes. $\quad \quad$. – lulu Sep 28 '18 at 14:31 As noticed by lulu in the comment note that $$S_n-S_{n-1} =\sum_{k=1}^{n} a_k-\sum_{k=1}^{n-1} a_k = a_n\color{red}{+\sum_{k=1}^{n-1} a_k-\sum_{k=1}^{n-1} a_k}=a_n=f(n)-f(n-1)$$ Remark: • that is precisely the reason for which $$a_n\to 0$$ is a necessary condition for the convergence of any series $$\sum_{k=1}^{\infty} a_k$$, indeed $$\lim_{n\to \infty}S_n=\sum_{k=1}^{\infty} a_k=L \implies S_n-S_{n-1} =a_n \to 0$$ • So the limit of my sequence is always 0!? If partial sums exist? – glockm15 Sep 28 '18 at 14:25 • @StackUser With reference to the OP we have that $a_n=f(n)-f(n-1)$ therefore $$\lim_{n \to \infty} a_n=\lim_{n \to \infty} f(n)-f(n-1)$$ – user Sep 28 '18 at 14:27 • @StackUser The remark given refers to a more general fact about the series, it is not strictly related to your specific example. – user Sep 28 '18 at 14:29 • Omm, I am not sure if I get it but, what is happening is that. We are trying to find the last term of $a_n$ which is equivalent to $$\lim_{n \to \infty} a_n$$ and we are doing that by subtracting "the space that is covered" by the partial sums as n goes to infinity to get what the last term will be? – glockm15 Sep 28 '18 at 14:31 • @StackUser Do not consider the remark to solve the question, it is another fact we can discuss later. For the OP we need to find $a_n$ and we can use the $S_n=S_{n-1}+a_n\implies a_n=S_n-S_{n-1}$. Here we are using a finite value for $n$. Once we have $a_n$ we can evaluate the limit. – user Sep 28 '18 at 14:34
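A concrete instance of the identity $a_n = f(n) - f(n-1)$ may help: take $S_n = f(n) = \frac{n}{n+1}$, so $a_n = \frac{1}{n(n+1)} \to 0$ while $S_n \to 1$. A two-line Julia check (the choice of $f$ is mine):

```julia
# Example: S_n = f(n) = n/(n+1), so a_n = f(n) - f(n-1) = 1/(n(n+1)).
f(n) = n / (n + 1)
a(n) = f(n) - f(n - 1)
println([a(n) for n in (1, 10, 100, 1000)])   # tends to 0: the terms vanish
println([f(n) for n in (1, 10, 100, 1000)])   # tends to 1: the partial sums converge to the series' value
```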
2020-01-24T05:08:54
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2934410/limit-of-infinite-sequence-from-partial-sum", "openwebmath_score": 0.8793675899505615, "openwebmath_perplexity": 233.56193179203834, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713831229044, "lm_q2_score": 0.8577681068080749, "lm_q1q2_score": 0.8451343689935071 }
https://math.stackexchange.com/questions/766366/choosing-poker-hand-with-a-specific-card
# choosing poker hand with a specific card How many ways can you choose at least one ace from a deck of cards in a poker hand? I just wanted to double check my answer, would it be $C(52,5) - C(48,5)$? Help is much appreciated. • Yes, your calculation gives the number of hands with at least one Ace. – André Nicolas Apr 23 '14 at 19:32 To get at least one ace is to get 1, 2, 3, or 4 aces. You are selecting the aces among the four aces, the other cards among the $52 - 4 = 48$ non-aces. In all: $$\binom{4}{1} \cdot \binom{48}{4} + \binom{4}{2} \cdot \binom{48}{3} + \binom{4}{3} \cdot \binom{48}{2} + \binom{4}{4} \cdot \binom{48}{1}$$ Or you could say there are $\binom{52}{5}$ hands in all, of those $\binom{48}{5}$ are ace-less, which gives: $$\binom{52}{5} - \binom{48}{5}$$
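Both counts are easy to confirm; a one-off Julia check:

```julia
# "At least one ace": complement count vs. summing over the number of aces.
complement = binomial(52, 5) - binomial(48, 5)
by_cases   = sum(binomial(4, k) * binomial(48, 5 - k) for k in 1:4)
println((complement, by_cases))          # (886656, 886656)
println(complement / binomial(52, 5))    # ≈ 0.3412, the probability of at least one ace
```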
2019-10-19T02:08:30
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/766366/choosing-poker-hand-with-a-specific-card", "openwebmath_score": 0.6503745317459106, "openwebmath_perplexity": 302.99477542400075, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. Yes\n2. Yes", "lm_q1_score": 0.9852713878802045, "lm_q2_score": 0.8577680995361899, "lm_q1q2_score": 0.8451343659093873 }
https://math.stackexchange.com/questions/760193/divergent-subsequence-of-an-unbounded-sequence
# Divergent subsequence of an unbounded sequence Let $(a_n)$ be a sequence of real numbers that is unbounded above. Show that $\exists$ a subsequence $(a_{n_k})_{k \ge 1}$ such that $\lim_{k \rightarrow \infty} a_{n_k} = + \infty$. Working so far: Since $(a_n)$ is not bounded above, this means that there exists no $M \in \mathbb{R}$ such that $a_n \le M$ for all $n \in \mathbb{N}$. In other words, there exist infinitely many $n$'s such that $a_n > M$ for any real $M$. I will prove by construction that there exists a strictly increasing subsequence $(a_{n_k})_{k \ge 1}$ of $(a_n)$ that diverges to $+\infty$. For the first term of the subsequence, begin by picking $n_1$ to be the first index such that $a_{n_1} > 1$. This is possible because by assumption, $(a_n)$ is unbounded. For $k \ge 2$, pick $n_k > n_{k-1}$ to be the first index such that $a_{n_k} > k$ and $a_{n_k} > a_{n_{k-1}}$. I will use a proof by contradiction to show that we can always find such an index $n_k$, $k \ge 2$. If $n_k$, $k \ge 2$, does not exist, then either $a_n \le k$ or $a_n \le a_{n_{k-1}}$ for all $n \ge n_k$. But this contradicts the fact that $(a_n)$ is unbounded above. Hence, such an index $n_k$, $k \ge 2$, always exists. Now I will show that the subsequence $(a_{n_k})_{k \ge 1}$ defined above diverges to $+\infty$, that is, $\lim_{k \rightarrow \infty} a_{n_k} = +\infty$. To do this, we need to show that for all $H > 0$, there exists an $N \in \mathbb{N}$ such that whenever $n_k \ge N$, then $a_{n_k} \ge H$. Question 1) Is my argument to construct the subsequence correct? I thought I might've needed to use induction, but is the contradiction statement correct? 2) I know this may seem obvious, but how exactly do I use the definition of diverging to $+\infty$ to finish my proof? That is, how do I pick $N$? EDIT: So to summarize, the above construction produces a subsequence $(a_{n_k})_{k \ge 1}$ such that for all $n_k < n_{k+1}$, we have $a_{n_k} < a_{n_{k+1}}$ and $a_{n_k} > k$. To show that this subsequence diverges, just pick $k$ large enough so that $k \ge H$ and then set $N = n_k$. construction part is correct and nice. In the construction of required sequence, you have picked $n_k$ which satisfies so and so condition. Use the same index $n_k$ to prove it diverges. • Thanks, please see my edit. Basically we can just pick $k$ large enough so that $k \ge H$ and set $N = n_k$, right? – user40333 Apr 19 '14 at 9:09
2019-08-17T11:24:19
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/760193/divergent-subsequence-of-an-unbounded-sequence", "openwebmath_score": 0.9792577028274536, "openwebmath_perplexity": 57.89671629690563, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713857177955, "lm_q2_score": 0.8577681013541613, "lm_q1q2_score": 0.8451343658457369 }
https://tobydriscoll.net/fnc-julia/krylov/inviter.html
# 8.3. Inverse iteration¶ Power iteration finds only the dominant eigenvalue. We next show that it can be adapted to find any eigenvalue, provided you start with a reasonably good estimate of it. Some simple linear algebra is all that is needed. Theorem 8.3.1 Let $$\mathbf{A}$$ be an $$n\times n$$ matrix with eigenvalues $$\lambda_1,\ldots,\lambda_n$$ (possibly with repeats), and let $$s$$ be a complex scalar. Then: 1. The eigenvalues of the matrix $$\mathbf{A}-s\mathbf{I}$$ are $$\lambda_1-s,\ldots,\lambda_n-s$$. 2. If $$s$$ is not an eigenvalue of $$\mathbf{A}$$, the eigenvalues of the matrix $$(\mathbf{A}-s\mathbf{I})^{-1}$$ are $$(\lambda_1-s)^{-1},\ldots,(\lambda_n-s)^{-1}$$. 3. The eigenvectors associated with the eigenvalues in the first two parts are the same as those of $$\mathbf{A}$$. Proof The equation $$\mathbf{A}\mathbf{v}=\lambda \mathbf{v}$$ implies that $$(\mathbf{A}-s\mathbf{I})\mathbf{v} = \mathbf{A}\mathbf{v} - s\mathbf{I}\mathbf{v} = \lambda\mathbf{v} - s\mathbf{v} = (\lambda-s)\mathbf{v}$$. That proves the first part of the theorem. For the second part, we note that by assumption, $$(\mathbf{A}-s\mathbf{I})$$ is nonsingular, so $$(\mathbf{A}-s\mathbf{I})\mathbf{v} = (\lambda-s) \mathbf{v}$$ implies that $$\mathbf{v} = (\lambda-s) (\mathbf{A}-s\mathbf{I}) \mathbf{v}$$, or $$(\lambda-s)^{-1} \mathbf{v} =(\mathbf{A}-s\mathbf{I})^{-1} \mathbf{v}$$. The discussion above also proves the third part of the theorem. Consider first part 2 of the theorem with $$s=0$$, and suppose that $$\mathbf{A}$$ has a smallest eigenvalue, $|\lambda_n| \ge |\lambda_{n-1}| \ge \cdots > |\lambda_1|.$ Then clearly $|\lambda_1^{-1}| > |\lambda_{2}^{-1}| \ge \cdots \ge |\lambda_n^{-1}|,$ and $$\mathbf{A}^{-1}$$ has a dominant eigenvalue. Hence power iteration on $$\mathbf{A}^{-1}$$ can be used to find the eigenvalue of $$\mathbf{A}$$ closest to zero. For nonzero values of $$s$$, then we suppose there is an ordering (8.3.1)$|\lambda_n-s| \ge \cdots \ge |\lambda_2-s| > |\lambda_1-s|.$ Then it follows that $|\lambda_1-s|^{-1} > |\lambda_{2}-s|^{-1} \ge \cdots \ge |\lambda_n-s|^{-1},$ and power iteration on the matrix $$(\mathbf{A}-s\mathbf{I})^{-1}$$ converges to $$(\lambda_1-s)^{-1}$$, which is easily solved for $$\lambda_1$$ itself. ## Algorithm¶ A literal application of Algorithm 8.2.2 would include the step (8.3.2)$\mathbf{y}_k = (\mathbf{A}-s\mathbf{I})^{-1} \mathbf{x}_k.$ As always, we do not want to explicitly find the inverse of a matrix. Instead we should write this step as the solution of a linear system. Algorithm 8.3.2 :  Inverse iteration Given matrix $$\mathbf{A}$$ and shift $$s$$: 1. Choose $$\mathbf{x}_1$$. 2. For $$k=1,2,\ldots$$, a. Solve for $$\mathbf{y}_k$$ in (8.3.3)$(\mathbf{A}-s\mathbf{I}) \mathbf{y}_k =\mathbf{x}_k .$ b. Find $$m$$ such that $$|y_{k,m}|=\|{\mathbf{y}_k} \|_\infty$$. c. Set $$\alpha_k = \dfrac{1}{y_{k,m}}$$ and $$\,\beta_k = s + \dfrac{x_{k,m}}{y_{k,m}}$$. d. Set $$\mathbf{x}_{k+1} = \alpha_k \mathbf{y}_k$$. Note that in Algorithm 8.2.2, we used $$y_{k,m}/x_{k,m}$$ as an estimate of the dominant eigenvalue of $$\mathbf{A}$$. Here, that ratio is an estimate of $$(\lambda_1-s)^{-1}$$, and solving for $$\lambda_1$$ gives the $$\beta_k$$ in Algorithm 8.3.2. Each pass of inverse iteration requires the solution of a linear system of equations with the matrix $$\mathbf{B}=\mathbf{A}-s\mathbf{I}$$. This solution might use methods we consider later in this chapter. Here, we use (sparse) PLU factorization and hope for the best. 
Since the matrix $$\mathbf{B}$$ is constant, the factorization needs to be done only once for all iterations. The details are in Function 8.3.3. Function 8.3.3 :  inviter Shifted inverse iteration for the closest eigenvalue 1""" 2 inviter(A,s,numiter) 3 4Perform numiter inverse iterations with the matrix A and shift 5s, starting from a random vector. Returns a vector of 6eigenvalue estimates and the final eigenvector approximation. 7""" 8function inviter(A,s,numiter) 9 n = size(A,1) 10 x = normalize(randn(n),Inf) 11 β = zeros(numiter) 12 fact = lu(A - s*I) 13 for k in 1:numiter 14 y = fact\x 15 normy,m = findmax(abs.(y)) 16 β[k] = x[m]/y[m] + s 17 x = y/y[m] 18 end 19 return β,x 20end ## Convergence¶ The convergence is linear, at a rate found by reinterpreting (8.2.9) with $$(\mathbf{A}-s\mathbf{I})^{-1}$$ in place of $$\mathbf{A}$$: (8.3.4)$\frac{\beta_{k+1} - \lambda_1}{\beta_{k} - \lambda_1} \rightarrow \frac{ \lambda_1 - s } {\lambda_2 - s}\quad \text{ as } \quad k\rightarrow \infty,$ with the eigenvalues ordered as in (8.3.1). Thus, the convergence is best when the shift $$s$$ is close to the target eigenvalue $$\lambda_1$$, specifically when it is much closer to that eigenvalue than to any other. Demo 8.3.4 We set up a $$5\times 5$$ triangular matrix with prescribed eigenvalues on its diagonal. λ = [1,-0.75,0.6,-0.4,0] # Make a triangular matrix with eigenvalues on the diagonal. A = triu(ones(5,5),1) + diagm(λ) 5×5 Matrix{Float64}: 1.0 1.0 1.0 1.0 1.0 0.0 -0.75 1.0 1.0 1.0 0.0 0.0 0.6 1.0 1.0 0.0 0.0 0.0 -0.4 1.0 0.0 0.0 0.0 0.0 0.0 We run inverse iteration with the shift $$s=0.7$$ and take the final estimate as our “exact” answer to observe the convergence. s = 0.7 β,x = FNC.inviter(A,s,30) eigval = β[end] 0.5999999999999984 As expected, the eigenvalue that was found is the one closest to 0.7. The convergence is again linear. err = @. abs(eigval-β) plot(0:28,err[1:end-1],m=:o, title="Convergence of inverse iteration", xlabel=L"k",yaxis=(L"|\lambda_3-\beta_k|",:log10,[1e-16,1])) The observed linear convergence rate is found from the data. @show observed_rate = err[22]/err[21]; observed_rate = err[22] / err[21] = 0.33326532173735623 We reorder the eigenvalues to enforce (8.3.1). The sortperm function returns the index permutation needed to sort the given vector, rather than the sorted vector itself. λ = λ[ sortperm(abs.(λ.-s)) ] 5-element Vector{Float64}: 0.6 1.0 0.0 -0.4 -0.75 Hence the theoretical convergence rate is @show theoretical_rate = (λ[1]-s) / (λ[2]-s); theoretical_rate = (λ[1] - s) / (λ[2] - s) = -0.3333333333333332 ## Dynamic shifting¶ There is a clear opportunity for positive feedback in Algorithm 8.3.2. The convergence rate of inverse iteration improves as the shift gets closer to the true eigenvalue—and the algorithm computes improving eigenvalue estimates! If we update the shift to $$s=\beta_k$$ after each iteration, the convergence accelerates. You are asked to implement this algorithm in Exercise 6. Let’s analyze the resulting convergence. If the eigenvalues are ordered by distance to $$s$$, then the convergence is linear with rate $$|\lambda_1-s|/|\lambda_2-s|$$. As $$s\to\lambda_1$$, the change in the denominator is negligible. So if the error $$(\lambda_1-s)$$ is $$\epsilon$$, then the error in the next estimate is reduced by a factor $$O(\epsilon)$$. That is, $$\epsilon$$ becomes $$O(\epsilon^2)$$, which is quadratic convergence. Demo 8.3.5 λ = [1,-0.75,0.6,-0.4,0] # Make a triangular matrix with eigenvalues on the diagonal. 
A = triu(ones(5,5),1) + diagm(λ) 5×5 Matrix{Float64}: 1.0 1.0 1.0 1.0 1.0 0.0 -0.75 1.0 1.0 1.0 0.0 0.0 0.6 1.0 1.0 0.0 0.0 0.0 -0.4 1.0 0.0 0.0 0.0 0.0 0.0 We begin with a shift $$s=0.7$$, which is closest to the eigenvalue 0.6. s = 0.7 x = ones(5) y = (A-s*I)\x β = x[1]/y[1] + s 0.7034813925570228 Note that the result is not yet any closer to the targeted 0.6. But we proceed (without being too picky about normalization here). s = β x = y/y[1] y = (A-s*I)\x β = x[1]/y[1] + s 0.5612761406172997 Still not much apparent progress. However, in just a few more iterations the results are dramatically better. for k in 1:4 s = β x = y/y[1] y = (A-s*I)\x @show β = x[1]/y[1] + s end β = x[1] / y[1] + s = 0.5964312884753865 β = x[1] / y[1] + s = 0.5999717091820104 β = x[1] / y[1] + s = 0.5999999978556353 β = x[1] / y[1] + s = 0.6 There is a price to pay for this improvement. The matrix of the linear system to be solved, $$(\mathbf{A}-s\mathbf{I})$$, now changes with each iteration. That means that we can no longer do just one LU factorization for the entire iteration. The speedup in convergence usually makes this tradeoff worthwhile, however. In practice power and inverse iteration are not as effective as the algorithms used by eigs and based on the mathematics described in the rest of this chapter. However, inverse iteration can be useful for turning an eigenvalue estimate into an eigenvector estimate. ## Exercises¶ 1. ⌨ Use Function 8.3.3 to perform 10 iterations for the given matrix and shift. Compare the results quantitatively to the convergence given by (8.3.4). (a) $$\mathbf{A} = \begin{bmatrix} 1.1 & 1 \\ 0 & 2.1 \end{bmatrix}, \; s = 1 \qquad$$ (b) $$\mathbf{A} = \begin{bmatrix} 1.1 & 1 \\ 0 & 2.1 \end{bmatrix}, \; s = 2\qquad$$ (c) $$\mathbf{A} = \begin{bmatrix} 1.1 & 1 \\ 0 & 2.1 \end{bmatrix}, \; s = 1.6\qquad$$ (d) $$\mathbf{A} = \begin{bmatrix} 2 & 1 \\ 1 & 0 \end{bmatrix}, \; s = -0.33 \qquad$$ (e) $$\mathbf{A} = \begin{bmatrix} 6 & 5 & 4 \\ 5 & 4 & 3 \\ 4 & 3 & 2 \end{bmatrix}, \; s = 0.1$$ 2. ✍ Let $$\mathbf{A} = \displaystyle \begin{bmatrix} 1.1 & 1 \\ 0 & 2.1 \end{bmatrix}.$$ Given the starting vector $$\mathbf{x}_1=[1,1]$$, find the vector $$\mathbf{x}_2$$ for the following shifts. (a) $$s=1\quad$$ (b) $$s=2\quad$$ (c) $$s=1.6$$ 3. ✍ Why is it a bad idea to use unshifted inverse iteration with the matrix $$\displaystyle \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$$? Does the shift $$s=-1$$ improve matters? 4. ✍ When the shift $$s$$ is very close to an eigenvalue of $$\mathbf{A}$$, the matrix $$\mathbf{A}-s\mathbf{I}$$ is close to a singular matrix. But then (8.3.3) is a linear system with a badly conditioned matrix, which should create a lot of error in the numerical solution for $$\mathbf{y}_k$$. However, it happens that the error is mostly in the direction of the eigenvector we are looking for, as the following toy example illustrates. Prove that $$\displaystyle \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}$$ has an eigenvalue at zero with associated eigenvector $$\mathbf{v}=[-1,1]^T$$. Suppose this matrix is perturbed slightly to $$\displaystyle \mathbf{A} = \begin{bmatrix} 1 & 1 \\ 0 & \epsilon \end{bmatrix}$$, and that $$\mathbf{x}_k=[1,1]$$ in (8.3.3). Show that once $$\mathbf{y}_k$$ is normalized by its infinity norm, the result is within $$\epsilon$$ of a multiple of $$\mathbf{v}$$. 5. ⌨ (Continuation of Exercise 8.2.3.) This exercise concerns the $$n^2\times n^2$$ sparse matrix defined by FNC.poisson(n) for integer $$n$$. 
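Here is a minimal sketch of the dynamically shifted variant described above (essentially what Exercise 6 below asks for), assuming the same interface as Function 8.3.3; the name dyninviter is mine. Because the shift changes, the matrix must be re-factorized (here, simply re-solved) at every step.

```julia
using LinearAlgebra

# Inverse iteration with a dynamically updated shift: after each step the shift
# is replaced by the newest eigenvalue estimate. Sketch only; it mirrors Function 8.3.3,
# but A - s*I changes every pass, so the system is re-solved each time.
function dyninviter(A, s, numiter)
    n = size(A, 1)
    x = normalize(randn(n), Inf)
    β = zeros(numiter)
    for k in 1:numiter
        y = (A - s*I) \ x
        normy, m = findmax(abs.(y))
        β[k] = x[m] / y[m] + s
        x = y / y[m]
        s = β[k]                    # dynamic shift update
    end
    return β, x
end
```

Run on the matrix of Demo 8.3.5 with an initial shift of 0.7, the estimates should settle on 0.6 within a handful of iterations, matching the hand-rolled demo above, at the cost of one factorization per iteration.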
It represents a lumped model of a vibrating square membrane held fixed around the edges. (a) The eigenvalues of $$\mathbf{A}$$ closest to zero are approximately squares of the frequencies of vibration for the membrane. Using eigs, find the eigenvalue $$\lambda_m$$ closest to zero for $$n=10,15,20,25$$. (b) For each $$n$$ in part (a), apply 50 steps of Function 8.3.3 with zero shift. On one graph, plot the four convergence curves $$|\beta_k-\lambda_m|$$ using a semi-log scale. (c) Let v be the eigenvector (second output) found by Function 8.3.3 for $$n=25$$. Visualize the vibration mode of the membrane using surface(reshape(v,n,n)) 6. ⌨ This problem explores the use of dynamic shifting to accelerate the inverse iteration. (a) Modify Function 8.3.3 to change the value of the shift $$s$$ to be the most recently computed value in the vector $$\beta$$. Note that the matrix B must also change with each iteration, and the LU factorization cannot be done just once. (b) Define a matrix with eigenvalues at $$k^2$$ for $$k=1,\ldots,100$$ via A = diagm(0=>(1:100).^2,1=>rand(99)) Using an initial shift of $$s=920$$, apply the dynamic inverse iteration. Determine which eigenvalue was found and make a table of the log10 of the errors in the iteration as a function of iteration number. (These should approximately double, until machine precision is reached, due to quadratic convergence.) (c) Repeat part (b) using a different initial shift of your choice.
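Exercise 6 asks for a Julia implementation of the dynamic shift based on Function 8.3.3. As a purely illustrative companion, here is a minimal NumPy sketch of the same idea; the test matrix, starting shift, and iteration count are arbitrary choices, and the `try/except` only guards against the shifted matrix becoming exactly singular once the shift lands on an eigenvalue.

```python
import numpy as np

def inviter_dynamic(A, s, numiter):
    """Inverse iteration that resets the shift to the latest eigenvalue
    estimate after every step, giving quadratic convergence."""
    n = A.shape[0]
    x = np.random.randn(n)
    x = x / np.max(np.abs(x))                  # infinity-norm normalization
    beta = []
    for k in range(numiter):
        try:
            # The shifted matrix changes every step, so it is refactored every step.
            y = np.linalg.solve(A - s * np.eye(n), x)
        except np.linalg.LinAlgError:
            break                              # shift hit an eigenvalue exactly
        m = np.argmax(np.abs(y))
        beta.append(x[m] / y[m] + s)
        x = y / y[m]
        s = beta[-1]                           # dynamic shift update
    return np.array(beta), x

# Triangular test matrix with known eigenvalues on its diagonal.
lam = np.array([1.0, -0.75, 0.6, -0.4, 0.0])
A = np.triu(np.ones((5, 5)), 1) + np.diag(lam)
beta, x = inviter_dynamic(A, 0.7, 8)
print(beta)        # the estimates lock onto 0.6 within a few iterations
```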
https://math.stackexchange.com/questions/3074770/six-card-hand-with-2-cards-of-each-suit-did-i-calculate-this-correctly
# Six-card hand with $2$ cards of each suit - did I calculate this correctly? Suppose you have a deck of $$36$$ cards - $$3$$ different suits, $$12$$ cards per suit. If you draw a $$6$$-card hand, what is the chance of a hand with $$2$$ cards of each suit ($$2-2-2$$)? I would do $$\frac{\left(\binom{12}2\right)^3}{\binom{36}6} = 0.1476$$ But this seems very small. Am I doing this correctly? For a hand of $$3-2-1$$ ($$3$$ of one suit, $$2$$ of another suit, $$1$$ of the last), I would do $$\frac{\binom{12}3 \cdot \binom{12}2 \cdot \binom{12}1}{\binom{36}6} = 0.0895$$ Again, this seems quite small. Are these correct? • Welcome to MathSE. This tutorial explains how to typeset mathematics on this site. – N. F. Taussig Jan 15 at 18:48 Choose the suit from which three cards will be drawn, choose three of the twelve cards of that suit, choose from which of the two remaining suits two cards will be drawn, choose two of the twelve cards of that suit. The remaining card must be drawn from the remaining suit. Choose one of the twelve cards of that suit. $$\frac{\binom{3}{1}\binom{12}{3}\binom{2}{1}\binom{12}{2}\binom{1}{1}\binom{12}{1}}{\binom{36}{6}} = \frac{3!\binom{12}{3}\binom{12}{2}\binom{12}{1}}{\binom{36}{6}}$$ where the factor of $$3!$$ accounts for the number of ways we can select from which suit three cards are drawn, from which suit two cards are drawn, and from which suit one card is drawn. If we disregard the order in which the terms are listed (so $$3,2,1$$ is the same as $$2,1,3$$), I count at least $$7$$ ways to add three non-negative integers with sum $$6.$$ And you say you observe one of them with probability greater than $$\frac17.$$ So this would not be a particularly unlikely combination at that rate. For the $$3,2,1$$ hand you presumably don’t care which suit is the one with $$3$$ cards or which has $$2$$ cards. If you do care then you already have the correct probability (and there are several other events with that exact same probability); otherwise you should multiply by the number of ways you can select different suits for the $$3$$-card, $$2$$-card, and $$1$$-card suits. If you still have doubts you can work out the other five distributions of numbers of cards of each suit and check that the sum of all probabilities is $$1.$$
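Both computations are easy to sanity-check numerically. Here is a minimal Python sketch (not part of the original question or answer) that evaluates the closed forms with `math.comb` and compares them to a small simulation; the sample size is arbitrary.

```python
from math import comb
import random
from collections import Counter

# Exact probabilities from the counting argument above.
total = comb(36, 6)
p_222 = comb(12, 2) ** 3 / total                              # two cards of each suit
p_321_fixed = comb(12, 3) * comb(12, 2) * comb(12, 1) / total # 3-2-1 with suits assigned
p_321_any = 6 * p_321_fixed                                   # times 3! suit assignments

print(f"P(2-2-2)              = {p_222:.4f}")        # ≈ 0.1476
print(f"P(3-2-1, fixed suits) = {p_321_fixed:.4f}")  # ≈ 0.0895
print(f"P(3-2-1, any suits)   = {p_321_any:.4f}")    # ≈ 0.5368

# Monte Carlo sanity check: deal many random 6-card hands.
deck = [s for s in range(3) for _ in range(12)]      # 3 suits, 12 cards each
N = 100_000
hits_222 = hits_321 = 0
for _ in range(N):
    counts = sorted(Counter(random.sample(deck, 6)).values())
    if counts == [2, 2, 2]:
        hits_222 += 1
    elif counts == [1, 2, 3]:
        hits_321 += 1
print(f"simulated P(2-2-2) ≈ {hits_222 / N:.4f}, P(3-2-1) ≈ {hits_321 / N:.4f}")
```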
https://mathhelpboards.com/threads/evaluate-%E2%8C%8Ax%E2%8C%8B-%E2%8C%8Ay%E2%8C%8B-%E2%8C%8Az%E2%8C%8B.7607/
# Evaluate ⌊x⌋ + ⌊y⌋ + ⌊z⌋ #### anemone ##### MHB POTW Director Staff member If $x, y, z$ are real numbers such that $x+y+z=6$, $xy+yz+xz=9$, find the sum of all possible values of the expression $\lfloor x\rfloor+\lfloor y\rfloor+\lfloor z\rfloor$. #### Klaas van Aarsen ##### MHB Seeker Staff member Re: Evaluate ⌊x⌋+⌊y⌋+⌊z⌋ If $x, y, z$ are real numbers such that $x+y+z=6$, $xy+yz+xz=9$, find the sum of all possible values of the expression $\lfloor x\rfloor+\lfloor y\rfloor+\lfloor z\rfloor$. Since the floor of a real number is at most, but not quite, 1 point lower than the original number, it follows that: $$3 < ⌊x⌋+⌊y⌋+⌊z⌋ \le 6$$ $$4 \le ⌊x⌋+⌊y⌋+⌊z⌋ \le 6$$ Working out the equations for instance for x=0, x=ε, and x=1-ε (where ε > 0 is an arbitrary small number), shows that the numbers 4, 5, and 6 are all possible. Therefore the sum of all possible values of ⌊x⌋+⌊y⌋+⌊z⌋ is 4+5+6=15. #### Opalg ##### MHB Oldtimer Staff member Re: Evaluate ⌊x⌋+⌊y⌋+⌊z⌋ Let $k=xyz$. The polynomial with roots $x,y,z$ is then $\lambda^3 - 6\lambda^2 + 9\lambda - k.$ You can see from the graph that the only values of $k$ for which the equation $k = \lambda^3 - 6\lambda^2 + 9\lambda$ has three real roots are $0\leqslant k\leqslant4.$ As $k$ increases from $0$ to $4$, we can tabulate the values of the roots as follows, where the $+$ and $-$ subscripts mean addtition or subtraction of a small amount (less than $1/2$). $$\begin{array}{c|c|c|c}k&x,y,z & \lfloor x\rfloor,\, \lfloor y\rfloor,\, \lfloor z\rfloor & \lfloor x\rfloor+\lfloor y\rfloor+\lfloor z\rfloor \\ \hline 0& 0,\,3,\,3 &0,\,3,\,3 & 6 \\ 1 & 0_+,\,3_-,\,3_+ & 0,\,2,\,3 & 5 \\ 2 & 0_+,\,2,\,4_- & 0,\,2,\,3 & 5 \\ 3 & 0_+,\,2_-,\,4_- & 0,\,1,\,3 & 4 \\ 4& 1,\,1,\,4 & 1,\,1,\,4 & 6 \end{array}$$ The only possible values for $\lfloor x\rfloor+\lfloor y\rfloor+\lfloor z\rfloor$ are $4$, $5$ and $6$. If I read the question correctly, it asks for the sum of those values, which is $15.$ Edit. I like Serena beat me by just seconds! #### Klaas van Aarsen ##### MHB Seeker Staff member Re: Evaluate ⌊x⌋+⌊y⌋+⌊z⌋ Edit. I like Serena beat me by just seconds! We can only see that we posted in the same minute. Let $x$ be the time I like Serena posted in minutes, and let $y$ be the time Opalg posted. Then we know that $⌊x⌋=⌊y⌋$ and also that $x<y$. Therefore $0 < y-x < 60 \text{ s}$. Note that a smaller amount is more likely, since with higher amounts the probability increases that we'd have posted in different minutes. The leaves the question what the expected time difference is. #### anemone ##### MHB POTW Director Staff member Re: Evaluate ⌊x⌋+⌊y⌋+⌊z⌋ Hey I like Serena and Opalg, Thank you so so much for participating! At first I thought that folks are jaded with me already... getting bored because I posted almost a challenge a day here without fail. To be completely candid, sometimes, I even ask Mark if it's appropriate for me to keep posting! Solution which I found along with the problem: $6=x+y+z$ $3=(x-1)+(y-1)+(z-1)<\lfloor x \rfloor+\lfloor y \rfloor+\lfloor z \rfloor \le \lfloor x+y+z \rfloor=6$ $\therefore \lfloor x \rfloor+\lfloor y \rfloor+\lfloor z \rfloor=4, 5, 6$ and hence $\lfloor x \rfloor+\lfloor y \rfloor+\lfloor z \rfloor=4+5+6=15$. We can only see that we posted in the same minute. Let $x$ be the time I like Serena posted in minutes, and let $y$ be the time Opalg posted. Then we know that $⌊x⌋=⌊y⌋$ and also that $x<y$. Therefore $0 < y-x < 60 \text{ s}$. 
Note that a smaller amount is more likely, since with higher amounts the probability increases that we'd have posted in different minutes. That leaves the question of what the expected time difference is. I laughed out loud (more than once) when I read this, I like Serena; you have a wonderful personality and a sense of humor!
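As a numerical footnote to Opalg's table (not part of the thread): every admissible triple comes from some $k = xyz \in [0,4]$, so one can sweep $k$, take the roots of $\lambda^3-6\lambda^2+9\lambda-k$, and confirm that the floor sum only ever equals 4, 5 or 6. A small NumPy sketch, with rounding added only to suppress floating-point noise at the double-root cases:

```python
import numpy as np

seen = set()
for k in np.linspace(0.0, 4.0, 4001):
    # For 0 <= k <= 4 all three roots of λ^3 - 6λ^2 + 9λ - k are real;
    # rounding suppresses numerical noise at the double-root cases k = 0 and k = 4.
    r = np.round(np.roots([1.0, -6.0, 9.0, -k]).real, 6)
    seen.add(int(np.floor(r).sum()))

print(sorted(seen), "-> sum of possible values:", sum(seen))   # [4, 5, 6] -> 15
```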
https://artofproblemsolving.com/wiki/index.php?title=2019_AMC_8_Problems/Problem_3&diff=prev&oldid=113346
# Difference between revisions of "2019 AMC 8 Problems/Problem 3" ## Problem 3 Which of the following is the correct order of the fractions $\frac{15}{11},\frac{19}{15},$ and $\frac{17}{13},$ from least to greatest? $\textbf{(A) }\frac{15}{11}< \frac{17}{13}< \frac{19}{15} \qquad\textbf{(B) }\frac{15}{11}< \frac{19}{15}<\frac{17}{13} \qquad\textbf{(C) }\frac{17}{13}<\frac{19}{15}<\frac{15}{11} \qquad\textbf{(D) } \frac{19}{15}<\frac{15}{11}<\frac{17}{13} \qquad\textbf{(E) } \frac{19}{15}<\frac{17}{13}<\frac{15}{11}$ ## Solution 1 Consider subtracting 1 from each of the fractions. Our new fractions would then be $\frac{4}{11}, \frac{4}{15},$ and $\frac{4}{13}$. Since $\frac{4}{15}<\frac{4}{13}<\frac{4}{11}$, it follows that the answer is $\boxed{\textbf{(E)}\frac{19}{15}<\frac{17}{13}<\frac{15}{11}}$ -will3145 ## Solution 2 We take a common denominator: $$\frac{15}{11},\frac{19}{15}, \frac{17}{13} = \frac{15\cdot 15 \cdot 13}{11\cdot 15 \cdot 13},\frac{19 \cdot 11 \cdot 13}{15\cdot 11 \cdot 13}, \frac{17 \cdot 11 \cdot 15}{13\cdot 11 \cdot 15} = \frac{2925}{2145},\frac{2717}{2145},\frac{2805}{2145}.$$ Since $2717<2805<2925$ it follows that the answer is $\boxed{\textbf{(E)}\frac{19}{15}<\frac{17}{13}<\frac{15}{11}}$. -xMidnightFirex ~ dolphin7 - I took your idea and made it an explanation. ## Solution 3 When $\frac{x}{y}>1$ and $z>0$, $\frac{x+z}{y+z}<\frac{x}{y}$. Hence, the answer is $\boxed{\textbf{(E)}\frac{19}{15}<\frac{17}{13}<\frac{15}{11}}$. ~ ryjs This is also similar to Problem 20 on the AMC 2012.
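A one-line check of the ordering with exact rational arithmetic (an aside, not part of the wiki page):

```python
from fractions import Fraction

fracs = [Fraction(15, 11), Fraction(19, 15), Fraction(17, 13)]
print(sorted(fracs))   # [Fraction(19, 15), Fraction(17, 13), Fraction(15, 11)], i.e. answer (E)
```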
https://continuous-time-mcs.quantecon.org/poisson.html
# 2. Poisson Processes¶ ## 2.1. Overview¶ Counting processes count the number of “arrivals” occurring by a given time (e.g., the number of visitors to a website, the number of customers arriving at a restaurant, etc.) Counting processes become Poisson processes when the time interval between arrivals is IID and exponentially distributed. Exponential distributions and Poisson processes have deep connections to continuous time Markov chains. For example, Poisson processes are one of the simplest nontrivial examples of a continuous time Markov chain. In addition, when continuous time Markov chains jump between states, the time between jumps is necessarily exponentially distributed. In discussing Poisson processes, we will use the following imports: import numpy as np import matplotlib.pyplot as plt import quantecon as qe from numba import njit from scipy.special import factorial, binom ## 2.2. Counting Processes¶ ### 2.2.1. Jumps and Counts¶ Let $$(J_k)$$ be an increasing sequence of nonnegative random variables satisfying $$J_k \to \infty$$ with probability one. For example, $$J_k$$ might be the time the $$k$$-th customer arrives at a shop. Then (2.1)$N_t := \sum_{k \geq 0} k \mathbb{1} \{ J_k \leq t < J_{k+1} \}$ is the number of customers that have visited by time $$t$$. The next figure illustrate the definition of $$N_t$$ for a given jump sequence $$\{J_k\}$$. Ks = 0, 1, 2, 3 Js = 0, 0.8, 1.8, 2.1, 3 n = len(Ks) fig, ax = plt.subplots() ax.plot(Js[:-1], Ks, 'o') ax.hlines(Ks, Js[:-1], Js[1:], label='$N_t$') ax.vlines(Js[:-1], (0, Ks[0], Ks[1], Ks[2]), Ks, alpha=0.25) ax.set(xticks=Js[:-1], xticklabels=[f'$J_{k}$' for k in range(n)], yticks=(0, 1, 2, 3), xlabel='$t$') ax.legend(loc='lower right') plt.show() An alternative but equivalent definition is $N_t := \max \{k \geq 0 \,|\, J_k \leq t \}$ As a function of $$t$$, the process $$N_t$$ is called a counting process. The jump times $$(J_k)$$ are sometimes called arrival times and the intervals $$J_k - J_{k-1}$$ are called wait times or holding times. ### 2.2.2. Exponential Holding Times¶ A Poisson process is a counting process with independent exponential holding times. In particular, suppose that the arrival times are given by $$J_0 = 0$$ and $J_k := W_1 + \cdots W_k$ where $$(W_i)$$ are IID exponential with some fixed rate $$\lambda$$. Then the associated counting process $$(N_t)$$ is called a Poisson process with rate $$\lambda$$. The rationale behind the name is that, for each $$t > 0$$, the random variable $$N_t$$ has the Poisson distribution with parameter $$t \lambda$$. In other words, (2.2)$\PP\{N_t = k\} = e^{-t \lambda} \frac{(t \lambda)^k }{k!} \qquad (k = 0, 1, \ldots)$ For example, since $$N_t = 0$$ if and only if $$W_1 > t$$, we have $\PP\{N_t =0\} = \PP\{W_1 > t\} = e^{-t \lambda}$ and the right hand side agrees with (2.2) when $$k=0$$. This sets up a proof by induction, which is time consuming but not difficult — the details can be found in $$\S29$$ of . Another way to show that $$N_t$$ is Poisson with rate $$\lambda$$ is to appeal to Lemma 1.1. We observe that $\PP\{N_t \leq n\} = \PP\{J_{n+1} > t\} = 1 - \PP\{J_{n+1} \leq t\}$ Inserting the expression for the Erlang CDF in (1.5) with shape $$n+1$$ and rate $$\lambda$$, we obtain $\PP\{N_t \leq n\} = \sum_{k=0}^{n} \frac{(t \lambda )^k}{k!} e^{-t \lambda}$ This is the (integer valued) CDF for the Poisson distribution with parameter $$t \lambda$$. An exercise at the end of the lecture asks you to verify that $$N_t$$ is Poisson-$$(t \lambda )$$ informally via simulation. 
The next figure shows one realization of a Poisson process $$(N_t)$$, with jumps at each new arrival. np.random.seed(1234) T = 5 Ws = np.random.exponential(size=T) Js = np.cumsum(Ws) Ys = np.arange(T) fig, ax = plt.subplots() ax.plot(np.insert(Js, 0, 0)[:-1], Ys, 'o') ax.hlines(Ys, np.insert(Js, 0, 0)[:-1], Js, label='$N_t$') ax.vlines(Js[:-1], Ys[:-1], Ys[1:], alpha=0.25) ax.set(xticks=[], yticks=range(Ys.max()+1), xlabel='time') ax.grid(lw=0.2) ax.legend(loc='lower right') plt.show() ## 2.3. Stationary Independent Increments¶ One of the defining features of a Poisson process is that it has stationary and independent increments. This is due to the memoryless property of exponentials. It means that 1. the variables $$\{N_{t_{i+1}} - N_{t_i}\}_{i \in I}$$ are independent for any strictly increasing finite sequence $$(t_i)_{i \in I}$$ and 2. the distribution of $$N_{t+h} - N_t$$ depends on $$h$$ but not $$t$$. A detailed proof can be found in Theorem 2.4.3 of . Instead of repeating this, we provide some intuition from a discrete approximation. In the discussion below, we use the following well known fact: If $$(\theta_n)$$ is a sequence such that $$n \theta_n$$ converges, then (2.3)$\text{Binomial}(n, \theta_n) \approx \text{Poisson}(n \theta_n) \quad \text{for large } n$ (The exercises ask you to examine this claim visually.) That is, we fix small $$h > 0$$ and let $$t_i := ih$$ for all $$i \in \ZZ_+$$. Let $$(V_i)$$ be IID binary random variables with $$\PP\{V_i = 1\} = h \lambda$$ for some $$\lambda > 0$$. • either one or zero customers visits a shop at each $$t_i$$. • $$V_i = 1$$ means that a customer visits at time $$t_i$$. • Visits occur with probability $$h \lambda$$, which is proportional to the length of the interval between grid points. We learned that the wait time until the first visit is approximately exponential with rate $$t \lambda$$. Since $$(V_i)$$ is IID, the same is true for the second wait time and so on. Moreover, these wait times are independent, since they depend on separate subsets of $$(V_i)$$. Let $$\hat N_t$$ count the number of visits by time $$t$$, as shown in the next figure. ($$V_i = 1$$ is indicated by a vertical line at $$t_i = i h$$.) fig, ax = plt.subplots() np.random.seed(1) T = 10 p = 0.25 B = np.random.uniform(size=T) < p N = np.cumsum(B) m = N[-1] # max of N t_grid = np.arange(T) t_ticks = [f'$t_{i}$' for i in t_grid] ax.set_yticks(range(m+1)) ax.set_xticks(t_grid) ax.set_xticklabels(t_ticks, fontsize=12) ax.step(t_grid, np.insert(N, 0, 0)[:-1], label='$\hat N_t$') for i in t_grid: if B[i]: ax.vlines((i,), (0,), (m,), ls='--', lw=0.5) ax.legend(loc='center right') plt.show() We expect from the discussion above that $$(\hat N_t)$$ approximates a Poisson process. This intuition is correct because, fixing $$t$$, letting $$k := \max\{i \in \ZZ_+ \,:\, t_i \leq t\}$$ and applying (2.3), we have $\hat N_t = \sum_{i=1}^k V_i \sim \text{Binomial}(k, h \lambda) \approx \text{Poisson}(k h \lambda )$ Using the fact that $$kh = t_k \approx t$$ as $$h \to 0$$, we see that $$\hat N_t$$ is approximately Poisson with rate $$t \lambda$$, just as we expected. This approximate construction of a Poisson process helps illustrate the property of stationary independent increments. For example, if we fix $$s, t$$, then $$\hat N_{s + t} - \hat N_s$$ is the number of visits between $$s$$ and $$s+t$$, so that $\hat N_{s+t} - \hat N_s = \sum_i V_i \mathbb 1\{ s \leq t_i < s + t \}$ Suppose there are $$k$$ grid points between $$s$$ and $$s+t$$, so that $$t \approx kh$$. 
Then $\hat N_{s+t} - \hat N_s \sim \text{Binomial}(k, h \lambda ) \approx \text{Poisson}(k h \lambda ) \approx \text{Poisson}(t\lambda)$ This illustrates the idea that, for a Poisson process $$(N_t)$$, we have $N_{s+t} - N_s \sim \text{Poisson}(t\lambda)$ In particular, increments are stationary (the distribution depends on $$t$$ but not $$s$$). The approximation also illustrates independence of increments, since, in the approximation, increments depend on separate subsets of $$(V_i)$$. ## 2.4. Uniqueness¶ What other counting processes have stationary independent increments? Theorem 2.1 (Characterization of Poisson Processes) If $$(M_t)$$ is a stochastic process supported on $$\ZZ_+$$ and starting at 0 with the property that its increments are stationary and independent, then $$(M_t)$$ is a Poisson process. In particular, there exists a $$\lambda > 0$$ such that $M_{s + t} - M_s \sim \text{Poisson}(t\lambda)$ for any $$s, t$$. The proof is similar to our earlier proof that the exponential distribution is the only memoryless distribution. Details can be found in Section 6.2 of or Theorem 2.4.3 of . ### 2.4.1. The Restarting Property¶ An important consequence of stationary independent increments is the restarting property, which means that, when simulating, we can freely stop and restart a Poisson process at any time: Theorem 2.2 (Poisson Processes can be Paused and Restarted) If $$(N_t)$$ is a Poisson process, $$s > 0$$ and $$(M_t)$$ is defined by $$M_t = N_{s+t} - N_s$$ for $$t \geq 0$$, then $$(M_t)$$ is a Poisson process independent of $$(N_r)_{r \leq s}$$. Proof. Independence of $$(M_t)$$ and $$(N_r)_{r \leq s}$$ follows from indepenence of the increments of $$(N_t)$$. In view of the uniqueness statement above, we can verify that $$(M_t)$$ is a Poisson process by showing that $$(M_t)$$ starts at zero, takes values in $$\ZZ_+$$ and has stationary independent increments. It is clear that $$(M_t)$$ starts at zero and takes values in $$\ZZ_+$$. In addition, if we take any $$t < t'$$, then $M_{t'} - M_t = N_{s+t'} - N_{s + t} \sim \text{Poisson}((t' - t) \lambda)$ Hence $$(M_t)$$ has stationary increments and, using the relation $$M_{t'} - M_t = N_{s+t'} - N_{s + t}$$ again, the increments are independent as well. We conclude that $$(N_{s+t} - N_s)_{t \geq 0}$$ is indeed a Poisson process independent of $$(N_r)_{r \leq s}$$. ## 2.5. Exercises¶ Exercise 2.1 Fix $$\lambda > 0$$ and draw $$\{W_i\}$$ as IID exponentials with rate $$\lambda$$. Set $$J_n := W_1 + \cdots W_n$$ with $$J_0 = 0$$ and $$N_t := \sum_{n \geq 0} n \mathbb 1\{ J_n \leq t < J_{n+1} \}$$. Provide a visual test of the claim that $$N_t$$ is Poisson with parameter $$t \lambda$$. Do this by fixing $$t = T$$, generating many independent draws of $$N_T$$ and comparing the empirical distribution of the sample with a Poisson distribution with rate $$T \lambda$$. Try first with $$\lambda = 0.5$$ and $$T=10$$. Exercise 2.2 In the lecture we used the fact that $$\Binomial(n, \theta) \approx \Poisson(n \theta)$$ when $$n$$ is large and $$\theta$$ is small. Investigate this relationship by plotting the distributions side by side. Experiment with different values of $$n$$ and $$\theta$$. ## 2.6. Solutions¶ Note code is currently not supported in sphinx-exercise so code-cell solutions are immediately after this solution block. Here is one solution. The figure shows that the fit is already good with a modest sample size. Increasing the sample size will further improve the fit. λ = 0.5 T = 10 def poisson(k, r): "Poisson pmf with rate r." 
return np.exp(-r) * (r**k) / factorial(k) @njit def draw_Nt(max_iter=1e5): J = 0 n = 0 while n < max_iter: W = np.random.exponential(scale=1/λ) J += W if J > T: return n n += 1 @njit def draw_Nt_sample(num_draws): draws = np.empty(num_draws) for i in range(num_draws): draws[i] = draw_Nt() return draws sample_size = 10_000 sample = draw_Nt_sample(sample_size) max_val = sample.max() vals = np.arange(0, max_val+1) fig, ax = plt.subplots() ax.plot(vals, [poisson(v, T * λ) for v in vals], marker='o', label='poisson') ax.plot(vals, [np.mean(sample==v) for v in vals], marker='o', label='empirical') ax.legend(fontsize=12) plt.show() Here is one solution. It shows that the approximation is good when $$n$$ is large and $$\theta$$ is small. def binomial(k, n, p): # Binomial(n, p) pmf evaluated at k return binom(n, k) * p**k * (1-p)**(n-k) θ_vals = 0.5, 0.2, 0.1 n_vals = 50, 75, 100 fig, axes = plt.subplots(len(n_vals), 1, figsize=(6, 12)) for n, θ, ax in zip(n_vals, θ_vals, axes.flatten()): k_grid = np.arange(n) binom_vals = [binomial(k, n, θ) for k in k_grid] poisson_vals = [poisson(k, n * θ) for k in k_grid] ax.plot(k_grid, binom_vals, 'o-', alpha=0.5, label='binomial') ax.plot(k_grid, poisson_vals, 'o-', alpha=0.5, label='Poisson') ax.set_title(f'$n={n}$ and $\\theta = {θ}$') ax.legend(fontsize=12) fig.tight_layout() plt.show()
http://mathematica.stackexchange.com/questions/46114/how-can-i-calculate-the-limit-without-using-the-lhopitals-rule
# How can I calculate the limit without using the L'Hopital's rule I need to prove this limit without using the L'Hopital's rule: $$\lim_{x\to 0} \frac{(1+a\,x)^{1/4} - (1+b\,x)^{1/4}}{x} = \frac{a-b}{4}$$ How can I do it in Mathematica? - I have decided not to close this question or migrate, because it has a sensible Mathematica-based answer. As such it will still be useful to the community, even if the OP wasn't actually wanting a Mathematica solution. There is only one more vote needed to close, and I'm happy to be overruled. –  Verbeia Apr 14 '14 at 23:24 One can simply use: Limit[((1 + a x)^(1/4) - (1 + b x)^(1/4))/x, x -> 0] to get the result. However Limit does know about the l'Hopital rule. Nonetheless there are different ways to go, e.g. let's use Series to write down the first few terms of the Taylor series of ((1 + a x)^(1/4) - (1 + b x)^(1/4))/x: Series[((1 + a x)^(1/4) - (1 + b x)^(1/4))/x, {x, 0, 3}] (a - b)/4 - 3/32 (a^2 - b^2) x + 7/128 (a^3 - b^3) x^2 - (77 (a^4 - b^4) x^3)/2048 + O[x]^4 This demonstrates clearly what the limit of the expression is when x -> 0. Unfortunately this still uses some analytical methods and it might be especially useful if we can get rid of them. ## Elementary Proof Therefore we can provide quite an elementary proof (only ancient Greeks' methods). We take the expression ((1 + a x)^(1/4) - (1 + b x)^(1/4)) and multiply it by expr1 = ((1 + a x)^(1/4) + (1 + b x)^(1/4)); The latter term is equal to 2 at x == 0 (expr1 /. x -> 0 yields 2). ((1 + a x)^(1/4) - (1 + b x)^(1/4)) ((1 + a x)^(1/4) + (1 + b x)^(1/4)) // Expand Sqrt[1 + a x] - Sqrt[1 + b x] Now we take Sqrt[1 + a x] - Sqrt[1 + b x] and we multiply it by expr2 = Sqrt[1 + a x] + Sqrt[1 + b x]; similarily the latter term is equal to 2 at x == 0. Now we have: (Sqrt[1 + a x] - Sqrt[1 + b x]) (Sqrt[1 + a x] + Sqrt[1 + b x]) // Expand a x - b x We have multiplied the interesting term twice by 2 and now we take the denominator (i.e. x) of the initial exppession. Thus we can see from the last output that $$\lim_{x\rightarrow 0 }\frac{(1 + a x)^{\frac{1}{4}} - (1 + b x)^{\frac{1}{4}}}{x}=\frac{a-b}{4}$$ Q.E.D. - I am guessing this is math homework and should be migrated, don't you think so? –  sebhofer Apr 14 '14 at 23:04 Not necessarily, I demonstrated what the OP wanted without the l'Hopital rule. I could also provide another ways. Just showed something for those about to study calculus, with the help of Mathematica. Right? –  Artes Apr 14 '14 at 23:08 Ok, we will see what the OP says anyways. –  sebhofer Apr 14 '14 at 23:17 i'm sure its trivial to show that the series converges, but i'd think you need to show that step to offer this up as proof. –  george2079 Apr 14 '14 at 23:30 @george2079 Now you have got what you wanted. –  Artes Apr 15 '14 at 0:01 First, let us notice that the limit follows from the following: Lemma. $\displaystyle \lim_{u \rightarrow 0} {(1+u)^{1/4}-1 \over u} = {1 \over 4}$. For $${{(1+a\,x)^{1/4} - (1+b\,x)^{1/4}} \over {x}} = {{(1+a\,x)^{1/4} - 1} \over {x}} - {{(1+b\,x)^{1/4} - 1} \over {x}}$$ $$= a\,{{(1+u)^{1/4} - 1} \over {u}} - b\,{{(1+v)^{1/4} - 1} \over {v}}\,,$$ where $u = ax$ and $v = bx$. 
## Mathematica proofs Sans L'Hôpital: The nub of a proof is an equality like this: Simplify[Abs[((1 + u)^(1/4) - 1)/u - 1/4] < Abs[u], -1 < u < 1 && u != 0] (* True *) More directly from the definition: Resolve[ForAll[epsilon, Implies[epsilon > 0, Exists[delta, ForAll[u, Implies[Abs[u] < delta && u != 0, Abs[((1 + u)^(1/4) - 1)/u - 1/4] < epsilon]]]]], Reals ] (* True *) ## Another elementary proof Proof á la Euclid. [Prompted by @Artes's remark about ancient Greeks.] Let $BX$ be given with $OA=OB=1$ and $AX = u>0$. Let $OP$ have been drawn perpendicular to $BX$ with $OC=1$. Let $BX$ have been bisected at $Q$. With center $Q$ and distance $QX$, let circle $XPB$ have been described. Let $PQ$ be joined. Then $PQ = BQ = 1+u/2$ and $OP = \sqrt{1+u}$; further $OP < PQ = BQ$. Subtracting $BO = OC = 1$ yields $CP = \sqrt{1+u}-1 < OQ = u/2$. On the other hand, let $QM$, $XH$ have been drawn perpendicular to $OX$, and let $CG$ and $PH$ have been drawn perpendicular to $OP$. Let $PX$ have been joined and intersect $QM$ at $K$. Finally let a line have been drawn perpendicular to $QM$ at $K$. The complements $OK$, $KH$ of the diagonal $PX$ are equal [Eucl. I.43]. Therefore the rectangle $CH$ is greater than the rectangle $OK$ which is greater than $OG$. Thus $CH = (1+u)(\sqrt{1+u}-1) > u/2$. Therefore $${u \over 2(1+u)} < \sqrt{1+u}-1 < {u \over 2}$$ or $$1 + {u \over 2(1+u)} < \sqrt{1+u} < 1 + {u \over 2}$$ Similarly, letting $OX = \sqrt{1+u}$, we obtain $$1 + { \sqrt{1+u}-1 \over 2( \sqrt{1+u})} < (1+u)^{1/4} < 1 + {\sqrt{1+u}-1 \over 2}$$ Applying the previous inequalities, we have $$1 + {u \over 4(1+u/2)} < 1 + { \sqrt{1+u}-1 \over 2( \sqrt{1+u})} \quad\hbox{and}\quad 1 + {\sqrt{1+u}-1 \over 2} < 1 + {u \over 4}\,.$$ From this we get $$1 + {u \over 4(1+u/2)} < (1+u)^{1/4} < 1 + {u \over 4}\,.$$ It follows that $$\left| {(1+u)^{1/4} -1 \over u} - {1 \over 4} \right| < {u \over 8 + 4u} < u\,.$$ Similarly, taking $X$ between $O$ and $A$ so that $AX = -u$, one can show that $$\left| {(1+u)^{1/4} -1 \over u} - {1 \over 4} \right| < {-u}\,,$$ provided $-1/2 < u < 0$. Thus the lemma is established. ## Calculus proof The limit in the lemma is the derivative of $x^{1/4}$ at $x = 1$. ## Code dump for figure labels[u_] := MapThread[Text, Transpose[{ {"O", {0, 0}, {0, 1.5}}, {"A", {1, 0}, {0, 1.5}}, {"B", {-1, 0}, {-1.8, 1.5}}, {"C", {0, 1}, {1.5, 0}}, {"X", {1 + u, 0}, {-1.8, 1.5}}, {"P", {0, Sqrt[1 + u]}, {1.5, -1}}, {"Q", {u/2, 0}, {0, 1.5}}, {"G", {u/2, 1}, {1.5, 1.5}}, {"H", {1 + u, Sqrt[1 + u]}, {-1.8, -1}}, {"K", {u/2, (2 + u)/2/Sqrt[1 + u]}, {-2, -1.5}}, {"L", {u/2, 1}, {-2.7, 1.5}}, {"M", {u/2, Sqrt[1 + u]}, {0., -2.5}} }]]; Manipulate[ With[{u = Exp[logu]}, Graphics[ {Point[{{0, 0}, {1, 0}, {0, 1}, {-1, 0}, {1 + u, 0}, {u/2, 0}, {0, Sqrt[1 + u]}}], Circle[{u/2, 0}, 1 + u/2], Line[{{{-1, 0}, {1 + u, 0}, {1 + u, Sqrt[1 + u]}, {0, Sqrt[ 1 + u]}, {0, 0}}, {{0, 1}, {1 + u, 1}}, {{u/2, 0}, {u/2, Sqrt[ 1 + u]}}, {{u/2, 0}, {0, Sqrt[1 + u]}, {1 + u, 0}}, {{0, (2 + u)/2/Sqrt[1 + u]}, {1 + u, (2 + u)/2/Sqrt[ 1 + u]}}}], labels[u] }, PlotRange -> {{-1.1, 2.1}, {-0.5, 1.5}}, ImageSize -> 400 ] ], {{logu, -0.3}, -3., 0.} ] - To suggest correctness of the stated Lemma, in Mathematica one might look at Series[((1 + u)^(1/4) - 1)/u, {u, 0, 10}], say. (Mathematically, one still must worry about whether the series in fact converges in some interval about 0.) –  murray Apr 19 '14 at 3:59 @murray Yes, Artes's answer points out the use of Limit and Series. 
I didn't think the lemma different enough to warrant pointing it out here. –  Michael E2 Apr 19 '14 at 11:21 @MichaelE2 I like it i.e. geometric reasoning. –  Artes Apr 19 '14 at 12:39
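For readers without Mathematica, the same limit and series computations go through in SymPy (an illustrative aside, not from the thread; the variable names are arbitrary):

```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
expr = ((1 + a*x)**sp.Rational(1, 4) - (1 + b*x)**sp.Rational(1, 4)) / x

print(sp.limit(expr, x, 0))        # a/4 - b/4, i.e. (a - b)/4
print(sp.series(expr, x, 0, 2))    # (a - b)/4 - 3*(a**2 - b**2)*x/32 + O(x**2), up to term ordering
```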
https://math.stackexchange.com/questions/2716901/expected-value-of-the-absolute-value-of-the-difference-between-two-independent-u
# Expected value of the absolute value of the difference between two independent uniform random variables? [closed] I am trying to calculate the expected value of the absolute value of the difference between two independent uniform random variables. Let $X_1\sim\operatorname{Uniform}(0, 2)$ and $X_2\sim\operatorname{Uniform}(0, 2)$ and $X_1$ and $X_2$ are independent. I want to calculate $\operatorname E \left[|X_1 - X_2|\right]$. • @mzp thank you very much for your edit! Could you please mind help me with the problem? Apr 1 '18 at 2:19 For every independent random variables $X_1$ and $X_2$ with densities $f_1$ and $f_2$ and every measurable function $g$, $$\operatorname E[g(X_1,X_2)]=\int_{D_1}\int_{D_2} g(x_1,x_2) f_1(x_1) f_2(x_2) \, \mathrm{d}x_2 \, \mathrm{d}x_1.$$ where $D_1$ and $D_2$ are the domains of $X_1$ and $X_2$. Since $f_1(x_1) = f_2(x_2) = 1/2$, and $D_1=D_2=[0,2]$ we have that $$\operatorname E[|X_1-X_2|]=\int_0^2\int_0^2 \frac{|x_1-x_2|}{4} \, \mathrm{d}x_2 \, \mathrm{d}x_1 =\frac{2}{3}.$$ Alternatively, we can avoid integrating (explicitly) by using conditional expectation and mean/variance formulas: \begin{align} \mathbb{E}[|X_1 - X_2|] &= \mathbb{E}\big[\mathbb{E}[abs(X_1-X_2)|X_2]\big] \\ &= \mathbb{E}\Bigg[ \frac{X_2^2}{4} + \frac{(2-X_2)^2}{4} \Bigg] \\ &= \frac{1}{4}\mathbb{E}[X_2^2 + (2-X_2)^2] \\ &= \frac{1}{4}\mathbb{E}[X_2^2 + 4 - 4X_2 + X_2^2] \\ &= \frac{1}{4}\mathbb{E}[X_2^2] + 1 - \mathbb{E}[X_2] + \frac{1}{4}\mathbb{E}[X_2^2] \\ &= \frac{1}{2}\mathbb{E}[X_2^2]^2 \\ &= \frac{1}{2}\mathbb{E}[X_2]^2 + \frac{1}{2}\text{Var}[X_2] \\ &= \frac{1}{2} + \frac{1}{6} = \frac{2}{3} \end{align} The second line follows as the probability $$\mathbb{P}[X_1 < X_2 | X_2] = \frac{X_2}{2}$$, and in that case the expectation is $$\mathbb{E}[abs(X_1-X_2)|X_2, X_1. Similarly when $$X_1>X_2$$ we get the max. • What is the $\mathbb{R}$ operator that you are using on the second to last line?
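A quick numerical cross-check of the value $2/3$ (not part of the original post); the sample size and grid resolution are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo: X1, X2 independent Uniform(0, 2).
x1 = rng.uniform(0, 2, size=1_000_000)
x2 = rng.uniform(0, 2, size=1_000_000)
print("Monte Carlo:", np.abs(x1 - x2).mean())        # ≈ 0.667

# Midpoint rule for E|X1 - X2| = (1/4) * double integral of |x1 - x2| over [0,2]^2:
# averaging |x1 - x2| over a uniform grid of cell midpoints approximates the expectation.
g = (np.arange(2000) + 0.5) * (2 / 2000)
X1, X2 = np.meshgrid(g, g)
print("Quadrature :", np.abs(X1 - X2).mean())        # ≈ 0.6667
```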
https://math.stackexchange.com/questions/1953400/probability-of-rolling-exactly-4-of-a-kind-on-6-dice
# Probability of Rolling Exactly 4 of a kind on 6 Dice Probability of 3 of a kind with 7 dice I've looked at the link above - along with several others - and dusted off my combinatorics notebook to try to figure out the answer to the problem: If you roll six 6-sided dice, what is the probability of rolling exactly four of a kind? Using combinations, I came up with: 6C1 ways to choose the side of a die 6C4 ways to choose four of a kind 5 ways to chose the 5th die 5 ways to chose the 6th die. Then, there are 6^6 possible outcomes. Putting everything together, I get (15*5*5*6)/6^6 = 2250/ 46656 This is the same answer as on this Wolfram site http://www.wolframalpha.com/input/?i=6+dice What I would like to know is if there is a way to arrive at this answer using counting principles? I attempted this by considering the chance of four of a kind of each number as mutually exclusive events (see images). For the 1 side of a die, for example: the chance rolling a 1 is 1/6, the chance of rolling a second 1 is 1/6, the chance of rolling a third 1 is 1/6, the chance of rolling a fourth 1 is 1/6, the chance of rolling anything but a 1 for the 5th die is 5/6, and the chance of rolling anything but a 1 for the sixth is 5/6. Then 4(1/6) * 2(5/6) = 25/6^6 chance of four of a kind of 1s. I then repeated this for sets of four of a kind of 2s, 3s, 4s, 5s, and 6s. Since these are mutually exclusive events, I then took the sum of the probability of each of the six events. The result of this method was 25/6^6 + 25/6^6 + 25/6^6 + 25/6^6 + 25/6^6 + 25/6^6 = 150/6^6, which is very different from 2250/6^6. What am I missing with this approach? Do I need to consider the different arrangements of the dice? • So, I may have gotten ahead of myself and realized that I hadn't accounted for the arrangements of the dice in the second method. For fun, I drew out the 15 different ways that the dice could be arranged so that I could have a visual to help me see what's going on. Now I'm wondering, is this a situation where order is important or is it not important? We don't really care how we get the four of a kind, correct? So if I didn't multiply by the 15 different arrangements, that would be the permutation of this problem where order matters? – Clusterfluff Oct 4 '16 at 13:37 Your first approach is correct if you permit $4$ of a kind and a pair as well as $4$ of a kind and two different values for the other two dice. In your second approach, you do indeed need to account for the different arrangements of the dice. There are $\binom{6}{4} = 15$ orders in which the same number could occur on four of the six rolls. Multiplying your result by $15$ yields the same probability that you obtained using the first method. Four of a kind without a pair: The total number of outcomes is $6^6$. The number of favorable outcomes is $$\binom{6}{1}\binom{6}{4}\binom{5}{2}\binom{2}{1}$$ since there are $\binom{6}{1}$ choices for the number that occurs four times, $\binom{6}{4}$ ways for that number to appear in four of the six rolls, $\binom{5}{2}$ choices numbers that could occur once each during the six rolls, and $\binom{2}{1}$ choices for which those of numbers appears first. Hence, the probability that four of a kind occurs with two different numbers on the remaining rolls is $$\frac{\dbinom{6}{1}\dbinom{6}{4}\dbinom{5}{2}\dbinom{2}{1}}{6^6}$$ Four of a kind with a pair: The total number of outcomes is again $6^6$. 
The number of favorable outcomes is $$\binom{6}{1}\binom{6}{4}\binom{5}{1}$$ since there are $\binom{6}{1}$ choices for the number that occurs four times, $\binom{6}{4}$ ways for that number to appear in four of the six rolls, and $\binom{5}{1}$ choices for the number that appears in both of the other two rolls. Hence, the probability that four of a kind and a pair occur is $$\frac{\dbinom{6}{1}\dbinom{6}{4}\dbinom{5}{1}}{6^6}$$
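Since $6^6 = 46656$ is small, both counts can be confirmed by brute force; a short Python sketch (an aside, not part of the question or answer):

```python
from itertools import product
from collections import Counter

four_two_singles = 0    # four of a kind, remaining two dice showing different values
four_plus_pair = 0      # four of a kind together with a pair
for roll in product(range(1, 7), repeat=6):
    counts = sorted(Counter(roll).values())
    if counts == [1, 1, 4]:
        four_two_singles += 1
    elif counts == [2, 4]:
        four_plus_pair += 1

total = 6**6
print(four_two_singles, four_plus_pair)                # 1800 and 450
print((four_two_singles + four_plus_pair) / total)     # 2250/46656 ≈ 0.0482
```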
https://www.physicsforums.com/threads/limit-involved-in-derivative-of-exponential-function.290825/
# Limit involved in derivative of exponential function 1. Feb 8, 2009 ### symbolipoint Can a convenient value for a be found without resorting to substituting numerical values for h in this expression? EDIT: I am trying to indicate, "as h approaches zero". EDIT: neither of the formattings worked; hopefully someone understands what I am asking? Lim$$_{h\rightarrow\0}$$$$\frac{ax-1}{h}$$ In case that formatting failed, an attempt at rewriting it is: Limh$$\rightarrow$$0$$\frac{ax-1}{h}$$ The most desired value for this limit is 1, and the suitalbe value for a would need to be a = e. I have seen this accomplished using numerical value substitutions , but can the same be accomplished using purely symbolic steps, without any numerical value subsitutions? Last edited: Feb 8, 2009 2. Feb 8, 2009 ### mathman It looks like you meant x and h to be the same thing. ah=ehln(a). Expand in a power series to get 1+hln(a)+O(h2). Therefore the limit for h->0 will be ln(a). 3. Feb 10, 2009 ### symbolipoint There is another way to achieve the derivation for derivative of the exponential function, relying on a bit of clever algebra with logarithms and implicit differentiation of y=a^x. I still wish I could find a clear way to understand the limit of (a^h - 1)/h as h approaches zero; without using numeric value substitutions. $$lim_{h to 0}\frac{a^h-1}{h}$$ edit: that typesetting is better than what I accomplished earlier, but I'd sure like to put in that right-pointing arrow instead of "to" Last edited: Feb 10, 2009 4. Feb 11, 2009 Use \to for the arrow. 5. Feb 11, 2009 ### HallsofIvy How you do that depends on exactly how you define the exponential and, in particular, how you define e. If you define e as "limit of (1+ 1/n)n as n goes to infinity" then you can say that e is approximately equal to (1+ 1/n)n for large n so that e1/n is approximately 1+ 1/n. Setting h= 1/n, h goes to 0 as n goes to infinity and that says that eh is approximately equal to 1+ h so that eh-1 is approximately equal to h and (eh-1)/h goes to 1 as h goes to 0. Of course, for the general case, use the fact that ah= eh ln(a). 6. Feb 11, 2009 ### lurflurf so we define exp(x) to be a function such that exp(x+y)=exp(x)*exp(y) this property does not define a unique function there are an infinite number of both nice and non nice functions having this property there are several ways of picking out one in particular the classical exponential (the "nice" one for which exp(1)=e) has exp'(0)=1 exp'(0)=lim [exp(h+0)-exp(0)]/h=lim [exp(h)-1]/h the general nice exponential is exp(c*x) {[exp(a*x)]'|x=0}=c in the exponential notation we may write exp(c*x)=exp(c)^x let a=exp(1) exp(c*x)=a^x we may ask the relation between exp(1)=a and c clearly lim [a^h-1]/h=c we may (some justification required) invert the relation into a=lim (1+h*c)^(1/h) this requires a definition for x^y such as x^y:=exp(y*log(x)) an adjustment is needed to avoid circular reasoning we may define integer exponents in the obvious inductive way (x^(n+1+=x*x^n) then consider the restricted form of the limit a=lim (1+h*c)^(1/h) that is let h=1,1/2,1/3,1/4,... a=lim{n=1,2,...} (1+c/n)^n=exp(c) lim{n=1,2,...} (1+1/n)^n=exp(1)=e this gives as desired a symbolic form for e, how useful this form is depends on the application
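A purely numerical illustration of the limit under discussion (not from the original posts): $(a^h-1)/h \to \ln a$ as $h \to 0$, so the limit equals $1$ exactly when $a = e$; the companion limit $(1+1/n)^n \to e$ is shown as well.

```python
import math

for a in (2.0, math.e, 10.0):
    print(f"a = {a:.5f}, ln(a) = {math.log(a):.6f}")
    for h in (1e-1, 1e-3, 1e-5, 1e-7):
        print(f"   h = {h:.0e}: (a^h - 1)/h = {(a**h - 1) / h:.6f}")

# The companion limit defining e:
for n in (10, 1_000, 100_000):
    print(f"n = {n}: (1 + 1/n)^n = {(1 + 1/n)**n:.6f}")
```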
http://bootmath.com/whats-the-probability-of-a-an-outcome-after-n-trials-if-you-stop-trying-once-youre-successful.html
# What's the probability of a an outcome after N trials, if you stop trying once you're “successful”? This follows on from this question about being hit by a bus. In this question, there is a 1/1000 chance of being hit and the question was about the probability of being hit if you cross the road 1000 time. I wondered what would happen to this probability if I stopped trying to cross the road as soon as I get hit. Does the probability change? As far as I can figure it, the probability then just becomes the sum of the geometric series $$P(\text{hit by bus within 1000 crossings}) = \sum_{n=0}^{999} 1/1000 * (999/1000)^n$$ thus $$P(\text{hit by bus within 1000 crossings}) = 1/1000 * \frac{1-(999/1000)^{1000}}{1-999/1000}$$ However, this is identical to $$P(\text{hit by bus within 1000 crossings}) = 1-P(\text{not hit by bus within 1000 crossing}) = 1-(999/1000)^{1000}$$ which is the answer to the previous question. I’m curious as to why they are not different, since the first approach is specifically ignoring all instances where (for instance) I get hit by a bus on the first try and then keep trying and get hit by subsequent buses. #### Solutions Collecting From Web of "What's the probability of a an outcome after N trials, if you stop trying once you're “successful”?" Suppose that I begin a repeated series of attempts to cross a street where I have a $1/1000$ chance of being hit by a bus during any single attempt. Now suppose I cross the street $49$ times without being hit, but on the fiftieth attempt I am hit by a bus. At that point, what difference does it make whether I attempt another $950$ crossings or never try to cross that street again? An obvious difference is that if I continue crossing, I could be hit by a bus again. I could be hit several times before the $1000$th attempt to cross. This will make a difference to the expected number of times I am hit, which is less than one if I intend to give up after being hit once (maximum outcome is $1$ but there is a positive probability of outcome $0$). But $P(\text{hit by bus within 1000 crossings})$ is not expected value. It is simple probability. After crossing number $50,$ during which I was hit by a bus, there is nothing I can do to change the fact of whether I have been hit by a bus. I certainly cannot be unhit, and likewise there is no way to make the predicate “hit by bus within $1000$ crossings” any more likely than it already is. It would therefore be quite remarkable (suspicious, in fact) if I were to find that the calculation of $P(\text{hit by bus within 1000 crossings})$ depends on whether the procedure is that I stop crossing if I am hit or continue crossing until I have done it $1000$ times. Let $q_n$ be the probability of not being struck on the $n$-th crossing. Then your first calculation for the probability of first collision within $N$ attempts may be rewritten as \begin{align} \sum_{n=0}^{N-1} (1-q_n)\prod_{k=0}^{n-1}q_k &=1-q_0q_1+(q_0q_1)(1-q_2)+\cdots+(q_1 q_2\cdots q_{N-1})(1-q_N)\\ &=1-q_0q_1q_2+\cdots+(q_1 q_2\cdots q_{N-1})(1-q_N)\\ &\cdots\\ &=1-q_0 q_1\cdots q_{N-1}. \end{align} (If you are properly sketpical of algebra here, you may verify this with induction.) But this is the complementary probability to no collsions being seen at all. Hence the two probabilities considered in the OP coincide regardless of the probability of collision per trial. They are explicitly the same situation: the probability of being hit at least once before finishing 1000 crossings. 
Whether or not you stop searching after the event occurs should make no difference to the probability of the event occurring. There’s no foreknowledge of future trials in the scenario. On the other hand, the conditional probability that you get hit at least once within the first hundred crossings when you (somehow) know that you will get hit on the one hundred and first crossing is quite dependent on whether you stop crossing after getting hit once or not. This is the cumulative probability of the geometric distribution – http://en.wikipedia.org/wiki/Geometric_distribution The probability distribution of the number X of Bernoulli trials needed to get one success, supported on the set { 1, 2, 3, …} The CDF is $1 – (1-p)^k$l. This is the same as Semiclassical’s answer (if the probabilities are equal), but it’s useful to know this has a name, as you can more easily research other properties and communicate with others about it.
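A short numeric confirmation that the two calculations in the question coincide for $p = 1/1000$ and $N = 1000$ crossings (an aside, not part of the original answers):

```python
p, N = 1 / 1000, 1000

# Geometric-series form: probability the first hit occurs on crossing n+1, summed over n.
geometric_sum = sum(p * (1 - p)**n for n in range(N))

# Complement form: 1 - P(no hit in N independent crossings).
complement = 1 - (1 - p)**N

print(geometric_sum, complement)   # both ≈ 0.6323
```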
http://mathhelpforum.com/algebra/34868-fluid-concentration-workhour-problem-print.html
# fluid concentration, workhour problem • April 17th 2008, 05:17 AM agus hendro fluid concentration, workhour problem I need help for these problems : 1. 1000 kgs of chemical is stored in a container. The chemical is made up of 99% water and 1% oil. Some water is evaporated from the chemical until the water content is reduced to 96%. How much does the chemical weigh now ? 2. A piece of pasture grow at a constant rate everyday. 200 sheep will eat up the grass in 100 days. 150 sheep will eat up the grass in 150 days. How many days does it take for 100 sheep to eat up the grass? Waiting for expert advice. Thank you. (Doh) • April 17th 2008, 09:19 AM Soroban Hello, agus hendro! Here's the first one . . . Quote: 1. 1000 kgs of chemical is stored in a container. The chemical is made up of 99% water and 1% oil. Some water is evaporated until the water content is reduced to 96%. How much does the chemical weigh now? Consider the number of kgs of water at each stage. . . It contains: . $99\% \times 1000 \:=\:990$ kgs of water. We remove $x$ kgs of water. . . It now contains: . $\boxed{990 - x}$ kgs of water. Start again . . . We have 1000 kgs of solution. . . We remove $x$ kgs of water. So we have: $1000 - x$ kgs of stuff. But this is supposed to be 96% water. . . So it contains: . $\boxed{0.96(1000-x)}$ kgs of water. We just described the final amount of water in two ways. There is our equation! . . . . ${\color{blue}990 - x \:=\:0.96(1000-x)}$ • April 18th 2008, 07:34 AM agus hendro I found another problem like the second problem. Teams X and Y work separately on two different projects. On sunny days, team X can complete the work in 12 days, while team Y needs 15 days. On rainy days, team X's efficiency decreases by 50%, while team Y's efficiency decreases by 25%. Given that the two teams started and ended the projects at the same time, how many rainy days are there ? I think if I got the clue for the second problem, this new problem will be easy for me. Please help...... • April 20th 2008, 11:19 AM Soroban Hello, agus hendro! I think I've solved it . . . Quote: Teams $X$ and $Y$ work separately on two different projects. On sunny days, team $X$ can complete the work in 12 days, . . while team $Y$ needs 15 days. On rainy days, team $X$'s efficiency decreases by 50%, . . while team $Y$'s efficiency decreases by 25%. Given that the two teams started and ended the projects at the same time, how many rainy days are there ? In sunny weather, team X can do the job in 12 days. In one sunny day, team X can do $\frac{1}{12}$ of the job. . . In $S$ sunny days, X can do $\frac{S}{12}$ of the job. In rainy weather, team X can do the job in 24 days. In one rainy day, can do $\frac{1}{24}$ of the job. . . In $R$ rainy days, can do $\frac{R}{24}$ of the job. X's equation is: . $\frac{S}{12} + \frac{R}{24} \:=\:1\;\;{\color{blue}[1]}$ In sunny weather, team Y can do the job in 15 days. In one sunny day, Y can do $\frac{1}{15}$ of the job. . . In $S$ sunny days, Y can do $\frac{S}{15}$ of the job. In rainy weather, team Y can do the job in: . $125\% \times 15 \:=\:\frac{75}{4}$ days. In one rainy day, Y can do $\frac{4}{75}$ of the job. . . In $R$ rainy days, Y can do $\frac{4R}{75}$ of the job. Y's equation is: . $\frac{S}{15} + \frac{4R}{75} \:=\:1\;\;{\color{blue}[2]}$ $\begin{array}{ccccc}\text{Multiply {\color{blue}[1]} by 96:} & 8S + 4R &=& 96 & {\color{blue}[3]} \\ \text{Multiply {\color{blue}[2]} by 75:} & 5S + 4R &=& 75 & {\color{blue}[4]} \end{array}$ Subtract [4] from [3]: . 
$3S \:=\:21\quad\Rightarrow\quad S\,=\,7$ Substitute into [4]: . $5(7) + 4R \:=\:75\quad\Rightarrow\quad R \:=\:10$ Therefore, there were $\boxed{10\text{ rainy days}}$ • April 22nd 2008, 05:54 AM agus hendro still don't understand Thank you for the answer. But please explain why do you add s/12 and r/24 and how come the result of the addition equal to 1? (Wondering)
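For anyone following along, here is a brief numeric sketch (not part of the thread) of both of Soroban's set-ups. It also bears on the closing question: $S/12$ is the fraction of team X's project finished during $S$ sunny days and $R/24$ the fraction finished during $R$ rainy days, and completing the whole project means those fractions add up to $1$.

```python
import numpy as np

# Problem 1: water evaporates until the mixture is 96% water.
# 990 - x = 0.96 * (1000 - x)  =>  0.04 x = 30  =>  x = 750 kg removed.
x = 30 / 0.04
print("water removed:", x, "kg;  remaining chemical:", 1000 - x, "kg")   # 750 and 250

# Teams problem: each team finishes exactly one project, so
#   S/12 + R/24  = 1   (team X)
#   S/15 + 4R/75 = 1   (team Y)
A = np.array([[1/12, 1/24],
              [1/15, 4/75]])
S, R = np.linalg.solve(A, np.ones(2))
print("sunny days:", round(S), " rainy days:", round(R))                 # 7 and 10
```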
https://math.stackexchange.com/questions/2425866/uniform-convergence-preserves-continuity
# Uniform Convergence Preserves Continuity Briefly, the definitions of point-wise convergence (PWC) and uniform convergence (UC) for a sequence of functions $f_n:[a,b]\to\mathbb{R}$ in my mind are recorded as \begin{align*} &\text{Point Wise Convergent on $[a,b]$} \iff \\ &\forall x\in [a,b]\,\forall\epsilon\gt0\,\exists N=\mathcal{N}(\epsilon,x)\gt0,\,n\ge N \implies |f_n(x)-f(x)|<\epsilon \\ \\ &\text{Uniformly Convergent on $[a,b]$}\iff \\ &\forall x\in [a,b]\,\forall\epsilon\gt0\,\exists N=\mathcal{N}(\epsilon)\gt0,\quad\, n\ge N \implies |f_n(x)-f(x)|<\epsilon. \end{align*} So the difference is that in PWC the number $N$ depends on $x$ while in UC it does not which means just one $N$ works for all $x$ in $[a,b]$. I want to prove the following theorem. Theorem. If the functions $f_n:[a,b]\to\mathbb{R}$ are continuous at $x_0\in[a,b]$ and their sequence converges uniformly to the function $f:[a,b]\to\mathbb{R}$ on $[a,b]$ then $f$ is continuous at $x_0$. Proof. According to the definition of continuity at $x_0$ for $f$, we want to show that \begin{align*} \forall\epsilon\gt0\,\exists \delta=\Delta(\epsilon,x_0)\gt0,\,|x-x_0|<\delta \implies |f(x)-f(x_0)|<\epsilon. \end{align*} According to triangle inequality we have \begin{align*} |f(x)-f(x_0)|\le|f(x)-f_n(x)|+|f_n(x)-f_n(x_0)|+|f_n(x_0)-f(x_0)|. \tag{1} \end{align*} If we could control each of the three terms on the RHS of $(1)$ such that they were less than $\frac{\epsilon}{3}$ then the theorem was proved. According to the assumptions we know that the following holds \begin{align*} &\forall\epsilon_1\gt0\,\exists \delta_1=\Delta_1(\epsilon_1,x_0,n)\gt0,\,|x-x_0|<\delta_1 \implies |f_n(x)-f_n(x_0)|<\epsilon_1 \\ \\ &\forall x\in [a,b]\,\forall\epsilon_2\gt0\,\exists N=\mathcal{N}(\epsilon_2)\gt0, n\ge N \implies |f_n(x)-f(x)|<\epsilon_2. \end{align*} Finally, choosing any $\epsilon_1$ and $\epsilon_2$ such that $0<\epsilon_1\le\frac{\epsilon}{3}$ and $0<\epsilon_2\le\frac{\epsilon}{3}$ and setting any $\delta$ such that $\delta\le\delta_1$ will do the job. For simplicity, one can usually take the equality cases which means $\epsilon_1=\epsilon_2=\frac{\epsilon}{3}$ and $\delta=\delta_1$. $1$. Is my proof OK? Any suggestions for improvement is really appreciated. $2$. Are the notations $\mathcal{N}(\epsilon,x)$ or $\Delta(\epsilon,x,n)$ OK? I just employed them to emphasize the the dependence on $\epsilon$ and $x$. Any better suggestion is welcomed. $3$. I was wondering which step would fail if we just had PWC? An example can be helpful. Your notations and proof seem great, and why the condition PWC is not sufficient is that under this you cannot choose your $\mathcal{N}(\epsilon_2)$ feasible for any $x$ in your domain. (Maybe for arbitrarily large $N$ there always exist some $x$ near $x_0$ making your argument fail.) • Thanks. :) Is there any weaker condition than uniform convergence on $[a,b]$. For example, can't we say that $f_n$ be uniformly convergent in some neighborhood of $x_0$? I don't know maybe some notion like locally uniform convergent or something may still work. – H. R. Sep 12 '17 at 11:31
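On question 3: the classic counterexample $f_n(x)=x^n$ on $[0,1]$ shows exactly which step fails under pointwise convergence alone. Each $f_n$ is continuous, the pointwise limit is $0$ for $x<1$ and $1$ at $x=1$ (hence discontinuous), and the required $N$ blows up as $x\to 1$, so no single $N=\mathcal{N}(\epsilon)$ serves all $x$ and the middle term of the $\epsilon/3$ estimate cannot be controlled. A small numerical sketch of that blow-up (illustrative only):

```python
def N_required(x, eps=0.1):
    """Smallest N with |x**n - limit(x)| < eps for all n >= N (valid by monotonicity)."""
    limit = 1.0 if x == 1.0 else 0.0
    n = 1
    while abs(x**n - limit) >= eps:
        n += 1
    return n

# f_n(x) = x^n on [0, 1]: continuous for every n, discontinuous pointwise limit.
for x in (0.5, 0.9, 0.99, 0.999, 1.0):
    print(f"x = {x}:  N(eps=0.1, x) = {N_required(x)}")
# N grows without bound as x -> 1, so the convergence is not uniform on [0, 1].
```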
https://www.hpmuseum.org/forum/thread-1453-post-12753.html#pid12753
SOLVED Hp Prime - CAS inconsistent derivatives of sin, cos, tan 05-29-2014, 10:28 AM (This post was last modified: 05-29-2014 01:19 PM by CR Haeger.) Post: #1 CR Haeger Member Posts: 275 Joined: Dec 2013 SOLVED Hp Prime - CAS inconsistent derivatives of sin, cos, tan In CAS I notice now that taking first derivative of sin, cos or tan yields different results if in radians versus degrees mode. In radians mode, d sin(x)/dx gives cos(x) In degrees mode it gives PI * cos(x)/180 I assume these should be consistent and match the result from radians mode? Running 6030 firmware. 05-29-2014, 10:40 AM Post: #2 cdecastro Junior Member Posts: 22 Joined: Dec 2013 RE: Hp Prime - CAS inconsistent derivatives of sin, cos, tan This is correct. The derivative rules d(sin(x))=cos(x) etc.. only hold when the angle is measured in radians. If the angle is in degrees the appropriate rule is found by applying the chain rule. i.e. if the given angle x is measured in radians as x_rad and in degrees as x_deg, then x_deg = 180/Pi * x_rad, so d( sin( Pi/180 * x_deg ) ) = cos( Pi/180 * x_deg ) * Pi/180 (using chain rule) = cos( Pi/180 * x_deg) * Pi/180 Regards, Chris 05-29-2014, 01:20 PM (This post was last modified: 05-29-2014 01:22 PM by Tugdual.) Post: #3 Tugdual Senior Member Posts: 756 Joined: Dec 2013 RE: Hp Prime - CAS inconsistent derivatives of sin, cos, tan (05-29-2014 10:40 AM)cdecastro Wrote:  This is correct. The derivative rules d(sin(x))=cos(x) etc.. only hold when the angle is measured in radians. If the angle is in degrees the appropriate rule is found by applying the chain rule. i.e. if the given angle x is measured in radians as x_rad and in degrees as x_deg, then x_deg = 180/Pi * x_rad, so d( sin( Pi/180 * x_deg ) ) = cos( Pi/180 * x_deg ) * Pi/180 (using chain rule) = cos( Pi/180 * x_deg) * Pi/180 Regards, Chris Correct $$(g\circ f)'=g'\circ f*f'\\ \sin { (a*x)'=cos(a*x)*a }$$ While converting from degree to radian you get the conversion factor $$a=\frac { \pi }{ 180 }$$ 05-29-2014, 01:22 PM (This post was last modified: 05-29-2014 01:26 PM by CR Haeger.) Post: #4 CR Haeger Member Posts: 275 Joined: Dec 2013 RE: SOLVED Hp Prime - CAS inconsistent derivatives of sin, cos, tan Yes, of course you and the Prime are both correct... I got tripped up as I used to do DERVX in CAS with the 50G and it forced me to switch from deg --> rad. The Prime provides correct answers in either mode. This would seem to be a great feature for Geometry teachers. Thanks again. ps TW: 3, Me: 0 05-29-2014, 01:30 PM Post: #5 Tim Wessman Senior Member Posts: 2,277 Joined: Dec 2013 RE: SOLVED Hp Prime - CAS inconsistent derivatives of sin, cos, tan I would think it is more 3 to <some much larger number> and not 0... TW Although I work for the HP calculator group, the views and opinions I post here are my own. 05-29-2014, 01:53 PM Post: #6 CR Haeger Member Posts: 275 Joined: Dec 2013 RE: SOLVED Hp Prime - CAS inconsistent derivatives of sin, cos, tan (05-29-2014 01:30 PM)Tim Wessman Wrote:  I would think it is more 3 to <some much larger number> and not 0... Nope - 3-0 (since the new firmware release). However, Im still waiting to hear if there is a way to recover my custom spreadsheet apps content following the upgrade. Id prefer not to retype all this back in. 
Post #7 (Michael de Estrada, 05-29-2014 02:11 PM): (Quoting CR Haeger: "...I'd prefer not to retype all this back in.") Start typing.

Post #8 (rprosperi, 05-31-2014 01:29 AM): Yup! What he said... It got me too. --Bob Prosperi
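As a quick cross-check of the chain-rule factor discussed in this thread, the sketch below (not part of the original posts; the sample angle and step size are my own choices) compares a central-difference derivative of the degrees-mode sine with (pi/180)*cos:

```python
import math

def sin_deg(x_deg):
    # what a calculator evaluates in degrees mode: sin of (pi/180 * x_deg) radians
    return math.sin(math.pi / 180 * x_deg)

x = 30.0   # sample angle in degrees (arbitrary choice)
h = 1e-6   # central-difference step

numeric  = (sin_deg(x + h) - sin_deg(x - h)) / (2 * h)
analytic = math.pi / 180 * math.cos(math.pi / 180 * x)   # the degrees-mode CAS result

print(numeric, analytic)   # both approximately 0.0151150, i.e. (pi/180)*cos(30 degrees)
```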
https://dsp.stackexchange.com/questions/61119/transient-and-steady-state-response-of-first-order-system
# transient and steady-state response of first order system Considering this general 1st order transfer function $$H(z) = \frac{b_0 + b_1z^{-1}}{1-az^{-1}}$$ How to find (analytically) the transient and steady-state responses? With steady-state response I mean response to a sinusoid. I'm particularly interested in the case when $$a=\pm1$$. Conceptually I have some difficulties grasping how to deal with the fact that the z-transform for a causal sinusoid only exists for $$|z|>1$$ (explained here) and that for the case when $$a=\pm1$$ the z-transform does not converge for $$|z|=1$$. So the 'straight-forward' procedure of multiplying $$H(z)$$ with the z-transform of a causal sine and then inverse z-transform the result seems to not be possible/valid. I'm confused. Edit: simple example fs = 48000; f0 = 1000; T = 3/f0; N = round(T*fs); t = linspace(0,T,N); x = sin(2*pi*f0*t); bd = [1 0]; figure; plot(t,x,'b'); hold on; plot(t,yi,'r') The region of convergence (ROC) of the $$\mathcal{Z}$$-transforms of a step-modulated sinusoid $$x[n]=\sin(\omega_0n)u[n]\tag{1}$$ is the region $$|z|>1$$. The ROC of the transfer function $$H(z)=\frac{b_0+b_1z^{-1}}{1- z^{-1}}\tag{2}$$ also equals $$|z|>1$$ (assuming a causal system). Consequently, multiplying the two transforms doesn't pose any problem, and the result also converges for $$|z|>1$$. As an example, let's compute the response $$y[n]$$ of the system $$(2)$$ to the input $$(1)$$. For the sake of simplicity we assume $$b_0=1$$ and $$b_1=0$$ in $$(2)$$. First, we compute the response to $$\tilde{x}[n]=e^{j\omega_0n}u[n]\tag{3}$$ The $$\mathcal{Z}$$-transform of $$(3)$$ is $$\tilde{X}(z)=\frac{1}{1-e^{j\omega_0}z^{-1}}\tag{4}$$ The $$\mathcal{Z}$$-transform of the response $$\tilde{y}[n]$$ is given by \begin{align}\tilde{Y}(z)&=\tilde{X}(z)H(z)\\&=\frac{1}{1-e^{j\omega_0}z^{-1}}\cdot \frac{1}{1-z^{-1}}\\&=\frac{A}{1-e^{j\omega_0}z^{-1}}+\frac{A^*}{1-z^{-1}}\tag{5}\end{align} with $$A=\frac{1}{1-e^{-j\omega_0}}=H(e^{j\omega_0})\tag{6}$$ The inverse $$\mathcal{Z}$$-transform of $$(5)$$ is $$\tilde{y}[n]=H(e^{j\omega_0})e^{j\omega_0n}u[n]+H^*(e^{j\omega_0})u[n]\tag{7}$$ The response to the step-modulated sinusoid $$(1)$$ is easily obtained from $$(7)$$ by taking its imaginary part: \begin{align}y[n]&=\textrm{Im}\big\{\tilde{y}[n]\big\}\\&=\big|H(e^{j\omega_0})\big|\sin\big(\omega_0n+\arg\left\{H(e^{j\omega_0})\right\}\big)u[n]\\&\qquad -\textrm{Im}\big\{H(e^{j\omega_0})\big\}u[n]\tag{8}\end{align} The first term in $$(8)$$ is the steady-state response, and the second term is the transient response, which doesn't decay because of the system's pole at $$z=1$$. So the output has a DC component due the imaginary part of $$H(e^{j\omega_0})$$. Note that this is no contradiction with the linearity of the system, because the input signal is not a single spectral line but it has a continuous spectrum extending down to DC due to the sinusoid being switched on at $$n=0$$. This DC value of the input's Fourier transform triggers the system's eigenfrequency, which is a DC component. Note that this always happens with LTI systems: the output may contain oscillations at frequencies that are only determined by the system, not by the input signal. However, unlike in the given example, usually these transients decay because most of the time we consider asymptotically stable systems. I slightly modified your Matlab/Octave script and added the analytical result $$(8)$$ for comparison. 
The analytical result is identical (up to numerical accuracy) to the result obtained by filtering:

fs = 48000; f0 = 1000; w0 = 2*pi*f0/fs;
N = 200; n = 0:N-1;
x = sin(w0*n);              % step-modulated sinusoid, eq. (1)
bd = [1 0];                 % numerator coefficients: b0 = 1, b1 = 0
ad = [1 -1];                % denominator 1 - z^(-1), i.e. the pole at z = 1 (assumed here; the extracted listing omitted the filtering lines)
yi = filter(bd, ad, x);     % response obtained by filtering
figure; plot(n,x,'b'); hold on; plot(n,yi,'r')
% analytical computation, eq. (8)
A = 1 / ( 1 - exp( -1i*w0 ) );
y2 = abs(A) * sin( w0*n + angle(A) ) - imag(A);
plot(n,y2,'k.'), hold off
legend('input','response by filtering','response by analytical computation')

• ah, so I should have focused on the Fourier transform instead of the z-transform. Is there any chance that you can explain/derive or provide a reference for the added pole expressions for $H_{-1}$ and $H_{1}$? Can one say that the delta function in the $H_{1}$ expression changes the magnitude response but not the phase response? Would you call the systems $H_{-1}$ and $H_{1}$ linear? The reason for this question is that if $\sin\big(n\omega_0\big)$ is input to the $H_{1}$ system a DC offset is added, so in some sense intuitively this conflicts with the notion of linearity.. Oct 8 '19 at 9:36 • your answer is really good and I'm going to accept it and upvote (although it seems I can't right now). I just need to digest your answer to understand exactly how it works. Oct 8 '19 at 9:38 • also I'm a little confused about your expression (4). Would you call that the steady-state response? what is the transient response then? Oct 8 '19 at 9:40 • Do you think it feasible to multiply the Fourier transform of a sine to the $H_1$ system and analytically inverse Fourier transform the result? Oct 8 '19 at 9:46 • hmm, I'm a little bit confused. Does the system have a transient response? I will try to add a small Matlab snippet for the $H_1$ system (with $b_0=1$ and $b_1=0$) which illustrates that there is a DC offset added to the output for a sine input (I somehow hope that the transient response can explain this). Oct 8 '19 at 9:58
http://mathhelpforum.com/statistics/156324-help-probability.html
# Thread: help with probability!!

1. ## help with probability!!

I need help with the following questions. I attempted to solve them, and I also put my answers here. When you are answering them, please show all your steps as I am kinda lost in this field.

1. Suppose S = {1,2,3} and P({1,2}) = 1/3 and P({2,3}) = 2/3. Compute P({1}), P({2}), P({3}).
P({1}) + P({2}) + P({3}) = 1 <--- 1 being 100% and 0 being 0%
P({1}) + P({2}) = 1/3
P({2}) + P({3}) = 2/3
a) P({1}) + (1/3 - P({1})) + P({3}) = 1, so P({3}) = 2/3
b) P({1}) + (2/3 - P({3})) + P({3}) = 1, so P({1}) = 1/3
so then 1/3 + P({2}) + 2/3 = 1, so P({2}) = 0
therefore: P({1}) = 1/3, P({2}) = 0, P({3}) = 2/3

2. Suppose A1 watches the six o'clock news 2/3 of the time, watches the eleven o'clock news 1/2 of the time, and watches both news 1/3 of the time. For a randomly selected day, what is the probability that A1 watches only the six o'clock news, and what's the probability that A1 watches neither news?
chance for 6 o'clock = 2/3
chance for 11 o'clock news = 1/2 => complementary is 1/2
for only the six o'clock news, A1 has to watch the 6 o'clock news, which is 2/3, and not watch the 11 o'clock news, which is 1/2: 2/3*1/2 = 1/3, so only watching the 6 o'clock news is a 1/3 chance...
for watching neither: avoid watching the 6 o'clock news, which is 1/3 (complementary), and avoid watching the eleven o'clock news, 1/2 (complementary), so it's 1/3*1/2 = 1/6, so watching neither is a 1/6 chance...

3. Suppose your right knee is sore 15% of the time and your left knee is sore 10% of the time. What is the largest possible percentage that at least 1 knee is sore? What is the smallest possible percentage that at least 1 knee is sore?
Largest: 15% + 10% = 25%
Smallest: 0.15*0.10 = 0.015, so 1.5%

4. Suppose a card is randomly chosen from a standard 52 card deck. What is the probability that the card is a jack or a club (or both)?
chance of jack = 4/52
chance of club = 13/52
chances of club or jack = 4/52 + 13/52 = 17/52
chances of jack and club is 1/52

5. Suppose 55% of students are female and 45% are male. 44% of females have long hair and 15% of males have long hair. What is the probability that a random student will either be female or have long hair (or both)?
chance of female: 55%
chance of long hair: (55%*44%) + (45%*15%) = 0.2420 + 0.0675 = 0.3095 = 30.95%
chance of female or long hair = 55% + 30.95% = 85.95%
chance of female with long hair: 0.2420 = 24.2%

2. Hello, Sneaky!

4. Suppose a card is randomly chosen from a standard 52 card deck. What is the probability that the card is a Jack or a Club (or both)?

You started correctly, though . . .
There are 4 Jacks: $J\heartsuit,\,J\spadesuit,\,J\diamondsuit,\,\boxed{J\clubsuit}$ . . $P(J) \:=\:\dfrac{4}{52}$
There are 13 Clubs: $A\clubsuit,\,2\clubsuit,\,3\clubsuit,\,4\clubsuit,\,5\clubsuit,\,6\clubsuit,\,7\clubsuit,\,8\clubsuit,\,9\clubsuit,\,10\clubsuit,\,\boxed{J\clubsuit},\,Q\clubsuit,\,K\clubsuit$ . . $P(\clubsuit) \:=\:\dfrac{13}{52}$
$\text{But }\,P(J\text{ or }\clubsuit) \;=\;\dfrac{16}{52}$
There are only 16 cards that will "win the bet". You counted the $J\clubsuit$ twice.

3. OK, I understand why it's 16/52. This also means that the other 4 questions were correct? But now I have some other concerns. This is a question from my book:
A guy has 3 cards: 1 red on both sides, 1 black on both sides, 1 red/black. He puts 1 card on the table, and you only see the top side, which is red. What are the chances of the other side being red?
My guess is 50/50, but the book says 2/3 using conditional probability. Why is it 2/3??
Also a question about number 5: like you said in number 4, I counted the jack of clubs twice, so in question 5, when adding 55% and 30.95% to get the chance of female or a long-haired person, doesn't part of the long-haired group already count in the chance of females?
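For the three-card question asked above, a small enumeration (added here for illustration; it is not part of the thread) makes the 2/3 visible: list every equally likely (visible side, hidden side) outcome, keep those showing red, and count how often the hidden side is also red. The same habit answers the follow-up on question 5: adding P(female) and P(long hair) counts long-haired females twice, so one copy of that overlap has to be subtracted.

```python
from fractions import Fraction

cards = [("R", "R"),   # red on both sides
         ("B", "B"),   # black on both sides
         ("R", "B")]   # one red side, one black side

# each card can land with either face up, so there are 6 equally likely outcomes
outcomes = []
for a, b in cards:
    outcomes.append((a, b))   # (visible, hidden)
    outcomes.append((b, a))

showing_red = [o for o in outcomes if o[0] == "R"]
red_on_back = [o for o in showing_red if o[1] == "R"]
print(Fraction(len(red_on_back), len(showing_red)))   # 2/3
```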
https://math.stackexchange.com/questions/564580/basic-induction-proof-that-all-natural-numbers-can-be-written-in-the-form-2a
# Basic induction proof that all natural numbers can be written in the form $2a + 3b$ The theorem given is: If $n$ is a natural number then $n$ can be written in the form $2a + 3b$ for some integers $a$ and $b$. How would I prove this by induction? I've had a go at proving this but I don't know if my technique is sound. The base case would be when n = 1 = 2(-1) + 3(1) (if we take the natural numbers as excluding 0). Then if I assume n = 2a + 3b is true, n+1=2a+3b+1. Therefore n+1=2a+3b+2(-1)+3(1) which can be written as n+1=2(a-1)+3(b+1) which should conclude the proof. Is this a proper proof or is there some other way of doing it? How would I prove the theorem if I took the natural numbers to include 0 (i.e. could I still use 1=2(-1)+3(1) when it would no longer be the base case)? • Hint: 1 = 3 - 2. Your proof won't be by induction if you use this, though. – Magdiragdag Nov 12 '13 at 21:42 • @Magdiragdag, that is a most unhelpful comment. – dfeuer Nov 12 '13 at 21:43 • @dfeuer Is it? I could have commented that gcd(2,3) = 1, but considered that to be confusing for a hint. – Magdiragdag Nov 12 '13 at 21:47 • @Magdiragdag, the OP had already demonstrated that they knew what to do with that fact! – dfeuer Nov 12 '13 at 21:51 Your proof is perfectly good. You can use whatever integer $b$ you like as the base case, to prove some proposition $P(n)$ is true for all integers $n\ge b$. $0$ and $1$ are both very common base cases. You can also use induction in the other direction (e.g., for negative numbers) to prove that every integer below $b$ satisfies the proposition. Formally, induction is usually defined in the upwards direction, and usually to start at $0$ (or $1$, depending which text you use), but extending it to do other things is quite straightforward. The downward induction can be recast as upwards: rather than induction downward in $n$, do induction upward in $-n$. Same thing. • Of course, $n=0$ would be the "nicer" base case as you simply have $0=2\cdot 0+3\cdot 0$. :) – Hagen von Eitzen Nov 12 '13 at 22:41 Also you can avoid induction. For instance if $n$ is even then $n=2\cdot a+3\cdot 0.$ If n is odd then $n=2\cdot a+1$ for some natural $a.$ Further, $1=3-2$ so $n=2a+3-2=2\cdot (a-1)+3\cdot 1.$
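The bottom-up step in the question can be run mechanically. The sketch below (mine, not from the thread) starts at the base case 1 = 2(-1) + 3(1) and applies the step (a, b) -> (a - 1, b + 1), checking the representation n = 2a + 3b along the way:

```python
def representation(n):
    # base case: 1 = 2*(-1) + 3*1 ; inductive step: n + 1 = 2*(a - 1) + 3*(b + 1)
    a, b = -1, 1
    for _ in range(n - 1):
        a, b = a - 1, b + 1
    return a, b

for n in range(1, 16):
    a, b = representation(n)
    assert 2 * a + 3 * b == n
    print(f"{n} = 2*({a}) + 3*({b})")
```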
http://mathhelpforum.com/math-topics/229863-function.html
1. ## function The function g is defined by g(x)=(x+1)/(x-2), x is not equal to k and m, find the values of k and m of course, I know that x is definitely not equal to 2, but how about the other one? 2. ## Re: function That's it -- x cannot equal 2. Are you sure you wrote the function correctly? 3. ## Re: function The other value would be $k+m-2$. 4. ## Re: function I suspect there is some aspect of this problem being ignored. Is this the exact and complete language of the problem? $a \in \mathbb R\ and\ a \ne 0 \implies \dfrac{1}{a} \in \mathbb R.$ $\therefore a, b \in \mathbb R\ and\ a \ne 0 \implies b * \dfrac{1}{a} \equiv \dfrac{b}{a} \in \mathbb R.$ So if the only restriction on g(x) is that it is a real-valued function, x = 2 is the only number necessarily outside its domain. Of course the definition of g may exclude any other real number. 5. ## Re: function Perhaps the "other value" is for $g(x)$. While $x \neq 2$, it is fairly straightforward to show that $g(x) \neq 1$ for any value of $x$. In other words, $g(x)$ has an inverse function for any value except at $g(x)=1$. 6. ## Re: function Originally Posted by SlipEternal Perhaps the "other value" is for $g(x)$. While $x \neq 2$, it is fairly straightforward to show that $g(x) \neq 1$ for any value of $x$. In other words, $g(x)$ has an inverse function for any value except at $g(x)=1$. That's a very clever guess. 7. ## Re: function Originally Posted by JeffM I suspect there is some aspect of this problem being ignored. Is this the exact and complete language of the problem? $a \in \mathbb R\ and\ a \ne 0 \implies \dfrac{1}{a} \in \mathbb R.$ $\therefore a, b \in \mathbb R\ and\ a \ne 0 \implies b * \dfrac{1}{a} \equiv \dfrac{b}{a} \in \mathbb R.$ So if the only restriction on g(x) is that it is a real-valued function, x = 2 is the only number necessarily outside its domain. Of course the definition of g may exclude any other real number. yeah, that's it. The answer given is k=2, m=5. I can't understand 8. ## Re: function Originally Posted by Trefoil2727 yeah, that's it. The answer given is k=2, m=5. I can't understand Given the answer, I can come up with a question that would have that as its answer. What values of $x$ are not in the domain of $(g\circ g)$? That is the function $g$ composed with itself. $(g\circ g)(x) = g(g(x)) = \dfrac{g(x)+1}{g(x)-2} = \dfrac{\dfrac{x+1}{x-2}+1}{\dfrac{x+1}{x-2}-2}$ From here, you can simplify: $\dfrac{\dfrac{x+1}{x-2}+1}{\dfrac{x+1}{x-2}-2}\cdot \dfrac{x-2}{x-2} = \dfrac{x+1+x-2}{x+1-2(x-2)} = \dfrac{2x-1}{5-x}$ Obviously, $x\neq 5$ for this. But, since $(g\circ g)(x)$ is only defined when $g(x)$ is defined, we also have $x \neq 2$. 9. ## Re: function Clearly $f(x)= \frac{x+1}{x- 2}$ is undefined when x= 2. To determine a value that f(x) cannot be equal to, do the division: $\frac{x+1}{x- 2}= 1+ \frac{3}{x- 2}$. Since the fraction is never 0, f(x) is never 1.
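A small exact-arithmetic check of the composition worked out in post 8 (this sketch is mine; the sample points are arbitrary rationals other than 2 and 5): g(g(x)) agrees with (2x - 1)/(5 - x), which is why x = 5 joins x = 2 as an excluded value.

```python
from fractions import Fraction as F

def g(x):
    return (x + 1) / (x - 2)        # undefined at x = 2

# g(g(x)) additionally needs g(x) != 2, i.e. x + 1 != 2(x - 2), i.e. x != 5
for x in [F(0), F(1), F(3), F(7, 2), F(10)]:
    assert g(g(x)) == (2 * x - 1) / (5 - x)
    print(x, "->", g(g(x)))
```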
https://math.stackexchange.com/questions/4261626/name-of-random-variable-thats-1-or-1-with-equal-probability
# Name of random variable that's +1 or -1 with equal probability? Is there a name for this distribution: $$P(X = 1) = P(X = -1) = 0.5?$$ I'm currently writing $$2X-1$$ where $$X \sim \text{Ber}(0.5)$$. • You can call it “uniform distribution on $\{-1, 1\}$”. Sep 27, 2021 at 11:44 • You can also refer to it as the sign of $x$, as in $\frac{x}{|x|} ~: ~a~$ is any positive constant, and $~-a \leq x \leq a, ~x\neq 0.$ Sep 27, 2021 at 11:52
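This uniform distribution on {-1, +1} is often referred to in the literature as the Rademacher distribution. A tiny sketch (mine; the sample size and seed are arbitrary) showing that the 2X - 1 construction from the question and drawing directly from {-1, +1} behave the same way:

```python
import random

random.seed(0)

def via_bernoulli():
    x = random.randint(0, 1)        # X ~ Ber(0.5)
    return 2 * x - 1                # the 2X - 1 construction from the question

def via_choice():
    return random.choice([-1, 1])   # uniform on {-1, +1}

n = 100_000
print(sum(via_bernoulli() for _ in range(n)) / n)   # close to 0
print(sum(via_choice() for _ in range(n)) / n)      # close to 0
```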
https://math.stackexchange.com/questions/1917659/prove-existence-of-unique-fixed-point/1917714
# Prove existence of unique fixed point Let $f(x)$ be a strictly decreasing function on $\mathbb{R}$ with $|f(x)-f(y)|<|x-y|$ whenever $x\neq y$. Set $x_{n+1}=f(x_n)$. Show that the sequence $\{x_n\}$ converges to the root of $x=f(x)$. Note that the condition is weaker than what is required in the contracting mapping principle. Firstly, here's a picture of what's going on: Formally, we start by observing that because $f$ is strictly decreasing and continuous, $f(x) = x$ must have a unique solution. Call this fixed point $r$; then defining \begin{align*} A &= \{x \in \mathbb{R} : x < r \} \\ B &= \{r\} \\ C &= \{x \in \mathbb{R} : x > r \}. \end{align*} we have \begin{align*} f(x) &> x \text{ for all } x \in A \\ f(r) &= r \\ f(x) &< x \text{ for all } x \in C. \end{align*} We can say more: • $\boldsymbol{f}$ maps $\boldsymbol{A}$ into $\boldsymbol{C}$ and vice versa: For $a \in A$, $f(a) > a$, so by decreasing $f(f(a)) < f(a)$. So $f(a) \in C$. Similarly for $C$. • $\boldsymbol{f(f(x)) > x}$ on $\boldsymbol{A}$ and $\boldsymbol{f(f(x)) < x}$ on $\boldsymbol{C}$: For $a \in A$, $|f(f(a)) - f(a)| < |f(a) - a|$. So $f(f(a))$ is closer to $f(a)$ than $a$ is, and since $a, f(f(a)) \in A$ and $f(a) \in C$, "closer to $f(a)$" implies larger. So now fix any $x_1 \in \mathbb{R}$. We may assume that $x_1 \in A$. By the above facts, it follows that $x_1, x_3, x_5, \ldots$ is an increasing sequence in $A$, and $x_2, x_4, x_6, \ldots$ is a decreasing sequence in $C$. It follows that both of them converge, say to $x$ and to $y$ respectively, where $x \le r \le y$. But by continuity, $f(x_{2k})$ must converge to the same thing as $f(x_{2k+1})$, so $x = y = r$, and therefore $x_n \to r$. • I rehashed my answer to remove a lot of unnecessary details and clarify the main point. – 6005 Sep 7 '16 at 10:01 • Thanks! Great answer, and the intuition too. It's called a cobweb plot, as I recall. – Zhang Edison Sep 7 '16 at 10:08 • @ZhangEdison yeah kinda reminded me of the microeco lesson I had. – Vim Sep 9 '16 at 1:02 Uniqueness. If $x$ and $y$ are distinct fixed points then $0<|x-y|=|f(x)-f(y)|<|x-y|$. Contradiction. Existence. $|f(x)-f(y)|<|x-y|$ implies that $f$ is continuous. If $f$ has not fixed point then i) $f(x)>x$ for all $x$ or ii) $f(x)<x$ for all $x$. If i) holds then for $x_0\in \mathbb{R}$, $x_1=f(x_0)>x_0$ and since $f$ is decreasing $x_2=f(x_1)<f(x_0)=x_1$. Hence if $h(x)=f(x)-x$ we have that $h(x_0)>0$ and $h(x_1)<0$ and by the IVT, we have a root of $h(x)=0$, i.e. a fixed point for $f$. The case ii) is similar. Convergence of iterates. Let $z$ be the fixed point. And let $x_0\in\mathbb{R}$, then $$|x_n-z|=|f(x_{n-1})-z|<|x_{n-1}-z|<\dots<|x_{0}-z|$$ which means that $f$ send the compact $K=[z-|z-x_0|,z+|z-x_0|]$ in itself. Moreover $d_n=|x_n-z|$ is strictly decreasing, and admits a limit $r\geq 0$. Let $x_{n_k}$ be a subsequence which converges to some $y\in K$. If $y\not=z$ then $$r=|y-z|=\lim_{k\to\infty} d_{n_k}=\lim_{k\to\infty} d_{n_{k}+1}=\lim_{k\to\infty}|f(x_{n_k})-z| =|f(y)-z|=|f(y)-f(z)|<|y-z|$$ which is a contradiction. Therefore any convergent subsequence of $\{x_n\}_n$ has limit $z$, which, along with the compactness of $K$, implies that $\{x_n\}_n$ converges to $z$. P.S. Once we have established the existence of the fixed point the decreasing hypothesis is not needed anymore. Note:$f(x)=x-\arctan(x)+\pi/2$ is a strictly increasing function which is a weak contraction in $\mathbb{R}$, but it has no fixed points. 
• +1: for a great use of compactness, and for the note that the existence of a fixed point is sufficient (decreasing is not needed). – 6005 Sep 7 '16 at 10:14
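To see the two bracketing subsequences from the first answer in action, here is a small iteration sketch (mine, not from the answers). The function f(x) = -arctan(x) is one concrete choice satisfying the hypotheses: it is strictly decreasing, satisfies |f(x) - f(y)| < |x - y| for x != y, and has its unique fixed point at 0; the starting point is arbitrary.

```python
import math

def f(x):
    return -math.atan(x)   # strictly decreasing weak contraction, fixed point at 0

x = 5.0                    # arbitrary starting point
for n in range(1, 21):
    x = f(x)
    print(n, x)
# the iterates alternate in sign and |x_n| decreases, closing in on the fixed point 0
```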
https://stats.stackexchange.com/questions/391010/probability-question-about-panda-births-and-statistical-tests
# Probability question about panda births and statistical tests I am self-learning statistics, and I have a question about how to do the following problem: There are two species of panda bear, A and B. Both are equally common in the wild and live in the same places. A veterinarian has a new genetic test that can identify the species of a panda. But the test, like all tests, is imperfect. This is the information you have about the test: • The probability it correctly identifies a species A panda is 0.8. • The probability it correctly identifies a species B panda is 0.65. The vet administers the test to your panda and tells you that the test is positive for species A. Compute the posterior probability that your panda is species A. I wish to calculate P(species=A | test=A), by using the Bayes theorem and calculate the prior, P(test=A). I am confused about the test on species B. How to calculate the prior? • I think you are missing some information. – user2974951 Feb 11 '19 at 13:38 Bayes' theorem gives $$$$\mathrm{prob}(\mathrm{species} = A | \mathrm{test} = A, \mathcal{I}) = \frac{\mathrm{prob}(\mathrm{test} = A | \mathrm{species} = A, \mathcal{I}) \: \mathrm{prob}(\mathrm{species} = A | \mathcal{I})}{\mathrm{prob}(\mathrm{test} = A | \mathcal{I})}$$$$ with $$\mathcal{I}$$ being the problem information. By the marginalisation and product rules we can expand the denominator as \begin{align} \mathrm{prob}(\mathrm{test} = A | \mathcal{I}) &= \sum_{s \in \{A, B\}} \mathrm{prob}(\mathrm{test} = A, \mathrm{species} = s | \mathcal{I}) \\ &= \sum_{s \in \{A, B\}} \mathrm{prob}(\mathrm{test} = A | \mathrm{species} = s, \mathcal{I}) \: \mathrm{prob}(\mathrm{species} = s | \mathcal{I}) \end{align} From the problem information we have \begin{align} \mathrm{prob}(\mathrm{test} = A | \mathrm{species} = A, \mathcal{I}) &= 0.8 \\ \mathrm{prob}(\mathrm{test} = B | \mathrm{species} = B, \mathcal{I}) &= 0.65 \\ \mathrm{prob}(\mathrm{species} = A | \mathcal{I}) &= 0.5 \\ \mathrm{prob}(\mathrm{species} = B | \mathcal{I}) &= 0.5 \end{align} and from the second line $$$$\mathrm{prob}(\mathrm{test} = A | \mathrm{species} = B, \mathcal{I}) = 0.35$$$$ \begin{align} \mathrm{prob}(\mathrm{species} = A | \mathrm{test} = A, \mathcal{I}) &= \frac{0.8 \cdot 0.5}{0.8 \cdot 0.5 + 0.35 \cdot 0.5} \\ &= \frac{16}{23} \simeq 0.696 \end{align}
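A short exact check of the arithmetic in the answer above (nothing new is assumed beyond the numbers already given):

```python
from fractions import Fraction as F

prior_A     = F(1, 2)            # species A and B equally common
test_A_if_A = F(8, 10)           # P(test = A | species = A) = 0.8
test_A_if_B = 1 - F(65, 100)     # P(test = A | species = B) = 1 - 0.65 = 0.35

posterior = (test_A_if_A * prior_A) / (test_A_if_A * prior_A + test_A_if_B * (1 - prior_A))
print(posterior, float(posterior))   # 16/23, approximately 0.696
```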
https://math.stackexchange.com/questions/2152688/what-is-the-shortest-distance-between-skew-lines-in-n-dimensions
# What is the shortest distance between skew lines in N dimensions? I have two skew lines in $\mathbb{R}^N$ ($N > 2$) defined as $\vec{x} = \vec{x}_A + \vec{d}_A t$ and $\vec{x} = \vec{x}_B + \vec{d}_B s$ ($t, s \in \mathbb{R}$). Now, I'd like to calculate the shortest distance between those lines. In 3D, this seems to be rather simple since the cross product $[\vec{d}_A \times \vec{d}_B]$ is a vector. However, in $\mathbb{R}^N$, there is an infinite number of vectors that are perpendicular to $\vec{d}_A$ and $\vec{d}_B$ and that lie on a subset $H^{\perp}$ of dimension $N - 2$. My question is: How can one calculate the minimal distance without generalizing the cross product to $N$ dimensions? You could compute the minimum of $$d(s,t)=\Vert(\vec x_A+\vec d_At)-(\vec x_B+\vec d_Bs)\Vert=\Vert(\vec x_A-\vec x_B)+\vec d_At-\vec d_Bs\Vert$$ using basic analysis. In more detail: the above gives you a function $\mathbb R^2\rightarrow \mathbb R$. Compute its gradient, and look for zeros. Hint: Even easier, use $d(s,t)^2$. • Thanks for your answer! I though of minimizing $d(s, t)^2$ too. This would give me two points $P_A$ and $P_B$ that lie on either line, and the vector connecting the points $\vec{v}_{AB}$ would give the direction. However, I still have a problem with the uniqueness of $\vec{v}_{AB}$. It's true that $|\vec{v}_{AB}|$ is the minimal distant I am after, but how many vectors in $H^{\perp}$ would have the same distance?.. Is my logic correct? Feb 20, 2017 at 10:16 • I'm not sure what you mean with "vectors in $H^\bot$ having a distance". Only the vector $\vec v_{AB}$ gives you the shortest connection between the lines. Of course, there are other vectors perpendicular to $\vec d_A$ and $\vec d_B$ at the same time, but if they are not a solution to your minimization, they cannot point from one line to the other while being also short as $\vec v_{AB}$. Feb 20, 2017 at 10:28 • Yes, this was exactly what I meant. In this sense, $\vec{v}_{AB}$ must be unique since it's a) perpendicular to $\vec{d}_A$ and $\vec{d}_B$ and b) points from one line to the other. Other vectors from $H^{\perp}$ satisfy only a). Thanks for clarifying. Feb 20, 2017 at 10:38 It's worth noting that the 3-dimensional case is the most general case. If $$x(s) = p + s u$$ and $$y(t) = q + t v$$ generate the lines then the distance is the minimum norm of the residual $$r(s, t) = x(s) - y(t) = w + s u - t v$$ where $$w = p-q$$. The residual lies in the vector space spanned by $$u$$, $$v$$ and $$w$$. Within that space it occupies the 2-dimensional affine plane through $$w$$ with directions $$u$$ and $$v$$, assuming $$u$$ and $$v$$ are independent. We get the minimum norm residual by projecting parts of $$w$$ parallel to the $$uv$$ plane. If $$u$$ and $$v$$ are unit vectors, we get an orthogonal basis for the plane: $$\hat{u} = u, \hat{v} = (1 - u u^T) v$$. The distance is $$\|(1 - \hat{u} \hat{u}^T - \hat{v} \hat{v}^T /(\hat{v}^T \hat{v})) w\|$$. Compared to cross product solution for the 3-dimensional case, you'll note that projecting onto the cross product of $$u$$ and $$v$$ is equivalent to the projection operator $$1 - \hat{u} \hat{u}^T - \hat{v} \hat{v}^T /(\hat{v}^T \hat{v})$$ when $$u$$ and $$v$$ are independent. More generally, if $$P$$ is an orthogonal projection onto a subspace then $$1-P$$ is the orthogonal projection onto the subspace's orthogonal complement. What I went through is really just a special case of solving a least-squares problem with Gram-Schmidt. 
You can calculate the distance between arbitrary affine subspaces by a similar process. Let $$A, B$$ be linear maps into a vector space $$V$$ and let $$a, b$$ be points in $$V$$. Then $$a + \text{im}(A)$$ and $$b + \text{im}(B)$$ are affine subspaces of $$V$$. Their distance is the minimum norm of the residual $$r(x, y) = a-b + Ax - By$$. This is just a least-squares problem for a block matrix $$C$$ composed of $$A$$ and $$-B$$. Let $$z = [x, y]$$, $$c = a-b$$ and $$C = [A, -B]^T$$. Then $$c + Cz = a-b + Ax - By$$.
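Here is a numerical sketch of the least-squares route (my own; the two sample lines in R^4 are made up for illustration). Minimizing the norm of the residual (x_A - x_B) + t d_A - s d_B is an ordinary least-squares problem in (t, s), and the residual at the minimizer comes out orthogonal to both direction vectors, matching the projection picture described above.

```python
import numpy as np

def line_distance(xA, dA, xB, dB):
    """Minimal distance between x = xA + t*dA and x = xB + s*dB in R^n."""
    C = np.column_stack([dA, -dB])                  # residual r = (xA - xB) + C @ [t, s]
    params, *_ = np.linalg.lstsq(C, xB - xA, rcond=None)
    r = (xA - xB) + C @ params
    return np.linalg.norm(r), r

xA = np.array([0., 0., 0., 0.]); dA = np.array([1., 0., 0., 0.])
xB = np.array([0., 1., 0., 2.]); dB = np.array([0., 0., 1., 0.])

dist, r = line_distance(xA, dA, xB, dB)
print(dist)              # sqrt(5) for this example
print(r @ dA, r @ dB)    # both ~0: the shortest connecting vector is perpendicular to both lines
```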
https://math.stackexchange.com/questions/2651165/let-n-be-a-6-digit-number-perfect-square-and-perfect-cube-if-n-6-is-not-ev/2651170
# Let $n$ be a 6-digit number, perfect square and perfect cube. If $n-6$ is not even or a multiple of 3, find $n$ Let $n$ be a 6-digit number, perfect square and perfect cube. If $n-6$ is not even or a multiple of 3, find $n$. My try Playing with the first ten perfect squares and cubes I ended with: The last digit of $n \in (1,5,9)$ If $n$ last digit is $9$, then the cube ends in $9$, Ex: if $n$ was $729$, the cube is $9^3$ (ends in $9$) and the square ends in $3$ or $7$ If $n$ last digit is 5, then the cube ends in 5 and the square ends in 5 If $n$ last digit is 1, then the cube ends in 1 and the square ends in 1 By brute force I saw that from $47^3$ onwards, the cubes are 6-digit, so I tried some cubes (luckily for me not for long) and $49^3 = 343^2 = 117649$ worked. So I found $n=117649$ but I want to know what is the elegant or without brute force method to find this number because my method isn't very good, just pure luck maybe. Note that the required number is both a square and a cube, so it must be a sixth power. Already $10^6=1000000$ has seven digits and $5^6=15625$ has only five digits, so that leaves us with $6^6,7^6,8^6,9^6$ to test. Furthermore, we are given that $n-6$ is not even and not a multiple of 3, which implies that $n$ itself is also not even and not a multiple of 3. This eliminates $6^6,8^6$ and $9^6$ immediately, leaving $7^6$ as the only possible answer. • Pretty clever, i don't know how i didn't think about that. Thanks – Rodrigo Pizarro Feb 15 '18 at 2:29 If $n$ is both a perfect square and perfect cube then. $n = a^6$ If $n-6$ is neither even nor divisible by $3$, then $n$ is not even nor divisible by $3$ and $a$ is not even or divisible by 3. $a^6$ is a $6$ digit number $6<a<10$ $7$ is the only integer in that interval that is not divisible by $2$ or by $3.$ $n = 7^6$
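A brute-force confirmation of the answers above (added here; not part of the original posts): a number that is both a square and a cube is a sixth power, so it suffices to scan the six-digit sixth powers and apply the parity and divisibility conditions.

```python
# six-digit numbers that are simultaneously perfect squares and perfect cubes
candidates = [a ** 6 for a in range(2, 32) if 100_000 <= a ** 6 <= 999_999]

# n - 6 must be odd and not a multiple of 3
answers = [n for n in candidates if (n - 6) % 2 == 1 and (n - 6) % 3 != 0]

print(candidates)   # [117649, 262144, 531441]  (7^6, 8^6, 9^6)
print(answers)      # [117649], which is 7^6 = 343^2
```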
http://math.stackexchange.com/questions/183428/trying-to-prove-that-lim-n-rightarrow-infty-frac-gamma-n1n-logn
# Trying to prove that $\lim_{n\rightarrow\infty}(\frac{\Gamma '(n+1)}{n!} -\log(n))=0$ In my attempt to prove that $\Gamma'(1)=-\gamma$, I've reduced the problem to proving that $\lim_{n\rightarrow\infty}(\frac{\Gamma '(n+1)}{n!} -\log(n))=0$. Where $\gamma$ is the Euler-Mascheroni constant, and $\log$ denotes the natural logarithm. I've been messing with it for a while without achieving much of anything. The first derivative of the Gamma function does have a recursive formula which can be found through iterated integration by parts, but that was what I used to get where I am, and applying it again just takes me back to where I started. My book lists a ton of equivalent definitions for the Gamma function, but only gives the integral definition for its derivatives and I've just had considerable trouble doing much with that integral. I should note that at first I was trying to prove that $\frac{\Gamma'(n+1)}{n!}\sim\log(n)$, but now I'm pretty sure that showing that their difference in the limit is zero would be sufficient, since what I'm ultimately interested in is showing that $\lim_{n\rightarrow\infty}(-\sum_{k=0}^n\frac{1}{k} +\frac{1}{n!}\Gamma'(n+1)) =-\gamma$ Hopefully someone can help me with this. Thanks. - See this as well. –  Guess who it is. Aug 16 '12 at 23:44 We have $\log(\Gamma(n+2))-\log(\Gamma(n+1))=\log(n+1)$, so by the mean value theorem, $\frac{\Gamma'}{\Gamma}(s)=\log(n+1)$ for some $s\in[n+1,n+2]$. Now, $\frac{\Gamma'}{\Gamma}$ is increasing, so repeating the argument on the interval $[n,n+1]$, we get $\log(n)\leq\frac{\Gamma'}{\Gamma}(n+1)\leq \log(n+1)$. The result now follows, as $\lim_{n\to\infty}\log(n+1)-\log(n)=0$. - @C.Williamson $\Gamma(n+2) = (n+1) \Gamma(n+1)$, hence $\log\Gamma(n+2) = \log\Gamma(n+1) + \log(n+1)$. –  Sasha Aug 16 '12 at 23:30 Yeah, I deleted my comment after seeing my mistake. I like the answer! –  C. Williamson Aug 16 '12 at 23:32 Wow, this is a really clever use of the MVT, thanks. –  Thoth Aug 16 '12 at 23:35 $\Gamma'(x)=\Gamma(x)\psi(x)$ where $\psi(x)$ is the digamma function, i.e. the logarithmic derivative of $\Gamma(x)$. Then $$\lim_{x \to \infty}\frac{\Gamma'(x+1)}{x!}-\log(x)= \lim_{x \to \infty}\frac{\Gamma(x+1)\psi(x+1)}{\Gamma(x+1)}-\log(x)= \lim_{x \to \infty}\psi(x+1)-\log(x)$$ Noting $$\psi(x+1) = \log x+O\left(\frac{1}{x}\right)$$ We conclude $$\lim_{x \to \infty}\psi(x+1)-\log(x)=0$$ - If what you want is to prove that $\Gamma'(1)=-\gamma$, I will show you a very slick solution I learned from user robjohn. 
(I guess) you know that $$\int_0^1 \frac{1-x^n}{1-x}dx=H_n$$ (just expand the function it is easy) Now look at $$I(n)=\int_0^1 x^{n}\log(1-x)dx$$ We integrate by parts, to get $$I(n)=\int_0^1 x^{n}\log(1-x)dx=\left.\frac{1-x^{n+1}}{n+1}\log(1-x)\right|_0^1- \frac 1{n+1}\int_0^1 \frac{1-x^{n+1}}{1-x}dx$$ $$I(n)=\int_0^1 x^{n}\log(1-x)dx=-\frac 1{n+1}\int_0^1 \frac{1-x^{n+1}}{1-x}dx$$ $$I(n)=\int_0^1 x^{n}\log(1-x)dx=-\frac {H_{n+1}}{n+1}$$ Now let $x=1-u$, then $un=m$, $$I(n)=\int_0^1 (1-u)^{n}\log(u)du$$ $$I(n)=\frac 1 n\int_0^n \left(1-\frac m n\right)^{n}\log\left(\frac m n\right)dm$$ $$I(n)=\frac 1 n\int_0^n \left(1-\frac m n\right)^{n}\log ( m )dm-\frac 1 n\int_0^n \left(1-\frac m n\right)^{n}\log( n)dm$$ Now we use the last equiality we derived, to start getting into something: $$-\frac {n}{n+1}H_{n+1}=\int_0^n \left(1-\frac m n\right)^{n}\log ( m )dm-\log n \int_0^n \left(1-\frac m n\right)^{n}dm$$ Now the rightmost integral is just $$\int_0^n {{{\left( {1 - \frac{m}{n}} \right)}^n}} dm = n\int_0^1 {{{\left( {1 - u} \right)}^n}} du = n\int_0^1 {{u^n}} du = \frac{n}{{n + 1}}$$ $$-\frac{n}{{n + 1}}{H_{n + 1}} = \int_0^n {{{\left( {1 - \frac{m}{n}} \right)}^n}} \log mdm - \frac{n}{{n + 1}}\log n$$ So we get $$\frac{n}{{n + 1}}\left( {\log n - {H_{n + 1}}} \right) = \int_0^n {{{\left( {1 - \frac{m}{n}} \right)}^n}} \log mdm$$ Now, by letting $n\to \infty$, we get \eqalign{ & \mathop {\lim }\limits_{n \to \infty } \frac{n}{{n + 1}}\left( {\log n - {H_{n + 1}}} \right) = \mathop {\lim }\limits_{n \to \infty } \int_0^n {{{\left( {1 - \frac{m}{n}} \right)}^n}} \log mdm \cr & - \gamma = \mathop {\lim }\limits_{n \to \infty } \int_0^n {{{\left( {1 - \frac{m}{n}} \right)}^n}} \log mdm \cr & - \gamma = \int_0^\infty {{e^{ - m}}} \log mdm \cr} But $$\Gamma '\left( n \right) = \int_0^\infty {{e^{ - m}}} {m^{n - 1}}\log mdm \Rightarrow \Gamma '\left( 1 \right) = \int_0^\infty {{e^{ - m}}} \log mdm$$ -
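A quick numerical look at the limit in the question (my own sketch, standard library only): approximate the digamma value Γ'(n+1)/n! = ψ(n+1) by a central difference of log Γ and compare it with log n. The gap shrinks roughly like 1/(2n), consistent with ψ(n+1) = log n + O(1/n).

```python
import math

def digamma(x, h=1e-5):
    # psi(x) = d/dx log Gamma(x), approximated by a central difference of lgamma
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

for n in [10, 100, 1000, 10000]:
    print(n, digamma(n + 1) - math.log(n))
# the printed differences decrease toward 0, roughly like 1/(2n)
```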
https://math.stackexchange.com/questions/3395244/can-a-finite-set-with-a-non-prime-number-of-elements-be-a-field/3395256#3395256
Can a finite set with a non prime number of elements be a field?

I understand that as typically defined (using modular arithmetic) finite fields require a prime number of elements. But I recall hearing someone say that if you modify the way addition and multiplication are defined on a set with a non-prime number of elements, say 4 elements, then it could still be a field. Is this true? How would this set look and how would you define the addition and multiplication?

For any prime $$p$$ and integer $$k\geq 1$$, there is, up to isomorphism, exactly one field of order $$p^k$$. In the case of $$2^2$$ elements, one usually denotes the elements as $$0,1,x,x+1$$ (or something similar), with addition done modulo $$2$$. The multiplication table looks like this: $$\begin{array}{|c|cccc|}\hline &0&1&x&x+1\\\hline 0&0&0&0&0\\ 1&0&1&x&x+1\\ x&0&x&x+1&1\\ x+1&0&x+1&1&x\\\hline\end{array}$$ In general, you can find a multiplication table the following way: Start with $$\Bbb Z_p$$, the integers modulo $$p$$ (also known as the field with $$p$$ elements), and an irreducible polynomial $$f$$ of degree $$k$$ with coefficients in $$\Bbb Z_p$$. Then take the polynomial ring $$\Bbb Z_p[x]$$, and divide out by the ideal generated by $$f$$. Any element of our $$p^k$$-element field will correspond to a polynomial of degree less than $$k$$, with addition as normal. Multiplication is defined by reducing modulo $$f$$. In our example, we have $$\Bbb Z_2$$, $$k=2$$ and $$f(x)=x^2+x+1$$. The elements are as given above, and addition is done as for regular polynomials with coefficients in $$\Bbb Z_2$$. As for multiplication, let's look at $$x(x+1)$$ as an example. With regular polynomials we have $$x(x+1)=x^2+x$$. Then reducing modulo $$f$$ basically means either:
• Subtract multiples of $$f$$ until the degree is lower than $$k=2$$.
• $$f(x)=0$$ means $$x^2=x+1$$. Substitute this, repeatedly if necessary, until the degree is lower than $$k=2$$.
In either case, $$x^2+x$$ is reduced to $$1$$.

• Thank you. Very well explained. Oct 15, 2019 at 21:29

In fact for any power $$p^n$$ of a prime $$p$$ you can find a finite field, usually denoted by $$\mathbb F_{p^n}$$ or $$GF(p^n)$$. You can construct these by finding an integer polynomial $$p \in \mathbb Z[X]$$ of degree $$\deg p = n$$ that is irreducible over $$\mathbb Z_p := \mathbb Z/p\mathbb Z$$. Then $$\mathbb F_{p^n} := \mathbb Z_p[X]/(p(X))$$ (where $$(p(X))$$ denotes the principal ideal $$p(X)\mathbb Z_p[X]$$). One can even show that there is exactly one field of order $$p^n$$ (up to isomorphism) and that these are all the finite fields. Example: For $$p=n=2$$ (so $$p^n=4$$) there is exactly one such polynomial and it is $$p(X) = X^2 + X + 1$$. This means all elements in $$\mathbb Z_2[X]/(p(X))$$ are represented by the residue classes $$[0], [1], [X], [1+X]$$. Now we can actually do computations: E.g. $$[X] \cdot [1+X] = [X+X^2] = [X+X^2 - p(X)] = [-1] = [1]$$

• Slightly confused because we have not studied groups (in fact, this was totally skipped), but why is $$[X+X^2] = [X+X^2 - p(X)]$$? Oct 15, 2019 at 21:25 • $[q(X)]$ is a symbol of a residue class modulo $p(X)$. This means $[q(X)] = [q(X) + kp(X)]$ for all $k$. This is the same as modular arithmetic in the integers: the elements in $\mathbb Z/n\mathbb Z$ ("modulo $n$") can be represented as $[x]$ where $[x] = [x+kn]$ for all $k\in \mathbb Z$: if we do computations modulo $4$ then $[1] = [5] = [-3] = [9] = ...$.
Oct 16, 2019 at 6:53 • And I have to apologize, it is sometimes different to judge what people know already and what they do not, but please feel free to ask if you have further questions about this! Oct 16, 2019 at 6:54 Actually, one proves the number of elements in a finite field is always some power $$p^k$$ of a prime $$p$$. Conversely, for any natural number $$k$$ and any prime, there exists a field, denoted $$\mathbf F_{p^k}$$, with $$p^k$$ elements, and this field is unique up to a field isomorphism. Furthermore, the field $$\mathbf F_{p^k}$$ is (isomorphic to) a subfield of the field $$\mathbf F_{p^l}$$ if and only if $$k$$ divides $$l$$.
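The reduction rule x^2 = x + 1 used above can be tabulated mechanically. In this sketch (mine, not from the answers) elements of the four-element field are stored as pairs (a1, a0) representing a1*x + a0 with coefficients mod 2, and products are reduced modulo x^2 + x + 1:

```python
elems = [(0, 0), (0, 1), (1, 0), (1, 1)]                  # 0, 1, x, x+1
names = {(0, 0): "0", (0, 1): "1", (1, 0): "x", (1, 1): "x+1"}

def mul(p, q):
    a1, a0 = p
    b1, b0 = q
    # (a1 x + a0)(b1 x + b0) = a1 b1 x^2 + (a1 b0 + a0 b1) x + a0 b0,
    # then substitute x^2 = x + 1 and reduce the coefficients mod 2
    c2, c1, c0 = a1 * b1, a1 * b0 + a0 * b1, a0 * b0
    return ((c1 + c2) % 2, (c0 + c2) % 2)

for p in elems:
    print([names[mul(p, q)] for q in elems])
# reproduces the multiplication table above, e.g. x * (x+1) = 1 and (x+1)^2 = x
```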
https://math.stackexchange.com/questions/3201055/for-any-finite-abelian-group-g-there-is-an-integer-m-with-g-isomorphic-to
# For any finite abelian group $G$, there is an integer $m$ with $G$ isomorphic to a subgroup of $U(\mathbb{Z}_{m})$. I want to prove if the following assertion from Rotmans Advanced Algebra page 205 is true: For any finite abelian group $$G$$, there is some integer $$m$$ with $$G$$ isomorphic to a subgroup of $$U(\mathbb{Z}_{m})$$, where $$U(\mathbb{Z}_{m})$$ are the units of Integers module m. Is this sentence true or false? The book I'm studying says this is not true, but I cannot find a proper counterexample to understand the mentioned claim. Thanks • What book are you studying? Apr 24 '19 at 20:58 • What is $U(\Bbb Z_m)$? Apr 24 '19 at 21:23 • units in the ring of integers modulo $m$ ? Apr 24 '19 at 21:27 • Already edited. I'm studying Rotmans Advanced Algebra and U(Zm) means units of Integers module m – Cos Apr 24 '19 at 22:17 • Probably follows from the Kronecker-Weber Theorem and this. Apr 24 '19 at 22:39 This is, in fact, a true statement. There are two essential facts that I use here without proof. 1. Any finite abelian group is a product of cyclic groups. Look up the structure theorem for modules over a PID for more on this. 2. For any $$n \geq 2$$ there are infinitely many primes $$p$$ such that $$p \equiv 1 \text{ mod } n$$. This is a special case of Dirchlet's theorem on primes in an arithmetic progression. Now, take such a finite abelian group $$G$$. We have $$G = \prod_{i=1}^n \mathbb Z/n_i \mathbb Z$$, some integers $$n_i$$. Let $$p_i$$ be a prime such that $$p_i \equiv 1 \text{ mod } n_i$$. This exists by the second fact listed above. In fact, as there are infinitely many such primes we can take these $$p_i$$ to be distinct. Now, as $$p_i \equiv 1 \text{ mod } n_i$$, we have $$n | p_i - 1$$. Thus, we have $$\mathbb Z / n_i \mathbb Z \subseteq \mathbb Z/(p_i - 1) \mathbb Z$$. Strictly speaking, this means that there is an injective homomorphism between these groups, but identitying a group with its isomorphic image doesn't affect us in this case. As $$p_i$$ is prime, we know that $$\mathbb Z / (p_i-1) \mathbb Z = (\mathbb Z/p_i \mathbb Z)^{\times}$$. Hence, we have $$\mathbb Z/ n_i \mathbb Z \subseteq (\mathbb Z/ p_i \mathbb Z)^{\times}$$ Thus, we have $$G = \prod_{i=1}^n \mathbb Z/n_i \mathbb Z \subseteq \prod_{i=1}^n (\mathbb Z/p_i \mathbb Z)^{\times}$$. One can show easily that the unit group of the product of rings is the product of their unit groups, so we have $$\prod_{i=1}^n (\mathbb Z/p_i \mathbb Z)^{\times} = \left(\prod_{i=1}^n \mathbb Z/p_i \mathbb Z\right)^{\times}$$. We took the $$p_i$$ to be distinct, hence they are pairwise relatively prime. By the Chinese Remainder Theorem, $$\prod_{i=1}^n \mathbb Z/p_i \mathbb Z = \mathbb Z/ p_1 p_2 \dots p_n \mathbb Z$$. Thus, $$G \subseteq \left(\prod_{i=1}^n \mathbb Z/p_i \mathbb Z\right)^{\times} = (\mathbb Z/p_1 p_2 \dots p_n \mathbb Z)^{\times}$$. Field theory is listed as a tag for this, so I should mention that once you have that $$\mathbb Q(\zeta_n)$$ has Galois group $$(\mathbb Z/n \mathbb Z)^{\times}$$, this result proves the inverse Galois problem for finite abelian groups, i.e. all finite abelian groups are Galois groups over $$\mathbb Q$$. • And I'm wondering why Rotman would say otherwise. Apr 25 '19 at 1:18 • You didn't explicitly answer the question "yes" or "no" (it is conventional to do so at the start of the answer). Apr 25 '19 at 3:01 • @BillDubuque I've added this, thanks for catching that. 
Apr 25 '19 at 4:08 • @QuangHoang In the copy of Rotman I have, the remark on page 205 says "We cannot conclude more from the proposition; given any finite abelian group G, there is some integer m with G isomorphic to a subgroup of U(Im)". It seems to me that Rotman is saying that the result is true, but it isn't a corollary of the result he just proved (that $\text{Gal}(\mathbb F(\zeta_n)/F)$ is a subgroup of $(\mathbb Z/m \mathbb Z)^{\times})$. The phrasing is kind of poor, so I see where the confusion arises. Apr 25 '19 at 4:17
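To make the construction concrete, here is a small sketch (my own; the target group Z/4 x Z/6 and the primes 5 and 7 are illustrative choices, not from the thread). It picks distinct primes p_i ≡ 1 (mod n_i), sets m to their product, and exhibits units of Z_m of orders 4 and 6 whose product of cyclic subgroups is a copy of Z/4 x Z/6 inside U(Z_m):

```python
from math import gcd

def order(a, m):
    """Multiplicative order of a modulo m (a must be a unit mod m)."""
    assert gcd(a, m) == 1
    k, x = 1, a % m
    while x != 1:
        x, k = x * a % m, k + 1
    return k

# illustrative target: G = Z/4 x Z/6;  5 ≡ 1 (mod 4) and 7 ≡ 1 (mod 6) are distinct primes
m = 5 * 7

# CRT-glued generators: g4 is 1 mod 7 with order 4 mod 5; g6 is 1 mod 5 with order 6 mod 7
g4 = next(a for a in range(2, m) if gcd(a, m) == 1 and a % 7 == 1 and order(a % 5, 5) == 4)
g6 = next(a for a in range(2, m) if gcd(a, m) == 1 and a % 5 == 1 and order(a % 7, 7) == 6)

print(g4, order(g4, m))   # an element of order 4 in U(Z_35)
print(g6, order(g6, m))   # an element of order 6 in U(Z_35)
# since g4 ≡ 1 (mod 7) and g6 ≡ 1 (mod 5), the cyclic groups they generate meet only in 1,
# so together they generate a subgroup of U(Z_35) isomorphic to Z/4 x Z/6
```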
http://www.mathematicsgre.com/viewtopic.php?f=1&t=150
## Sequences, series

Forum for the GRE subject test in mathematics.

CoCoA: What is lim_{n->\infty}{(n!)^(1/n)}?

Nameless: Use the Stirling formula: lim (n!/[sqrt(2pi * n)e^(-n)n^(n)]) = 1 as n goes to infinity, so the answer will be infinity. Correct me if I am wrong.

Another member: I think you are correct, Nameless... If we recall that e = lim (n/(n!^(1/n))), then lim (n!^(1/n)) = lim [(n!^(1/n))/n]*n = (1/e)*(lim n) = infinity.

Kastro: We know that n! increases more rapidly than k^n, where k is constant. That is, for all values of k there exists an N such that n > N implies n! > k^n. But then we must have that n!^(1/n) > k. Since k is unbounded, n!^(1/n) is unbounded above, implying that the limit must go to infinity.

CoCoA: Consider 3 series \sum_{n=1}^{infinity}{a_n}, with each a_n given below. Which of them converge(s)?
I. a_n = {log(n^{-2})} / (n^{-2})
II. a_n = (log 4) / (2n)
III. a_n = n / (2^n)
A. None
B. I only
C. II only
D. III only
E. More than one of the series converge.

Nameless: The problem is straightforward:
I) diverges since lim (a_n) != 0
II) = log(2) * harmonic series, hence diverges
III) use the ratio test: lim(a_{n+1}/a_n) = 1/2, so the series converges

CoCoA: Evaluate the sum from n=0 to infinity of {(-1)^n} / {(2n+1)(3^n)}:
A. 1/(2e^3)
B. {e^(1/3)}/2
C. (3^{1/2}*pi)/6
D. 3/(pi*{1/2})
E. Diverges

CoCoA: in I is m a constant or a typo? typo corrected sorry!

Another member: How about one from the 05 practice test... Find the set of real numbers for which the series converges: Sum[1 to inf] [n!*x^(2n)]/[n^n(1+x^(2n))]

CoCoA: mistake in my last problem also - corrected now - sorry

Nameless: since x^(2n) >= 0 and x^(2n)/(1+x^(2n)) < 1, the series is <= sum(n=1..infinity) n!/n^n. Use the Stirling formula, lim (n!/[sqrt(2pi * n)e^(-n)n^(n)]) = 1 as n goes to infinity, and the ROOT test; then the radius of convergence is infinity.

Nameless: For the sum from n=0 to infinity of {(-1)^n} / {(2n+1)(3^n)}: the series is convergent, so eliminate E). We have 1/(1+x^2) = sum(n=0..infinity) [(-1)^n] x^(2n); take the integral from 0 to 1/sqrt(3) of both sides, then the answer is C).
Hey friends, can we embed LATEX into this site?

freddie: According to $n! \ge \left(\frac{n+1}{3}\right)^n$ we get $\lim_{n\to \infty}(n!)^{1/n} = \infty$.
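A quick numeric check (added here, not from the forum) of the alternating series discussed above: its partial sums settle on sqrt(3)*pi/6, the value obtained by integrating the geometric series for 1/(1+x^2) from 0 to 1/sqrt(3).

```python
import math

s = 0.0
for n in range(40):
    s += (-1) ** n / ((2 * n + 1) * 3 ** n)

print(s)                            # 0.906899682...
print(math.sqrt(3) * math.pi / 6)   # the same value, answer C
```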
https://math.stackexchange.com/questions/3151152/four-married-couples-attend-a-party-each-person-shakes-hands-with-every-other-p/3151158
# Four married couples attend a party. Each person shakes hands with every other person, except their own spouse, exactly once. How many handshakes? Four married couples attend a party. Each person shakes hands with every other person, except their own spouse, exactly once. How many handshakes? My book gave the answer as $$24$$. I do not understand why. I thought of it like this: You have four pairs of couples, so you can think of it as M1W2, M2W2, M3W3, M4W4, where M is a man and W is a woman. M1 has to shake 6 other hands, excluding his wife. You have to do this 4 times for the other men, so you have $$4\times 6$$ handshakes, but in my answer, you are double counting. How do I approach this problem? • In your answer, you both overcounted and undercounted, and incidentally these happened to cancel out and give you the correct answer without having to do anything further. You did $4 \times (\text{Handshakes done by the men})$, which overcounted the man-man handshakes, but left out the woman-woman handshakes. – M. Vinay Mar 17 at 4:49 • And that's easily fixed by counting all such handshakes in the same way, not just those done by men, so you get $48$. And now, as you said, you have indeed double-counted. But if you know it's exactly double counting, you can get the answer by halving it! – M. Vinay Mar 17 at 4:56 • @Issel No, Person #2 being the spouse of Person #1, also has to shake hands with $6$ people, and so on, so it's $6 + 6 + 4 + 4 + 2 + 2 + 0 + 0 = 24$. – M. Vinay Mar 17 at 5:42 • Possible duplicate of Handshakes in a party – Xander Henderson Mar 17 at 20:45 • @user21820 Hm, if it gets reopened, I'll post an answer. I don't think I see why it got closed. Sure it's an elementary problem, but it clearly shows effort and at least a part of the question is why the specific method used seems to be wrong but gives the correct answer. – M. Vinay Mar 19 at 5:29 $$8$$ people. Each experiences handshakes with $$6$$ people. There are $$6\times 8=48$$ experiences of handshakes. Each handshake is experienced by two people so there $$48$$ experiences means $$48\div 2=24$$ handshakes. Suppose the spouses were allowed to shake each other's hands. That would give you $$\binom{8}{2} = 28$$ handshakes. Since there are four couples, four of these handshakes are illegal. We can remove those to get the $$24$$ legal handshakes. • This uses Inclusion-Exclusion Principle. – smci Mar 17 at 11:48 • Inclusion-Exclusion helps to find the cardinality of a union of non-disjoint sets. I'm merely using the fact that a set together with its complement (which are disjoint) comprise the entire universe. – Austin Mohr Mar 18 at 2:30 • and that's just a case of Inclusion-Exclusion Principle. (By the way, the set we're enumerating here isn't the 'entire universe', since it's not the total number of handshakes, or handshakes with all people in the world, or even n-way handshakes with all people.) – smci Mar 19 at 0:27 You may proceed as follows using combinations: • Number of all possible handshakes among 8 people: $$\color{blue}{\binom{8}{2}}$$ • Number of pairs who do not shake hands: $$\color{blue}{4}$$ It follows: $$\mbox{number of hand shakes without pairs} = \color{blue}{\binom{8}{2}} - \color{blue}{4} = \frac{8\cdot 7}{2} - 4 = 24$$ Let's look at it not from individuals, but from couples. There are four couples, i.e. $$3!=6$$ meetings of couples. Per meeting of couples, there are four handshakes. This makes it $$6\times4=24$$ handshakes. 
Thanks @CJ Dennis for pointing out an error in the reasoning: it should, of course, be the sum, not the product, so the correct number of meetings of couples is $$\sum_{k=1}^{n-1}k=\frac{n(n-1)}{2}$$.

$$k$$ couples entails $$2k$$ people. If we imagine the couples going in sequential order, each member of couple 1 has to shake $$2k-2$$ hands, or $$4k-4$$ handshakes for couple 1 in total. Since there is 1 fewer couple left every time a new couple has shaken hands, there will be $$4k-4i$$ handshakes by the $$i$$-th couple. So the total number of handshakes is given by: $$\sum_{i=1}^k (4k-4i) = \sum_{i=1}^k4k - \sum_{i=1}^k4i = 4k^2 - 4\frac{k(k+1)}{2} = 4(k^2 - \frac{k^2+k}{2}) = 4(k^2 - (\frac{k^2}{2} + \frac{k}{2})) = 4(\frac{k^2}{2}-\frac{k}{2}) = 2(k^2-k)$$ for $$k$$ couples. Plugging in $$k = 4$$ verifies a solution of 24 for this case.

• Well… Each of the $2k$ people shakes hands with $2k - 1 - 1 = 2k - 2$ others (everyone except the spouse). So that's $2k(2k- 2) = 4k(k - 1)$, but since every handshake must've been counted twice, divide that by $2$ to get $2k(k - 1)$ handshakes in total. – M. Vinay Mar 17 at 5:36

Each line is a handshake between the required two people. There are 24 lines: (figure omitted)

A simple approach: There are 8 people in total. Each one will shake hands with 6 others. Total handshakes from the individual perspective: 6*8 gives 48. Actual handshakes: 48/2 = 24.

• How is this different from fleablood's answer? – Toby Mak Mar 17 at 8:46 • @TobyMak sorry, I really didn't see it. When I posted there were only four answers including mine. That answer was really not there, completely surprised. I don't know how this happened? – Vijendra Parashar Mar 17 at 15:28 • I see. Since you wrote your answer independently from fleablood, it's only fair to keep your answer. – Toby Mak Mar 18 at 8:22

There are $$4\text{ couples}=8\text{ people}$$. The statement, "Each person shakes hands with every other person, except their own spouse, exactly once," means that $$8$$ people shook hands with $$6$$ other people. That yields $$8\times6=48$$. It would be $$7$$ others, but a spouse and the handshaker are both excluded in each handshake. Dividing by $$2$$, we get $$24$$ handshakes.

• Correct me if I am wrong but each person in a group of 8 shakes hands with 7 other persons without any restrictions. Here, each shakes hands with 6 people; the persons with whom there is no handshake are the person him/herself and his/her spouse – Aaratrick Aug 29 at 23:41 • @Aaratrick Thanks for your comment. Somehow, it didn't occur to me that the handshaker also does not normally shake hands with himself. – poetasis Aug 30 at 2:50 • No problem; it happens to all of us sometime or the other, when we just can't seem to change our perspective on something or not notice a key point – Aaratrick Aug 31 at 0:21

If all of them shook hands with each other there would be $\binom{8}{2} = 28$ handshakes, but none of them shake hands with their own spouse, so there are 28-4=24 handshakes.
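A brute-force count (added here) agreeing with the 24 above and with the general 2k(k-1) formula derived in one of the answers; pairing persons 2i and 2i+1 as the i-th couple is just a labelling convention for the sketch.

```python
from itertools import combinations

def handshakes(k):
    """Handshakes among k couples when spouses do not shake hands."""
    people = range(2 * k)                       # persons 2i and 2i+1 form couple i
    return sum(1 for a, b in combinations(people, 2) if a // 2 != b // 2)

for k in range(1, 7):
    print(k, handshakes(k), 2 * k * (k - 1))    # the two counts agree; k = 4 gives 24
```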
https://www.physicsforums.com/threads/complex-integration-poles-on-the-imaginary-axis.600835/
# Complex Integration - Poles on the Imaginary axis 1. Apr 27, 2012 ### knowlewj01 1. The problem statement, all variables and given/known data evaluate the integral: $I_1 =\int_0^\infty \frac{dx}{x^2 + 1}$ by integrating around a semicircle in the upper half of the complex plane. 2. Relevant equations 3. The attempt at a solution first i exchange the real vaiable x with a complex variable z & factorize the denominator. Also, the contour of integration is a semicircle with radius= infinity $I_2 = \int_{-\infty}^{\infty}\frac{dz}{(z+i)(z-i)}$ the contour contains only the pole in the upper half, so from residue theorem we know: $I_2 = 2\pi i R(i)$ where R(i) is the residue at the point z=i R(i) = 1/(i+i) = 1/2i Hence $I_2 = \pi$ Now, i know the answer to the original integral is supposed to be pi/2. Can i say that: because the original limits range from 0 to infinity, and i have integrated twice this amount, my answer should be divided by 2? or is this reasoning flawed? 2. Apr 27, 2012 ### Whovian I THINK you do divide by two, as noted, but how I'd approach this would be by using trig substitution, substituting x = tan(θ). :P But yep, I think your reasoning's right, or at least close! 3. Apr 27, 2012 ### jackmell 4. Apr 28, 2012 ### knowlewj01 Thanks for the replies. Is this because the function is even in the upper half of the complex plane? I thought of doing this by integrating a contour in only the positive quadrent, ie: (0,0) to (R,0) (R,0) to (0,iR) along contour ω [a radial path of radius R from the real axis to the imaginary axis in the positive quadrent] (0,iR) to (0,i+δ) [where δ is a small value which we will take to 0 around the pole] (0,i+δ) to (0,i-δ) along a contour λ [radial path of radius δ in the clockwise direction such that the pole is not included in the enclosed path] (0,i-δ) to (0,0) we have: $F(z) = \frac{1}{Z^2+1}=\frac{1}{(z+i)(z-i)}$ $\oint_C F(z)dz =\int_0^R F(z)dz + \int_\omega F(z)dz + \left[\int_{iR}^{i+\delta}F(z)dz+\int_{i-\delta}^0 F(z)dz\right] + \int_\lambda F(z)dz =0$ as F(z) is analytic everywhere inside the contour we demand that the integral = 0 by Green's theorem. we know from residue theorem that integration on the contour λ will give a contribution of -πiR(i) as it is counterclockwise (& where R(i) is the residue at i, which is 1/2i) now let R→∞ and δ→0: First: consider the integral along the contour ω, if we have $z=Re^{i\theta}$ $\int_\omega F(z)dz = \int_\omega \frac{iRe^{i\theta}d\theta}{R^2e^{i2\theta} + 1}$ $\lim_{R \to \infty}\int_\omega \frac{iRe^{i\theta}d\theta}{R^2e^{i2\theta}+1}≈ \frac{1}{R}\int_\omega \frac{id\theta}{e^{i\theta}}=0$ Is this correct? I thought Jordan's Lemma worked only for semicircles but this seems to give the same result. we are left with: $\int_0^\infty F(z)dz + \int_{i\infty}^0 F(z)dz - \frac{\pi}{2} = 0$ so i know that the integral along the imaginary axis must be 0 but i'm not sure how to prove it
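Independently of the contour argument being discussed, it can be reassuring to check the target value numerically. The snippet below is only a sanity check, not part of the residue calculation: it evaluates $\int_0^\infty \frac{dx}{x^2+1}$ with `scipy.integrate.quad` and compares it with $\pi/2$, i.e. half of the $\pi$ obtained over the whole real line.

```python
import numpy as np
from scipy.integrate import quad

# Numerically integrate 1/(x^2 + 1) over [0, oo) and over (-oo, oo).
half_line, _ = quad(lambda x: 1.0 / (x**2 + 1.0), 0, np.inf)
full_line, _ = quad(lambda x: 1.0 / (x**2 + 1.0), -np.inf, np.inf)

print(half_line, np.pi / 2)  # both ~1.5708: the original integral I_1
print(full_line, np.pi)      # both ~3.1416: the full-line integral I_2 from the residue
```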
https://math.stackexchange.com/questions/1910324/why-must-a-radical-be-isolated-before-squaring-both-sides/1910431
# Why must a radical be isolated before squaring both sides? In the following equation: $$\sqrt{2x + 1} + 1 = x$$ You are supposed to isolate the radical: $$\sqrt{2x + 1} = x - 1$$ And then proceed by squaring both sides. If you start by solving the equation this way, you will eventually complete the square and get an answer of: $$4$$ However, why must the radical be isolated before squaring both sides? Why can't you do, for example... $$(\sqrt{2x + 1} + 1)^2 = x^2$$ I know this would lead you down the wrong path, but I don't know why. It doesn't make sense to me because I can (once I isolate the radical) square both sides when one side $$x-1$$ involves addition/subtraction. Is there some special property of radicals that makes them have to be completely alone before they can be squared? Thank you. • There's nothing mathematically incorrect with $$(\sqrt{2x+1} +1)^2 = x^2$$ The reason you isolate the radical first is because then squaring both sides eliminates the radical. – Klint Qinami Aug 31 '16 at 22:56 • Try it and see. If you don't isolate the radical and you square correctly you will have an expression that still has a radical in it. – John Coleman Aug 31 '16 at 22:57 • As John Coleman said, you will get a radical again, and still square it, yet another radical and so on... – gambler101 Aug 31 '16 at 22:58 • When I first saw the title, I thought this was talking about radical people on two sides of a war... – Mehrdad Sep 1 '16 at 5:32 • @Mehrdad And indeed the classic "square both sides" military strategy doesn't work unless all radicals have been isolated! ....wait. – Kyle Strand Sep 2 '16 at 17:15 You can of course write $$(\sqrt{2x+1}+1)^2=x^2,$$ but when you multiply out you get $$2x+2\sqrt{2x+1}+2=x^2,$$ and there is still a radical in your new equation. The point in isolating the radical is that after that, as you square the equation, you get rid of it completely. Well, it doesn't lead to the wrong path: if you square the equation right away you get that $$(2x+1)+2\sqrt{2x+1}+1=x^2$$ And because $\sqrt{2x+1}=x-1$, you get the equation $$(2x+1)+2(x-1)+1=x^2$$ This is $x^2-4x=0$ which has solutions $x=0,4$, but the solution $x=0$ doesn't do it because $\sqrt{1}$ is taken to be $1$ (and not $-1$). (Sure, I'm replacing the square root by clearing $x-1$: everything you do to solve the equation will end up being equivalent. Point is that it is not wrong, just a bit more farfetched. It is interesting to note that mindlessly squaring forces one to note "clearing the square root" is necessary, and one can then realize one could have started doing this in the first place.) • +1 wow, fantastic answer to a question that I almost ended up ignoring. – Mehrdad Sep 2 '16 at 8:34 You can do what ever you darned well want. But you have to do it correctly. You can do this: $\sqrt{2x + 1} + 1 = x$ $(\sqrt{2x + 1} + 1)^2 = x^2$ $(2x + 1) + 2\sqrt{2x + 1} + 1 = x^2$ but now what? ... it's true but you've just made things more difficult. What you can NOT do under any circumstances is this: $(\sqrt{2x + 1} + 1)^2 = x^2$ $(2x + 1) + 1 = x^2$ The point is if you want to solve it, then you want to get rid of the radical and you can't do that is you square a sum with other terms. $(a + \sqrt{b})^2 = a^2 + 2a\sqrt{b} + b$ So that doesn't do anything to get rid of it. But it's not wrong. It's just... not what you want. ==== " Is there some special property of radicals that makes them have to be completely alone before they can be squared?" 
Not really, everything has to be completely alone before you square it if you don't want the square to involve other ... things in it. Actually, your question is a bit like asking "Why must we isolate before we divide:" "$3x + 2 = 11$" "Why do we isolate the $3x$ "$3x = 11 -2$ "Why don't we just divide first: ""$(3x +2)/3 = 11/3$ The answer is we can do what we darned well like: $x + \frac 23 = \frac {11}3$ $x = \frac {11}3 - \frac 23$. Nothing wrong with that... but nothing right with it either. We isolate terms, for whatever operation, for the purpose of isolating them so we can work directly with them. That's all. • Once you've got $(2x+1) + 2\sqrt{2x+1} + 1 = x^2$, you can put $x-1$ in place of $\sqrt{2x+1}$. You get a quadratic equation, and some extraneous roots. So it's not impossible to do it that way; it's just more complicated. $\qquad$ – Michael Hardy Aug 31 '16 at 23:10 • @MichaelHardy You didn't see my post, did you? =) – Pedro Tamaroff Aug 31 '16 at 23:13 • I never said it was impossible. Just that it gets you where it takes you which is probably not where you want to go. And you can't really replace $x-1$ with $\sqrt{2x+1}$ when you never isolated the radical first. – fleablood Aug 31 '16 at 23:14 There is (often) another way to solve these equations that isn't really easier but at least I think it is interesting. \begin{align} \sqrt{2x + 1} + 1 &= x \\ (\sqrt{2x + 1} - 1)(\sqrt{2x + 1} + 1) &= (\sqrt{2x + 1} - 1)x \\ 2x &= (\sqrt{2x + 1} - 1)x \\ \sqrt{2x + 1} - 1 &= 2 &\text{(Need to check $x=0$ is not a solution.)}\\ \sqrt{2x + 1} &= 3 \\ 2x + 1 &= 9 \\ x &= 4 \end{align} • The problem with this one is that $\sqrt{2x + 1}$ - 1 can be zero so -- as always -- you need to check your solution is indeed correct. In very broad terms, once a radical is involved you need to check back. – chx Sep 1 '16 at 10:02 • @chx 8 - Yes, you do need to check it. – steven gregory Sep 1 '16 at 21:56 • @chx $\sqrt{2x+1}-1=0$ in fact corresponds to $x=0$. Of course, squaring both sides also adds extraneous roots. – steven gregory Sep 3 '16 at 21:26 • you are still isolating square root beetwen lines 4 and 5 – RiaD Dec 1 '17 at 17:23 The main problem with your approach is that squaring will still leave a radical, whereas isolating the radical won't. So it's mostly about simplicity, rather than doing some compulsory transformation. However, squaring is not sufficient. You can only do $$(\sqrt{2x+1}+1)^2=x^2$$ under the assumption that $x\ge0$, or you may add some spurious solutions. There is a different approach, though. You can set $t=\sqrt{2x+1}$, with $t\ge0$, and so $2x+1=t^2$ and $$x=\frac{t^2-1}{2}$$ so your equation becomes $$t+1=\frac{t^2-1}{2}$$ that simplifies to $$t^2-2t-3=0$$ The roots are $t=-1$ and $t=3$, but the negative root must be discarded. Therefore we get $\sqrt{2x+1}=3$, that easily gives $x=4$. The alternative method of isolating the radical is, however, simpler. We get $\sqrt{2x+1}=x-1$, which, under the condition $x-1\ge0$, can be squared: $$2x+1=x^2-2x+1$$ and so $x^2-4x=0$. The roots are $0$ and $4$, but only the latter satisfies $x\ge1$. The reason that the radical must stand alone is that the square root and the square are inverse operations of one another. So in order for the radical to disappear (i.e. for the square root operation and the squaring operation to cancel one another), the squaring must be applied to the pure radical, not the radical plus some other stuff. • It must not stand all alone, it may be a factor of a product, but not a sum's summand. 
– Michael Hoppe Aug 31 '16 at 23:01 • @MichaelHoppe True. But squaring a product amounts to squaring each factor, and thus the the radical itself is squared in that case. No such luck for sums. – Arthur Aug 31 '16 at 23:02 • I added a comment to this effect before noticing your answer. This point is crucial. – Kyle Strand Sep 2 '16 at 17:23 There is no need to isolate the radical. However, you usually want to isolate the radical in order to simplify computations. From $$\sqrt{2x+1}+1=x=\frac{\left(\sqrt{2x+1}\right)^2-1}{2}\,,$$ we have $$\left(\sqrt{2x+1}\right)^2-2\,\sqrt{2x+1}-3=0\,.$$ Thus, $$\left(\sqrt{2x+1}-3\right)\,\left(\sqrt{2x+1}+1\right)=0\,.$$ Since $\sqrt{2x+1}+1>0$, we get $$\sqrt{2x+1}-3=0\,.$$ Thus, $x=4$. (Well, the last part still requires isolation of the radical: $\sqrt{2x+1}=3$.) Why can't you do, for example... $$(\sqrt{2x + 1} + 1)^2 = x^2$$ Just in case you're overlooking this: It's critically important to realize that exponents do not distribute over additions. That is, if you think that the left-hand side above simplifies to $(2x+1) + 1$, that's false. As others have pointed out, when you properly expand a binomial square in this way as per $(a+b)^2 = a^2 + 2ab + b^2$, you'll still have a radical in the expression. Only by isolating everything under the radical do the square-power and square-root elegantly cancel out. Understanding binomial squares is one of the most important basic facts at this level of algebra/precalculus. $$(\sqrt{2x + 1} + 1)^2 = x^2$$ $$\implies2x+2\sqrt{2x+1}+2=x^2$$ $$\implies2\sqrt{2x+1}=x^2 -2x -2$$ $$\implies4(2x+1)=(x^2 -2x -2)^2$$ $$\implies4(2x+1)=x^4 +4x^2 + 4 - 4x^3 + 8x -4x^2$$ $$\implies0=x^4 +4x^2 + 4 - 4x^3 + 8x -4x^2 -8x -4$$ $$\implies 0=x^4 - 4x^3$$ $$\therefore x= 4$$ Do anything as long as LHS = RHS • You are giving also a wrong solution: if $x=0$, the left-hand side is $2$, and the right-hand side is $0$. – egreg Sep 1 '16 at 12:56
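If you want to see the extraneous root appear concretely, a computer algebra system makes the point of this discussion visible: squaring produces the candidates $x=0$ and $x=4$, and substituting back into the original equation rejects $x=0$, exactly as egreg's comment warns. This is only an illustration of the answers above, written with `sympy`.

```python
import sympy as sp

x = sp.symbols('x', real=True)
original = sp.Eq(sp.sqrt(2*x + 1) + 1, x)

# Squared, isolated-radical version: 2x + 1 = (x - 1)^2
squared = sp.Eq(2*x + 1, (x - 1)**2)
candidates = sp.solve(squared, x)          # [0, 4]

# Keep only candidates that satisfy the *original* equation
roots = [c for c in candidates if original.subs(x, c)]
print(candidates, roots)                   # [0, 4] [4]

# sympy's solver applied to the original equation discards x = 0 on its own
print(sp.solve(original, x))               # [4]
```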
https://math.stackexchange.com/questions/3322138/rate-of-change-with-fx-4x2-7-on-1-b
# Rate of change with $f(x)=4x^2-7$ on $[1,b]$ As part of a textbook exercise I am to find the rate of change of $$f(x)=4x^2-7$$ on inputs $$[1,b]$$. The solution provided is $$4(b+1)$$ and I am unable to arrive at this solution. Tried: $$f(x_2)=4b^2-7$$ $$f(x_1)=4(1^2)-7=4-7=-3$$ If the rate of change is $$\frac{f(x_2)-f(x_1)}{x_2-x_1}$$ then: $$\frac{(4b^2-7)-3}{b-1}$$ = $$\frac{4b^2-10}{b-1}$$ This is as far as I got. I tried to see if I could factor out the numerator but this didn't really help me: $$(4b^2-10)=2(2b^2-5)$$ If I substitute this for my numerator I still cannot arrive at the provided solution. I then tried isolating b in the numerator: $$4b^2-10=0$$ $$4b^2=10$$ $$b^2=10/4$$ $$b=\frac{\sqrt{10}}{\sqrt{4}}=\frac{\sqrt{10}}{2}$$ This still doesn't help me arrive at the solution. How can I arrive at $$4(b+1)$$? • Formatting tip: type $x_1$ to obtain $x_1$. – N. F. Taussig Aug 14 at 9:00 • Thanks for the tip! – Doug Fir Aug 14 at 13:52 A small sign-mistake! See the highlighted parts in red and blue: Tried: $$f(x_2)=4b^2-7$$ $$\color{blue}{f(x_1)}=4(1^2)-7=4-7=\color{blue}{-3}$$ If the rate of change is $$\frac{f(x_2)-f(x_1)}{x_2-x_1}$$ then: $$\frac{(4b^2-7)\color{red}{-3}}{b-1}$$ = $$\frac{4b^2-10}{b-1}$$ Which should be: $$\frac{f(x_2)\color{red}{-}\color{blue}{f(x_1)}}{x_2-x_1} = \frac{(4b^2-7)\color{red}{-}\left(\color{blue}{-3}\right)}{b-1} = \frac{4b^2-4}{b-1}$$ Then proceed with $$4b^2-4=4\left(b^2-1\right)=4\left(b-1\right)\left(b+1\right)$$ and simplify. There's a mistake. $$f(x_2) - f(x_1) = 4b^2-7 - (-3) =4b^2 -4 = 4(b+1)(b-1)$$ Your $$f(1)$$ is $$-3$$ so $$f(b)-f(1)=4b^2-7+3=4b^2-4$$ Now it does work out.
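The algebra here is only two lines once the sign is fixed, but if you like double-checking simplifications, here is a small `sympy` verification (an addition, not part of the original exchange) that the average rate of change of $$f(x)=4x^2-7$$ on $$[1,b]$$ reduces to $$4(b+1)$$.

```python
import sympy as sp

b = sp.symbols('b')
f = lambda t: 4*t**2 - 7

rate = (f(b) - f(1)) / (b - 1)   # (4b^2 - 7 - (-3)) / (b - 1)
print(sp.simplify(rate))          # 4*b + 4, i.e. 4(b + 1)
print(sp.factor(f(b) - f(1)))     # 4*(b - 1)*(b + 1), which cancels the (b - 1)
```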
https://math.stackexchange.com/questions/3582012/number-of-ways-in-which-balls-are-distributed
# Number of ways in which balls are distributed

In how many ways can we distribute 5 different balls into 4 different boxes, given that order does not matter inside the boxes and empty boxes are not allowed?

My attempt: First I chose $$4$$ balls out of $$5$$ and arranged them in the $$4$$ boxes: $$\binom 54 \times 4!.$$ Then for the remaining ball I can choose any of the $$4$$ boxes. Multiplying these, we get $$480$$, which is double the given correct answer. Why am I wrong? And how can I solve the problem if the order matters inside the boxes?

• Cases will repeat in your way. Use the inclusion-exclusion principle. Mar 15, 2020 at 17:33

For any allowed distribution we have exactly one box with $$2$$ balls and the others with $$1$$ ball each. We can choose that box in $$4$$ ways. Now multiply this by the number of permutations of the $$5$$ balls, i.e. $$5!$$, and finally divide the result by $$2$$ because order does not matter inside the box with two balls. Hence the result is $$\frac{4\cdot 5!}{2}=240.$$

With your attempt you count some distributions more than once. Naming the boxes $$A,B,C,D$$ and the balls $$a,b,c,d,e$$, you count (for example) the following distribution twice: $$a,e\in A$$, $$b\in B$$, $$c\in C$$, $$d\in D$$. The first time you choose the set $$\{a,b,c,d\}$$, put $$a\in A, b\in B, c\in C, d\in D$$, then you put $$e\in A$$; the second time you choose the set $$\{e,b,c,d\}$$, put $$e\in A, b\in B, c\in C, d\in D$$, then you put $$a\in A$$. Since I do not see how to fix this count, I suggest another approach: first, choose the box which will contain $$2$$ balls in $$\binom{4}{1}$$ ways; second, choose the two balls you will put in that box in $$\binom{5}{2}$$ ways; then place the last three balls in the last three boxes in $$3!$$ ways. So the answer should be: $$\binom{4}{1} \cdot \binom{5}{2} \cdot 3! = 240$$

Using the inclusion-exclusion principle, $$4^5 - \binom41 3^5 + \binom42 2^5 - \binom43 1^5 = 240.$$ Subtract the cases in which all 5 objects go into some 3 boxes (exclusion), then add back the cases in which they go into some 2 boxes, because those were subtracted more times than they occur, and then exclude again the cases in which all objects go into one box.

Two methods:

Method 1: There is only one occupancy pattern possible, 2,1,1,1. So first count the unordered distribution of 5 distinct things into groups of sizes 2,1,1,1, and then arrange the groups in the 4 boxes: $$\frac{5!}{(2!)(1!)^3(3!)}\cdot 4!= 240$$ (the $$3!$$ in the denominator accounts for the three indistinguishable groups of size 1).

Method 2: Distribute identical objects first, then arrange the balls. The number of positive-integer solutions of $$x_1+x_2+x_3+x_4=5$$ is $$\binom{5-1}{4-1}=4$$ (this just picks which box receives $$2$$ balls), and for each such pattern the $$5$$ distinct balls can be placed in $$\frac{5!}{2!}=60$$ ways, giving $$4\cdot 60 = 240$$.
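For a problem this small you can also just enumerate every assignment and count, which is a useful habit for checking combinatorial answers. The sketch below is an addition, not part of the original answers; it confirms both the direct count and the inclusion-exclusion value.

```python
from itertools import product
from math import comb

# Brute force: assign each of the 5 distinct balls to one of 4 distinct boxes,
# keeping only assignments in which every box is used at least once.
count = sum(1 for assignment in product(range(4), repeat=5)
            if len(set(assignment)) == 4)
print(count)  # 240

# Inclusion-exclusion, as in one of the answers above:
print(sum((-1)**j * comb(4, j) * (4 - j)**5 for j in range(4)))  # 240
```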
https://www.mathworks.com/help/matlab/ref/plot3.html
# plot3 3-D point or line plot ## Syntax ``plot3(X,Y,Z)`` ``plot3(X,Y,Z,LineSpec)`` ``plot3(X1,Y1,Z1,...,Xn,Yn,Zn)`` ``plot3(X1,Y1,Z1,LineSpec1,...,Xn,Yn,Zn,LineSpecn)`` ``plot3(___,Name,Value)`` ``plot3(ax,___)`` ``p = plot3(___)`` ## Description example ````plot3(X,Y,Z)` plots coordinates in 3-D space. To plot a set of coordinates connected by line segments, specify `X`, `Y`, and `Z` as vectors of the same length.To plot multiple sets of coordinates on the same set of axes, specify at least one of `X`, `Y`, or `Z` as a matrix and the others as vectors. ``` example ````plot3(X,Y,Z,LineSpec)` creates the plot using the specified line style, marker, and color.``` example ````plot3(X1,Y1,Z1,...,Xn,Yn,Zn)` plots multiple sets of coordinates on the same set of axes. Use this syntax as an alternative to specifying multiple sets as matrices.``` example ````plot3(X1,Y1,Z1,LineSpec1,...,Xn,Yn,Zn,LineSpecn)` assigns specific line styles, markers, and colors to each `XYZ` triplet. You can specify `LineSpec` for some triplets and omit it for others. For example, `plot3(X1,Y1,Z1,'o',X2,Y2,Z2)` specifies markers for the first triplet but not the for the second triplet.``` example ````plot3(___,Name,Value)` specifies `Line` properties using one or more name-value pair arguments. Specify the properties after all other input arguments. For a list of properties, see Line Properties.``` example ````plot3(ax,___)` displays the plot in the target axes. Specify the axes as the first argument in any of the previous syntaxes.``` example ````p = plot3(___)` returns a `Line` object or an array of `Line` objects. Use `p` to modify properties of the plot after creating it. For a list of properties, see Line Properties.``` ## Examples collapse all Define `t` as a vector of values between 0 and 10$\pi$. Define `st` and `ct` as vectors of sine and cosine values. Then plot `st`, `ct`, and `t`. ```t = 0:pi/50:10*pi; st = sin(t); ct = cos(t); plot3(st,ct,t)``` Create two sets of x-, y-, and z-coordinates. ```t = 0:pi/500:pi; xt1 = sin(t).*cos(10*t); yt1 = sin(t).*sin(10*t); zt1 = cos(t); xt2 = sin(t).*cos(12*t); yt2 = sin(t).*sin(12*t); zt2 = cos(t);``` Call the `plot3` function, and specify consecutive `XYZ` triplets. `plot3(xt1,yt1,zt1,xt2,yt2,zt2)` Create matrix `X` containing three rows of x-coordinates. Create matrix `Y` containing three rows of y-coordinates. ```t = 0:pi/500:pi; X(1,:) = sin(t).*cos(10*t); X(2,:) = sin(t).*cos(12*t); X(3,:) = sin(t).*cos(20*t); Y(1,:) = sin(t).*sin(10*t); Y(2,:) = sin(t).*sin(12*t); Y(3,:) = sin(t).*sin(20*t);``` Create matrix `Z` containing the z-coordinates for all three sets. `Z = cos(t);` Plot all three sets of coordinates on the same set of axes. `plot3(X,Y,Z)` Create vectors `xt`, `yt`, and `zt`. ```t = 0:pi/500:40*pi; xt = (3 + cos(sqrt(32)*t)).*cos(t); yt = sin(sqrt(32) * t); zt = (3 + cos(sqrt(32)*t)).*sin(t);``` Plot the data, and use the `axis equal` command to space the tick units equally along each axis. Then specify the labels for each axis. ```plot3(xt,yt,zt) axis equal xlabel('x(t)') ylabel('y(t)') zlabel('z(t)')``` Create vectors `t`, `xt`, and `yt`, and plot the points in those vectors using circular markers. ```t = 0:pi/20:10*pi; xt = sin(t); yt = cos(t); plot3(xt,yt,t,'o')``` Create vectors `t`, `xt`, and `yt`, and plot the points in those vectors as a blue line with 10-point circular markers. Use a hexadecimal color code to specify a light blue fill color for the markers. 
```t = 0:pi/20:10*pi; xt = sin(t); yt = cos(t); plot3(xt,yt,t,'-o','Color','b','MarkerSize',10,... 'MarkerFaceColor','#D9FFFF')``` Create vector `t`. Then use `t` to calculate two sets of x and y values. ```t = 0:pi/20:10*pi; xt1 = sin(t); yt1 = cos(t); xt2 = sin(2*t); yt2 = cos(2*t);``` Plot the two sets of values. Use the default line for the first set, and specify a dashed line for the second set. `plot3(xt1,yt1,t,xt2,yt2,t,'--')` Create vectors `t`, `xt`, and `yt`, and plot the data in those vectors. Return the chart line in the output variable `p`. ```t = linspace(-10,10,1000); xt = exp(-t./10).*sin(5*t); yt = exp(-t./10).*cos(5*t); p = plot3(xt,yt,t);``` Change the line width to `3`. `p.LineWidth = 3;` Starting in R2019b, you can display a tiling of plots using the `tiledlayout` and `nexttile` functions. Call the `tiledlayout` function to create a 1-by-2 tiled chart layout. Call the `nexttile` function to create the axes objects `ax1` and `ax2`. Create separate line plots in the axes by specifying the axes object as the first argument to `plot`3. ```tiledlayout(1,2) % Left plot ax1 = nexttile; t = 0:pi/20:10*pi; xt1 = sin(t); yt1 = cos(t); plot3(ax1,xt1,yt1,t) title(ax1,'Helix With 5 Turns') % Right plot ax2 = nexttile; t = 0:pi/20:10*pi; xt2 = sin(2*t); yt2 = cos(2*t); plot3(ax2,xt2,yt2,t) title(ax2,'Helix With 10 Turns')``` Create `x` and `y` as vectors of random values between `0` and `1`. Create `z` as a vector of random duration values. ```x = rand(1,10); y = rand(1,10); z = duration(rand(10,1),randi(60,10,1),randi(60,10,1));``` Plot `x`, `y`, and `z`, and specify the format for the z-axis as minutes and seconds. Then add axis labels, and turn on the grid to make it easier to visualize the points within the plot box. ```plot3(x,y,z,'o','DurationTickFormat','mm:ss') xlabel('X') ylabel('Y') zlabel('Duration') grid on``` Create vectors `xt`, `yt`, and `zt`. Plot the values, specifying a solid line with circular markers using the `LineSpec` argument. Specify the `MarkerIndices` property to place one marker at the 200th data point. ```t = 0:pi/500:pi; xt(1,:) = sin(t).*cos(10*t); yt(1,:) = sin(t).*sin(10*t); zt = cos(t); plot3(xt,yt,zt,'-o','MarkerIndices',200)``` ## Input Arguments collapse all x-coordinates, specified as a scalar, vector, or matrix. The size and shape of `X` depends on the shape of your data and the type of plot you want to create. This table describes the most common situations. Type of PlotHow to Specify Coordinates Single point Specify `X`, `Y`, and `Z` as scalars and include a marker. For example: `plot3(1,2,3,'o')` One set of points Specify `X`, `Y`, and `Z` as any combination of row or column vectors of the same length. For example: `plot3([1 2 3],[4; 5; 6],[7 8 9])` Multiple sets of points (using vectors) Specify consecutive sets of `X`, `Y`, and `Z` vectors. For example: `plot3([1 2 3],[4 5 6],[7 8 9],[1 2 3],[4 5 6],[10 11 12])` Multiple sets of points (using matrices) Specify at least one of `X`, `Y`, or `Z` as a matrix, and the others as vectors. Each of `X`, `Y`, and `Z` must have at least one dimension that is same size. For best results, specify all vectors of the same shape and all matrices of the same shape. For example: `plot3([1 2 3],[4 5 6],[7 8 9; 10 11 12])` Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `categorical` | `datetime` | `duration` y-coordinates, specified as a scalar, vector, or matrix. 
The size and shape of `Y` depends on the shape of your data and the type of plot you want to create. This table describes the most common situations. Type of PlotHow to Specify Coordinates Single point Specify `X`, `Y`, and `Z` as scalars and include a marker. For example: `plot3(1,2,3,'o')` One set of points Specify `X`, `Y`, and `Z` as any combination of row or column vectors of the same length. For example: `plot3([1 2 3],[4; 5; 6],[7 8 9])` Multiple sets of points (using vectors) Specify consecutive sets of `X`, `Y`, and `Z` vectors. For example: `plot3([1 2 3],[4 5 6],[7 8 9],[1 2 3],[4 5 6],[10 11 12])` Multiple sets of points (using matrices) Specify at least one of `X`, `Y`, or `Z` as a matrix, and the others as vectors. Each of `X`, `Y`, and `Z` must have at least one dimension that is same size. For best results, specify all vectors of the same shape and all matrices of the same shape. For example: `plot3([1 2 3],[4 5 6],[7 8 9; 10 11 12])` Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `categorical` | `datetime` | `duration` z-coordinates, specified as a scalar, vector, or matrix. The size and shape of `Z` depends on the shape of your data and the type of plot you want to create. This table describes the most common situations. Type of PlotHow to Specify Coordinates Single point Specify `X`, `Y`, and `Z` as scalars and include a marker. For example: `plot3(1,2,3,'o')` One set of points Specify `X`, `Y`, and `Z` as any combination of row or column vectors of the same length. For example: `plot3([1 2 3],[4; 5; 6],[7 8 9])` Multiple sets of points (using vectors) Specify consecutive sets of `X`, `Y`, and `Z` vectors. For example: `plot3([1 2 3],[4 5 6],[7 8 9],[1 2 3],[4 5 6],[10 11 12])` Multiple sets of points (using matrices) Specify at least one of `X`, `Y`, or `Z` as a matrix, and the others as vectors. Each of `X`, `Y`, and `Z` must have at least one dimension that is same size. For best results, specify all vectors of the same shape and all matrices of the same shape. For example: `plot3([1 2 3],[4 5 6],[7 8 9; 10 11 12])` Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `categorical` | `datetime` | `duration` Line style, marker, and color, specified as a character vector or string containing symbols. The symbols can appear in any order. You do not need to specify all three characteristics (line style, marker, and color). For example, if you omit the line style and specify the marker, then the plot shows only the marker and no line. Example: `'--or'` is a red dashed line with circle markers Line StyleDescriptionResulting Line `'-'`Solid line `'--'`Dashed line `':'`Dotted line `'-.'`Dash-dotted line MarkerDescriptionResulting Marker `'o'`Circle `'+'`Plus sign `'*'`Asterisk `'.'`Point `'x'`Cross `'_'`Horizontal line `'|'`Vertical line `'s'`Square `'d'`Diamond `'^'`Upward-pointing triangle `'v'`Downward-pointing triangle `'>'`Right-pointing triangle `'<'`Left-pointing triangle `'p'`Pentagram `'h'`Hexagram Color NameShort NameRGB TripletAppearance `'red'``'r'``[1 0 0]` `'green'``'g'``[0 1 0]` `'blue'``'b'``[0 0 1]` `'cyan'` `'c'``[0 1 1]` `'magenta'``'m'``[1 0 1]` `'yellow'``'y'``[1 1 0]` `'black'``'k'``[0 0 0]` `'white'``'w'``[1 1 1]` Target axes, specified as an `Axes` object. If you do not specify the axes and if the current axes is Cartesian, then `plot3` uses the current axes. 
### Name-Value Arguments Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`. Example: `plot3([1 2],[3 4],[5 6],'Color','red')` specifies a red line for the plot. Note The properties listed here are only a subset. For a complete list, see Line Properties. Color, specified as an RGB triplet, a hexadecimal color code, a color name, or a short name. The color you specify sets the line color. It also sets the marker edge color when the `MarkerEdgeColor` property is set to `'auto'`. For a custom color, specify an RGB triplet or a hexadecimal color code. • An RGB triplet is a three-element row vector whose elements specify the intensities of the red, green, and blue components of the color. The intensities must be in the range `[0,1]`; for example, ```[0.4 0.6 0.7]```. • A hexadecimal color code is a character vector or a string scalar that starts with a hash symbol (`#`) followed by three or six hexadecimal digits, which can range from `0` to `F`. The values are not case sensitive. Thus, the color codes `'#FF8800'`, `'#ff8800'`, `'#F80'`, and `'#f80'` are equivalent. Alternatively, you can specify some common colors by name. This table lists the named color options, the equivalent RGB triplets, and hexadecimal color codes. Color NameShort NameRGB TripletHexadecimal Color CodeAppearance `'red'``'r'``[1 0 0]``'#FF0000'` `'green'``'g'``[0 1 0]``'#00FF00'` `'blue'``'b'``[0 0 1]``'#0000FF'` `'cyan'` `'c'``[0 1 1]``'#00FFFF'` `'magenta'``'m'``[1 0 1]``'#FF00FF'` `'yellow'``'y'``[1 1 0]``'#FFFF00'` `'black'``'k'``[0 0 0]``'#000000'` `'white'``'w'``[1 1 1]``'#FFFFFF'` `'none'`Not applicableNot applicableNot applicableNo color Here are the RGB triplets and hexadecimal color codes for the default colors MATLAB® uses in many types of plots. `[0 0.4470 0.7410]``'#0072BD'` `[0.8500 0.3250 0.0980]``'#D95319'` `[0.9290 0.6940 0.1250]``'#EDB120'` `[0.4940 0.1840 0.5560]``'#7E2F8E'` `[0.4660 0.6740 0.1880]``'#77AC30'` `[0.3010 0.7450 0.9330]``'#4DBEEE'` `[0.6350 0.0780 0.1840]``'#A2142F'` Line width, specified as a positive value in points, where 1 point = 1/72 of an inch. If the line has markers, then the line width also affects the marker edges. The line width cannot be thinner than the width of a pixel. If you set the line width to a value that is less than the width of a pixel on your system, the line displays as one pixel wide. Marker size, specified as a positive value in points, where 1 point = 1/72 of an inch. Marker outline color, specified as `'auto'`, an RGB triplet, a hexadecimal color code, a color name, or a short name. The default value of `'auto'` uses the same color as the `Color` property. For a custom color, specify an RGB triplet or a hexadecimal color code. • An RGB triplet is a three-element row vector whose elements specify the intensities of the red, green, and blue components of the color. The intensities must be in the range `[0,1]`; for example, ```[0.4 0.6 0.7]```. • A hexadecimal color code is a character vector or a string scalar that starts with a hash symbol (`#`) followed by three or six hexadecimal digits, which can range from `0` to `F`. The values are not case sensitive. Thus, the color codes `'#FF8800'`, `'#ff8800'`, `'#F80'`, and `'#f80'` are equivalent. Alternatively, you can specify some common colors by name. 
This table lists the named color options, the equivalent RGB triplets, and hexadecimal color codes. Color NameShort NameRGB TripletHexadecimal Color CodeAppearance `'red'``'r'``[1 0 0]``'#FF0000'` `'green'``'g'``[0 1 0]``'#00FF00'` `'blue'``'b'``[0 0 1]``'#0000FF'` `'cyan'` `'c'``[0 1 1]``'#00FFFF'` `'magenta'``'m'``[1 0 1]``'#FF00FF'` `'yellow'``'y'``[1 1 0]``'#FFFF00'` `'black'``'k'``[0 0 0]``'#000000'` `'white'``'w'``[1 1 1]``'#FFFFFF'` `'none'`Not applicableNot applicableNot applicableNo color Here are the RGB triplets and hexadecimal color codes for the default colors MATLAB uses in many types of plots. `[0 0.4470 0.7410]``'#0072BD'` `[0.8500 0.3250 0.0980]``'#D95319'` `[0.9290 0.6940 0.1250]``'#EDB120'` `[0.4940 0.1840 0.5560]``'#7E2F8E'` `[0.4660 0.6740 0.1880]``'#77AC30'` `[0.3010 0.7450 0.9330]``'#4DBEEE'` `[0.6350 0.0780 0.1840]``'#A2142F'` Marker fill color, specified as `'auto'`, an RGB triplet, a hexadecimal color code, a color name, or a short name. The `'auto'` option uses the same color as the `Color` property of the parent axes. If you specify `'auto'` and the axes plot box is invisible, the marker fill color is the color of the figure. For a custom color, specify an RGB triplet or a hexadecimal color code. • An RGB triplet is a three-element row vector whose elements specify the intensities of the red, green, and blue components of the color. The intensities must be in the range `[0,1]`; for example, ```[0.4 0.6 0.7]```. • A hexadecimal color code is a character vector or a string scalar that starts with a hash symbol (`#`) followed by three or six hexadecimal digits, which can range from `0` to `F`. The values are not case sensitive. Thus, the color codes `'#FF8800'`, `'#ff8800'`, `'#F80'`, and `'#f80'` are equivalent. Alternatively, you can specify some common colors by name. This table lists the named color options, the equivalent RGB triplets, and hexadecimal color codes. Color NameShort NameRGB TripletHexadecimal Color CodeAppearance `'red'``'r'``[1 0 0]``'#FF0000'` `'green'``'g'``[0 1 0]``'#00FF00'` `'blue'``'b'``[0 0 1]``'#0000FF'` `'cyan'` `'c'``[0 1 1]``'#00FFFF'` `'magenta'``'m'``[1 0 1]``'#FF00FF'` `'yellow'``'y'``[1 1 0]``'#FFFF00'` `'black'``'k'``[0 0 0]``'#000000'` `'white'``'w'``[1 1 1]``'#FFFFFF'` `'none'`Not applicableNot applicableNot applicableNo color Here are the RGB triplets and hexadecimal color codes for the default colors MATLAB uses in many types of plots. `[0 0.4470 0.7410]``'#0072BD'` `[0.8500 0.3250 0.0980]``'#D95319'` `[0.9290 0.6940 0.1250]``'#EDB120'` `[0.4940 0.1840 0.5560]``'#7E2F8E'` `[0.4660 0.6740 0.1880]``'#77AC30'` `[0.3010 0.7450 0.9330]``'#4DBEEE'` `[0.6350 0.0780 0.1840]``'#A2142F'` ## Tips • Use `NaN` or `Inf` to create breaks in the lines. For example, this code plots a line with a break between `z=2` and `z=4`. ` plot3([1 2 3 4 5],[1 2 3 4 5],[1 2 NaN 4 5])` • `plot3` uses colors and line styles based on the `ColorOrder` and `LineStyleOrder` properties of the axes. `plot3` cycles through the colors with the first line style. Then, it cycles through the colors again with each additional line style. Starting in R2019b, you can change the colors and the line styles after plotting by setting the `ColorOrder` or `LineStyleOrder` properties on the axes. You can also call the `colororder` function to change the color order for all the axes in the figure. ## Extended Capabilities ### Topics Introduced before R2006a
https://www.physicsforums.com/threads/magnetic-field-on-current-loop-involves-rotational-energy.279972/
# Magnetic field on current loop, involves rotational energy 1. ### cdingdong 3 1. The problem statement, all variables and given/known data A rectangular loop of sides a = 0.3 cm and b = 0.8 cm pivots without friction about a fixed axis (z-axis) that coincides with its left end (see figure). The net current in the loop is I = 3.2 A. A spatially uniform magnetic field with B = 0.005 T points in the +y-direction. The loop initially makes an angle q = 35° with respect to the x-z plane. The moment of inertia of the loop about its axis of rotation (left end) is J = 2.9 × 10-6 kg m2. If the loop is released from rest at q = 35°, calculate its angular velocity $$\omega$$ at q = 0°. 2. Relevant equations this is kinetic energy: KE = 1/2I$$\omega^{2}$$ but moment of inertia is specified to be J, so KE = 1/2J$$\omega^{2}$$ this is potential energy for a loop with current and magnetic field acting on it: PE = $$\mu$$Bcos$$\theta$$ moment vector $$\mu$$ = NIA where N = number of loops, I = current, A = area, B = magnetic field so, PE = NIABcos$$\theta$$ 3. The attempt at a solution I recognize that the angular kinetic energy equation can be used to find the angular velocity. KE = 1/2J$$\omega^{2}$$. Final kinetic energy = initial potential energy. Initial potential energy = NIABcos$$\theta$$. so, we could say 1/2J$$\omega^{2}$$ = NIABcos$$\theta$$ the right side is potential energy. that equals N = 1 turn; I = 3.2 Amps; A = 0.003*0.008 = 2.5e-5 meters; B = 0.005 Teslas; $$\theta$$ = 35 degrees (1)(3.2)(2.5e-5)(0.005)cos(35) = 3.276608177 * 10^-7 Joules. we'll call this 3.2766e-7 so, now we have 1/2J$$\omega^{2}$$ = 3.2766e-7 J = 2.9 × 10-6 kilogram meters squared. we'll call this 2.9e-6 1/2(2.9e-6)$$\omega^{2}$$ = 3.2766e-7 $$\omega$$ = $$\sqrt{(2*3.2766e-7)/2.9e-6}$$ = 4.753655581 * 10^-1 BUT, the answer happens to be what I did except that they did not take the square root at the end. so, they got 2.259724138 * 10^-1. did i do something wrong? what i did makes sense in my mind. is their answer wrong? why did they not not take a square root at the end? is kinetic energy not = 1/2J$$\omega^{2}$$, but instead 1/2J$$\omega$$ without the square? ### Staff: Mentor What's the final potential energy, when θ = 0? 3. ### cdingdong 3 wow, now that you bring it up, i realize my mistake. you're right! i forgot the final potential energy. so, the answer works, but there is something that bothers me. Ki + Ui = Kf + Uf there is no initial kinetic energy because it came from rest, so that is zero Ui = Kf + Uf rearranging it, Ui - Uf = Kf NIABcos35 - NIABcos0 = 1/2J$$\omega^{2}$$ NIAB(cos35 - cos0) = 1/2J$$\omega^{2}$$ (1)(3.2)(2.5e-5)(0.005)(cos35 - cos0) = 1/2(2.9e-5)$$\omega^{2}$$ -7.233918228 = 1/2(2.9e-5)$$\omega^{2}$$ when i find the potential energy, initial PE - final PE, i get a negative number. so, when i set it equal to 1/2J$$\omega^{2}$$, i get a square root of a negative number, which is not possible. if i take the square root of the absolute value of that number, the answer is correct. is there something wrong with my signs? ### Staff: Mentor Yes, your signs are messed up. The correct expression for PE is PE = -IABcosθ (note the minus sign). So Ui - Uf = (-IABcos35) - (-IABcos0) = IAB(cos0 - cos35). 5. ### cdingdong 3 ahhhh, i see. well, that will solve my problem. thanks Doc Al!
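If it helps to see Doc Al's corrected energy balance as one computation, here is a small numerical sketch (not from the thread). It evaluates $\tfrac12 J\omega^2 = U_i-U_f$ with $U(\theta)=-\mu B\cos\theta$ and $\mu = IA$. It keeps the poster's value $A=2.5\times10^{-5}\,\mathrm{m^2}$ so the numbers can be compared with the posts above, although $0.003\,\mathrm{m}\times0.008\,\mathrm{m}$ is actually $2.4\times10^{-5}\,\mathrm{m^2}$; whether either result reproduces the textbook's quoted number exactly depends on geometric details the thread does not fully pin down.

```python
import numpy as np

I = 3.2        # current, A
B = 0.005      # magnetic field, T
J = 2.9e-6     # moment of inertia, kg m^2
theta = np.radians(35.0)

def omega(area):
    # U(theta) = -I*A*B*cos(theta); released from rest at 35 deg, measured at 0 deg
    delta_U = I * area * B * (np.cos(0.0) - np.cos(theta))  # U_i - U_f > 0
    return np.sqrt(2.0 * delta_U / J)

print(omega(2.5e-5))          # with the area value used in the thread
print(omega(0.003 * 0.008))   # with A = 0.3 cm x 0.8 cm = 2.4e-5 m^2
```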
http://www.talkstats.com/threads/minimum-variance-for-sum-of-three-random-variables.58167/?p=165602
# Minimum variance for sum of three random variables

#### skijunkie

##### New Member

Hi all, I have been working on the following problem: Given you have VarX = 1, VarY = 4, and VarZ = 25, what is the minimum possible variance for the random variable W = X + Y + Z, or min Var(X+Y+Z)? My first thought is to complete the variance-covariance expansion as follows: Var(X + Y + Z) = VarX + VarY + VarZ + 2[Cov(X,Y) + Cov(Y,Z) + Cov(X,Z)] Then to use the Cauchy-Schwarz inequality to determine the minimum covariance for each of the covariance terms (i.e. |Cov(X,Y)| <= sqrt(VarX VarY) ). However, I am obtaining a negative potential minimum, which leads me to think that the lower bound could be zero? Var(X+Y+Z) = 1 + 4 + 25 + 2[-2 - 5 - 10] = 30 - 34 ??? The other thought is that using Cauchy-Schwarz in this way is not correct and my approach is wrong. My next thought is to consider the expansion as Var[(X+Y), Z], but I was not sure how to proceed by considering the sum of the two variables (X+Y) and Z. Any thoughts on how to proceed are appreciated.

#### BGM

##### TS Contributor

Actually this is a very good question.

http://en.wikipedia.org/wiki/Covariance_matrix#Properties

In order for a square matrix to be a valid variance-covariance matrix, it has to be positive-semidefinite and symmetric. The symmetric property is automatically satisfied if we let the covariance matrix $\Sigma$ of the random vector $\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}$ take the form

$\Sigma = \begin{bmatrix} 1 & \sigma_{XY} & \sigma_{XZ} \\ \sigma_{XY} & 4 & \sigma_{YZ} \\ \sigma_{XZ} & \sigma_{YZ} & 25 \end{bmatrix}$

To check positive-semidefiniteness, you may apply Sylvester's criterion:

http://en.wikipedia.org/wiki/Sylvester's_criterion

which leads to the following two inequalities:

$4 - \sigma_{XY}^2 \geq 0$

$100 + 2\sigma_{XY}\sigma_{XZ}\sigma_{YZ} - 25\sigma_{XY}^2 - 4\sigma_{XZ}^2 - \sigma_{YZ}^2 \geq 0$

So any covariances satisfying the above two inequalities will be valid. The remaining optimization can be done with the KKT conditions, see

http://en.wikipedia.org/wiki/Karush–Kuhn–Tucker_conditions

P.S. One additional thing you may check before trying the above method: a random variable has zero variance if and only if it is a constant. Therefore, you may try to assume $X + Y + Z = c$. Then by moving one of them to the RHS, say $Y + Z = c - X$, check whether it is possible for the variance of the LHS to equal the variance of the RHS. If not, you know that zero variance is not attainable, by contraposition.

#### skijunkie

##### New Member

Thank you! I figured that I had to take into consideration restrictions on the variance-covariance matrix which cannot be addressed through the application of the Cauchy-Schwarz inequality in the case of more than 2 random variables.
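To complement the answers, here is a small numerical experiment (an addition, not from the thread) that searches over correlation values, keeps only positive-semidefinite matrices, and tracks the smallest total variance $\operatorname{Var}(X+Y+Z)=\mathbf{1}^\top\Sigma\mathbf{1}$. The search lands on a minimum of $4$ at $\rho_{XY}=1$, $\rho_{XZ}=\rho_{YZ}=-1$ (i.e. $Y=2X$ and $Z=-5X$ up to constants), which agrees with the bound $\operatorname{sd}(X+Y+Z)\ge \operatorname{sd}(Z)-\operatorname{sd}(X)-\operatorname{sd}(Y)=5-3=2$, so the variance cannot drop below $4$.

```python
import itertools
import numpy as np

sd = np.array([1.0, 2.0, 5.0])            # standard deviations of X, Y, Z
best = (np.inf, None)

# Coarse grid over the three correlations; keep only valid (PSD) matrices.
grid = np.linspace(-1.0, 1.0, 41)
for rxy, rxz, ryz in itertools.product(grid, repeat=3):
    R = np.array([[1.0, rxy, rxz],
                  [rxy, 1.0, ryz],
                  [rxz, ryz, 1.0]])
    if np.linalg.eigvalsh(R).min() < -1e-9:
        continue                           # not a valid correlation matrix
    Sigma = np.outer(sd, sd) * R           # covariance matrix
    total_var = Sigma.sum()                # Var(X + Y + Z) = 1' Sigma 1
    if total_var < best[0]:
        best = (total_var, (rxy, rxz, ryz))

print(best)   # approximately (4.0, (1.0, -1.0, -1.0))
```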
https://quantumcomputing.stackexchange.com/questions/11488/what-is-vert-0-rangle-otimes-vert-rangle
# What is $\vert 0 \rangle \otimes \vert + \rangle$? A simple question that I cannot seem to figure-out why I cannot achieve the correct result. When I evaluate $$\vert 0 \rangle \otimes \vert + \rangle,$$ I end up with $$\begin{bmatrix}1\\0\end{bmatrix} \otimes \begin{bmatrix}\tfrac{1}{\sqrt{2}}\\\tfrac{1}{\sqrt{2}}\end{bmatrix} = \begin{bmatrix}1\begin{bmatrix}\tfrac{1}{\sqrt{2}}\\\tfrac{1}{\sqrt{2}}\end{bmatrix}\\0\begin{bmatrix}\tfrac{1}{\sqrt{2}}\\\tfrac{1}{\sqrt{2}}\end{bmatrix}\end{bmatrix} = \begin{bmatrix}\tfrac{1}{\sqrt{2}}\\\tfrac{1}{\sqrt{2}}\\0\\0\end{bmatrix},$$ where $$\vert 00 \rangle$$, $$\vert 01 \rangle$$, $$\vert 10 \rangle$$, $$\vert 11 \rangle$$ have $$50\%$$, $$50\%$$, $$0\%$$, $$0\%$$ probability to be measured, respectively. The trivial circuit (if you even consider it a circuit) on algassert suggests that the probabilities when measured are $$\vert 00 \rangle = 50\%$$, $$\vert 01 \rangle = 0\%$$, $$\vert 10 \rangle = 50\%$$, and $$\vert 11 \rangle = 0\%$$. Why is my solution doesn't align with algassert? • Just note, you used a tag entanglement. There is nothing about entanglement by definition because your state is described by tensor product. This means that both states are separable and not entangled. Therefore, I removed the tag. Apr 10 '20 at 21:52 • So given $\vert \psi \rangle = \vert 0 \rangle \oplus \vert + \rangle$, $\psi$ is referred to as separable state because it is achieved by using the tensor product of two subsystems? An entangled qubit is the one that cannot be composed of smaller subsystems, an example is one of the EPR pairs? Apr 11 '20 at 8:25 • Yes, tensor product describe separable systems. EPR pair are entangled qubits, so they cannot be described by tesnor product of two qubits. Apr 11 '20 at 8:28 • @MartinVesely out of curiosity, are there any other entangled qubits other than EPR pair? Apr 11 '20 at 8:34 • @M.Ai Jumaily: EPR pair is created with Hadamard gate and CNOT. Any time you use controlled gate, entangled state is prepared. So their is many types of entanglement based on controlled gate you use. Apr 11 '20 at 11:50 Your calculation of $$|0\rangle \otimes |+\rangle$$ is right. For now, forget about calculation and use simple logic. In your setting first qubit is always in state $$|0\rangle$$ and the second one is in equally distributed superposition of $$|0\rangle$$ and $$|1\rangle$$. Hence when you take both qubits together, only states $$|00\rangle$$ and $$|01\rangle$$ are possible and both have a probability 50 %. There can be a problem with qubits ordering. In $$|0\rangle \otimes |+\rangle$$ we assume that $$|0\rangle$$ is the most signigicant qubit, hence it is writen first. However, you can also use another convention when $$|0\rangle$$ is the least significant qubit, hence it is writen on second place. In this setting you will get results $$|00\rangle$$ and $$|10\rangle$$ with a probability 50 % and others with zero percent probability. • Thank you for this comment! I would have never thought the convention they are using is being used practically. Would it be possible to change their convention to the one I am using? I might be looking for some "clever gate" where I apply it first before starting my circuit and it will achieve my desire? Apr 11 '20 at 8:21 • @M.AlJumaily: You can apply swap gate to change the ordering. In case you have more qubits, apply swap gate on first and last qubit, then second and last but one qubit etc. 
Apr 11 '20 at 8:26 • and your approach will not have any other consequences even with qubits in superposition? Apr 11 '20 at 8:33 • there is a reverse gate in toolbox 2 that if you extend it to all the qubits, it will use the notation I used. Your approach did work as well! Apr 11 '20 at 8:51 • It should be mentioned that it will swap the local wire states as well. So, use the reverse gate only when looking at the probability display. Apr 11 '20 at 8:59 It appears to me that the algassert output actually agrees with your (correct) math, and that you're simply misreading it. If you read the "Final amplitudes" left to right, it says that you have $$50\%$$ each of getting $$\vert 00 \rangle$$ or $$\vert 01 \rangle$$, and $$0\%$$ of measuring $$\vert 10 \rangle$$ or $$\vert 11 \rangle$$. • It still gives me 00 and 10. Apr 11 '20 at 6:09 • But it's only a difference in convention. If you want to go with the convention used in algassert, just swap your qubits. (The way you have the circuit set up, you have $\vert + \rangle \otimes \vert 0 \rangle$ rather than $\vert 0 \rangle \otimes \vert + \rangle$. Your $\vert 0 \rangle$ is the most significant bit and should go on the lower wire.) Apr 11 '20 at 11:54
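The whole disagreement is about qubit ordering, and it is easy to see both conventions side by side numerically. The sketch below (an illustration, not from the thread) forms the two Kronecker products with `numpy`: $|0\rangle\otimes|+\rangle$ puts the probability on $|00\rangle$ and $|01\rangle$, while $|+\rangle\otimes|0\rangle$, which is what the other ordering convention corresponds to, puts it on $|00\rangle$ and $|10\rangle$.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

state_a = np.kron(ket0, plus)   # |0> tensor |+>
state_b = np.kron(plus, ket0)   # |+> tensor |0>, i.e. the other qubit ordering

labels = ['|00>', '|01>', '|10>', '|11>']
for label, amp_a, amp_b in zip(labels, state_a, state_b):
    print(f'{label}: {abs(amp_a)**2:.2f}  {abs(amp_b)**2:.2f}')
# |00>: 0.50 0.50, |01>: 0.50 0.00, |10>: 0.00 0.50, |11>: 0.00 0.00
```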
https://math.stackexchange.com/questions/1607993/confusion-regarding-logx-and-lnx/1607995
# Confusion regarding $\log(x)$ and $\ln(x)$ I was solving an integral and I encountered in some question $$\displaystyle \int_{2}^{4}\frac{1}{x} \, \mathrm dx$$ I know its integration is $\log(x)$. But my answer comes correct when I use $\ln(x)$ instead. What is this confusion? How do I know which one to use? Thanks • In mathematics, the natural logarithm is the one which is applicable in the generally used formulas and theorems. So in a mathematical context, the symbol $\color{red}{\log}$ will always mean the natural logarithm, which is also sometimes denoted as $\color{red}{\ln}$ . In physics and chemistry, log tables used to be very useful for calculations and they used base 10 so the formulas in chemistry generally mean $\color{red}{\log_{10}}$ if they just write $\color{red}{\log}$ . Jan 11 '16 at 12:26 • @G-man: And in computer science the natural and base 10 logarithms are almost never used; a computer scientist will assume that log is the base two logarithm. Jan 11 '16 at 17:14 • @G-Man's comment is more explaining than any answer here. +1 Jan 11 '16 at 17:18 • Without context log() is always assumed to be base 10. ln() is explicitly the natural log. For proof see your calculator. log₂() you just have to subscript. Why? It's not on your calculator. Any document can redefine anything within it's own context but if you don't people assume their calculator is right. Please keep this in mind as you write new mathematical texts. You're confusing people pointlessly. Jan 11 '16 at 18:19 • @EricLippert I disagree - when the base of a logarithm is relevant (e.g., when constant factors matter and one isn't just saying something like $\Theta(n\log n)$ ) base-two logs are generally written as $\lg$ rather than $\log$. More often than not, though, results are talked about 'up to a constant factor' asymptotically and so the base of a logarithm is moot. Feb 1 '16 at 18:59 $\log(x)$ has different meanings depending on context. It can often mean: • any logarithm if the base is not important (e.g. in general proofs about logarithms) • the natural logarithm $\log_e(x) = \ln(x)$ (usual convention in mathematics) • the decadic logarithm $\log_{10}(x)$ (usual convention in chemistry, biology and other sciences) • the binary logarithm $\log_{2}(x)$ (most often used in computer science) In your case, $\log(x)$ just means the same as $\ln(x)$ • in exam which one i use Jan 11 '16 at 12:24 • It depends on both you and your teacher's preferences. Some teacher's make students use one notation above another. If you are given the option to choose, I suggest writing something like "Let $\log x$ be the natural logarithm with base $e$" to minimize any confusion. Jan 11 '16 at 12:54 • @TaylorTed I used to face the same problem. I am used to calculus and number theoretic aspects, where the neutral base is always $e$, but my chemistry teacher kept complaining I gave wrong answers, as he thought it should've been $\ln$. Finally, I resorted to using them according to the context. During exams, I almost always use the subscript. That keeps confusion away. Jan 11 '16 at 17:23 • @zz20s Why not recommend just using $\ln x$ so there's never any confusion or need to clarify? I've never used $\log x$ to mean $\ln x$. The natural logarithm is the only one that has a clear, consistent, dedicated symbol for it. I think we should use it. Jan 11 '16 at 17:45 • I annoyed my teacher using ln x for natural log everywhere. :( Jan 11 '16 at 17:47 The notation $\log$ is used for logarithms in general. 
For specific logarithm bases you normally use $\ln$ (for natural logarithm), $\lg$ (for base 10). Sometimes $\operatorname{lb}$ is used for base two. So the question is what base is meant when writing $\log$. Normally one would indicate the base by subscribing the base. For example $\lg x = \log_{10} x$ or $\ln x = \log_ex$. The problem then is what you make of an expression of $\log x$? That's ambiguous since the base is not specified. However in some cases it would be assumed that the default base is $e$ (but in some cases it maybe different). Bottom line is if you don't want confusion you should normally use $\ln$ instead, or maybe $\log_e$ and never use $\log$ without subscript.
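The same ambiguity is worth keeping in mind when you check calculus answers numerically: in Python's standard library and in NumPy, the function named `log` is the natural logarithm, and the other bases have their own names. A quick illustration:

```python
import math
import numpy as np

x = 10.0
print(math.log(x))      # 2.302585...  natural log, ln(10)
print(math.log10(x))    # 1.0          base-10 log
print(math.log2(x))     # 3.321928...  base-2 log
print(math.log(x, 3))   # log base 3, via the optional second argument
print(np.log(x))        # numpy's log is also the natural logarithm
```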
https://matheducators.stackexchange.com/questions/17004/how-should-students-say-in-words-the-notation-for-a-limit/17010
# How should students say in words the notation for a limit? $$\lim_{x\rightarrow a} f(x)=L$$ Which way should students best get in the habit of? 1. The limit of $$f(x)$$, as $$x$$ approaches $$a$$, equals $$L$$ 2. The limit of $$f(x)$$ equals $$L$$, as $$x$$ approaches $$a$$ 3. The limit, as $$x$$ approaches $$a$$, of $$f(x)$$ equals $$L$$ • I think I say it the 3rd way (which also keeps f(x) and L together). – Sue VanHattum Sep 3 '19 at 3:05 • The expression $\lim_{x\to a} f(x)$ has a meaning without the $=L$ part. Variant 2 makes this least explicit. – Michael Bächtold Sep 3 '19 at 9:12 • I think the best habit would be for the students to think they all mean the same thing. – Jocelyn Sep 4 '19 at 13:21 • A fourth possibility is "As x approaches a, the limit of f(x) equals L." Edit: Personally, I don't think I say it that way. But when I introduce the topic of limits, I might say things like "If x approaches a, what does f(x) do?" – idmercer Sep 4 '19 at 16:01 According to UCAR-10101: Handbook for Spoken Mathematics, Lawrence A. Chang, Ph.D., page 38, a source for how to speak mathematics to sight-impaired students, we have: $$\lim_{x\to a} y = b$$ is spoken as the "limit as $$x$$ approaches $$a$$ of $$y$$ equals $$b$$". For the given expression, $$\lim_{x\to a} f(x) = L$$ is spoken thusly: the "limit as $$x$$ approaches $$a$$ of $$f$$ of $$x$$ equals $$L$$." This is consistent with option $$(3)$$. Alternatively, in analysis, from ANALYSIS TAUGHT BY BJORN POONEN, NOTES BY SANATH DEVALAPURKAR, we have the following: We write $$\lim_{x→a} f(x) =L$$ to say that for every $$\epsilon >0,$$ $$f$$ is eventually within $$\epsilon$$ of $$L$$ as $$x$$ approaches $$a$$”. This means that there exists $$δ >0$$ such that $$0<|x−a|< δ,$$ then $$|f(x)−L|< \epsilon$$. • Thank you for the first reference---it is fantastic. I had never considered that someone might have codified spoken mathematics, and am quite chuffed to know that it exists. – Xander Henderson Sep 5 '19 at 16:03 • I am not sure what is the purpose of the three pages dedicated to Roman letters, as they offer no pronunciation. Decimal logarithm lg() is missing. He should have used non-square matrices to avoid ambiguity. Still, a nice find. – Rusty Core Sep 6 '19 at 20:41 • @Xander Henderson: I had never considered that someone might have codified spoken mathematics --- I hadn't either until a few years ago when I was involved with a few others in trying to implement JAWS into a certain high stakes test. I was quite amazed at the audio speed settings that competent visually impaired users apparently could manage. We made various spot-checks on the accuracy of the (continued) – Dave L Renfro Sep 7 '19 at 17:09 • formula rendering. Things like $3(a + \frac{b}{c})$ were easy ("three left paren frac b over c right paren" I think), but some of the items involved nested fractional expressions and various other complicated expressions, but the real problems were with graphs ($xy$ coordinate graphs, bar graphs, circle graphs, etc.), and especially tables of data. And then there's the problem of trying to equate scores for someone working under these conditions with sighted test takers, something that I think was mostly left up to graduate admissions departments. 
– Dave L Renfro Sep 7 '19 at 17:14 I say it the third way, for these reasons: Firstly, from a notation point of view, the “$$x\to a$$” has to be written with the “$$\lim$$”, and no “$$\lim$$” can be written without it (without specifically saying what you mean by not having it), so it makes sense to put them together when you say it aloud. Indeed, you could argue that $$\displaystyle\lim_{x \to a}$$ is an operation you perform on the function $$f(x).$$ That is, you can change what $$x$$ the limit is being found at and also what function you are doing it to. Secondly the “$$\lim_{x \to a} f(x)$$” is a thing all by itself, and so it makes sense to say it all together. • from a notation point of view, the* “$x\to a$” *has to be written with the “lim”, and no “lim” can be written without it --- In mathematics one needs to be careful about strongly worded statements such as this. I've seen many instances in which a fixed point of application is used throughout (indeed, I was a referee for such a paper a few years ago) and hence excluded from the notation, and there are several cases of generalized/axiomatized notions of "limit" in which "lim" alone is often used to denote one of them (e.g. Banach limits, limit functors in category theory, etc.). – Dave L Renfro Sep 4 '19 at 8:13 • @DaveLRenfro I’ve made an edit. What do you think? – DavidButlerUofA Sep 4 '19 at 9:16 • I'm inclined to argue that since $x \rightarrow a$ modifies the limit operation, it is probably best not to separate "The limit" and "$x \rightarrow a$" with something else (e.g. "of $f(x)").$ I probably wouldn't side-track into whether we need to say $x \rightarrow a,$ as this is clearly context dependent, and in the present context it's clear we want to include it. But I wouldn't try to restrict myself to a single formulation (except in the early stages of a formal discussion of limits) --- see @Gerald Edgar's comment to another answer for why it can help to not have such a restriction. – Dave L Renfro Sep 4 '19 at 12:44 • +1. A few members of English Language Learners gave variations on this advice, to explain "How do you read these mathematical expressions aloud?" – Jasper Sep 4 '19 at 22:46 As $$x$$ approaches $$a$$, $$f(x)$$ approaches $$L$$. First, we emphasize what is happening to the independent variable, then we explain the consequence. I think that this phrasing is concise and easy to understand. It is clean and efficient. This is essentially (3), but I think that the sub-clause "The limit..." is unnecessary. Moreover, if we have a fixed function and want to consider limits at several points, it provides a consistent framework. For example, consider the rational function $$f : \mathbb{R}\setminus\{\pm 1\} \to \mathbb{R} : x \mapsto \frac{x+1}{x^2 - 1}.$$ As $$x$$ approaches $$\pm \infty$$, $$f(x)$$ approaches $$0$$. On the other hand, as $$x$$ approaches $$-1$$, $$f(x)$$ approaches $$1/2$$, and as $$x$$ approaches $$1$$, $$f(x)$$ is unbounded (in either the positive or negative direction, depending on whether the limit is taken from the left or the right). • I've seen this written as $f(x) \to L$ when $x\to a$. But this need not be equivalent to $\lim_{x\to a }f(x) =L$. For instance: as $x$ approaches $1$, $x^2$ approaches $x$. But I cannot write $\lim_{x\to1}x^2=x$. – Michael Bächtold Sep 4 '19 at 11:09 • An advantage of this: You can say "as $x$ approaches $a$" once, then state multiple consequences of that without repeating it for each one. 
– Gerald Edgar Sep 4 '19 at 11:14
• @MichaelBächtold I prefer to keep my spoken mathematics a little looser than my written mathematics. Spoken language should get the ideas across in broad strokes. When precision is needed, we have notation which nails down our meaning. Hence I do not claim that my phrasing is exactly equivalent to $\lim_{x\to a} f(x) = L$. However, if $f$ is a function, $x$ is a variable, and $L$ and $a$ are numbers, then my phrasing is entirely unambiguous. Hence when speaking my mathematics, this is how I would phrase it. – Xander Henderson Sep 4 '19 at 14:03

I usually say f(x) lähestyy L:ää, kun x lähestyy a:ta ("f(x) approaches L, as x approaches a"), and sometimes I instead say it more colloquially as f(x):n raja on L, kun x on a ("the limit of f(x) is L, when x is a"). The inverted Kun x lähestyy a:ta, f(x) lähenee L:ää ("As x approaches a, f(x) approaches L") is also fine and in use. These would correspond to 2) and 4) in English. I would avoid linguistic complexity, such as a side clause embedded in the sentence, since it might reduce clarity. Otherwise the phrasings are interchangeable, like peas in a pod.

Cauchy and Weierstrass would usually say "$$f(x)$$ becomes arbitrarily close to $$L$$", with the qualifier "as $$x$$ approaches $$a$$" sometimes before, sometimes after, sometimes implied. They also followed Leibniz and Lagrange in talking about a quantity $$f(x)$$ becoming infinitely close to $$L$$ when $$|x-a|$$ is infinitely small, that is, an infinitesimal.

Either 1 or 3, for the reasons given in the other answers. But I would also note that "as $$x$$ approaches $$a$$" need not be set off by commas (or a break in speech), as it is not optional in the sentence. Besides the arrangement of the words, it is worth noting that ISO 80000-2, Mathematical signs and symbols to be used in the natural sciences and technology (2009), lists "$$x$$ tends to $$a$$" as the verbal equivalent for the notation $$x \rightarrow a$$, in contrast to "$$x$$ approaches $$a$$."

1 > 3 > 2. I prefer the first wording: it gives more of the sense of which x produces which y. Yes, the second is equivalent, but it is awkward to add the condition (x approaching a) after the result. Three is OK also, although I mildly prefer 1; it may simply be shorter.

• This seems more like a comment than an answer to the question, mostly because you only say things like "I prefer..." when the main question is about which one a student should use. – Brendan W. Sullivan Sep 4 '19 at 15:41
• Title has different wording. At the end of the day, it's not a super important question, and it is difficult to do much more in a long answer than poll the forum. – guest Sep 4 '19 at 23:35
• Fair enough, but that just indicates this question should be improved, because conducting a poll to see what users say personally could be quite different from soliciting the "best one for a student to get in the habit of using". – Brendan W. Sullivan Sep 6 '19 at 3:36
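As a quick check of the limits quoted above for the rational function $$f(x) = \frac{x+1}{x^2-1}$$, the formula simplifies away from the excluded points:

$$f(x) = \frac{x+1}{(x-1)(x+1)} = \frac{1}{x-1} \qquad (x \neq \pm 1),$$

so $$\lim_{x\to -1} f(x) = \frac{1}{-1-1} = -\frac{1}{2}$$ and $$\lim_{x\to \pm\infty} f(x) = 0$$, while near $$x = 1$$ the function $$\frac{1}{x-1}$$ is unbounded, so that limit does not exist.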
2021-08-05T13:24:50
{ "domain": "stackexchange.com", "url": "https://matheducators.stackexchange.com/questions/17004/how-should-students-say-in-words-the-notation-for-a-limit/17010", "openwebmath_score": 0.8210335969924927, "openwebmath_perplexity": 742.1044396372042, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9648551535992067, "lm_q2_score": 0.8757869867849167, "lm_q1q2_score": 0.8450075876545472 }
https://gmatclub.com/forum/fractions-faster-calculation-187855.html
# Fractions : Faster calculation

Intern (Joined: 30 Jul 2010, Posts: 9), 01 Nov 2014, 07:23

1. Fractions: comparing two fractions

A simple way to compare two fractions such as $$\frac{45}{77}$$ and $$\frac{63}{52}$$ is to cross multiply: $$45*52 = 2340 < 4851 = 63*77$$, hence $$\frac{45}{77} < \frac{63}{52}$$.

2. Fractions: finding the greatest/least value among several fractions with a consistent pattern

Notation: N = numerator, D = denominator, a = increase in the numerator, b = increase in the denominator.

Rule 1: If N and D increase by constant amounts as the sequence of fractions progresses, and the increase in the numerator is greater than or equal to the increase in the denominator, then the last fraction is the greatest of the given fractions.

EX: Which of the following fractions is greatest? $$\frac{19}{24}$$, $$\frac{28}{27}$$, $$\frac{10}{21}$$, $$\frac{1}{18}$$

Solution: Re-write the list so the pattern is visible: $$\frac{1}{18}$$, $$\frac{10}{21}$$, $$\frac{19}{24}$$, $$\frac{28}{27}$$. In these fractions both N and D increase by constant amounts: N by 9 and D by 3. Clearly $$9 > 3$$, hence $$\frac{28}{27}$$ is the greatest of the given fractions. It's straightforward, and with a little observation you can reach the answer in seconds.

Generalizing the pattern: $$\frac{x}{y}$$, $$\frac{x+a}{y+b}$$, $$\frac{x+2a}{y+2b}$$, $$\frac{x+3a}{y+3b}$$, ..., $$\frac{x+na}{y+nb}$$. Then $$\frac{x+na}{y+nb}$$ is the greatest of the given fractions, provided:

1. Both numerator and denominator increase by constant amounts (the numerator by a, the denominator by b).
2. $$a \ge b$$ (and, as in Rule 2 below, $$\frac{a}{b}$$ exceeds the first fraction $$\frac{x}{y}$$, which is automatic when the list starts below 1).

What if $$a < b$$?

Rule 2: If $$a < b$$, compare $$\frac{a}{b}$$ to the first fraction of the list, i.e. $$\frac{x}{y}$$:

1. If $$\frac{a}{b} > \frac{x}{y}$$, then the last fraction, $$\frac{x+na}{y+nb}$$, is the greatest.
2. If $$\frac{a}{b} < \frac{x}{y}$$, then the last fraction, $$\frac{x+na}{y+nb}$$, is the least.
3. If $$\frac{a}{b} = \frac{x}{y}$$, then all the fractions are equal.

(The reason: cross multiplication shows that $$\frac{x+a}{y+b} > \frac{x}{y}$$ exactly when $$ay > bx$$, i.e. when $$\frac{a}{b} > \frac{x}{y}$$, so the whole sequence is monotone in one direction.)

EX: Rule 2, Type #1. Which of the following fractions is greatest? $$\frac{4}{39}$$, $$\frac{2}{25}$$, $$\frac{3}{32}$$, $$\frac{1}{18}$$

Solution: Re-write the list so the pattern is visible: $$\frac{1}{18}$$, $$\frac{2}{25}$$, $$\frac{3}{32}$$, $$\frac{4}{39}$$. Both N and D increase by constant amounts: N by 1 and D by 7.

1. N increases by 1 and D increases by 7, so $$1 < 7$$, i.e. $$a < b$$.
2. Compare $$\frac{a}{b}$$ with the first fraction $$\frac{1}{18}$$. This gives $$\frac{1}{7} > \frac{1}{18}$$.

Hence the last fraction, $$\frac{4}{39}$$, is the greatest.

EX: Rule 2, Type #2. Which of the following fractions is least? $$\frac{105}{401}$$, $$\frac{100}{301}$$, $$\frac{95}{201}$$, $$\frac{90}{101}$$

Re-writing the list to apply the above formula:
$$\frac{90}{101}$$, $$\frac{95}{201}$$, $$\frac{100}{301}$$, $$\frac{105}{401}$$

Here $$a = 5$$ and $$b = 100$$. Compare $$\frac{a}{b}$$, i.e. $$\frac{1}{20}$$, with the first fraction, $$\frac{90}{101}$$. Clearly $$\frac{1}{20} < \frac{90}{101}$$, hence the last fraction in the sequence is the least value: $$\frac{105}{401}$$ is the least of them all.

Manager (Joined: 23 Oct 2014, Posts: 85, Concentration: Marketing), 05 Nov 2014, 15:40: Thank you. This was enlightening.

Intern (Joined: 14 Feb 2019, Posts: 2), 01 Mar 2019, 00:37: Great, it is very helpful.
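Both tricks from the post above are easy to verify numerically. Below is a minimal Python sketch (the function names are my own, chosen for illustration): cross multiplication for comparing two fractions, and a brute-force check of the $$\frac{a}{b}$$ versus $$\frac{x}{y}$$ test for a family $$\frac{x+na}{y+nb}$$, using Python's `fractions` module.

```python
from fractions import Fraction

def compare_by_cross_multiplication(p, q, r, s):
    """Compare p/q with r/s (q, s > 0) without dividing: p/q < r/s iff p*s < r*q."""
    lhs, rhs = p * s, r * q
    return "<" if lhs < rhs else (">" if lhs > rhs else "=")

def extreme_of_family(x, y, a, b, n_terms):
    """Least and greatest of x/y, (x+a)/(y+b), ..., (x+(n-1)a)/(y+(n-1)b).

    The family is monotone: increasing if a/b > x/y, decreasing if a/b < x/y,
    constant if a/b == x/y (all denominators assumed positive).
    """
    terms = [Fraction(x + k * a, y + k * b) for k in range(n_terms)]
    return min(terms), max(terms)

# 45/77 vs 63/52: 45*52 = 2340 < 4851 = 63*77, so 45/77 < 63/52.
print("45/77", compare_by_cross_multiplication(45, 77, 63, 52), "63/52")

# Rule 1 example: 1/18, 10/21, 19/24, 28/27 (a=9, b=3) -> greatest is 28/27.
print(extreme_of_family(1, 18, 9, 3, 4))

# Rule 2, Type 2 example: 90/101, 95/201, 100/301, 105/401 (a=5, b=100);
# a/b = 1/20 < 90/101, so the sequence decreases and 105/401 is the least term.
print(extreme_of_family(90, 101, 5, 100, 4))
```

Note that `extreme_of_family` simply evaluates every term; the point of the rules above is that comparing $$\frac{a}{b}$$ with $$\frac{x}{y}$$ predicts which end of the list is largest without evaluating any of the terms.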
2019-10-14T08:18:23
{ "domain": "gmatclub.com", "url": "https://gmatclub.com/forum/fractions-faster-calculation-187855.html", "openwebmath_score": 0.873357355594635, "openwebmath_perplexity": 3602.669021271786, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9648551515780318, "lm_q2_score": 0.8757869835428966, "lm_q1q2_score": 0.8450075827563488 }
http://tcpb.wwfmartesana.it/
It does this by representing the function in infinite sums of cosines and sines. The convergence of the Fourier series of g is uneventful, and after a few steps it is hard to see a difference between the partial sums, as well as between the partial sums and g. But from the Sequence of Terms Divergence Criterion for Infinite Series we know that then $\lim_{n \to \infty} \mid c_n \mid^2 = 0$ , which happens only when:. Slook The American Mathematical Monthly, Vol. As a physicist, I use Fourier series almost every day (mostly in infinite period limit, i. 1 Time series data A time series is a set of statistics, usually collected at regular intervals. To represent any periodic signal x(t), Fourier developed an expression called Fourier series. At points of discontinuity of f(x) the Fourier Approximation SN(x) takes on the average value 1 2 £ f(x+)+f(x¡) ⁄. Convergence is based on certain criteria. Systems represented by differential and difference equations. Tech 1st Year Important Questions & Notes for External Exams Below we have listed JNTUH B. If f(x) is any function define d for−π < x≤π, then there is a unique. Note: The room has changed to ETC 2. Ferroptosis is a form of regulated cell death with clinical translational potential, but the efficacy of ferroptosis-inducing agents is susceptible to many endogenous factors when administered alone, for which some cooperating mechanisms are urgently required. Pls Note: This video is part of our online courses, for full course visit Visit our website: www. Thus we can represent the repeated parabola as a Fourier cosine series f(x) = x2 = π2 3 +4 X∞ n=1 (−1)n n2 cosnx. 送料無料 。新品4本セット サマータイヤ 245/40zr19 xl 98y 245/40r19 コンチネンタル エクストリーム コンタクト dws06 19インチ 国産車 輸入車. Complex Fourier Series 1. 03 Completeness of Fourier Expansion Jeremy Orlo Theorem (Completeness theorem) A continuous periodic function fequals its Fourier series. Fourier Analysis by NPTEL. Inverse Fourier Transform 10. It is permissible to have a finite number of finite discontinuities in one period. x(t) = x(t + p). For this example, this average is non-zero. Fourier Series 97 Absolutely Convergent Fourier Series Theorem. So by Bessel's inequality we have that the series $\displaystyle{\sum_{n=0}^{\infty} \mid c_n \mid^2}$ converges. [Note: The sine series defined by Eqs. 005 (b) The Fourier series on a larger interval Figure 2. 318 Chapter 4 Fourier Series and Integrals Zero comes quickly if we integrate cosmxdx = sinmx m π 0 =0−0. As such, Fourier series are of greatest importance to the engineer and applied mathematician. 14 Fourier Series ⓘ Keywords: Fourier coefficients, Fourier series, Mathieu functions, normalization, recurrence relations Notes: See Meixner and Schäfke. Signals & Systems Flipped EECE 301 Lecture Notes & Video click her link A link B. Substituting , and : (7. Note that, for integer values of m, we have W−kn = ej2πkn N = ej2π (k+mN)n N = W−(k+mN)n. Chapter 2 Springer text book. Determine Power. Either print them, or bring your laptop, pad, or phone with you. They illustrate extensions of the main. Numerous examples and applications throughout its four planned volumes, of which Fourier Analysis is the first, highlight the far-reaching consequences of certain ideas in analysis to other fields of mathematics and a variety of sciences. 9toseethe result. Direct solution of the last equation in the question also is feasible, because the Fourier series converges very rapidly. 
Fourier Series Representation The Periodic functions are the functions which can define by relation f(t + P) = f(t) for all t. Pls Note: This video is part of our online courses, for full course visit Visit our website: www. This includes data values and the controlled vocabularies that house them. Currently using for 2nd year Uni Maths but these notes are friendly enough for A-Level - please see preview. We also construct orthonormal bases for the Hilbert. Materials include course notes, lecture video clips, practice problems with solutions, a problem solving video, and problem sets with solutions. If f(x) is any function define d for−π < x≤π, then there is a unique. Fourier Series 3 3. Tocheckthatthis works,insertthetestfunctionf(t)=sin(2…t)intoequations2. Convolution. In the Fourier Series case we do this filtering by multiplying by the basic function and integrating the result. "Mod" allows one to make the function periodic, with the "-Pi" shifting the fundamental region of the Mod to -Pi to Pi (rather than 0 to 2Pi). 4 in , not in. Fourier series were being investigated as the result of physical considerations at the same time that Gauss, Abel, and Cauchy were working out the theory of infinite series. m m Again, we really need two such plots, one for the cosine series and another for the sine series. Fourier series, the Fourier transform of continuous and discrete signals and its properties. Introduction to Complex Fourier Series Nathan P ueger 1 December 2014 Fourier series come in two avors. Introduction In these notes, we continue our discussion of the Fourier series and relate it to the continuous-time Fourier trans-form through a specific example. Rectangular waveform Require FS expansion of signal y(t) below: −4 0 4 8 12. It's possible to define Fourier series in slightly different ways, but let assume you are wanting to represent a $2\pi$ periodic function $f(x)$ by a sum of the form [math]f(x)=c+\sum_{n. I Note that the integral above can be evaluated over any interval of length T0. Find the Fourier series of the functionf defined by f(x)= −1if−π0 by Fourier's law and the boundary conditions (2). ECE137A class notes, UCSB, Mark Rodwell, copyright 2019 ECE137A, Notes Set 14: Fourier Series and Transforms Mark Rodwell, Doluca Family Chair, ECE Department University of California, Santa Barbara [email protected] We highly recommend you to follow your syllabus and then read these resources if you are under R15 regulation and for R13 Regulation we have provided important questions as per their. It stresses throughout the idea of homogenous Banach spaces and provides recent results. Lecture Notes: 1. This applet demonstrates Fourier series, which is a method of expressing an arbitrary periodic function as a sum of cosine terms. We will also define the odd extension for a function and work several examples finding the Fourier Sine Series for a function. Given a 2π-periodic function f on [−π,π], we define an (n ≥ 0) and bn (n≥ 1) by (1. This is a common aspect of Fourier series for any discontinuous periodic function which is known as the Gibbs phenomenon. Also note that, as opposed to the Taylor series, the Fourier series can represent a discontinuous func-tion: S S 2S 3S t 0. We will also take a look at the Magnitude Spectrum, the Phase Spectrum and the Power Spectrum of a Fourier Series. Fourier transform 45 3. Introductory lecture notes on Partial Differential Equations Lecture 14: Half Range Fourier Series: even and odd functions. 
We use the letter T with a double meaning: a) T = [0,1) b) In the notations Lp(T), C(T), Cn(T) and C∞(T) we use the letter T to imply that the functions are periodic with period 1, i. (4) Integrating cosmx with m = n−k and m = n+k proves orthogonality of the sines. Inverse Fourier Transform 10. Note the duality relationship of the Fourier transform. Type: Capítulo de livro: Title: Localized Waves: A Historical And Scientific Introduction: Author: Recami E. However, we will mostly just need the case of convergence in L2 norm for Fourier series of L2 functions, and in this. For the sine series, the commands are similar as follows. Convergence is based on certain criteria. (a) The function and its Fourier series 0 0. Also a simple sin function did not work. The first part of this course of lectures introduces Fourier series, concentrating on their. Convolutions and correlations and applications; probability distributions, sampling theory, filters, and analysis. Network Theory-electrical and electronics engineering-The fourier series - Free download as Powerpoint Presentation (. Stein and Shakarchi move from an introduction addressing Fourier series and integrals to in-depth. This says that an infinite number of terms in the series is required to represent the triangular wave. The relevant. 5 Applications of Fourier series. 5) can be re-written. Fourier integral formula is derived from Fourier series by allowing the period to approach infinity: (13. , 1960), pp. Example: DFS by DDCs & DSP Frequency analysis: why?. Ferroptosis is a form of regulated cell death with clinical translational potential, but the efficacy of ferroptosis-inducing agents is susceptible to many endogenous factors when administered alone, for which some cooperating mechanisms are urgently required. Paul Garrett: Pointwise convergence of Fourier series (September 15, 2019) The essential property of gis that on [0;1] it is approximable by step functions[5] in the sense[6] that, given ">0 there is a step function s(x) such that. Note that, for integer values of m, we have W−kn = ej2πkn N = ej2π (k+mN)n N = W−(k+mN)n. Fourier series and di erential equations Nathan P ueger 3 December 2014 The agship application for Fourier series is analysis of di erential equations. Larsen December 1, 2011 1. plot(x,y,. In your advanced calculus class you should have seen examples where interchanging the order of two limits leads to different answers. 2 Functions with arbitrary. In the Fourier Series case we do this filtering by multiplying by the basic function and integrating the result. Ferroptosis is a form of regulated cell death with clinical translational potential, but the efficacy of ferroptosis-inducing agents is susceptible to many endogenous factors when administered alone, for which some cooperating mechanisms are urgently required. It is through this avenue that a new function on an infinite set of real numbers is created from the image on ð#L;LÞ. Get Answer to Find the Fourier series expressions for the periodic voltage functions shown in Fig. Fourier Series Basics Basic. Periodic test functions 18 2. Complex Fourier Series 1. Abstract: In the first part. Lecture 1 Fourier Series Fourier series is identified with mathematical analysis of periodic phenomena. EE 442 Fourier Transform 3 Review: Exponential Fourier Series (for Periodic Functions) ^ 1 1 0 00 0 2 0 Again, is defined in time interval ( ) for 0, 1, 2, 3,. Now, let's use this information to evaluate some examples of Fourier series. 
3 Complex Fourier Series At this stage in your physics career you are all well acquainted with complex numbers and functions. Using the results of Chapter 7, section 8 of Boas on pp. Orthogonality of Functions. x(t) = x(t + p). • finance - e. The second collection of terms is the sine (odd) terms, and the third is the cosine (even) terms. Larsen December 1, 2011 1. Properties of linear, time-invariant systems. Discrete Fourier Series (DFS) 5. (You can also hear it at Sound Beats. FOURIER ANALYSIS PART 1: Fourier Series Maria Elena Angoletta, AB/BDI DISP 2003, 20 February 2003 TOPICS 1. Integral of sin (mt) and cos (mt) Integral of sine times cosine. This book does an excellent job at explaining the mathematics behind this important topic. Time and frequency are related by the Fourier transform. Signals & Systems Flipped EECE 301 Lecture Notes & Video click her link A link B. Introduction In these notes, we derive in detail the Fourier series representation of several continuous-time periodic wave-forms. These series had already been studied by Euler, d'Alembert, Bernoulli and others be-fore him. 5 Adding sine waves. Signals and systems: Part II. Notice that t he first equation is exactly the same as we got when considering the Fourier Cosine Series and the second equation is the same as the solution for the Fourier Sine Series. Note: 2 lectures, §9. 1)weknowthattheFouriertransform shouldgiveusa1 =1andallothercoe–cientsshouldbezero. Section 8-4 : Fourier Sine Series. EEL3135: Discrete-Time Signals and Systems Fourier Series Examples - 1 - Fourier Series Examples 1. We also construct orthonormal bases for the Hilbert. Particularly, here, we consider the Fourier series and compare it with its Taylor equivalent both of which are convergent infinite series in their own rights. Particular attention is given to the 3 dimensional Cartesian, cylindrical, and spherical coordinate systems. Fourier series and di erential equations Nathan P ueger 3 December 2014 The agship application for Fourier series is analysis of di erential equations. A Fourier series can only converge to a 2π periodic function. This document describes an alternative, where a function is instead decomposed into terms of the. After our discussion of the properties of the Fourier series, and the uniform convergence result on the Fourier series, the convergence of uholds all the way down to t= 0 (given the appropriate conditions on u(x;0) = f(x)). Discrete-time Fourier transform. As I was going through Arthur Mattuck’s excellent differential equations course at MIT’s Open Courseware , the Fourier series clicked for me, so I thought I’d distill this out. Remark: If f is continuous on [0;1], then these two series also converge to f(x) at x= 0;1. 1 Continuous Fourier Transform The Fourier transform is used to represent a function as a sum of constituent harmonics. INTRODUCTION TO FOURIER TRANSFORMS FOR PHYSICISTS 5 and the inverse transform : (15) ψ(~k) = 1 (2π)32 Z ∞ −∞ ψ(~x)e−i(~k·~x)d3x We note that every time we go up in dimension, we tag on an extra scaling factor of 1 2π 1 2. Mathematica for Fourier Series and Transforms Fourier Series Periodic odd step function Use built-in function "UnitStep" to define. Here is what is going on; the particles collect at the “stationary” points. 2019-20 Music is the sound of mathematics 1 Abstract. 28) where the coefficients become a continuous function of the frequency variable ω, as in (13. 
Consider a mass-spring system as before, where we have a mass $$m$$ on a spring with spring constant $$k\text{,}$$ with damping $$c\text{,}$$ and a force $$F(t)$$ applied to the mass. Chapter 10 Fourier Series 10. Suppose f ∈ L1(Tn) and fb∈ l1(Zn). Lecture 15: Convergence of Fourier Series (Compiled 3 March 2014) In this lecture we state the fundamental convergence theorem for Fourier Series, which assumes that the function f(x) is piecewise continuous. The values are placed in a vector fapprox. 3) is valid for discrete-time signals as only the sample points of are considered. The rapid development of treatment resistance in tumors poses a technological bottleneck in clinical oncology. Find fourier Series course notes, answered questions, and fourier Series tutors 24/7. 6} and \ref{2. Fourier series are used in the analysis of periodic functions. Properties of linear, time-invariant systems. So in a new series of articles called "Explained," MIT News Office staff will explain some of the core ideas in the areas they cover, as reference points for future reporting on MIT research. 2014/2015. If we are only given values of a function f(x) over half of the range [0;L], we can de ne two. JPS, Fourier series 6 Note that a sum function for a trigonometric series does not necessarily belong to the linear span, as the span of a family of vectors is de ned as nite linear combinations of vectors from the family. An important consequence of orthonormality is that if s= P n k= n c ke. Fourier series is used to decompose signals into basis elements (complex exponentials) while fourier transforms are used to analyze signal in another domain (e. Suppose we know the values of ak and we want to compute the yj using the inverse Fourier transform, Eq. Note: this example was used on the page introducing the Fourier Series. " The approximation will be shown in red. ) A geometric progression is a set of numbers with a common ratio. 3) is valid for discrete-time signals as only the sample points of are considered. Fourier Series 3 3. We are really very thankful to him for providing these notes and appreciates his effort to publish these notes on MathCity. Schwartz Functions, First Statement of Fourier Inversion Fourier analysis shows that The smoother f is, the faster Ff decays. Fourier series, the Fourier transform of continuous and discrete signals and its properties. Fourier Transform 2. We cannot go on calculate the terms indefinitely. It is permissible to have a finite number of finite discontinuities in one period. They are designed to be experimented with, so play around. These notes spell out more fully than discussions provide by Griffiths, Sec. In mathematics, the Dirichlet conditions are under Fourier Transformation are used in order to valid condition for real-valued and periodic function f(x) that are being equal to the sum of Fourier series at each point (where f is a continuous function). Chapter 1 The Fourier Series of a Periodic Function 1. Notes 8: Fourier Transforms 8. Pointwise convergence 15 2. Equally important, Fourier analysis is the tool with which many of the everyday phenomena - the. Mathematical foundation using the state-variable approach. 2) which has frequency components at. As I was going through Arthur Mattuck’s excellent differential equations course at MIT’s Open Courseware , the Fourier series clicked for me, so I thought I’d distill this out. 
1: The cubic polynomial f(x)=−1 3 x 3 + 1 2 x 2 − 3 16 x+1on the interval [0,1], together with its Fourier series approximation from V 9,1. In Section 1. Fourier Series { summary Motivation: sometimes it is convenient to express complicated functions in terms of simple ones. then the Fourier sine and cosine series converge for all xin [0;1], and has sum f(x) in (0;1). Note that, for integer values of m, we have W−kn = ej2πkn N = ej2π (k+mN)n N = W−(k+mN)n. , normalized). 5 Divergence of Fourier series 46 3 Odds and Ends 51 3. Discrete-time Fourier transform. Fourier Transform 2. adshelp[at]cfa. The Fourier Series is an infinite series expansion involving trigonometric functions. Let me make some comments on this passage. 1 Fourier series over any interval In general, Fourier series (with sine and cosine) can be de ned over any interval [ ; ]. Applications 35 Chapter 3. /(pi*coeff(idx(1:9. Harmonics with respect to Fourier series and analysis mean the sine and cosine components which constitute a function, or to put more simply , the simplest functions that a given function can be broken down into. here MA8353 Transforms and Partial Differential Equations notes download link is provided and students can download the MA8353 TPDE Lecture Notes and can make use of it. Chapter 1 Fourier Series 1. It has grown so far that if you search our library's catalog for the keyword \Fourier" you will nd 618 entries as of this date. Topics include: The Fourier transform as a tool for solving physical problems. The solution is obtained by defining Fourier series for both stream function and salt concentration, applying a Galerkin treatment using the Fourier modes as trial functions and solving the flow and the salt transport equations simultaneously in the spectral space. Conventions and first concepts The purpose of these notes is to introduce the Fourier series of a. Besides the textbook, other introductions to Fourier series (deeper but still elementary) are Chapter 8 of Courant-John [5] and Chapter 10 of Mardsen [6]. I am trying to compute the trigonometric fourier series coefficients of a periodic square wave time signal that has a value of 2 from time 0 to 3 and a value of -12 from time 3 to 6. Tocheckthatthis works,insertthetestfunctionf(t)=sin(2…t)intoequations2. 6 The Fourier-Bessel Series Math 241 -Rimmer 2 2 2 2( ) 0 parametric Bessel equation of order xy xy x yα ν ν ′′ ′+ + − = ( ) 1 2( ) ( ) has general solution on 0, of y cJ x cY xν να α ∞ = + very important in the study of boundary-value problems involving partial differential equations expressed in cylindrical coordinates. Lectures On Fourier Series - By S. Check out the Series chapter, especially Infinite series. Let g(x) = P ξ∈Zn fb(ξ)eix·ξ. (3): f(t) = a 0 2 + X1 n=1 [a ncos(nt) + b nsin(nt)] = a 0 2 + X1 n=1 a n eint+. After our discussion of the properties of the Fourier series, and the uniform convergence result on the Fourier series, the convergence of uholds all the way down to t= 0 (given the appropriate conditions on u(x;0) = f(x)). Signals and systems: Part II. ) Read off the frequency and the amplitude of this component; 2. The very first choice is where to start, and my choice is a brief treatment of Fourier series. Mckean, Fourier Series and Integrals. 2 Functions with arbitrary. Solved Problems. the Fourier transform, but thats a topic for a later day. Continuous-time Fourier series. Without even performing thecalculation (simplyinspectequation2. 
Type: Capítulo de livro: Title: Localized Waves: A Historical And Scientific Introduction: Author: Recami E. In fact, one way of. as will be seen below. In these notes we de ne the Discrete Fourier Transform, and give a method for computing it fast: the Fast Fourier Transform. Determine Power. As a result, the summation in the Discrete Fourier Series (DFS) should contain only N terms: xe. 6 (C,1)-Summability for Fourier Series 4. Fourier series, the Fourier transform of continuous and discrete signals and its properties. Lecture 7: Fourier Series Lecture 8: Fourier Transform Lecture 9: Fourier Transform Theorems. The spectral density is the continuous analog: the Fourier transform of γ. 7 Abel-Summability for Fourier Series 4. 1) where a 0, a n, and b. These notes introduce some basic elements of music theory using the mathematical language, in particular algebraic relations, constructions related to Fourier theory, mathematical-. The Fourier series were d ifferent, but the t wo s eries yielded the same values over that s ubinterval. PA214: Waves and fields. 2) which has frequency components at. Fourier series: A Fourier (pronounced foor-YAY) series is a specific type of infinite mathematical series involving trigonometric functions. FOURIER ANALYSIS PART 1: Fourier Series Maria Elena Angoletta, AB/BDI DISP 2003, 20 February 2003 TOPICS 1. The Fourier series is the same thing, except our "dot product" is defined differently and the dimension of the space is infinite. MA8353 Notes all 5 units notes are uploaded here. Note that, for integer values of m, we have W−kn = ej2πkn N = ej2π (k+mN)n N = W−(k+mN)n. FFT is useful as a building block for various frequency analysis tools, and it is useful as a building block for digital filtering (since it can be used for fast convolution). Fourier Series Visualization Using Blender + Python. Prof Brijesh Mishra an IITian alumni explains a problem on Fourier series in very simple way. Joseph Fourier - Wikipedia [Check. Jean Baptiste Joseph Fourier (21 March 1768 - 16 May 1830) Fourier series. Without even performing thecalculation (simplyinspectequation2. Jean Baptiste Joseph Fourier (21 March 1768 - 16 May 1830) Fourier series. Note that because the modulus was taken after averaging Fourier coefficients, our derivation of amplitude spectra allowed for phase cancellation of activity not phase-locked sequences. We are really very thankful to him for providing these notes and appreciates his effort to publish these notes on MathCity. Introductory lecture notes on Partial Differential Equations Lecture 14: Half Range Fourier Series: even and odd functions. Title: Fourier series and Circuit Analysis. We will be considering functions of a real variable with complex values. Fourier who discovered it. Examples are given of computing the complex Fourier series and converting between. Note that the series represents either f[t] over a limited range of 0 < t < 2S, or we assume that the function is periodic with a period equal to 2S. Mathematics of Computation, 19:297Œ301, 1965 A fast algorithm for computing the Discrete Fourier Transform (Re)discovered by Cooley & Tukey in 19651 and widely adopted. Fourier Series visualization. 1 Periodic Functions and Orthogonality Relations The di˙erential equation y00 + 2y=Fcos!t models a mass-spring system with natural frequency with a pure cosine forcing function of frequency !. In this section we are going to start taking a look at Fourier series. 
Notice that it is identical to the Fourier transform except for the sign in the exponent of the complex exponential. • Since f is even, the Fourier series has only cosine terms. View Notes - Periodic Functions and Fourier Series Notes from MATH 235 at Michigan State University. Derpanis October 20, 2005 In this note we consider the Fourier transform1 of the Gaussian. These notes present a first graduate course in harmonic analysis. • finance - e. Media in category "Fourier analysis" The following 111 files are in this category, out of 111 total. Note, for instance, that if we set χ = 7r/2 in (1) and χ = π in (4), we obtain the respective results. However, periodic complex signals can also be represented by Fourier series. FOURIER SERIES WITH POSITIVE COEFFICIENTS J. This document is highly rated by Electrical Engineering (EE) students and has been viewed 940 times. Topics include the analysis of general surfaces, quadric surfaces and countour surfaces; parameterisation of surfaces; partial derivatives leading to the chain. Forward Fourier Transform: Inverse Fourier Transform: Note:. 6 The Fourier-Bessel Series Math 241 -Rimmer 2 2 2 2( ) 0 parametric Bessel equation of order xy xy x yα ν ν ′′ ′+ + − = ( ) 1 2( ) ( ) has general solution on 0, of y cJ x cY xν να α ∞ = + very important in the study of boundary-value problems involving partial differential equations expressed in cylindrical coordinates. Fourier Series A function f(x) can be expressed as a series of sines and cosines: where: Fourier Transform Fourier Series can be generalized to complex numbers, and further generalized to derive the Fourier Transform. Numerous examples and applications throughout its four planned volumes, of which Fourier Analysis is the first, highlight the far-reaching consequences of certain ideas in analysis to other fields of mathematics and a variety of sciences. Conic Sections. So, let's be consistent with Prof. The Gaussian function, g(x), is defined as, g(x) = 1 σ √ 2π e −x2 2σ2, (3) where R ∞ −∞ g(x)dx = 1 (i. We will be considering functions of a real variable with complex values. Chapter 10 Fourier Series 10. We examine the potential benefit of social media for recruitment into Early Check, a. The Hilbert transform is treated on the circle, for example, where it is used to prove L^p convergence of Fourier series. ) Theorem 2: Convergence of the full Fourier series. Note that the range of integration extends over a period of the integrand. ourierF Series The idea of a ourierF series is that any (reasonable) function, f(x), that is peri-odic on the interval 2π (ie: f(x + 2πn) = f(x) for all n) can be decomposed into contributions from sin(nx) and cos(nx). (Brooks/Cole Series in Advanced Mathematics), 2002, ISBN 978-0-534-37660-4 Fourier series of radial functions in several variables Pointwise Fourier inversion Gisiro Maruyama (301 words) [view diff] exact match in snippet view article find links to article. Since fb∈ l1(Zn), this series converges uniformly and absolutely, and g∈ C(Tn). Some of this mathematics is analogous to properties of ordinary vectors in three-dimensional space, and we review a few properties of vectors first. The very first choice is where to start, and my choice is a brief treatment of Fourier series. $\endgroup$ – J. Fourier transform as a limiting case of Fourier series is concerned with non-periodic phenomena. The Fourier series is named after the French Mathematician and Physicist Jacques Fourier (1768 – 1830). 
However to make things easier to understand, here we will assume that the signal is recorded in 1D (assume one row of the 2D image pixels). Definition 2. Fourier Analysis Basics of Digital Signal Processing (DSP) Discrete Fourier Transform (DFT) Short-Time Fourier Transform (STFT) Fourier Series Fourier transform. Note 1: We do expect to see the convergence of the Fourier series partial sums to f(x) on the graphs as N increases. Check it out. Lecture 1 Fourier Series Fourier series is identified with mathematical analysis of periodic phenomena. This is the form of Fourier series which we will study. Mckean, Fourier Series and Integrals. Mathematica for Fourier Series and Transforms Fourier Series Periodic odd step function Use built-in function "UnitStep" to define. The Fourier Series for a function f(x) with period 2π is given by: X∞ k=0 a k. Type: Capítulo de livro: Title: Localized Waves: A Historical And Scientific Introduction: Author: Recami E. The signals are sines and cosines. The Fourier Transform formula is The Fourier Transform formula is Now we will transform the integral a few times to get to the standard definite integral of a Gaussian for which we know the answer. The real parameter represents an array of cosine terms. If 2 6= !2 a particular solution is easily found by undetermined coe˚cients (or by using Laplace transforms) to be yp = F. Figure 2 below shows a graph of the sinc function (the Fourier Transform of a single pulse) and. Signals and Systems Notes Pdf - SS Notes Pdf book starts with the topics SAMPLING Sampling theorem,Z-TRANSFORMS Fundamental difference between continuous and discrete time signals, SIGNAL. /(pi*coeff(idx(1:9. Notes of Fourier Series These notes are provided by Mr. The complex Fourier series obeys Parseval's Theorem, one of the most important results in signal. Fourier originally defined the Fourier series for real-valued functions of real arguments, and using the sine and cosine functions as the basis set for the decomposition. A Fourier series represents the functions in the frequency domain (change of coordinates) in an infinite dimensional orthogonal function space. The first part of this course of lectures introduces Fourier series, concentrating on their. Fourier Series Expansion on the Interval $$\left[ { a,b} \right]$$ If the function $$f\left( x \right)$$ is defined on the interval $$\left[ { a,b} \right],$$ then its Fourier series representation is given by the same formula. This unit extends concepts from single variable calculus (KMA152 and KMA154) into the domain of several variables. Such a decomposition of periodic signals is called a Fourier series. Summerson 30 September, 2009 1 Real Fourier Series Suppose we have a periodic signal, s(t), with period T. Preliminaries: 1. Note that Fig. Cooley and John W. The time–frequency dictionary for S(R) 167 §7. A function f(x) is called a periodic function if f(x) is defined for all real x, except possibly at some points,. COMPUTING FOURIER SERIES Overview We have seen in previous notes how we can use the fact that sin and cos represent complete orthogonal functions over the interval [-p,p] to allow us to determine the coefficients of a Fourier series. This is called completeness because it says the set of functions cos(nt) and sin(nt) form a complete set of basis functions. 5: Generalized Fourier series Advanced Engineering Mathematics 4 / 7 Example 2 (Neumann BCs) 00y = y, y 0 (0) = 0, y 0 (ˇ) = 0 is an SL problem with:. There exists a separate branch. 
First the Fourier Series representation is derived. This is in terms of an infinite sum of sines and cosines or exponentials. For sinusoid Fourier series, we have coefficients a_0, a_n, and b_n in different formulas respectively. To get the Fourier Series coefficients one then evaluates the Fourier Transform (in this case G(f) above) at these discrete frequencies. 1) where u = u(x,t),K>0 is a constant depending on the. A “Brief” Introduction to the Fourier Transform This document is an introduction to the Fourier transform. In both instances note the behaviour of the partial sums near the jump discontinuity; the Gibbs effect is apparent. 10) should read (time was missing in book):. Fourier’s method is applied on problem sheet 4 to show that the solution is given by T(x;t) = a 0 2 + X1 n=1 a n cos nˇx L exp n2ˇ2 t L2 ; where the constants a. An example is the Taylor expansion, which allows us to write any (suitably well behaved) function as a sum of simple powers of x. The Fourier Transform formula is The Fourier Transform formula is Now we will transform the integral a few times to get to the standard definite integral of a Gaussian for which we know the answer. Additional Fourier Transform Properties 10. The Fourier transform is an integral transform widely used in physics and engineering. You can copy this and paste it into your editor and run it from octave or just paste it into an octave window to see the plot. • Since f is even, the Fourier series has only cosine terms. Fourier series 9 2. Fourier Series References. Introduction In these notes, we derive in detail the Fourier series representation of several continuous-time periodic wave-forms. Summability of Fourier series: VI. Filter effects of digital data processing are illustrated. 1 Background Any temporal function can be represented by a multiplicity of basis sets. This is an excellent reason to take a course that deals with Fourier Series! Here is an example of a projection, and what happens when you take the image and move it a little. In this section we define the Fourier Sine Series, i. These are equivalent -- and of course give the same result. The Dirac delta, distributions, and generalized transforms. Handmade Notes : Notes are Brilliant , Easy Language , East to understand ( Student Feedback ) Exam ke Pehle Notes ek baar Dekhlo revision aise hi jata hai This series include 1) Laplace transform 2) inverse Laplace Transform 3) Complex Variable 3) Fourier Series 5) Conformal Mapping 6) Correlation; 7) Z transform 8) Regression; 9)Partial. We will also define the odd extension for a function and work several examples finding the Fourier Sine Series for a function. If f is initially defined over the interval [0,π], then it can be extended to [−π,π] (as an odd function) by letting f(−x)=−f(x), and then extended periodically with period P =2π. Forward Fourier Transform: Inverse Fourier Transform: Note:. Real Fourier Series Samantha R. The first part of this course of lectures introduces Fourier series, concentrating on their. The toolbox provides this trigonometric Fourier series form. We begin by discussing Fourier series. Truncating the Fourier transform of a signal on the real line, or the Fourier series of a periodic signal (equivalently, a signal on the circle) corresponds to filtering out the higher frequencies by an ideal low-pass/high-cut filter. The series gets its name from a French mathematician and physicist named Jean Baptiste Joseph, Baron de Fourier, who lived during the 18th and 19th centuries. 
Fourier integral formula is derived from Fourier series by allowing the period to approach infinity: (13. Derivatives Derivative Applications Limits Integrals Integral Applications Series ODE Laplace Transform Taylor/Maclaurin Series Fourier Series Functions Line Equations Functions Arithmetic & Comp. Published by McGraw-Hill since its first edition in 1941, this classic text is an introduction to Fourier series and their applications to boundary value problems in partial differential equations of engineering and physics. Note that as more terms are added the approximation improves. STRONG DIRICHLET CONDITIONS - For a convergent Fourier series, we must meet the weak Dirichlet condition and f(t) must have only a finite number of maxima and minima in one period. The second collection of terms is the sine (odd) terms, and the third is the cosine (even) terms. December 7, 2012 21-1 21. Fourier Series: For a given periodic function of period P, the Fourier series is an expansion with sinusoidal bases having periods, P/n, n=1, 2, … p lus a constant. Note that near the jump discontinuities for the square wave, the finite truncations of the Fourier series tend to overshoot. ) 20: Convergence of Fourier Series and L 2 Theory : 21: Inhomogeneous Problems : 22: Laplace's Equation and Special Domains : 23: Poisson Formula Final Exam. First the Fourier Series representation is derived. representing a function with a series in the form Sum( A_n cos(n pi x / L) ) from n=0 to n=infinity + Sum( B_n sin(n pi x / L) ) from n=1 to n=infinity. Complex Fourier Series. 3] Remark: In fact, the argument above shows that for a function fand point x. 3) Note that (7. ) A geometric progression is a set of numbers with a common ratio. Introductory lecture notes on Partial Differential Equations Lecture 14: Half Range Fourier Series: even and odd functions. Fourier Series is a class of infinite series, meaning that there are infinite terms in the expansion. The sum of the Fourier series is equal to f(x) at all numbers x where f is continuous. 1 Introduction and terminology We will be considering functions of a real variable with complex values. 13 Example: Fourier Series Plotter Program file for this chapter: A particular musical note (middle C, say) played on a piano and played on a violin sound similar in some ways and different in other ways. Lecture Notes: 1. 's technical difficulties ♦ May 24 '12 at 16:07. Fourier Series approach and do another type of spectral decomposition of a signal called a Fourier Transform. In this short note we show that for periodic functions which are analytic the representation follows from basic facts about Laurent series. Fourier Series The Fourier Series is another method that can be used to solve ODEs and PDEs. Definition 2. Joseph Fourier - Wikipedia [Check. (Brooks/Cole Series in Advanced Mathematics), 2002, ISBN 978-0-534-37660-4 Fourier series of radial functions in several variables Pointwise Fourier inversion Gisiro Maruyama (301 words) [view diff] exact match in snippet view article find links to article. Muhammad Ashfaq. To motivate this, return to the Fourier series, Eq. These series had already been studied by Euler, d'Alembert, Bernoulli and others be-fore him. Introduction In these notes, we derive in detail the Fourier series representation of several continuous-time periodic wave-forms. The knowledge of Fourier Series is essential to understand some very useful concepts in Electrical Engineering. 
Find fourier Series course notes, answered questions, and fourier Series tutors 24/7. It is seen that has frequency components at and the respective complex exponentials are. Fourier series, the Fourier transform of continuous and discrete signals and its properties. In this chapter much of the emphasis is on Fourier Series because an understanding of the Fourier Series decomposition of a signal is important if you wish to go on and study other spectral techniques. In his first letter Gibbs failed to notice the Gibbs. In this post, we discuss divergence results of Fourier series; this previous post was about convergence results. Before going into the core of the material we review some motivation coming from the classical theory of Fourier series. Mathematica for Fourier Series and Transforms Fourier Series Periodic odd step function Use built-in function "UnitStep" to define. Course Hero has thousands of fourier Series study resources to help you. Properties of Fourier Transform 10. And it is also fun to use Spiral Artist and see how circles make waves. These are lecture notes that I typed up for Professor Kannan Soundarara-jan’s course (Math 172) on Lebesgue Integration and Fourier Analysis in Spring 2011. The an and bn are called the Fourier. Note that near the jump discontinuities for the square wave, the finite truncations of the Fourier series tend to overshoot. 253-256, Jstor. Joseph Fourier (1768-1830) who gave his name to Fourier series, was not the –rst to use Fourier series neither did he answer all the questions about them. We look at a spike, a step function, and a ramp—and smoother functions too. Chapter 3: The Frequency Domain Section 3. Wiener, it is shown that functions on the circle with positive Fourier coefficients that are pth power integrable near 0, 1 < p < 2, have Fourier coefficients in 1P". Complex Fourier Series 1. These are equivalent -- and of course give the same result. The Fourier series is named after the French Mathematician and Physicist Jacques Fourier (1768 – 1830). In particular, in the continuous case we. AMPLITUDE AND PHASE SPECTRUM OF PERIODIC WAVEFORM We have discussed how for a periodic function x(t) with period T and fundamental frequency f 0=1/ T , the Fourier series is a representation of the function in terms of sine and cosine functions as follows: x(t) = a0 + n = ∞ ∑ 1 an cos(2 πnf. Such a decomposition of periodic signals is called a Fourier series. Fourier Series 3 3. So, Fourier figures the solution looks like, Now to use the boundary conditions, `b. We examine the potential benefit of social media for recruitment into Early Check, a. Two new chapters are devoted to modern applications. I Note that the integral above can be evaluated over any interval of length T0. By mgrplanetm • Posted in Study materials • Tagged Algebra, Boundary value problems, Calculus, college students, Differential equations, Fourier series, Laplace transforms, Paul's online lecture notes, study material. FOURIER SERIES Let fðxÞ be defined in the interval ð#L;LÞ and outside of this interval by fðx þ 2LÞ¼fðxÞ, i. General trigonometrical series: Notes. 1 Fourier analysis was originallyconcerned with representing and analyzing periodic phenomena, via Fourier series, and later with extending those insights to nonperiodic phenomena, via the Fourier transform. Continuous-time Fourier series. Sum[a[n] (n w)^6 Cos[n w t], {n, 1, 5, 2}]^3] - Sum[a[n] Cos[n w t], {n, 1, 5, 2}] == 0. (3): f(t) = a 0 2 + X1 n=1 [a ncos(nt) + b nsin(nt)] = a 0 2 + X1 n=1 a n eint+. 
Trigonometric Fourier Series 1 ( ) 0 cos( 0 ) sin( 0) n f t a an nt bn nt where T n T T n f t nt dt T b f t nt dt T f t dt a T a 0 0 0 0 0 0 ( )sin() 2 ( )cos( ) ,and. However, these are valid under separate limiting conditions. Fourier series: A Fourier (pronounced foor-YAY) series is a specific type of infinite mathematical series involving trigonometric functions. To motivate this, return to the Fourier series, Eq. If I wanted to detect this sequence I just need to look for a series of strong intensities from the FFT output at the rising and falling frequencies of the whistle. , daily exchange rate, a share price, etc. Fourier series and di erential equations Nathan P ueger 3 December 2014 The agship application for Fourier series is analysis of di erential equations. Fourier Series Representation The Periodic functions are the functions which can define by relation f(t + P) = f(t) for all t. for the coefficients of the full Fourier Series. 253-256, Jstor. Click a problem to see the solution. 1 Notes on Fourier series of periodic functions 1. , 1960), pp. According to wikipedia, he also discovered the greenhouse effect. Fourier Series. Fourier also thought wrongly that any function could be represented by Fourier series. Kleitman's notes and do the inverse Fourier transform. JavaScript/React. Fourier series notes March 10, 2019 by physicscatalyst Leave a Comment Fourier series is an expansion of a periodic function of period $2\pi$ which is representation of a function in a series of sine or cosine such as. 3) Note that (7. We cannot go on calculate the terms indefinitely. Since fb∈ l1(Zn), this series converges uniformly and absolutely, and g∈ C(Tn). The Fourier transform and its inverse are essentially the same for this part, the only di erence being which n-th root of unity you use. to Fourier series in my lectures for ENEE 322 Signal and System Theory. Lecture 11 (Introduction to Fourier Series) (Midterm Exam I) Lecture 12 (Complex Fourier Series) Lecture 13 (Vector Spaces / Real Space) Lecture 14 (A Vector Space of Functions) (Homework 3) Lecture 15 (The Dirac Delta Function) Lecture 16 (Introduction to Fourier Transforms) Lecture 17 (Fourier Transforms and the Wave Equation). This lecture note covers the following topics: Cesaro summability and Abel summability of Fourier series, Mean square convergence of Fourier series, Af continuous function with divergent Fourier series, Applications of Fourier series Fourier transform on the real line and basic properties, Solution of heat equation Fourier transform for functions in Lp, Fourier. 1 Fourier analysis was originallyconcerned with representing and analyzing periodic phenomena, via Fourier series, and later with extending those insights to nonperiodic phenomena, via the Fourier transform. Last modified by: kadiam Created Date: 7/7/2009 7:20:00 PM Category: General Engineering Manager: Autar Kaw Company. Maximal functions and Calderon--Zygmund decompositions are treated in R^d, so that. Find the Fourier series of the functionf defined by f(x)= −1if−π0 by Fourier's law and the boundary conditions (2). Discrete-time Fourier series. Find fourier Series course notes, answered questions, and fourier Series tutors 24/7. This section provides materials for a session on general periodic functions and how to express them as Fourier series. 1 Introduction and terminology We will be considering functions of a real variable with complex. Muhammad Ashfaq. And it is also fun to use Spiral Artist and see how circles make waves. 
This document is highly rated by Electrical Engineering (EE) students and has been viewed 940 times. representing a function with a series in the form Sum( A_n cos(n pi x / L) ) from n=0 to n=infinity + Sum( B_n sin(n pi x / L) ) from n=1 to n=infinity. Derivatives Derivative Applications Limits Integrals Integral Applications Series ODE Laplace Transform Taylor/Maclaurin Series Fourier Series Functions Line Equations Functions Arithmetic & Comp. "Transition" is the appropriate word, for in the approach we'll take the Fourier transform emerges as we pass from periodic to nonperiodic functions. The following code calculates the Fourier series of the following signal with Matlab symbolic calculation, with T 0 5,W 1. Theorem (Fourier Convergence Theorem) If f is a periodic func-tion with period 2π and f and f0 are piecewise continuous on [−π,π], then the Fourier series is convergent. Fourier series has its application in problems pertaining to Heat conduction, acoustics, etc. Jean Baptiste Joseph Fourier (1768-1830) was a French mathematician, physicist and engineer, and the founder of Fourier analysis. If we are only given values of a function f(x) over half of the range [0;L], we can de ne two. Fourier Series References. Fourier series are used in the analysis of periodic functions. Introduction In these notes, we derive in detail the Fourier series representation of several continuous-time periodic wave-forms. Conic Sections. 2 The Dirichlet and the Fejer kernels 29´ 2. fourier-bessel series and boundary value problems in cylindrical coordinates Note that J (0) = 0 if α > 0 and J 0 (0) = 1, while the second solution Y satisfies lim x→ 0 + Y ( x ) = −∞. To do so, note that although the range of integration is from 0 to ∞, U(ω,t) generally decays with ω so one can "truncate" the integral at a certain finite (but large enough) value of ω. Fourier Series x(t)= 1 2 a 0 + X1 n=1 a n cos n⇡t T + b n sin n⇡t T Note that the data must be on the device. The usefulness of such series is that any periodic function f with period T can be written as a. A Fourier Series is an expansion of a periodic function f(x) in terms of an infinite sum of sines and cosines. CiteSeerX - Document Details (Isaac Councill, Lee Giles, Pradeep Teregowda): This notes on Fourier series complement the textbook. m) (Lecture 13) Infinite Dimensional Function Spaces and Fourier Series (Lecture 14) Fourier Transforms (Lecture 15) Properties of Fourier Transforms and Examples. Now, let's use this information to evaluate some examples of Fourier series. Prof Brijesh Mishra an IITian alumni explains a problem on Fourier series in very simple way. While calculating the integral, I'm not sure how the variable of integration should be declared. Intro to Fourier Series. It does this by representing the function in infinite sums of cosines and sines. The function fˆ(ξ) is known as the Fourier transform of f, thus the above two for-mulas show how to determine the Fourier transformed function from the original. This was the tradition of Charles Fourier, Henri de Saint-Simon, Étienne Cabet, Louis Auguste Blanqui, Pierre-Joseph Proudhon, and so on. But wouldn't it be nice if we have just one formula for all the. 14 Fourier Series ⓘ Keywords: Fourier coefficients, Fourier series, Mathieu functions, normalization, recurrence relations Notes: See Meixner and Schäfke. University. If you are interested in completing a bonus project on Fourier series (worth 3% of the course mark), contact the instructor for discussion and. 
This notes on Fourier series complement the textbook. Currently using for 2nd year Uni Maths but these notes are friendly enough for A-Level - please see preview. 1 Introduction and terminology. We use the letter T with a double meaning: a) T = [0,1) b) In the notations Lp(T), C(T), Cn(T) and C∞(T) we use the letter T to imply that the functions are periodic with period 1, i. Note the numbers in the vertical axis. In mathematics, the Dirichlet conditions are under Fourier Transformation are used in order to valid condition for real-valued and periodic function f(x) that are being equal to the sum of Fourier series at each point (where f is a continuous function). Fourier Series Course Notes (External Site - North East Scotland College) Be able to: Use Fourier Analysis to study and obtain approximations of functions over any range. In short, fourier series is for periodic signals and fourier transform is for aperiodic signals. The discrete version of the Fourier Series can be written as ex(n) = X k X ke j2πkn N = 1 N X k Xe(k)ej2πkn N = 1 N X k Xe(k)W−kn, where Xe(k) = NX k. 26-27 0 0 0 n1 00 0 0 0 0 Equation (2. The function fˆ(ξ) is known as the Fourier transform of f, thus the above two for-mulas show how to determine the Fourier transformed function from the original. In this chapter much of the emphasis is on Fourier Series because an understanding of the Fourier Series decomposition of a signal is important if you wish to go on and study other spectral techniques. Fourier transform properties. We will also take a look at the Magnitude Spectrum, the Phase Spectrum and the Power Spectrum of a Fourier Series. 1 Periodic Functions and Orthogonality Relations The di˙erential equation y00 + 2y=Fcos!t models a mass-spring system with natural frequency with a pure cosine forcing function of frequency !. Thus we might achieve f(x) = X1 n=1 a nsin nˇx. The point is that the only solutions of. 005 (b) The Fourier series on a larger interval Figure 2. Fourier Series slides Fourier Series Applets. The video includes two different animations, so be sure to watch it all the way through to see the second one. However, periodic complex signals can also be represented by Fourier series. Fourier Analysis by NPTEL. Properties of linear, time-invariant systems. This is a common aspect of Fourier series for any discontinuous periodic function which is known as the Gibbs phenomenon. Further properties of trigonometrical Fourier series: IV. Many other Fourier-related transforms have since been defined, extending the initial idea to other applications. Fourier series are used in many cases to analyze and interpret a function which would otherwise be hard to decode. Larsen December 1, 2011 1. 1 A First Look at the Fourier Transform We're about to make the transition from Fourier series to the Fourier transform. The behavior of the Fourier series at points of discontinuity is determined as well (it is the midpoint of the values of the discontinuity). In fact, one way of. However, these are valid under separate limiting conditions. Fourier Series and Fourier Transform are two of the tools in which we decompose the signal into harmonically related sinusoids. Fourier created a method of analysis now known as the Fourier series for determining these simpler waves and their amplitudes from the complicated periodic function. Last modified by: kadiam Created Date: 7/7/2009 7:20:00 PM Category: General Engineering Manager: Autar Kaw Company. 
A "Brief" Introduction to the Fourier Transform. This document is an introduction to the Fourier transform. A formal mathematical equation for the trigonometric Fourier series is as follows. Fourier series, the Fourier transform of continuous and discrete signals and its properties. In other words, a complicated periodic wave can be written as the sum of a number of simpler waves. $f(x) = \operatorname{sign} x = \begin{cases} -1, & -\pi \le x \le 0 \\ 1, & 0 < x \le \pi \end{cases}$ Complex Fourier Series. Chapter 1 in this book is a short review of some important trigonometric formulæ, which will be used over and over again in connection with Fourier series. Be able to compute the Fourier coefficients of an even or odd periodic function using the simplified formulas. Forward Fourier transform; inverse Fourier transform. DFT (Discrete Fourier Transform), FFT (Fast Fourier Transform). Written by Paul Bourke, June 1993. Summability theorems for Fourier transforms. In linear systems theory we are usually more interested in how a system responds to signals at different frequencies. FOURIER ANALYSIS PART 1: Fourier Series. Maria Elena Angoletta, AB/BDI, DISP 2003, 20 February 2003. Topics: 1. Lecture 14: Half Range Fourier Series: even and odd functions (Compiled 4 August 2017). In this lecture we consider the Fourier expansions for even and odd functions, which give rise to cosine and sine half-range Fourier expansions. An important consequence of orthonormality is that if $s = \sum_{k=-n}^{n} c_k e_k$. I just saw a great animation illustrating the Fourier series decomposition of a square wave. • Complex Fourier Analysis • Fourier Series ↔ Complex Fourier Series • Complex Fourier Analysis Example • Time Shifting • Even/Odd Symmetry • Antiperiodic ⇒ Odd Harmonics Only • Symmetry Examples • Summary. Laurent Series Yield Fourier Series: a difficult thing to understand and/or motivate is the fact that arbitrary periodic functions have Fourier series representations. Inspired by some correspondence in Nature between Michelson and Love about the convergence of the Fourier series of the square wave function, in 1898 J. If the convergence does not happen, check your coefficients! Note 2: Bonus projects. If f is initially defined over the interval [0, π], then it can be extended to [−π, π] (as an odd function) by letting f(−x) = −f(x), and then extended periodically with period P = 2π. Fourier Series. Fourier series started life as a method to solve problems about the flow of heat through ordinary materials. They illustrate extensions of the main. This paper studies two data analytic methods: Fourier transforms and wavelets. Here are examples of both approaches: Fourier series for f(x) = x using trig functions (Math 21 notes -- see Section 3. An Introduction to Laplace Transforms and Fourier Series will be useful for second and third year undergraduate students in engineering, physics or mathematics, as well as for graduates in any discipline such as financial mathematics, econometrics and biological modelling requiring techniques for solving initial value problems. The Fourier transform formula is as given above; now we will transform the integral a few times to get to the standard definite integral of a Gaussian, for which we know the answer.
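Since the passage above introduces the DFT and FFT alongside the forward and inverse transforms, a small self-check can make the definition concrete. The sketch below (Python, used here as an assumed stand-in for whatever environment the original notes used) implements the $O(N^2)$ definition $X(k)=\sum_n x(n)\,e^{-j2\pi kn/N}$ directly and compares it against numpy's FFT.

```python
import numpy as np

def dft(x):
    """Direct O(N^2) DFT: X(k) = sum_n x(n) * exp(-2j*pi*k*n/N)."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    W = np.exp(-2j * np.pi * k * n / N)     # matrix of twiddle factors W^{kn}
    return W @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)
print(np.allclose(dft(x), np.fft.fft(x)))   # True: both implement the same transform
```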
I am trying to compute the trigonometric Fourier series coefficients of a periodic square wave time signal that has a value of 2 from time 0 to 3 and a value of −12 from time 3 to 6. These notes present a first graduate course in harmonic analysis. The signals are sines and cosines. Periodic Functions and Fourier Series: 1. Periodic Functions. The Fourier series for a function f(x) with period 2π is given by $\sum_{k=0}^{\infty} a_k$. Consider a mass-spring system as before, where we have a mass $m$ on a spring with spring constant $k$, with damping $c$, and a force $F(t)$ applied to the mass. This is in terms of an infinite sum of sines and cosines or exponentials. There are two applications. This lecture note covers the following topics: Cesàro summability and Abel summability of Fourier series, mean square convergence of Fourier series, a continuous function with divergent Fourier series, applications of Fourier series, Fourier transform on the real line and basic properties, solution of the heat equation, Fourier transform for functions in $L^p$, Fourier. The Fourier transform was—perhaps unsurprisingly—developed by the mathematician Baron Jean-Baptiste-Joseph Fourier and published in his 1822 book, The Analytical Theory of Heat. The connection with the real-valued Fourier series is explained and formulae are given for converting between the two types of representation. Mathematics of Computation, 19:297–301, 1965: a fast algorithm for computing the Discrete Fourier Transform, (re)discovered by Cooley & Tukey in 1965 and widely adopted. The FFT was discovered by Gauss in 1805 and re-discovered many times since, but most people attribute its modern incarnation to James W. Signals and systems: Part I. We will also work several examples finding the Fourier series for a function. Let the integer m become a real number and let the coefficients $F_m$ become a function $F(m)$. Muhammad Ashfaq. »Fast Fourier Transform - Overview. A periodic function: many of the phenomena studied in engineering and science are periodic in nature, e.g. We consider what happens if we try to derive one series from the other or see if such a derivation. Consequently, their mathematical description has been the subject of much research over the last 300 years. Project: Fourier analysis on finite groups. Chapter 7. to Fourier series in my lectures for ENEE 322 Signal and System Theory. This document describes the Discrete Fourier Transform (DFT), that is, a Fourier transform as applied to a discrete complex-valued series. Note that, for integer values of m, we have $W^{-kn} = e^{j2\pi kn/N} = e^{j2\pi (k+mN)n/N} = W^{-(k+mN)n}$. It has grown so far that if you search our library's catalog for the keyword "Fourier" you will find 618 entries as of this date.
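For the square-wave question quoted at the start of this passage (value 2 on [0, 3), −12 on [3, 6), period 6), a short numeric sketch shows what the coefficients look like. It assumes the common convention $x(t) \approx \tfrac{a_0}{2} + \sum_n \left(a_n\cos\tfrac{2\pi n t}{T} + b_n\sin\tfrac{2\pi n t}{T}\right)$, which may differ from the convention used in the original thread.

```python
import numpy as np

# The question's signal: x(t) = 2 on [0, 3), -12 on [3, 6), repeated with period T = 6.
T = 6.0
t = np.linspace(0.0, T, 60_000, endpoint=False)
dt = t[1] - t[0]
x = np.where(t < 3.0, 2.0, -12.0)

def a_n(n):
    return (2.0 / T) * np.sum(x * np.cos(2 * np.pi * n * t / T)) * dt

def b_n(n):
    return (2.0 / T) * np.sum(x * np.sin(2 * np.pi * n * t / T)) * dt

print(round(a_n(0) / 2, 3))                          # DC level: (2*3 - 12*3)/6 = -5
print([round(a_n(n), 3) for n in range(1, 6)])       # ~0 for every n >= 1
print([round(b_n(n), 3) for n in range(1, 6)])       # ~28/(n*pi) for odd n, ~0 for even n
```

Writing the signal as $-5 + 7\,\mathrm{sq}(t)$, where $\mathrm{sq}$ is the unit square wave, explains the printed values: the DC level is −5 and the odd-harmonic sine coefficients are $28/(n\pi)$.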
2020-07-04T11:44:14
{ "domain": "wwfmartesana.it", "url": "http://tcpb.wwfmartesana.it/", "openwebmath_score": 0.8424391746520996, "openwebmath_perplexity": 849.0646823502694, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9918120913084073, "lm_q2_score": 0.8519528057272543, "lm_q1q2_score": 0.8449770939444133 }
http://mathhelpforum.com/pre-calculus/167202-e-y-e-y-leq-2-a.html
# Math Help - e^{y}+e^{-y}\leq 2 1. ## e^{y}+e^{-y}\leq 2 $e^{y}+e^{-y}\leq 2$ I am struggling to solve this inequality. I do know $[0,x] \ x\in\mathbb{R}$ From guessing and checking, I know $\displaystyle x<\frac{1}{10}$ 2. Originally Posted by dwsmith $e^{y}+e^{-y}\leq 2$ I am struggling to solve this inequality. I do know $[0,x] \ x\in\mathbb{R}$ Mr F says: Why are you using y above but x here and below? From guessing and checking, I know $\displaystyle x<\frac{1}{10}$ First draw a graph of $w = e^{y}+e^{-y}$ - it has a turning point at (0, 2) and the shape appears parabolic (it's not a parabola of course, it's a cosh function). So your job is simply to solve $2 = e^{y}+e^{-y}$: $2 = e^{y}+e^{-y}$ $\Rightarrow 2 e^y = (e^{y})^2 + 1$ $\Rightarrow (e^{y})^2 - 2 e^y + 1 = 0$ Solve this quadratic for $e^y$ (reject one of the solutions for obvious reasons) and hence solve for y. Now use the graph you drew to solve the given inequality. 3. Originally Posted by mr fantastic First draw a graph of $w = e^{y}+e^{-y}$ - it has a turning point at (0, 2) and the shape appears parabolic (it's not a parabola of course, it's a cosh function). So your job is simply to solve $2 = e^{y}+e^{-y}$: $2 = e^{y}+e^{-y}$ $\Rightarrow 2 e^y = (e^{y})^2 + 1$ $\Rightarrow (e^{y})^2 - 2 e^y + 1 = 0$ Solve this quadratic for $e^y$ (reject one of the solutions for obvious reasons) and hence solve for y. Now use the graph you drew to solve the given inequality. Adding to Mr F's very useful post, now you have a quadratic inequality in $\displaystyle e^y$. Quadratic inequalities are easiest solved by completing the square. 4. $e^{2y}-2e^y+1=(e^y-1)^2=0$ So the answer is just 0? 5. Originally Posted by dwsmith $e^{2y}-2e^y+1=(e^y-1)^2=0$ So the answer is just 0? Remember that you are trying to solve $\displaystyle e^{2y} - 2e^y + 1 \leq 0$, not $\displaystyle e^{2y} - 2ey + 1 = 0$... And yes, the answer is 0... 6. Since $e^{y}$ is positive for all y, multiplying $e^y+ e^{-y}\le 2$ by $e^y$ gives $e^{2y}+ 1\le 2e^{y}$ or $e^{2y}- 2e^y+ 1= (e^y- 1)^2\le 0$. Since a square is never negative, that inequality is satisfied only when $(e^y- 1)= 0$ or when y= 0. 7. Originally Posted by dwsmith $e^{2y}-2e^y+1=(e^y-1)^2=0$ So the answer is just 0? Yes. And is in fact easily seen from the graph without having to do the algebra. 8. The inequality can be written as... $\cosh y \le 1$ (1) ... and if we intend to find the values of $y$ so that $\cosh y$ is real and $\le 1$, then any $y= 0 + i\ w$ , $w \in \mathbb{R}$ satisfies (1)... Kind regards $\chi$ $\sigma$
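A quick numeric check, not part of the original thread, confirms the conclusion reached above: since $e^{2y} - 2e^y + 1 = (e^y - 1)^2 \ge 0$ and $e^y > 0$, the quantity $e^y + e^{-y} - 2$ is nonnegative for real $y$ and vanishes only at $y = 0$.

```python
import numpy as np

# Numeric check: e^{2y} - 2e^y + 1 = (e^y - 1)^2 >= 0, and since e^y > 0 this has
# the same sign as e^y + e^{-y} - 2.  So the inequality holds only at y = 0.
y = np.arange(-50_000, 50_001) * 1e-4      # grid on [-5, 5] that contains 0 exactly
g = np.exp(y) + np.exp(-y) - 2.0
print(g.min())                              # 0.0, attained at y = 0
print(y[np.argmin(g)])                      # 0.0
print(bool(np.all(g[y != 0.0] > 0.0)))      # True: strictly positive away from 0
```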
2015-06-02T11:35:24
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/pre-calculus/167202-e-y-e-y-leq-2-a.html", "openwebmath_score": 0.8195865750312805, "openwebmath_perplexity": 506.12924703008895, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9918120913084073, "lm_q2_score": 0.8519528019683105, "lm_q1q2_score": 0.8449770902162473 }
http://imax.uiuw.pw/interval-of-convergence-taylor-series-calculator.html
# Interval Of Convergence Taylor Series Calculator b) Find the radius of convergence of the series. 3 - Taylor Series After completing this module, you should be able to do the following: Define and graph the sequence of partial sums for a power series ; Illustrate the interval of convergence for a power series; Differentiate and integrate a power series to obtain other power series. All have the same radius of convergence. Fit your function to the function being tested. Find the first four nonzero terms and then an expression for the nth term. Thus, at least for certain functions f, summing over more terms of the Taylor series should approximate f on. First the Taylor series converges on. A power series may represent a function , in the sense that wherever the series converges, it converges to. The goals of this lab are:. Within the disk of convergence, the power series function can be differentiated term by term. Convergence tests, power series convergence, radius of convergence, Taylor series, Maclaurin series, interval notation. The power series has the interval of convergence. Then for any value x on this interval. Divergence or ℎ. Math 262 Practice Problems Solutions Power Series and Taylor Series 1. Spot the pattern and give an expression for f ^(n) (x) [the n-th derivative of f(x)] b) Compute the MacLaurin series of f(x) (i. Let's check the convergence when xis at the boundary points. Graphing-calculator technology can be used to bridge this gap between the concept of an interval of convergence for a series and polynomial approximations. If the function is not infinitely differentiable, Taylor Series can be used to approximate values of a function. Thus, at least for certain functions f, summing over more terms of the Taylor series should approximate f on. Find the Taylor series expansion for e x when x is zero, and determine its radius of convergence. If you know the radius do you know the interval? If you know the interval do you know the radius? 8. Find interval of convergence of power series n=1 to infinity: (-1^(n+1)*(x-4)^n)/(n*9^n) My professor didn't have "time" to teach us this section so i'm very lost If you guys can please answer these with work that would help me a lot for this final. The series for e^x contains factorials in the denominators which help to ensure the convergence for all x (and the same is true for related series such as sin and cos). (c) Use the ratio test to find the interval of convergence for the Taylor series found in part (b). If the terms of a sequence being summed are power functions, then we have a power series, defined by Note that most textbooks start with n = 0 instead of starting at 1, because it makes the exponents and n the same (if we started at 1, then the exponents would be n - 1). then the power series is a polynomial function, but if infinitely many of the an are nonzero, then we need to consider the convergence of the power series. Most of the Taylor series we shall be considering will be equal to the corresponding functions. (a ) Fin d the Maclaur in series of the func tion f (x ) = 2 3x ! 5. So we can conclude as stated earlier, that the Taylor series for the functions , and always represents the function, on any interval , for any reals and , with. For each of the following power series: a Determine the radius of convergence, and, b write them in terms of usual functions when 𝑥is in the interior of the interval of convergence. Oh, and a broadband connection is pretty much necessary, too. 
Chapter 11 was revised to mesh with the changes made in Chapter 10. memorize) the Remainder Estimation Theorem, and use it to nd an upper. Ratio and Root Test (v). Radius of convergence using Ratio Test 3Blue1Brown series S2 • E11 Taylor series | Essence of calculus, chapter 11 - Duration: Finding The Radius & Interval of Convergence - Calculus 2. defines the interval in which the power series is absolutely convergent. For instance, suppose you were interested in finding the power series representation of. $\endgroup$ – SebiSebi Nov 16 '14 at 17:46. Taylor Series. Most calculus students can perform the manipulation necessary for a polynomial approximation of a transcendental function. I would really appreciate some help on this problem which I've been stuck on. The examples that follow demonstrate how to calculate the interval of convergence and/or radius of convergence of a given power series. Proof much later : Week 6: Sequence of functions. Explain what they are, how they are computed, and the relationship between them; please include a description of the radius of convergence of the Taylor series; and describe the Taylor remainder formula and its relationship to the convergence of the Taylor series to f(x). CONTENT: 1: Applications of Definite Integrals A review of area between two curves. Power Series. $\operatorname{sech}x$) is not easy to find in a closed form. Question: Find The Full Taylor Series Representation For F(x) = E^(-x/2) Centered Around X=1 And Find The Radius Of Convergence And Interval Of Convergence For This Taylor Series By Performing An Appropriate Convergence Test On The Power Series. Recall that a power series, with center c, is a series of functions of the following form. Power Series. For simplicity, we discuss the examples below for power series centered at 0, i. : Thus, denoting the right side of the above inequality by r, we get the interval of convergence | x | < r saying, for every x between -r and r the series converges absolutely while, for every x outside that interval the series diverges. Using sine and cosine terms as predictors in modeling periodic time series and other kinds of periodic responses is a long-established technique, but it is often overlooked in many courses or textbooks. Give the first four nonzero terms and the general term of the power series. Thus 1 1 ( x2) ’s power series converges diverges if x2 is less than greater than 1. Stack Exchange network consists of 175 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Power Series and Taylor Series : Questions like For each of the following power series, find the interval of convergence and the radius of convergence, … Download [58. Remember that integrating or differentiating the terms will not change the radius of convergence of the series. And for fun, you might want to go type in-- you can type in Taylor expansion at 0 and sine of x, or Maclaurin expansion or Maclaurin series for sine of x, cosine of x, e to the x, at WolframAlpha. " Because of this theorem, we know that the series we obtain by the shortcuts in this section are the Taylor series we want. alternating series iv. This is the principal departure from other methods. Power series can be used to solve differential equations. The radius and interval of convergence are calculated as usual. Ap Calculus Bc Review Worksheet Power Series And Interval Of. 
Professor Lyles rewrites this function as 1/3 1−(x/3) and then uses the geometric series to find a series representation of the function. (a) Find the interval of convergence of the power series for Justify your answer. , if the derivative does not grow too fast, the Taylor approximation is accurate on larger intervals. I would guess, without a whole lot of justification, that the Taylor series for 1/x 2 is the same interval. Sample Questions with Answers The curriculum changes over the years, so the following old sample quizzes and exams may differ in content and sequence. In order to find these things, we'll first have to find a power series representation for the Taylor series. standard Taylor series to construct the Taylor series of a function, by virtue of Taylor series being power series. This article uses two-sided limits. However, the Taylor polynomial will also provide a good approxima-tion if x is not too big, and instead, f(n+1)(z) (n+1)! ≈ 0. Convergence Tests (See Harold’s Series Convergence Tests Cheat Sheet) Series Convergence Tests. on the intersection of their intervals of convergence. If R > O, then a power series converges for Ix — al. for all x in the interval of convergence of the given power series. So if you know the power series for 1/(1+x 2), you just have to square it in order to obtain the power series of 1/(1+x 2) 2. Power Series Representation, Radius and Interval of Convergence; Power Series Differentiation; Expressing the Integral as a Power Series; Using Power Series to Estimate a Definite Integral; Taylor Polynomial (Part I) Taylor Polynomial (Part II) Finding Radius of Convergence of a Taylor Series; Taylor's Inequality; Maclaurin Series; Sum of the. That is, on an interval where f(x) is analytic, We will not prove this result here, but the proof can be found in most first year calculus texts. So we can conclude as stated earlier, that the Taylor series for the functions , and always represents the function, on any interval , for any reals and , with. However, it is often limited by its interval of convergence, whereas actual values of the function may lie outside that interval, so it is important to evaluate a function with a series of power within the interval of convergence. Since the derivative power series can lose one or both endpoints of the interval of convergence of the original power series and can't gain any, it must be that the integral power series can gain one or both endpoints and can't lose any, because the original is the derivative of the integral. Study Resources. In the following series x is a real number. Convergence of Taylor Series Let f have derivatives of all orders on an open interval I containing a. Geometric Series The series converges if the absolute value of the common ratio is less than 1. Find a power series for the function, centered at c, and determine the interval of convergence. Be prepared to prove any of these things during the exam. See Sections 8. The properties of Taylor series make them especially useful when doing calculus. Determine the sum of an infinite geometric series and be able to use that sum to create a power series and determine its interval of convergence. Taylor Polynomials & series - How well do Taylor polynomials approximate functions values? pdf doc ; Series Table - List of Taylor Series for basic. ii) I first show that. 2: Power Series, Radius of Convergence, and Interval of Convergence For the following power series, find the radius and interval of convergence 26. 
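For the $(\sin x)^2$ question quoted just above, the ratio test can be run directly on the coefficients. Reading the pattern off the quoted series $x^2 - 8x^4/4! + 32x^6/6! - \cdots$, the coefficient of $x^{2n}$ is $c_n = (-1)^{n+1}\,2^{2n-1}/(2n)!$ (this follows from $\sin^2 x = \tfrac12(1-\cos 2x)$). The hedged sketch below shows $|c_n/c_{n+1}|$ growing without bound, so the radius of convergence is infinite.

```python
from math import factorial

# Coefficient of x^(2n) in the Maclaurin series of (sin x)^2.
def c(n):
    return (-1) ** (n + 1) * 2 ** (2 * n - 1) / factorial(2 * n)

for n in range(1, 8):
    # Ratio-test quantity |c_n / c_(n+1)| = (2n+1)(2n+2)/4, which grows without bound,
    # so the series converges for every x (radius of convergence = infinity).
    print(n, abs(c(n) / c(n + 1)))
```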
1 Introduction The topic of this chapter is find approximations of functions in terms of power series, also called Taylor series. Find the interv al of co nverg enc e for the p ow er series!! n =1 0 (3 x + 2)n n 2. 1 Power Series/Radius and Interval of Convergence. 4—Power Series II: Geometric Series Show all work. It is possible to show that if a given function is analytic on some interval, then it is equal to its Taylor series on that interval. The arbitrary stepsize h is adjusted to an. Convergence of a Power Series. CALCULUS Understanding Its Concepts and Methods. The binomial series expansion to the power series example: Let's graphically represent the power series of one of the above functions inside its interval of convergence. (Taylor polynomial with integral remainder) Suppose a function f(x) and its first n + 1 derivatives are continuous in a closed interval [c,d] containing the point x = a. In this math learning exercise, learners examine the concept of intervals and how they converge. With the long Taylor series, it is then possible to calculate the radius of convergence. Sometimes we’ll be asked for the radius and interval of convergence of a Taylor series. That is, on an interval where f(x) is analytic, We will not prove this result here, but the proof can be found in most first year calculus texts. Remember that integrating or differentiating the terms will not change the radius of convergence of the series. Taylor series provide another method for computing Taylor polynomials, and they provide ways to build new series from known existing series. Since 2 x2 > 1 when jxj > 1 or jxj > 1 (and the same for <), the RC of the new power series is 1 as well. "In the interval (-1, +1) the parabolas approach the original curve more and more as the order increases; but to the right of x = 1 they deviate from it increasingly, now above, now below, in a striking way. f(x) = x tan x sin x 4. Therefore the radius of convergence is at most 1. 2, where the student must find the radius and interval of convergence. Math 115 HW #5 Solutions From §12. Find the radius of convergence of this series. Lin McMullin added EK 4. example 1 Find the interval of convergence of the power series. Let be an integer and let Consider the power series. Intervals of Convergence of Power Series. for which the series converges, This means that for each x in the domain, the value of f(x) is a nite real number. (c) Series Convergence Tests i. Fourier Series Calculator is a Fourier Series on line utility, simply enter your function if piecewise, introduces each of the parts and calculates the Fourier coefficients may also represent up to 20 coefficients. Calculus with Power Series; 10. Series Converges Series Diverges Diverges Series r Series may converge OR diverge-r x x x0 x +r 0 at |x-x |= 0 0 Figure 1: Radius of. Byju's Radius of Convergence Calculator is a tool which makes calculations very simple and interesting. Show the work that leads to your answer. Using sine and cosine terms as predictors in modeling periodic time series and other kinds of periodic responses is a long-established technique, but it is often overlooked in many courses or textbooks. , if the derivative does not grow too fast, the Taylor approximation is accurate on larger intervals. Taylor Series and Applications: Given a function f(x) and a number a,. 
Power Series Representation, Radius and Interval of Convergence; Power Series Differentiation; Expressing the Integral as a Power Series; Using Power Series to Estimate a Definite Integral; Taylor Polynomial (Part I) Taylor Polynomial (Part II) Finding Radius of Convergence of a Taylor Series; Taylor's Inequality; Maclaurin Series; Sum of the. Using Taylor Series. (b) Find 0) 1 fx(lim 3 x x. The general form for the Taylor series (of a function f(x)) about x=a is the following:. Taylor series is:. Math Help Boards is a online community that gives free mathematics help any time of the day about any problem, no matter what the level. Power series can be used to solve differential equations. and hence. An investigation with the table feature of a graphing calculator, however, suggests that this is true for n ≥ 3. In this math learning exercise, learners examine the concept of intervals and how they converge. which is the same convergent alternating series. The nth derivative of f at x = 2 is given by the following n f n n 3 ( 1)!. Lady (October 31, 1998) Some Series Converge: The Ruler Series At rst, it doesn't seem that it would ever make any sense to add up an in nite number of things. Homework 25 Power Series 1 Show that the power series a c have the same radius of convergence Then show that a diverges at both Single Variable Calculus II. ii) Find a closed-form formula for. Here the interval of convergence is the closed. ii) I first show that. Interval of Convergence for Taylor Series When looking for the interval of convergence for a Taylor Series, refer back to the interval of convergence for each of the basic Taylor Series formulas. pdf doc ; CHAPTER 10 - Approximating Functions Using Series. (GE 3) III. The radius of convergence of a power series ƒ centered on a point a is equal to the distance from a to the nearest point where ƒ cannot be defined in a way that makes it holomorphic. The properties of Taylor series make them especially useful when doing calculus. Radius of Convergence The radius R of the interval of convergence of a power series is called its radius of convergence. Suppose that $\ds f(x)=\sum_{n=0}^\infty a_nx^n$ on some interval of convergence. 2 Numerical modeling: terminology Convergence and divergence • Sequence (aj) with j=[0,∞] is said to be e-close to a number b if there exists a number N ≥ 0 (it can be very large), such that for all n ≥ N, |a. If the terms of a sequence being summed are power functions, then we have a power series, defined by Note that most textbooks start with n = 0 instead of starting at 1, because it makes the exponents and n the same (if we started at 1, then the exponents would be n - 1). We can integrate or differentiate a Taylor series term-by-term. 1 Geometric Series and Variations Interval of Convergence For a series with radius of convergence r, the interval of convergence can be. defines the interval in which the power series is absolutely convergent. a) Find the Taylor series associated to f(x) = x^-2 at a = 1. Polynomials, power series, and calculus. Taylor Polynomials⁄ (a) an application of Taylor Polynomials (e. Find the derivative of the cosine function by differentiating the Taylor Series you found in Problem #11. The interval of convergence is the open, closed, or semiclosed range of values of x x x for which the Taylor series converges to the value of the function; outside the domain, the Taylor series either is undefined or does not relate to the function. 
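Where the text mentions Taylor's Inequality and the remainder estimate, a small numeric check makes the bound concrete: for $f=\sin$, every derivative is bounded by 1, so the degree-$n$ Maclaurin polynomial satisfies $|\sin x - P_n(x)| \le |x|^{n+1}/(n+1)!$. This is a hedged illustration, not tied to any particular exercise above.

```python
import math

def sin_maclaurin(x, n):
    """Degree-n Maclaurin polynomial of sin evaluated at x."""
    total, k = 0.0, 0
    while 2 * k + 1 <= n:
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
        k += 1
    return total

x = 1.5
for n in (3, 5, 7, 9):
    err = abs(math.sin(x) - sin_maclaurin(x, n))
    bound = abs(x) ** (n + 1) / math.factorial(n + 1)   # Taylor's inequality with M = 1
    print(n, err, bound, err <= bound)                  # the bound always holds
```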
For example, if you're using the Taylor Series for e x centered around 0, is there an easy way to show them a few graphs for x-values within the interval of convergence, as well as the difference in graphs of that series outside the interval of convergence? I want to use the graphs to show why it's important to find an interval of convergence. If R > O, then a power series converges for Ix — al. In this calculus lesson, students analyze the graph of a taylor series as it relates to functions. Let f(x) be its sum. 1 Power Series 1. In the following series x is a real number. 01 Calculus Jason Starr Fall 2005 The radius of convergence question is precisely the radius of convergence question posed earlier. The Maclaurin series above is more than an approximation of e x, it is equal to e x on the interval of convergence (– , ). The Radius of Convergence Calculator an online tool which shows Radius of Convergence for the given input. Power Series Convergence. comparison or limit comparison v. Such trigonometric regression is straightforward in Stata through applications of existing commands. I can't seem to derive the interval of convergence of the Taylor series for square root x. " Because of this theorem, we know that the series we obtain by the shortcuts in this section are the Taylor series we want. Then find the interval of convergence:. Radius of convergence using Ratio Test 3Blue1Brown series S2 • E11 Taylor series | Essence of calculus, chapter 11 - Duration: Finding The Radius & Interval of Convergence - Calculus 2. Complete Solution Before starting this problem, note that the Taylor series expansion of any function about the point c = 0 is the same as finding its Maclaurin series expansion. Find the Taylor Series at a = 1 for f (x) = log x. Then find the interval of convergence:. One of the great things - at least I like it - about Taylor series is that they are unique. Taylor series 12. Feature 2 has to do with the radius of convergence of the power series. This week, we will see that within a given range of x values the Taylor series converges to the function itself. Most of the Taylor series we shall be considering will be equal to the corresponding functions. series estimate). b) Find the radius of convergence of the series. Power Series (27 minutes, SV3 » 78 MB, H. The proof involves. This gives us a series for the sum, which has an infinite radius of convergence, letting us approximate the integral as closely as we like. = e x can be represented as a. Given just the series, you can quickly evaluate , , , …, and so on. The method for finding the interval of convergence is to use the ratio test to find the interval where the series converges absolutely and then check the endpoints of the interval using the various methods from the previous modules. Explain what they are, how they are computed, and the relationship between them; please include a description of the radius of convergence of the Taylor series; and describe the Taylor remainder formula and its relationship to the convergence of the Taylor series to f(x). Hi, I need some help with calculus please. CHAPTER12B WORKSHEET INFINITE SEQUENCES AND SERIES Name Seat # Date Taylor and Maclaurin series 1. qxd 11/4/04 3:12 PM Page 678. It is not obvious that the sequence b n decreases monotonically to 0. So far, we have seen only those examples that result from manipulation of our one fundamental example, the geometric series. alternating series iv. 
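The question raised above, how to show students what happens inside versus outside the interval of convergence, is easiest to see with a series whose radius is finite (the $e^x$ series converges everywhere, so it cannot exhibit divergence). The sketch below uses the geometric series $1/(1-x)=\sum x^n$, whose interval of convergence is $(-1, 1)$.

```python
def partial_sum(x, terms):
    """Partial sum of the geometric series 1 + x + x^2 + ... with `terms` terms."""
    return sum(x ** n for n in range(terms))

for x in (0.5, 1.5):
    sums = [round(partial_sum(x, t), 3) for t in (5, 10, 20, 40)]
    print(x, sums, "limit 1/(1-x) =", 1 / (1 - x))
# Inside the interval (x = 0.5) the partial sums settle at 2; outside (x = 1.5) they
# grow without bound even though 1/(1 - x) = -2 is perfectly well defined there.
```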
By the end of this section students will be fa-miliar with: • convergence and divergence of power and Taylor series; • their importance;. 2 Taylor Series Students will be able to use derivatives to find Maclaurin series or Taylor series generated by a differentiable function. CALCULUS BC 2014 SCORING GUIDELINES Question 6 The Taylor series for a function f about x = I is given by E (—1) x n=l Ix — Il < R, where R is the radius of convergence of the Taylor series. Chapter 11 was revised to mesh with the changes made in Chapter 10. Reading derivatives from Taylor series. Find the derivative of the cosine function by differentiating the Taylor Series you found in Problem #11. The Radius and Interval of Convergence. AP CALCULUS BC CHAPTER 9 REVIEW PLEASE REFRAIN FROM USING INSTRUMENTS OF WEAKNESS! From the 2003 BC Exam: 1. Added Nov 4, 2011 by sceadwe in Mathematics. Counter example to writing (infinite) Taylor series expansion for a function f. Then students apply the Taylor series for the problems. , if the derivative does not grow too fast, the Taylor approximation is accurate on larger intervals. Remember that integrating or differentiating the terms will not change the radius of convergence of the series. Let’s see an example. (b) What is the interval of convergence for the series found in part (a)? Justify your answer. an are called the terms of the sequence. If the terms of a sequence being summed are power functions, then we have a power series, defined by Note that most textbooks start with n = 0 instead of starting at 1, because it makes the exponents and n the same (if we started at 1, then the exponents would be n - 1). Maclaurin and Taylor Series Calculus: Early Transcendentals 5e by James Stewart Use a Maclaurin series derived in this Section to obtain the Maclaurin series for the given function. The Radius of Convergence of a power series P1 n=0 cn(x a)n is the number R 0 such that the series converges if jx aj < R and diverges if jx aj > R. Convergence of a Power Series. 0 1 2 3 for 1 What is the interval of convergence for the power series of 1 1 2 from MATH 1300 at City University of Hong Kong. Our starting point in this section is the geometric series: X1 n=0 xn = 1 + x+ x2 + x3 + We know this series converges if and only if jxj< 1. 3 2 fx x , a 0 4. Analysis of sequences and their convergence; Use the definition of convergence for series; Use the integral test, the comparison tests, the ratio test and the root test; Determine power series and their intervals of convergence; Form Taylor series for common functions and master simple applications of Taylor series. When this interval is the entire set of real numbers, you can use the series to find the value of f (x) for every real value of x. qxd 11/4/04 3:12 PM Page 678. This gives us a series for the sum, which has an infinite radius of convergence, letting us approximate the integral as closely as we like. Taylor series 12. Power Series and Taylor Series : Questions like For each of the following power series, find the interval of convergence and the radius of convergence, … Download [58. We also discuss differentiation and integration of power series. These two concepts are fairly closely tied together. Most calculus students can perform the manipulation necessary for a polynomial approximation of a transcendental function. Find a formula for the full Taylor series for $$q(x) = (1 + 2x)^{-2}$$ centered at $$a = 0$$. 
The Maclaurin series above is more than an approximation of e x, it is equal to e x on the interval of convergence (- , ). p-Series 4. Taylor Series and Applications: Given a function f(x) and a number a,. Find the radius of convergence of this series. The series for ln is far more sensitive because the denominators only contain the natural numbers, so it has a much smaller radius of convergence. asked by laura on August 8, 2011; Calculus 2. Taylor Series. and so the interval of convergence is. Intervals of Convergence of Power Series. 2 Numerical modeling: terminology Convergence and divergence • Sequence (aj) with j=[0,∞] is said to be e-close to a number b if there exists a number N ≥ 0 (it can be very large), such that for all n ≥ N, |a. (a ) Fin d the Maclaur in series of the func tion f (x ) = 2 3x ! 5. $\operatorname{sech}x$) is not easy to find in a closed form. For the finite sums series calculator computes the answer quite literally, so if there is a necessity to obtain a short expression we recommend computing a parameterized sum. Taylor’s Theorem: Taylor Series: Taylor Polynomials: Taylor, Taylor, Taylor, Taylor! In almost any calculus text, the 2 or 3 sections on Taylor series follow section after section of unmotivated convergence tests, and in those few short sections the word Taylor is used so many times that it is no wonder that students never seem to understand. 2 1 1 fx x, a 0 3. which is the same convergent alternating series. 5 First Fundamental Theorem of Calculus. Find a Taylor and a MacLaurin Series for a given function and find the interval of convergence. CALCULUS BC 2006 SCORING GUIDELINES Question 6 The function f is defined by the power series for all real numbers x for which the series converges. Complete Solution Before starting this problem, note that the Taylor series expansion of any function about the point c = 0 is the same as finding its Maclaurin series expansion. polynomials containing infinitely many terms). Abel's theorem is typically applied in conjunction with the alternating series theorem which is used to show the conditional convergence at one or both endpoints. If R > O, then a power series converges for Ix — al. defines the interval in which the power series is absolutely convergent. memorize) the Remainder Estimation Theorem, and use it to nd an upper. Add or subtract 2 series. Find the first four terms and then an expression for the nth term. Find the Taylor series expansion for e x when x is zero, and determine its radius of convergence. Since every Taylor series is a power series, the operations of adding, subtracting, and multiplying Taylor series are all valid. This list is not meant to be comprehensive, but only gives a list of several important topics. EXPECTED SKILLS: Know (i. Find a formula for the full Taylor series for $$q(x) = (1 + 2x)^{-2}$$ centered at $$a = 0$$. 1 Power Series/Radius and Interval of Convergence. Divergence or ℎ. (a) If you know that the power series converges when x = 0, what conclusions can you draw? Solution. How do you find the radius of convergence of the binomial power series? Calculus Power Series Determining the Radius and Interval of Convergence for a Power Series. 
Power Series Representation, Radius and Interval of Convergence; Power Series Differentiation; Expressing the Integral as a Power Series; Using Power Series to Estimate a Definite Integral; Taylor Polynomial (Part I) Taylor Polynomial (Part II) Finding Radius of Convergence of a Taylor Series; Taylor's Inequality; Maclaurin Series; Sum of the. The interval of convergence is the open, closed, or semiclosed range of values of x x x for which the Taylor series converges to the value of the function; outside the domain, the Taylor series either is undefined or does not relate to the function. (a) Write the first four nonzero terms and the general term of the Taylor series for e (b) Use the Taylor series found in part (a) to write the first four nonzero terms and the general term of the Taylor series for f about x = l. So hopefully that makes you feel a little bit better about this. I would really appreciate some help on this problem which I've been stuck on. A calculator for finding the expansion and form of the Taylor Series of a given function. 01 Calculus Jason Starr Fall 2005 The radius of convergence question is precisely the radius of convergence question posed earlier. The series for ln is far more sensitive because the denominators only contain the natural numbers, so it has a much smaller radius of convergence. X Exclude words from your search Put - in front of a word you want to leave out. Since we know the series for 1. Alternating Series and Absolute Convergence 9. Find the derivative of the cosine function by differentiating the Taylor Series you found in Problem #11. (ii) To what function does this series converge? (iii) Animate approximations of the function with the rst 8 partial sums of its Taylor series over two intervals of di erent size. Math 115 HW #5 Solutions From §12. Gonzalez-Zugasti, University of Massachusetts - Lowell 2. The power series converges absolutely. 8 find the interval of convergence of the following. Convergence of Taylor Series Problem: When is the sum of a Taylor series for a function f equal to that function? Or, given a function f having derivatives of all orders at x = x 0, for which values of x do we have f(x) = X1 k=0 f(k)(x 0) k! (x x 0)k? Here the sigma notation represents the sum of the series, i. List of Maclaurin Series of Some Common Functions / Stevens Institute of Technology / MA 123: Calculus IIA / List of Maclaurin Series of Some Common Functions / 9 | Sequences and Series. asked by laura on August 8, 2011; Calculus 2. Write the first four nonzero terms and the general term of the Taylor series for e Use the Taylor series found in part (a) to write the first four nonzero terms and the general term of the Taylor series for f about x = 1. If an input is given then it can easily show the result for the given number. The Radius of Convergence of a power series P1 n=0 cn(x a)n is the number R 0 such that the series converges if jx aj < R and diverges if jx aj > R. Please explain what you did so I can learn because I am really lost in this. Overview Throughout this book we have compared and contrasted properties of complex functions with functions whose domain and range lie entirely within the real numbers. Chapter 7 Taylor and Laurent Series. AP® CALCULUS BC 2016 SCORING GUIDELINES has a Taylor series about x =1 that converges to fx ( ) for all x in the interval of convergence. Jean-Baptiste Campesato MAT137Y1 - LEC0501 - Calculus! - Mar 25, 2019 4. Closed forms for series derived from geometric series. 
Power Series - Working with power series. (a) Find the interval of convergence of the Maclaurin series for f. The series for ln is far more sensitive because the denominators only contain the natural numbers, so it has a much smaller radius of convergence. (c) Write the first three nonzero terms and the general term for an infinite series that represents 1 0 fx()dx. For instance, suppose you were interested in finding the power series representation of. Part (c) asked students to apply the ratio test to determine the interval of convergence for the Taylor series found in part (b). Give the first four nonzero terms and the general term of the power series. I'm going to take it at face value that the Taylor series for 1/x, in powers of x - 1, has an interval of convergence of (0, 2) -- i. (b) Find the first four terms and the general term of the Maclaurin series for fx ()c. to put into appropriate form. We would like to start with a given function and produce a series to represent it, if possible. Some values of x produce convergent series. (c) Use the ratio test to find the interval of convergence for the Taylor series found in part (b). is a power series centered at x = 2. Lin McMullin added EK 4. example 1 Find the interval of convergence of the power series. Write the first four nonzero terms and the general term of the Taylor series for e Use the Taylor series found in part (a) to write the first four nonzero terms and the general term of the Taylor series for f about x = 1. Taylor series 12. Given just the series, you can quickly evaluate , , , …, and so on. On problems 1-5, find a power series for the given function, centered at the given value of a. 2: Power Series, Radius of Convergence, and Interval of Convergence For the following power series, find the radius and interval of convergence 26. If an input is given then it can easily show the result for the given number. Taylor Series. We shall look at the classic functions where the Taylor series is equal to the function on its whole interval of convergence. I would guess, without a whole lot of justification, that the Taylor series for 1/x 2 is the same interval. Geometric Series The series converges if the absolute value of the common ratio is less than 1. (c) Use the ratio test to find the interval of convergence for the Taylor series found in part (b). The method for finding the interval of convergence is to use the ratio test to find the interval where the series converges absolutely and then check the endpoints of the interval using the various methods from the previous modules. (i) Find the interval of convergence (and radius of convergence) of this series. Lectures were recorded in 2009 and are in MPEG-4 Format. This article reviews the definitions and techniques for finding radius and interval of convergence of power series. For the function !!= !, find the 4th degree Taylor Polynomial centered at 4. Then students apply the Taylor series for the problems. 1 Introduction The topic of this chapter is find approximations of functions in terms of power series, also called Taylor series. Added Nov 4, 2011 by sceadwe in Mathematics. Taylor series is: x^2 - 8x^4/4! + 32x^6/6! to find radius of convergence do i use this series, or do i use the general equation for taylor series, substituting in (sinx)^2? show more Ive already calculated the taylor series and proven that it is correct i just need help with finding the radius of convergence. 
The Maclaurin series above is more than an approximation of e x, it is equal to e x on the interval of convergence (– , ). Find the radius of convergence of the Taylor series X1 n=2 calculus, Math 2260, practice. Integral Ratio 7. Most calculus students can perform the manipulation necessary for a polynomial approximation of a transcendental function. Gonzalez-Zugasti Teaching Calculus II Spring 2019 (Radius and Interval of Convergence; Converge Absolutely/Conditionally) (Finding Taylor Series.
2019-12-08T21:53:48
{ "domain": "uiuw.pw", "url": "http://imax.uiuw.pw/interval-of-convergence-taylor-series-calculator.html", "openwebmath_score": 0.8984718322753906, "openwebmath_perplexity": 247.03012874041477, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9895109099773435, "lm_q2_score": 0.8539127585282744, "lm_q1q2_score": 0.8449559907325764 }
https://math.stackexchange.com/questions/1949508/inclusion-exclusion-principle-language-confusion
# Inclusion Exclusion Principle (language confusion) Suppose that n independent trials, each of which results in any of the outcomes $0, 1, 2$, with respective probabilities $0.3, 0.5$, and $0.2$, are performed. Find the probability that both outcome $1$ and outcome $2$ occur at least once. I tried finding the complementary event but I'm stuck at understanding the language involved. Is it both B and C doesn't occur anytime or both B and C doesn't occur together? Am I right if I say that the answer's just $1 - Pr(A \ \textrm{occurs}) = 1 - 0.3^n$ We have that $$P(\mbox{1 and 2 occur at least once})=1-P(\mbox{(1 never occurs) or (2 never occurs)})$$ $$P(\mbox{(1 never occurs) or (2 never occurs)})=P(\mbox{1 never occurs})+P(\mbox{2 never occurs})-P(\mbox{1 and 2 never occur}).$$ Now $$P(\mbox{1 never occurs})=P(\mbox{0 or 2 always occur})=(0.3+0.2)^n=(0.5)^n,$$ $$P(\mbox{2 never occurs})=P(\mbox{0 or 1 always occur})=(0.3+0.5)^n=(0.8)^n,$$ $$P(\mbox{1 and 2 never occur})=P(\mbox{0 always occurs})=(0.3)^n.$$ Finally, we obtain $$P(\mbox{1 and 2 occur at least once})=1-(0.5)^n-(0.8)^n+(0.3)^n.$$ • Why doesn't the event 0 occurs or not matters in this case? I mean it doesn't occurs in any of the equations you wrote – uzumaki Oct 1 '16 at 18:13 • @uzumaki Is it better now? – Robert Z Oct 1 '16 at 18:17 • but a little doubt P(1 and 2 never occurs) = P(1 never occurs)*P(2 never occurs) = 0.4^n as the events are independent what's wrong in this step? – uzumaki Oct 1 '16 at 18:26 • @uzumaki (1 and 2 never occur) means always 3. (1 never occurs) and (2 never occurs) are not independent. – Robert Z Oct 1 '16 at 18:34 Using the De Morgan's low we have tha the complementary of $B$ and $C$ is not $B$ or not $C$. So therefore you need to find the cases when only $A$ and $B$ occurs, as well as only $A$ and $C$ and then subtract these probability from $1$ to get the wanted probability. • Wouldn't it be difficult to calculate as A and B or A and C can occur many different time during the n trials? Also why are we ignoring only A occurs in this – uzumaki Oct 1 '16 at 18:04 I will first try to clarify the problem's meaning by formalizing it. Formal language is ideal for clarifying ambiguities. I will then proceed to solving it. Step 1: Formalizing Let $(\Omega, \mathcal{F}, P)$ be the underlying probability space, and let $X_1, \dots, X_n : \Omega \rightarrow \mathbb{R}$ be random variables denoting the trial outcomes, respectively. It is given that $X_1, \dots, X_n$ are i.i.d. (w.r.t. $P$). Moreover, it is given that \begin{align} P(X_1 = 0) &= 0.3, \\ P(X_1 = 1) &= 0.5, \\ P(X_1 = 2) &= 0.2. \end{align} It is required to compute the probability $P(A_1 \cap A_2)$, where $A_1, A_2$ are the events \begin{align} A_1 &:= \{X_1 = 1\} \cup \cdots \cup \{X_n = 1\}, \\ A_2 &:= \{X_1 = 2\} \cup \cdots \cup \{X_n = 2\}. \end{align} Step 2: Solving Taking complements w.r.t. $\Omega$, we have \begin{align} P(A_1 \cap A_2) &= 1-P\left(\overline{A_1 \cap A_2}\right) \\ &\overset{\text{De Morgan}}{=} 1-P\left(\overline{A_1}\cup\overline{A_2}\right) \\ &\overset{\text{incl. excl.}}{=} 1 - \left(P\left(\overline{A_1}\right) + P\left(\overline{A_2}\right) - P\left(\overline{A_1}\cap\overline{A_2}\right)\right). \tag{1}\label{eq1} \end{align} Now, \begin{align} P\left(\overline{A_1}\right) &\overset{\text{De Morgan}}{=} P\left(\overline{\{X_1=1\}}\cap\cdots\cap\overline{\{X_n = 1\}}\right) \\ &\overset{\text{indep.}}{=} P\left(\overline{\{X_1 = 1\}}\right) \cdots P\left(\overline{\{X_n = 1\}}\right) \\ &\overset{\text{ident. 
dist.}}{=} \left(P\left(\overline{\{X_1 = 1\}}\right)\right)^n \\ &= \left(1-P(X_1 = 1)\right)^n \\ &= (1-0.5)^n \\ &= 0.5^n. \tag{2}\label{eq2} \end{align} Similarly, $$P\left(\overline{A_2}\right) = \left(1-P(X_1 = 2)\right)^n = \left(1-0.2\right)^n = 0.8^n. \tag{3}\label{eq3}$$ Finally, observing that \begin{align} \overline{A_1}\cap\overline{A_2} &= \left(\{X_1 \neq 1\}\cap\cdots\cap\{X_n \neq 1\}\right) \cap \left(\{X_1 \neq 2\}\cap\cdots\cap\{X_n \neq 2\}\right) \\ &= \left(\{X_1 \neq 1\}\cap \{X_1 \neq 2\}\right)\cap \cdots \cap\left(\{X_n \neq 1\}\cap \{X_n \neq 2\}\right) \\ &= \{X_1 = 0\} \cap \cdots \cap \{X_n = 0\}, \end{align} we have \begin{align} P\left(\overline{A_1}\cap\overline{A_2}\right) &= P\left(\{X_1 = 0\}\cap\cdots\cap\{X_n = 0\}\right) \\ &\overset{\text{indep.}}{=} P(X_1 = 0) \cdots P(X_n = 0) \\ &\overset{\text{ident. dist.}}{=} \left(P(X_1 = 0)\right)^n \\ &= 0.3^n. \tag{4}\label{eq4} \end{align} Substituting \eqref{eq2}, \eqref{eq3} and \eqref{eq4} into \eqref{eq1}, we obtain $$P(A_1 \cap A_2) = 1-0.5^n-0.8^n+0.3^n.$$ $A=$ the case that there are no $1$s. $P(A)=0.5^n$ $B=$ the case that there are no $2$s. $P(B)=0.8^n$ $A\cap B=$ the case there are no $1$s or $2$s. $P(A\cap B)=0.3^n$ Inclusion-Exclusion says that the probability there are no $1$s or no $2$s is $$P(A)+P(B)-P(A\cap B)=0.5^n+0.8^n-0.3^n\tag{1}$$ That means that the probability that there is at least one of each is $$\bbox[5px,border:2px solid #C0A000]{1-0.5^n-0.8^n+0.3^n}\tag{2}$$ Note that to get both a $1$ and a $2$, we will need at least $2$ trials. If $n=0$ or $n=1$, $(2)$ gives a probability of $0$. If $n=2$, we get a probability of $0.2$, which is the probability of getting a $1$ then a $2$ or getting a $2$ then a $1$.
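A Monte Carlo check of the closed form derived above (not part of the original answers) is easy to run; the sample sizes below are arbitrary.

```python
import random

def estimate(n, trials=200_000, seed=1):
    """Monte Carlo estimate of P(both outcome 1 and outcome 2 appear in n trials)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        outcomes = rng.choices((0, 1, 2), weights=(0.3, 0.5, 0.2), k=n)
        if 1 in outcomes and 2 in outcomes:
            hits += 1
    return hits / trials

for n in (2, 5, 10):
    exact = 1 - 0.5 ** n - 0.8 ** n + 0.3 ** n
    print(n, round(estimate(n), 4), round(exact, 4))   # the two columns should be close
```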
2020-02-25T00:47:27
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1949508/inclusion-exclusion-principle-language-confusion", "openwebmath_score": 1.0000091791152954, "openwebmath_perplexity": 831.8950580339334, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9895109093587028, "lm_q2_score": 0.8539127510928476, "lm_q1q2_score": 0.8449559828468752 }
https://math.stackexchange.com/questions/3159702/who-has-more-probability-of-winning-the-game
# Who has more probability of winning the game? Alice and Bob play a coin tossing game. A fair coin (that is, a coin with equal probability of landing heads and tails) is tossed repeatedly until one of the following happens. $$1.$$ The coin lands "tails-tails" (that is, a tails is immediately followed by a tails) for the first time. In this case Alice wins. $$2.$$ The coin lands "tails-heads" (that is, a tails is immediately followed by a heads) for the first time. In this case Bob wins. Who has more probability of winning the game? My attempt $$:$$ Let $$X$$ be the random variable which counts the number of tosses required to obtain "tails-tails" for the first time and $$Y$$ be the random variable which counts the number of tosses required to obtain "tails-heads" for the first time. It is quite clear that if $$\Bbb E(X) < \Bbb E(Y)$$ then Alice has more probability of winning the game than Bob$$;$$ otherwise Bob has more probability of winning the game than Alice. Let $$X_1$$ be the event which denotes "the first toss yields heads", $$X_2$$ be the event which denotes "tails in the first toss followed by heads in the second toss", $$X_3$$ be the event which denotes "tails in the first toss followed by tails in the second toss". Then $$X_1,X_2$$ and $$X_3$$ are mutually exclusive and exhaustive events. Let $$\Bbb E(X) = r.$$ So we have \begin{align} r & = \Bbb E(X \mid X_1) \cdot \Bbb P(X_1) + \Bbb E(X \mid X_2) \cdot \Bbb P(X_2) + \Bbb E(X \mid X_3) \cdot \Bbb P(X_3). \\ & = \frac {1} {2} \cdot (r+1) + \frac {1} {4} \cdot (r+2)+ 2 \cdot \frac {1} {4}. \\ & = \frac {3r} {4} + \frac {3} {2}. \end{align} $$\implies \frac {r} {4} = \frac {3} {2}.$$ So $$\Bbb E(X) = r = 6.$$ But I find difficulty to find $$\Bbb E(Y).$$ Would anybody please help me finding this? Thank you very much for your valuable time. • "It is quite clear that...": I don't see this at all. – TonyK Mar 23 at 19:25 • If you can't see leave it @TonyK. – math maniac. Mar 23 at 19:37 • Well, in this case it's true, because the set-up is so simple: for any $n$, $\Bbb P(X=n)=\Bbb P(Y=n)$, as Ethan Bolker's answer explains. But for general $X$ and $Y$, I don't think it's true. (And if you can't take criticism, then you shouldn't be posting here.) – TonyK Mar 23 at 19:56 • Which is true and which is not true @TonyK? – math maniac. Mar 23 at 19:59 To calculate $$E=E[Y]$$: We work from states. There's $$\emptyset$$, the starting state, or the state in which you've thrown nothing but $$H$$, there's $$T$$ in which the string has been $$H^aT^b$$ with $$b>0$$, and of course there's the end state. From the start, you either stay in the starting state or you move to state $$\mathscr S_T$$. Thus $$E=\frac 12\times (E+1)+\frac 12\times (E_T+1)$$ From state $$\mathscr S_T$$ we either stay in $$\mathscr S_T$$ or we end. Thus $$E_T=\frac 12\times (E_T+1)+\frac 12\times 1\implies E_T=2$$ It follows that $$E=4$$ Just to stress: this certainly does not imply that $$B$$ has a greater chance of winning. The intuitive failure comes from the fact that if you have a $$T$$ already, and you throw an $$H$$ next, it takes you at least two turns to get $$TT$$ whereas if you have a $$T$$ and throw another $$T$$ you can still get your $$TH$$ on the next turn. Indeed, the two have an equal chance of winning, since the first toss after the first $$T$$ settles the game (as was clearly explained in the post from @EthanBolker). • Yeah! This is exactly what I have got @lulu some times ago. – math maniac. 
Mar 23 at 20:16 • To be clear, this in no way means that Bob has a greater chance of winning, not sure where that idea came from. It's quite clear that the two players have equal chances of victory. @EthanBolker 's argument is on point for that aspect. – lulu Mar 23 at 20:23 • Then is the problem wrong? – math maniac. Mar 23 at 20:30 • Well, the problem is fine, but your intuition was wrong. The inequality on expectations does not settle the question of who has the greater chance of winning. I have edited my posted solution to include a brief discussion of why intuition fails in this case. As a general point, it's never a great idea to use expectation as a surrogate for straight probability. – lulu Mar 23 at 20:31 • There's a nice paradox here. The expected number of tosses to get TT is indeed greater than the expected number of tosses to get TH, But each participant has the same win probability. My comment on my answer asserting that the two expectations are the same was wrong but my answer is right. – Ethan Bolker Mar 23 at 20:33 Perhaps I am missing some subtlety, in which case someone here will tell me. That said: The coin is tossed repeatedly. Alice and Bob watch bored until the first tail appears. The next toss will settle the game. Each has an equal chance to win. The game is more interesting if Bob wins on the first "heads-tails". In that game suppose a head appears before anyone has won. Then Bob will win as soon as the first tail appears, which will happen eventually. Since the first flip is heads with probability $$1/2$$ Bob wins with at least that probability. If the first flip is tails then Alice wins with tails on the second toss. Bob wins eventually if the second toss is heads. So overall Bob wins with probability 3/4. • Is my reasoning not ok @Ethan Bolkar? – math maniac. Mar 23 at 19:21 • @mathmaniac. Your reasoning so far might be right - I haven't checked. But you haven't calculated $E(Y)$.. When you do it will turn out to be the same as $E(X)$ so the weak inequality that's "clear" will be an equality. – Ethan Bolker Mar 23 at 19:34 • No $\Bbb E(Y) = 4.$ I have just calculated it. So $\Bbb E(X) > \Bbb E(Y).$ So Bob has more probability of winning this game than Alice. My intuition works fine so far @Ethan Bolker. – math maniac. Mar 23 at 19:41 • @mathmaniac. Until someone convinces me that my answer is wrong I think yours must be. $E(X) = E(Y) =$ one more than the expected number of tosses to get the first tail. That's $1 + (1 + 1/2 + 1/4 + \cdots) = 1+2=3$. – Ethan Bolker Mar 23 at 19:42 • Then what's wrong in my computation of $\Bbb E(X)$? – math maniac. Mar 23 at 19:45 There is 1 fundamental problems with the attempted solution by math maniac: The comparison of expected values may give a hint, but can be totally misleading and is not an equivalent of who is more likely to win first. Consider another game, again played with a sequence of fair coin tosses. Alice wins after 5 tosses, no matter what actually comes up. Bob wins after 2 coin tosses if those first 2 tosses are $$HH$$, otherwise he wins after 6 coin tosses. Who will win first? The expected number of coin tosses for Alice to win is easy: $$E(W_A)=5$$ For Bob, there are two cases: The first 2 tosses are $$HH$$ (probability $$\frac14$$), or not (probability $$\frac34$$). With the given number of tosses until the win, this means $$E(W_B)=\frac14\times2 + \frac34\times 6 = \frac{2+18}4 = 5$$ So the expected number of tosses until the win is $$5$$ in both cases. 
Nevertheless, Alice will win in $$\frac34$$ of all duels, Bob only in $$\frac14$$. That's because Bob finishes first only if the sequence of coin tosses starts with $$HH$$. The calculation of the expected value for Bob weights the outcomes (2 tosses or 6 tosses) with the probabilities ($$\frac14,\frac34$$). Because $$2$$ tosses is much smaller than $$6$$ tosses, the expected value for Bob is reduced by $$1$$ from the $$6$$ tosses it has for the 'majority case' that the sequence does not start with $$HH$$. But for the calculation of who will likely win first, it doesn't matter that Bob needs only 2 tosses when he does finish first. The fact that $$2$$ is a much smaller number than $$5$$ (the number of tosses Alice will always need to win) is irrelevant here. In other words, the fact that Bob, when he wins, uses far fewer tosses than Alice is relevant only for the expected number of coin tosses, not for the win probability itself. That's the reason why the expected value of coin tosses until the win is not the arbiter of who wins first.
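None of the posts above include code, but every claim in this thread is easy to check numerically. The following Python simulation is an addition, not part of the original thread; the helper names (first_hit, duel, paradox_duel) are made up for this sketch. It estimates E(X) and E(Y), the probability that Alice wins the actual tails-tails versus tails-heads duel, and repeats the check for the 5-tosses-versus-HH game constructed in the last answer.

```python
import random

def first_hit(pattern, rng):
    """Toss a fair coin until `pattern` ('TT' or 'TH') first appears; return the number of tosses."""
    seq = ""
    while not seq.endswith(pattern):
        seq += rng.choice("HT")
    return len(seq)

def duel(rng):
    """Play one game on a single shared sequence: 'A' if TT shows up first, 'B' if TH does."""
    seq = ""
    while True:
        seq += rng.choice("HT")
        if seq.endswith("TT"):
            return "A"
        if seq.endswith("TH"):
            return "B"

def paradox_duel(rng):
    """The game from the last answer: Alice always finishes at toss 5;
    Bob finishes at toss 2 if the first two tosses are HH, otherwise at toss 6."""
    bob_time = 2 if rng.choice("HT") + rng.choice("HT") == "HH" else 6
    return ("B" if bob_time < 5 else "A"), bob_time

rng = random.Random(0)
n = 100_000
print("E[X] (wait for TT) ~", sum(first_hit("TT", rng) for _ in range(n)) / n)  # theory: 6
print("E[Y] (wait for TH) ~", sum(first_hit("TH", rng) for _ in range(n)) / n)  # theory: 4
print("P(Alice wins)      ~", sum(duel(rng) == "A" for _ in range(n)) / n)      # theory: 0.5

wins_a = bob_total = 0
for _ in range(n):
    winner, bob_time = paradox_duel(rng)
    wins_a += winner == "A"
    bob_total += bob_time
print("Paradox game: P(Alice first)  ~", wins_a / n)     # theory: 0.75
print("Paradox game: E[Bob's tosses] ~", bob_total / n)  # theory: 5
```

The estimates line up with the discussion: the waiting-time expectations differ (about 6 versus 4), yet the head-to-head win probabilities in the original game come out equal.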
2019-06-16T01:00:53
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3159702/who-has-more-probability-of-winning-the-game", "openwebmath_score": 0.9508744478225708, "openwebmath_perplexity": 382.2421321531513, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.98504291340685, "lm_q2_score": 0.8577681122619883, "lm_q1q2_score": 0.844938400330043 }
http://fnaarccuneo.it/nieu/recurrence-relation-for-bubble-sort.html
Order of a Recurrence Relation, Definition. T(n) = T(n-3) + n + n + n. To turn this relation into a bottom-up dynamic programming algorithm, we need an order to fill in the solution cells in a. Recurrence relation is a mathematical model that captures the underlying time-complexity of an algorithm. Recurrence Relations Methods for solving recurrence relations: •Expansion into a series; •Induction (called the substitution method by the text); Merge Sort Input : Array A of at least j elements. Knapsack with Recursion. Exact phrase search: Use quotes, e. of Quick Sort, and discuss how the implementation of Quick Sort presented in this discussion always performs better than the worst case. Initial conditions + recurrence relation uniquely determine the sequence. » Divide into 2 equal size parts. The theme of this paper is that recurrence relations play an important part in computing science. This is very helpful in analyzing algorithms involving recursive computation and in determining their time complexity. A recurrence relation is an equation that defines a sequence based on a rule that gives the next term as a function of the previous term(s). One goal is the introduction of certain basic mathematical concepts, such as equivalence relations, graphs, and trees. Recurrence relation The expressions you can enter as the right hand side of the recurrence may contain the special symbol n (the index of the recurrence), and the special functional symbol x(). Applications to sorting and searching, matrix algorithms, shortest-path and spanning tree problems. 5: Spanning Trees. If n 2 then T(n) = 1 else T(n) = 2 T(n 2) + n How to solve a recurrence equation?. Merge Sort The merge sort algorithm deals with the problem of sorting a list of n elements. 5: Let a n denote the number of comparisons needed to sort n numbers in bubble sort, we find the recurrence relation a n = a n-1. Solve the smaller instances either recursively or directly 3. Knapsack with Recursion. Albertson and J. 3 (9 ratings) Course Ratings are calculated from individual students' ratings and a variety of other signals, like age of rating and reliability, to ensure that they reflect course quality fairly and accurately. Our guess is now verified. pdf, Due Friday, 9/9/2016 Sorting: hw3. Compute The Time Complexity Of The Following Code. UC Riverside Recurrence Tutorial ; Recurrence Relations a la Duke ; Graphing with Gnuplot: Homepage ; Addendum to Lecture 2/21: Generic Mergesorts ; Dynamic Programming: Fibonacci ; Maximum Continuous Subarray ; Homework 3: Problem 4. The applications of the method to the Fibonacci and Lucas numbers, Chebyshev polynomials, the generalized Gegenbauer-Humbert polynomials are also discussed. Set up a recurrence relation for the number of key comparisons made by mergesort on best-case inputs and solve it for n = 2 k. There are general methods for solving recurrences of the form a n = c 1a n 1 + c 2a n 2 + + c ka n k + f(n) ; where each of the c i is a constant. Recurrence Relation Recursively defined sequences are also known as recurrence relations. 1 (Summing an Array), get a. This paper gives a general method for the stable evaluation of multivariate simplex splines, based on the well-known recurrence relation of Micchelli [12]. Como Resolver Relações de Recorrência. (15 points) The general idea of Quick sort is as follows. going from code to recurrence Carefully define what you're counting, and write it down!! 
"Let C(n) be the number of comparisons between sort keys used by MergeSort when sorting a list of length n ! 1"! In code, clearly separate base case from recursive case, highlight recursive calls, and operations being counted.  A(0) = a (base case)  A(n) = A(n-1) + d for n > 0 (recursive part) The above recursively defined function generates the sequence defined on the previous slide. We go through the input sequence looking for inversions. • This is called a recurrence relation. The simplest form of a recurrence relation is the case where the next term depends only on the immediately previous term. Iteration Method To Solve T(n) AVL Tree Balance Factors. Since this equation holds, the first case of the master theorem applies to the given recurrence relation, thus resulting in the conclusion: If we insert the values from above, we finally get: Thus the given recurrence relation T(n) was in Θ(n3). Do not worry about whether values are integral. Both merge sort and quicksort employ a common algorithmic paradigm based on recursion. 1 The First-Order Linear Recurrence Relation nonhomogeneous linear recurrence relation Ex. 4 time complexity of bubble sort algorithm anan-1(n-1), ngt1, a10, where anthe number of comparisons to sort n numbers an- an-1 n-1 an-1- an-2 n-2 an-2- an-3 n-3 a2- a1 1 an 123(n-1)(n2-n)/2 6. Bubble Sort. It is sometimes difficult to come up with a. Divide the problem instance into several smaller instances of the same problem 2. appropriate linear non-homogeneous recurrence equation and solving it. Grandine* Abstract. For example, many systems of orthogonal polynomials, including the Tchebychev polynomials and their finite field analogues, the Dickson polynomials, satisfy recurrence relations. ) Discuss the types of functions. Finally we merge the results. Finite graphs are well-suited to this purpose. Nonhomogeneous of Finite Order Linear Relations. There are general methods for solving recurrences of the form a n = c 1a n 1 + c 2a n 2 + + c ka n k + f(n) ; where each of the c i is a constant. ) Define the codomain of a relation. The reduction is implemented as a package of computer programs for analytic evaluation in FORM. 19 Sorting 7. T =O +T −1=O 2 26. Next, we solve the recurrence relation, to eliminate the recursion and give a closed form solution for the number of multiplications in terms of n. 14 Recurrence Relations 6. 5 Optimality of Sorting. Integers i and j. The problem with bubble sort is that it has an average time complexity of O(n^2), meaning that for every n items, it takes n^2 operations. an = 6an-1 ( 8an-2, and a0 = 4, a1 = 10. of length n that do not contain three consecutive 0s. To find the time complexity for the Sum function can then be reduced to solving the recurrence relation. Key Topics * Recurrence Relations * Solving Recurrence Relations * The Towers of Hanoi * Analyzing Recursive Subprograms. They use the following general plan: 1. I can solve them and figure out the bounds on them, but what I'm not really sure of is how to come up with a recurrence relation for a particular algorithm. Recurrence Relations Solving Linear Recurrence Relations Divide-and-Conquer RR's Recurrence Relations Recurrence Relations A recurrence relation for the sequence fa ngis an equation that expresses a n in terms of one or more of the previous terms a 0;a 1;:::;a n 1, for all integers nwith n n 0. If you're behind a web filter, please make sure that the domains *. 
Recurrence relation The expressions you can enter as the right hand side of the recurrence may contain the special symbol n (the index of the recurrence), and the special functional symbol x(). 2 [2 weeks]. spanning tree edge that points to a descendant node. Divide the list into 5 pieces evenly, by scanning the entire list. ) Define the range of a relation. You must know how to solve it. So, it can not be solved using Master's theorem. R= Red, W=White, B=Blue a_1 = 2, R, W a_2 = 4, RW, WR, WW, RR a_3 = 1, B a_4 = 4, B(a_1), W(a_3), R(a_3) a_5 = 12, B(a_2), W(a_4), R(a_4) a_6 = 21, B(a_3), W(a_5-2(a_3)), R(a_5-2(a_3)). Solving Recurrence Relations;finding the median. Exact solutions may not exist, even in the simplest cases: therefore PURRS computes upper and lower bounds for the solution, if the function p(n) is non negative and non decreasing. After solving it we can get T(n) = cnlogn. 2: Monday: 11/06/17: 8. 1 Solving recurrences Last class we introduced recurrence relations, such as T(n) = 2T(bn=2c) + n. Okay, and let us perform the generating function for the Fibonacci sequence. 1 Selection Sort and Bubble Sort 98 Selection Sort 98 Bubble Sort 100. The algorithms that we consider in this section is based on a simple operation known as merging: combining two ordered arrays to make one larger ordered array. The dominant solution is not unique, however, since any constant multiple of fr may be added to gr without affecting the asymptotic form of gr. Write a recurrence for the running time of this recursive version of insertion sort. Now, the main question is the following: Can Mathematica solve (for b[k] and c[r]) the following system of recurrence relations? b[k] is defined for only odd natural numbers, and c[r] is defined for only even non-negative integers. hpp, and sorts. As we can see in the formula above the variables get the. If f(n) 6= 0, then this is a linear non-homogeneous recurrence relation (with constant coe cients). 1 T(N)=T(N=2) +1; T(1) = 1. Iterate if there is a need. Recurrence sequences also appear in many parts of the mathematical sciences in the wide sense (which includes applied mathematics and applied computer science). Solve company interview questions and improve your coding intellect. Algorithmic Strategies with examples and problem solving:Brute-force algorithms with. Best Answer: There should be an initial condition or seed value in order to solve this, for example, T(1) = 1. We sort a list of n = 2 k elements by divide and conquer. Basic mathematical structures: more on equivalence relations, partial order relations, posets. So, if T(n) denotes the running time on an input of size n, we end up with the recurrence T(n) = 2T(n/2) +cn. Note that k =log 2 n. 1, S 2, S 3, …. The recurrence relation we obtain has this form: T(0) = c 0 T(1) = c 0 T(n) = 2 T(n/2) + c 1 n + c 2 n + c 3. – Operations on sets, generalized unions and intersections. 3 Divide and Conquer Algorithms and. JOURNAL OF COMBINATORIAL THEORY, Series B 48, 6-18 (1990) On Graph Invariants Given by Linear Recurrence Relations DAVID N. Need a terminating condition Bubble Sort iv. Chapter Review. A recurrence relation of type xn = ax n/b + c x0 = d is a divide and conquer recurrence relation This type of relation is obtained when analyzing divide and conquer algorithms. Subsection 10. For some algorithms the smaller problems are a fraction of the original problem size. Como Resolver Relações de Recorrência. 
rule of the latter sort (whether or not it is part of a recursive definition) is called a recurrence relation and that a sequence is called a solution of a recurrence relation if its terms satisfy the recurrence relation. • Insertion sort can be expressed as a recursive procedure as follows: – In order to sort A[1. Initial conditions + recurrence relation uniquely determine the sequence. •You need to be able to recognize that. 2: 25: Dec 3: Vertex degree: Eulerian trails and circuits Planar graphs 11. The dominant solution is not unique, however, since any constant multiple of fr may be added to gr without affecting the asymptotic form of gr. In mathematical terms, the sequence F n of Fibonacci numbers is defined by the recurrence relation; F n = F n-1 + F n-2 with seed values F 1 = 1, F 2 = 1 or F 0 = 0, F 1 = 1. 1 Introduction Definition I A recurrence relation for a sequence {an} is an equation expresses an in terms ofao, al, , an _ 1 no e R. • n/b is the size of each sub problem. Recurrence Relations for Divide and Conquer. Growth Rate: hw1. 7 Non-Constant Coef Þ cients 2. We use recurrence relations to describe and analyze the running time of recursive and divide & conquer algorithms. However, insertion sort provides several advantages:. This relationship is called a recurrence relation because the function T(. Luckily there happens to be a method for solving recurrence relations which works very well on relations like this. For now, we'll write n in place of O(n), but keep in our minds that n really means some constant times n. Apply mathematical induction to construct mathematical proofs, to establish program correctness, and to solve problems involving recursion. Recurrence Relation. • Understand classic graph problems and algorithms such as spanning tree, shortest path, and topological sorts • Understand B- and B+-trees and their use with files. Write a recurrence for the running time of this recursive version of insertion sort. For example, the recurrence for the Fibonacci Sequence is F(n) = F(n-1) + F(n-2) and the recurrence for merge sort is T(n) = 2T(n/2) + n. Merge Sort. Analyse essential features of algorithms, especially time complexity. 5 Optimality of Sorting. The key operation in the execution of this goal is the comparison between list elements during the Shift. (The definition of a stable sorting algorithm was giveninSection1. Recurrence equations are powerful things because they let you define a function in terms of itself! Suppose we have a function of an integer variable f(n), then an example of a recurrence equation is. In computer science, one of the primary reasons we look at solving a recurrence relation is because many algorithms, whether “really” recursive or not (in the sense of calling themselves over and over again) often are implemented by breaking the problem. Sorting — arranging items in order — is the most fundamental task in computation. The Second-Order Linear Homogeneous Recurrence Relation with Constant Coefficients. There are 12 files, named …. Data Structures and Algorithms Solving Recurrence Relations Chris Brooks Department of Computer Science University of San Francisco Department of Computer Science — University of San Francisco - p. Sorting and Searching Algorithms. Any comparison based sorting algorithm can be made stable by using position as a criteria when two elements are compared. That's what a recurrence relation looks like. Steps involved in this technique are: 1. 
4 time complexity of bubble sort algorithm anan-1(n-1), ngt1, a10, where anthe number of comparisons to sort n numbers an- an-1 n-1 an-1- an-2 n-2 an-2- an-3 n-3 a2- a1 1 an 123(n-1)(n2-n)/2 6. ✤ A (numerical) sequence is an ordered list of numbers. The Recurrence Relations in Teaching Students of Informatics and its place in teaching students of Informatics is discussed in this paper. Compute the worst possible time of all input instances of length N. Q1- It will be B cause bubble sort's worst case is when array is sorted. • Some sorting algorithms are stable by nature like Insertion sort, Merge Sort, Bubble Sort, etc. Recurrence Relation. To solve a Recurrence Relation means to obtain a function defined on the natural numbers that satisfy the recurrence. A recurrence relation for the expected number of comparisons is derived under the assumption that the input elements are distinct and that each of the possible orderings are equally. Platform to practice programming problems. Find a recurrence relation for this number with one condition that there cannot be three 1 foot flags in a row (regardless of their color). – Direct and indirect proofs. Prerequisites: MATH 1760 with grade C or better, or equivalent college course, or an acceptable score on placement or prerequisite exam MATH 2200 is an introduction to logic, circuits, graphs, trees, matrices, algorithms, combinatorics and relations within the context of applications to computer science. n], where n = length[A]. The Stable Evaluation of Multivariate Simplex Splines By Thomas A. • Sets and functions. RECURRENCE RELATIONS. Recurrence trees Telescoping Master Theorem Simple Often can’t solve difficult relations Visual Great intuition for div-and-conquer Widely applicable Difficult to formulate Not intuitive Immediate Only for div-and-conquer Only gives Big-Theta. 1 Applications of Recurrence Relations and quiz 3: Study Sections 8. RECURRENCE RELATIONS FOR THREE-LOOP PROTOTYPES OF BUBBLE DIAGRAMS WITH A MASS 1 Leo. Consider the recurrence relation: a n+1 = 2a n (n > 0) [Given a 1 =1]The solution is: a n = 2n-1 (The sequence is 1, 2, 4, 8, …) So, a 30 = 229Given any recurrence relation, can we “solve. It is possible to modify bubble sort to keep track of the number of swaps it performs. Solution: True. 14) - Kimberly Brehm Merge sort recurrence relation…. The recurrence relation for binary search is: T(1) 2(1) For n >1, T(n) T(dn=2e)+( n). Qualifying Exam Study Guide: Algorithms. 2: 25: Dec 3: Vertex degree: Eulerian trails and circuits Planar graphs 11. selection sort, as long as the number of elements to be sorted is a hundred or more. (if you don't get it then i will tell the answer) Q4- it's simple selection sort's logic. 2 [2 weeks]. The reduction is implemented as a package of computer programs for analytic evaluation in FORM. Bubble Sort Brute-force application to the sorting problem is to compare adjacent elements of the list and exchange them if they are out of order. For example, on the input sequence 1;5;3;2;4. For each pass through the array, bubble sort must go till the end of the array and compare the adjacent pairs, insertion sort on the other hand, would bail early if it finds that the array is sorted. A typical example of this class is the recurrence satisfied by the worst-case complexity of the merge-sort algorithm. Welcome Back! Now that we know about recursion, we can talk about an important topic in programming — recursive sorting algorithms! 
If you check out the pointers blog post, we go over bubble sort, an iterative sorting algorithm. Cliff Stein, Department of Computer Science, at Dartmouth College. Any comparison based sorting algorithm can be made stable by using position as a criteria when two elements are compared. Recurrence Relation. But I have been struggling with solving recurrence relations. an = n +1 , and 3. Show that the solution to T(n) = 2T(n/2 + 17) + n is O(n lg n). The key operation in the execution of this goal is the comparison between list elements during the Shift. Divide the list into 5 pieces evenly, by scanning the entire list. •Solving the recurrence relations (not required for the course) -Approximately, C(N) = 2NlogN. Recurrence Relation Recursively defined sequences are also known as recurrence relations. List of potential topics (not all-inclusive). Relaxing the constraints We have imposed three restrictions on the recurrence relation. See CLRS, Chapter 4. 2 [2 weeks]. (recursively) sort the rst 3=5 of the list. If f(n) = 0, then this is a linear homogeneous recurrence relation (with constant coe cients). •You need to be able to derive a recurrence relation that describes an algorithms complexity. Quick Sort. — I Ching [The Book of Changes] (c. CSCI 3532 Advanced Data Structures and Algorithms Description:. f(n) = 0, the relation is called homogeneous. For some algorithms the smaller problems are a fraction of the original problem size. Example for Case 1. a (n-2) = 0 The auxiliary equation is x^2 - 4x + 4 = 0. Use recurrence relations to find the complexity of: Binary search. Big-O notation. With recurrence relations 10/22 Recurrence Relations Recurrence relations specify the cost of executing recursive functions. Sorting algorithms - Bubble sort, Insert sort, Selection sort, Heap sort, Quick sort, Mergesort. Lectures by Walter Lewin. RAJIV GANDHI PROUDYOGIKI VISHWAVIDYALAYA, BHOPAL New Scheme Based On AICTE Flexible Curricula Information Technology, III-Semester IT302 Discrete Structure Course objectives The main objectives of this course are: 1. (Cormen, p. Practice Test-3 Solving Recurrence Relations,Bubble Sort, Quick Sort, Linear Time Sorting-Counting Sort and Radix Sort. Recurrence Relations Solve the following recurrences. pdf, Due Friday. The initial or boundary condition(s) terminate the recursion. Avdeev 2 Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, Dubna (Moscow Region),RU141980, Russian Federation Abstract Recurrence relations derived via the Chetyrkin{Tkachov method. Divide the problem instance into several smaller instances of the same problem 2. Then a Another sorting algorithm is Bubble Sort. Heap sort, Radix Sort, Bucket Sort Analysis can be done using recurrence equations (relations) When the size of all sub-problems is the same (frequently the case) the recurrence equation representing the algorithm is: T(n) = D(n) + k T(n/c) + C(n). Solve the smaller instances either recursively or directly 3. Assume “ n” items to sort. They will make you ♥ Physics. Any comparison based sorting algorithm can be made stable by using position as a criteria when two elements are compared. Greedy algorithms 2/11. of length n that do not contain three consecutive 0s. A sequence is said to be a solution of a recurrence, if it consistent with the definition of the recurrence. In this example, we generate a second-order linear recurrence relation. 
Once this has been done, the terms in the right hand side are collected together to obtain a compact expression for f (n). Solve the recurrence by making a change of variables. One alternative to bubble sort is the merge sort. Recurrence Relations in Maple. (b) Else: i. Discrete Mathematics 01 Introduction to recurrence relations - Duration: 10:45. Solving a recurrence relation by induction Another technique for solving a recurrence relation uses a guess and a proof by induc-tion. The last step is simplification. int ternary_search(int l,int r, int. Example for Case 1. an array of n elements by the method bubble sort. Recurrence relations • Recurrence relations are used to find the time complexity of recursive function • A recursive function always has two parts – The stopping place (when the recursion ends) – The rule for making the problem smaller. Solution, Subsection. We sort a list of n = 2 k elements by divide and conquer. Recurrence Relations 2. Knapsack with Dynamic Programming. Recurrence relation The expressions you can enter as the right hand side of the recurrence may contain the special symbol n (the index of the recurrence), and the special functional symbol x(). 1100 BC) To endure the idea of the recurrence one needs: freedom from morality; new means against the fact of pain (pain conceived as a tool, as the father of pleasure; there is. To perform the operations associated with sets, functions, and relations. in Section IV. Practice Tests for GATE CS 2020. • This is called a recurrence relation. – Proof by contradiction. Sorting enables efficient searching algorithms such as binary search. A Special Kind of Nonlinear Recurrence Relation (Optional). hpp, string. An Introduction to Discrete Mathematics, Second Edition by Steven Roman is once again available, now from Innovative Textbooks. ) Define the domain of a relation. Platform to practice programming problems. Worst times. Knapsack with Recursion. In order to sort , we recursively sort and then insert A[n] into the sorted array. master theorem. Quick Sort. If f(n) 6= 0, then this is a linear non-homogeneous recurrence relation (with constant coe cients). The applications of the method to the Fibonacci and Lucas numbers, Chebyshev polynomials, the generalized Gegenbauer-Humbert polynomials are also discussed. These types of recurrence relations can be easily solved using Master Method. Basic mathematical structures: more on equivalence relations, partial order relations, posets. It has same worst case complexity as normal merge sort B. sort each part do a 3 way merge. Consider the recurrence relation: a n+1 = 2a n (n > 0) [Given a 1 =1]The solution is: a n = 2n-1 (The sequence is 1, 2, 4, 8, …) So, a 30 = 229Given any recurrence relation, can we “solve. •Solving the recurrence relations (not required for the course) -Approximately, C(N) = 2NlogN. Best case scenario: The value k is in the rst position. In mathematical terms, the sequence F n of Fibonacci numbers is defined by the recurrence relation; F n = F n-1 + F n-2 with seed values F 1 = 1, F 2 = 1 or F 0 = 0, F 1 = 1. Let there be r 1, r 2, …, r k distinct roots for the characteristic equation. Find a recurrence relation for this number with one condition that there cannot be three 1 foot flags in a row (regardless of their color). Practice Test-4 Introduction to Algorithms - Linear Time Sorting Algorithms. 08 PART – C (PROBLEM SOLVING AND CRITICAL THINKING QUESTIONS) S. Review: Recurrence relations (Chapter 8) Last time we started in on recurrence relations. 
This relation is a well-known formula for finding the numbers of the Fibonacci series. int ternary_search(int l,int r, int. • A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in sorted output as they appear in the input unsorted array. Discrete Mathematics: Mathematical logic, Relations, Semi groups and Groups, Coding, Recurrence Relations, Graphs, Language and Finite State Machines. Let us compare this recurrence with our eligible recurrence for Master Theorem T(n) = aT(n/b) + f(n). Master Theorem (for divide and conquer recurrences):. The recurrence tree method is a visual Write a recursive algorithm for Selection Sort (or insertion sort or bubble sort). It has same worst case complexity as normal merge sort B. A recurrence relation is an equation that uses recursion to relate terms in a sequence or elements in an array. Prerequisites: MATH 1760 with grade C or better, or equivalent college course, or an acceptable score on placement or prerequisite exam MATH 2200 is an introduction to logic, circuits, graphs, trees, matrices, algorithms, combinatorics and relations within the context of applications to computer science. DigiiMento: GATE, NTA NET & Other CSE Exam Prep 51,713 views. Complexity Relation. Order of a Recurrence Relation, Definition. Okay, and let us perform the generating function for the Fibonacci sequence. We set A = 1, B = 1, and specify initial values equal to 0 and 1. Combine multiple words with dashes(-), and seperate tags with spaces. Divide-and-conquer algorithms 5. 1, S 2, S 3, …. Find a recurrence relation for the number of bit strings. This paper gives a general method for the stable evaluation of multivariate simplex splines, based on the well-known recurrence relation of Micchelli [12]. Welcome Back! Now that we know about recursion, we can talk about an important topic in programming — recursive sorting algorithms! If you check out the pointers blog post, we go over bubble sort, an iterative sorting algorithm. You may assume that. In computer science, one of the primary reasons we Write this as a recurrence relation. Discrete Mathematics 01 Introduction to recurrence relations - Duration: 10:45. To sort the entire spreadsheet, select a cell in the column or row you wish to sort by and then choose" sort rows" or "sort columns" from the tools menu. The problem is divided in to 2 equal problems in all but the base case of unit size array. The time to sort n numbers can be represented as the following recurrence relation: T(n) =T(n−1) + (n−1) T(1) =0 Solve, using the plug and chug strategy, the recurrence relation stated above to derive an expression for the time to sort n numbers using Bubblesort. Recurrence Relations. Otherwise, the recursive call is dealing with half of the list T(n/2), plus the time to merge, which is linear N. (Cormen, p. Array versus pointer implementations of each data. Today we’ll see a di erent approach that runs in O(nlgn) and uses one of the most powerful techniques for algorithm design, divide-and-conquer. Elements of graph theory, trees and searching network algorithms. Give a) bound for each problem. Write a recurrence for the running time of this recursive version of insertion sort. The derived idea provides a general method to construct identities of number or. Second-Order Recurrence Relations. and k is a constant. Otherwise, n>1, and we perform the following three steps in sequence: Sort the left half of the the array. 
An example is explained to help you understand the logic before explaining the pseudocode of the bubble sort algorithm. T(1) = 1, (*) T(n) = 1 + T(n-1), when n > 1. 2: A recursion tree is a tree generated by tracing the execution of a recursive algorithm. , a0, a1, …, an-1, for all integers n with n≥n0 where n0 is a nonnegative integer A sequence is called a solution of a. Understanding induction, recursion and recurrence relations - they all use similar thought patterns - they are all part of learning to think like a computer scientist. Do not worry about whether values are integral. 7 Recurrence Relations 7 Advanced Counting Techniques 7. Note: O (n) O(n) O (n) is the best-case running time for bubble sort. At any rate, we do this the hard way, by substituting several steps and noting the pattern which develops. Chapter Review. We will relax these one by one. an array of n elements by the method bubble sort. Mirrokni (in Persian) about solving recurrence relations using characteristic equations. ESPN spoke to a range of stakeholders studying the "bubble" concept and compared their concerns with the league's thinking. (5 points) Determine an asymptotic upper bound for the following recurrence relations. Argue that STOOGE_SORT (A,1,length[A]) correctly sorts the input array A[1. Algorithms and Problem Solving: Divide and Conquer Technique, Dynamic Programming, Greedy Technique, Single –Source Shortest Paths, NP-Completeness and the P & NP Classes. 1 Introduction Definition I A recurrence relation for a sequence {an} is an equation expresses an in terms ofao, al, , an _ 1 no e R. APPLIED COMBINATORICS MATH/CSCI 3100/8105 Course Description: Basic counting methods, generating functions, recurrence relations, principle of inclusion-exclusion. Open Digital Education. This relation is a well-known formula for finding the numbers of the Fibonacci series. Otherwise, it is called nonhomogeneous. Greedy algorithms 2/11. For now, we'll write n in place of O(n), but keep in our minds that n really means some constant times n. binary relations: set builder, matrix, and digraph representation properties of relations equivalence relations and partitions partial orders, Hasse Diagrams, and topological sort operations on relations functions: properties and operations order of magnitude of functions 4. Albertson and J. Recurrence Relations: First-order linear recurrence relation, Second-order linear homogeneous recurrence relations with constant coefficients, Non-homogeneous recurrence relations with constant coefficients, Non homogeneous relations, Divide -and- conquer algorithms. Explain the divide-and-conquer paradigm for algorithm design, including a generic recurrence relation for the runtime T(n) for inputs of size n. The sequence {a n} is a solution of the. Specifically, std. Know Thy Complexities! Hi there! This webpage covers the space and time Big-O complexities of common algorithms used in Computer Science. SOLVING THAT RECURRENCE RELATION 1. a (n-2) a (n) - 4. recurrence is a templated function which returns an iterator (called a "range" in D parlance) that yields successive elements of a recurrence relation. If f(n) = 0, then this is a linear homogeneous recurrence relation (with constant coe cients). It falls in case II of Master Method and solution of the recurrence is ɵ(n log n). For some algorithms the smaller problems are a fraction of the original problem size. I can tell when a recurrence relation can be solved using master theorem, and I can solve those. 
In this lecture, we shall look at three methods, namely, substitution method, recurrence tree method, and Master theorem to ana-lyze recurrence relations. • In this class just solved one so you could see. This paper gives a general method for the stable evaluation of multivariate simplex splines, based on the well-known recurrence relation of Micchelli [12]. Choose from 500 different sets of recurrence flashcards on Quizlet. Practice Test-4 Introduction to Algorithms - Linear Time Sorting Algorithms. The recurrence relation for binary search is: T(1) 2(1) For n >1, T(n) T(dn=2e)+( n). Compare the worst-case running time of STOOGE_SORT with that of insertion sort, merge sort, heapsort, and. Here is a key theorem, particularly useful when estimating the costs of divide and conquer algorithms. A Special Kind of Nonlinear Recurrence Relation (Optional). And a few more examples that you saw. Special Functions. Connection to recursive algorithms ; Techniques for solving them; 2 Recursion and Mathematical Induction In both, we have general and boundary conditions The general conditions break the problem into smaller and smaller pieces. 2 Solving Linear Homoheneous Recurrence Relations: Study section 8. Search tips. The master theorem is a recipe that gives asymptotic estimates for a class of recurrence relations that often show up when analyzing recursive algorithms. ecurrence relation is an equation which is de ned in term sof its elf Why a re recurrences go o d things Many natural functions a re easily exp ressed as re currences a n n n pol y nomial a n n n olving recurrence relations is kno wn which is why it is an a rt My app roach is Realize that linea r nite histo ry constant co ecient recurrences. pdf, Due Friday. Title: Recurrence Relations 1 Recurrence Relations. Quick Sort 10 Running time analysis The advantage of this quicksort is that we can sort "in-place", i. 2: A recursion tree is a tree generated by tracing the execution of a recursive algorithm. Recurrence Relations. Merge Sort Recurrence Relation Let’s analyze one last recurrence using this technique, the recurrence relation we derived for Merge Sort in a prior lecture: T(n) = 2T(n/2) + O(n), T(1) = 1. Use recursion-tree method Let’s try T(n) = 3T(bn=4c) + ( n2): Recursion tree suggests T(n) = O(n2). It will be as follows. There are general methods for solving recurrences of the form a n = c 1a n 1 + c 2a n 2 + + c ka n k + f(n) ; where each of the c i is a constant. Case 2 Generic form. Assume that the cost of the base case is a constant. In our novel sorting algorithm, in the each iteration bigger element moved towards right like bubble sort and smaller element moved one or two positions towards left where as in the bubble sort only one element moved either direction only. u(n+2) = 2*(2*n+3)^2 * u(n+1) - 4*(n+1)^2*(2*n+1)*(2*n+3)*u(n) What I find most difficult is the presence of non-constant coefficients in the recurrence relation. 19 Sorting 7. The range will be sorted by the first row or column. When preparing for technical interviews in the past, I found myself spending hours crawling the internet putting together the best, average, and worst case complexities for search and sorting algorithms so that I wouldn't be stumped when asked about them. induction; selection problems; four solutions using findmin, findmax and merge. 
Recurrence Relations-1 [1] Chap 3, [2] Chap 2, ref; Feb 12 Recurrence Relations-2 Homework 1 [1] Chap 3, [2] Chap 2, ref; Feb 14 Master Theorem - Induction and Algorithms Homework 2 [1] Chap 3-5, [2] Chap 2, ref1, ref2; Feb 19 Induction and Algorithms - Introduction to Data Structures-1 [1] Chap 5-Sec 4. Binary Trees, Trees, Graph Theory, Finite State Automata, External Storage Devices, Sequential and Direct File Organizations, File Processing Techniques, Hashing, B-Trees, External Sorting, P and NP problems, Algorithmic Analysis. Finite graphs are well-suited to this purpose. In principle such a relation allows us to calculate T(n) for any n by applying the first equation until we reach the base case. 8 Divide-and-Conquer Relations 1. Considering Binary Tree representation of the Heap, each node is having 0 or 2 child (where the number of nodes with 0 children = number of nodes with 2 children + 1). 7 Non-Constant Coef Þ cients 2. Few Examples of Solving Recurrences - Master Method. Search tips. In that vein, we’ll start with studying a speci c sort today, Merge Sort. Divide and Conquer (Merge Sort) Comp 122, Spring 2004 Divide and Conquer Recursive in structure Divide the problem into sub-problems that are similar to the original but smaller in size Conquer the sub-problems by solving them recursively. The basic ideas are as below:. The master method is a formula for solving recurrence relations of the form: T(n) = aT(n/b) + f(n), where, n = size of input a = number of subproblems in the recursion n/b = size of each subproblem. 53) - patrickJMT Tower of Hanoi explained (8. Algorithms Midterm. The dominant solution is not unique, however, since any constant multiple of fr may be added to gr without affecting the asymptotic form of gr. then you return the coordinate and it will bubble back up to the top to the original call. The applications of the method to the Fibonacci and Lucas numbers, Chebyshev polynomials, the generalized Gegenbauer-Humbert polynomials are also discussed. Sorting algorithms Merge Sort Solving recurrence relations Quicksort Radboud University Nijmegen Total Execution Time T(n) = c 1 + c 2(m + 1) + c 3m + c 4 + c 5 For a given n (the size of the input to search, n = b a + 1), the value T(n) can vary: why? Depends on when the element is found. 6: Relations and their properties, n-ary relations and their applications, representing relations, closures of relations, equivalence relations, partial orderings. The limit as n increases of the ratio F n /F n-1 is known as the Golden Ratio or Golden Mean or Phi (Φ), and so is the limit as n increases of the ratio F n-1 /F n. • Sets and functions. For example, on the input sequence 1;5;3;2;4. 1 (Summing an Array), get a. Grandine* Abstract. RECURRENCE RELATIONS FOR THREE-LOOP PROTOTYPES OF BUBBLE DIAGRAMS WITH A MASS 1 Leo. For the Love of Physics - Walter Lewin - May 16, 2011 - Duration: 1:01:26. The Stable Evaluation of Multivariate Simplex Splines By Thomas A. pdf, Due Friday. (5 points) Determine an asymptotic upper bound for the following recurrence relations. Merge sort: The merge sort algorithm splits a list with n elements into two list with n/2 and n/2 elements. Solve the recurrence by making a change of variables. ) We can produce the sequence by applying G() rst to the rst ka’s to get a k+1, and computing successive elements. This is an in-place version of Mergesort written by R. The Tower of Hanoi Problem. pdf, Due Friday, 9/9/2016 Sorting: hw3. C++ program. 
4 time complexity of bubble sort algorithm anan-1(n-1), ngt1, a10, where anthe number of comparisons to sort n numbers an- an-1 n-1 an-1- an-2 n-2 an-2- an-3 n-3 a2- a1 1 an 123(n-1)(n2-n)/2 6. Recurrence Relations : Substitution, Iterative, and The Master Method Divide and conquer algorithms are common techniques to solve a wide range of problems. Big-O notation. If f(n) 6= 0, then this is a linear non-homogeneous recurrence relation (with constant coe cients). 2 [2 weeks]. (recursively) sort the rst 3=5 of the list. Recurrence Relations • Recurrence relations are useful in certain counting problems. If we are only looking for an asymptotic estimate of the time complexity, we don’t need to specify the actual values of the constants k 1 and k 2. Searching and Sorting: Linear Search, Binary Search, Bubble Sort, Selection Sort, Insertion Sort, Shell Sort, Quick Sort, Heap Sort, Merge Sort, Counting Sort, Radix Sort. Review: Recurrence relations (Chapter 8) Last time we started in on recurrence relations. In computer science, one of the primary reasons we look at solving a recurrence relation is because many algorithms, whether “really” recursive or not (in the sense of calling themselves over and over again) often are implemented by breaking the problem. Find the general solution by the standard formula: T(n) = 5T(n - 1) - 4. (This result is confirmed by the exact solution of the recurrence relation, which is T(n) = 1001n 3 − 1000n 2, assuming T(1) = 1). Mirrokni (in Persian) about solving recurrence relations using characteristic equations. (4) By the employment of global dependence testing, link-breaking strategy, Tarjan's depth-first search algorithm, and a topological sorting, an algorithm for resolving a general multistatement recurrence is proposed. In our novel sorting algorithm, in the each iteration bigger element moved towards right like bubble sort and smaller element moved one or two positions towards left where as in the bubble sort only one element moved either direction only. Show that (n lg n) is the solution to the "exact" recurrence (4. an = 3 n, 2. To use Graph Theory for solving problems. That's what a recurrence relation looks like. Performance of recursive algorithms typically specified with recurrence equations Recurrence equations require special techniques for solving We will focus on induction and the Master Method (and its variants). 5 1b Searching – linear search, binary search pages 42, 60-62, 10. 21 Sorting 8. Best case scenario: The value k is in the rst position. •You need to be able to derive a recurrence relation that describes an algorithms complexity. Review: Recurrence relations (Chapter 8) Last time we started in on recurrence relations. Analysis can be done using recurrence equations (relations) When the size of all sub-problems is the same (frequently the case) the recurrence equation representing the algorithm is: T(n) = D(n) + k T(n/c) + C(n) Where. Avdeev 2 Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, Dubna (Moscow Region),RU141980, Russian Federation Abstract Recurrence relations derived via the Chetyrkin{Tkachov method. (recursively) sort the last 3=5 of the list. (recursively) sort the rst 3=5 of the list. There is one string of length 0 that does not contain three consecutive zeros, namely, the string of length 0. Recurrence Relations II De nition Consider the recurrence relation: an = 2 an 1 an 2. 
Notes on recurrence relations, read section 3 (from Jeff Erickson) A recurrence for make-postage (video) A recurrence for binary-search (video). ! Write Recurrence(s)! 31!. In order to sort , we recursively sort and then insert A[n] into the sorted array. This relation is a well-known formula for finding the numbers of the Fibonacci series. Few Examples of Solving Recurrences – Master Method. Algorithms: sections 5. The function rsolve (from sympy) can deal with linear recurrence relations. • We will not solve or prove by induction • We will just form recurrence relations. Lecture 10: Analysis of Quicksort This lecture provides an analysis of quicksort and utilizes generating functions to analyze the expected number of comparisons. an = 3 n, 2. Apply mathematical induction to construct mathematical proofs, to establish program correctness, and to solve problems involving recursion. Use recurrence relations to find the complexity of: Binary search. Sorting and Searching Algorithms. Divide-and-Conquer Paradigm. if we have a recurrence T(n) = n2 logn+4n+3 p n+2, what matters most is that T(n) is ( n2 logn). DigiiMento: GATE, NTA NET & Other CSE Exam Prep 51,713 views. Analysis of the Merge procedure is straightforward. Languages: Finite State Machines. A solution involving 6 constants is a n = c. RECURRENCE RELATION A recurrence relation for the sequence {an} is an equation that expresses an in terms of one or more of the previous terms of the sequence, namely, a0, a1,…, an-1, for all integers n with n ≥ n0, where n0 is a nonnegative integer. Mergesort and Recurrences (CLRS 2. In this example, we generate a second-order linear recurrence relation. Many sequences can be a solution for the same. Data for CBSE, GCSE, ICSE and Indian state boards. I've written it in such a way that I can go either forward or backward. master theorem. If not, another pass is made through list and adjacen t names are in terc hanged as necessary. Graphs Basic de nitions, trees, bipartite graphs, matchings in bipartite graphs, breadth rst. 21 Sorting 8. Find a recurrence relation for the number of bit strings. Its main purpose is to help identify. Enter search terms or a module, class or function name. Sorting and Searching Algorithms. Its recurrence can be written as T(n) = T(n-1) + (n-1). • It is very difficult to select a sorting algorithm over another. •Solving the recurrence relations (not required for the course) -Approximately, C(N) = 2NlogN. Graphical Educational content for Mathematics, Science, Computer Science. In mathematical terms, the sequence F n of Fibonacci numbers is defined by the recurrence relation; F n = F n-1 + F n-2 with seed values F 1 = 1, F 2 = 1 or F 0 = 0, F 1 = 1. Recurrence Relations, Code Snippets|Monday, February 3/Tuesday, February 4 Readings Lecture Notes Chapter 6: Analyzing Runtime of Code Snippets You are given the following algorithm for Bubble-Sort: Algorithm 1 Bubble Sort function Bubble-Sort(A, n) for i 0 to n 2 do for j 0 to n i 2 do if A[j] > A[j + 1] then swap(A[j];A[j + 1]) end if. Do not worry about whether values are integral. UC Riverside Recurrence Tutorial ; Recurrence Relations a la Duke ; Graphing with Gnuplot: Homepage ; Addendum to Lecture 2/21: Generic Mergesorts ; Dynamic Programming: Fibonacci ; Maximum Continuous Subarray ; Homework 3: Problem 4. 1: For Example IV. What can we conclude about modified three way merge sort? A. pdf, Due Friday. 7 Recurrence Relations 7 Advanced Counting Techniques 7. 
1 Selection Sort and Bubble Sort 98 Selection Sort 98 Bubble Sort 100. Solving the RR: N T N N N N T(N) 2 ( / 2) = + Note: Divide both side of recurrence relation by N. Cartesian Products and Relations. Recurrences are generally used in divide-and-conquer paradigm. A recurrence relation is an equation that defines a sequence based on a rule that gives the next term as a function of the previous term(s). an = an-1 + 2an-2, and a0 = 0, a1 = 1. I've written it in such a way that I can go either forward or backward. Exact phrase search: Use quotes, e. State whether or not each recurrence is homogeneous. Iteration Method To Solve T(n) AVL Tree Balance Factors. 1 Applications of Recurrence Relations and quiz 3: Study Sections 8. For the function gin your estimate f (x) is O (g (x)), use a simple function g of smallest order. If n 2 then T(n) = 1 else T(n) = 2 T(n 2) + n How to solve a recurrence equation?. Recurrence. APPLIED COMBINATORICS MATH/CSCI 3100/8105 Course Description: Basic counting methods, generating functions, recurrence relations, principle of inclusion-exclusion. Heaps and Heapsort iv. Algorithmic Strategies with examples and problem solving:Brute-force algorithms with. PowerPoint slides from our discussion. linear: a n is a linear combination of a k's homogeneous: no terms occur that aren't. I have been having trouble doing reccurence relation problems in my discrete structures class, this is one question I have been struggling on: Solve the reccurence relation an = 3a(n-1)+10a(n-2) with initial terms a0 = 4 and a_1 = 1. If the statement is wrong, explain why. These types of recurrence relations can be easily solved using Master Method. Best case scenario: The value k is in the rst position. Show that (n lg n) is the solution to the "exact" recurrence (4. Searching and Sorting: Linear Search, Binary Search, Bubble Sort, Selection Sort, Insertion Sort, Shell Sort, Quick Sort, Heap Sort, Merge Sort, Counting Sort, Radix Sort. Recurrence relations I linear recurences, divide-and-conquer recurrences. T(n) = O(1) if n ≤c T(n) = a T(n/b) + D(nd) otherwise Solution Methods Substitution Method. MING GAO ([email protected]) Discrete Mathematics and Its Applications Dec. To begin we're going to look at the idea of using the computer to compute values of recurrence relations. Simple Recurrence Relations ; Recurrence Relation notes from CS331 and CS531; Read chapter 3 of the CLRS book. In other words, we do not begin with an input sequence, instead we generate one by recursing on a set of formulae of the form above. Recall that quicksort involves partitioning, and 2 recursive calls. Unfortunately, this situation is quite typical: algorithms that are efficient for large SEC. , a0, a1, …, an-1, for all integers n with n≥n0 where n0 is a nonnegative integer A sequence is called a solution of a. They use the following general plan: 1. Divide-and-Conquer Paradigm. Divide-and-Conquer Applications Sorting Networks Basic Technique An Introductory Example: Multiplication Recurrence Relations Divide-and-Conquer Strategy The divide-and-conquer strategy solves a problemP by: (1) Breaking P into subproblems that are themselves smaller instances of the same type of problem. In this site, experiences represent “active” learning opportunities, as opposed to readings, which represent “passive” learning opportunities. 5, page 1 and page 2 ; Sorting: Sorting Animations ; Counting Sort ; Radix Sort ; More Radix Sort ; How to. 
Recurrence relations derived via the Chetyrkin--Tkachov method of integration by parts are applied to reduce scalar three-loop bubble (vacuum) diagrams with a mass to a limited number of master integrals. Today we'll see a di erent approach that runs in O(nlgn) and uses one of the most powerful techniques for algorithm design, divide-and-conquer. The Pigeonhole Principle. See CLRS, Chapter 4. 4) We saw a couple of O(n2) algorithms for sorting. Solution: True. a (n-1) + 4. The sequence {a n} is a solution of the. What can we conclude about modified three way merge sort? A. (The list with 1 element is considered sorted. A sequence is said to be a solution of a recurrence, if it consistent with the definition of the recurrence. A recurrence or recurrence relation defines an infinite sequence by describing how to calculate the n-th element of the sequence given the values of smaller elements, as in: T(n) = T(n/2) + n, T(0) = T(1) = 1. The recurrence tree method is a visual Write a recursive algorithm for Selection Sort (or insertion sort or bubble sort). Miller and F. Divide & Conquer Algorithms • Many types of problems are solvable by reducing a problem of size n into some number a of independent subproblems, each of size ≤⎡n/b⎤, where a≥1 and b>1. Sorting and Searching Algorithms. Insertion Sort. Bubble sort uses the so-called "decrease-by-one" technique, a kind of divide-and-conquer. Recurrence Relations (recalling definitions from Chapter 2) Definition: A recurrence relation for the sequence {a n}is an equation that expresses a n in terms of one or more of the previous terms of the sequence, namely, a 0, a 1, …, a n-1, for all integers n with n ≥ n 0, where n 0. The procedure for finding the terms of a sequence in a recursive manner is called recurrence relation. 4) We saw a couple of O(n2) algorithms for sorting. 1 Applications of Recurrence Relations and quiz 3: Study Sections 8. A recurrence relation is an equation that uses recursion to relate terms in a sequence or elements in an array. Insertion sort can be expressed as a recursive procedure as follows. At each iteration, Bubble-Sort checks the array A for an inversion and. To use Graph Theory for solving problems. The simplest form of a recurrence relation is the case where the next term depends only on the immediately previous term. n], where n = length[A]. The bubble sort. Recurrence Relations Deriving recurrence relation for run time of a recursive function Solving recurrence relations by expansion to get run time ∑ = + = N i N i 1 2 ( 1) 1 1 1 0 − − = + = ∑ A A A N N i i 16 Lists, Stacks, Queues Brush up on ADT operations – Insert/Delete, Push/Pop etc. Recurrence relations, sets, hashing and hash tables, trees and binary trees (properties, tree traversal algorithms), heaps, priority queues, and graphs (representation, depth- and breadth-first traversals and applications, shortest-path algorithms, transitive closure, network flows, topological sort). ) Represent the domain of a relation. To perform the operations associated with sets, functions, and relations. We will now use this information to prove the running time of msort is n log(n) + n = O(n log n). They use the following general plan: 1. Now that we know the three cases of Master Theorem, let us practice one recurrence for each of the three cases. A recurrence is an equation or inequality that describes a function in terms of its values on smaller inputs. 
When preparing for technical interviews in the past, I found myself spending hours crawling the internet putting together the best, average, and worst case complexities for search and sorting algorithms so that I wouldn't be stumped when asked about them. = 2(2T(n/3^2) + 1) + 1 = 2^2T(n/3^2) + 3. Let a ≥ 1 and b > 1 be constants, let f( n ) be a function, and let T( n ) be a function over the positive numbers defined by the recurrence.  A(0) = a (base case)  A(n) = A(n-1) + d for n > 0 (recursive part) The above recursively defined function generates the sequence defined on the previous slide. Bubble Sort. ) It uses less than n comparison to merge two sorted lists of n/2 and n/2 elements. Therefore, the time needed to do a bubble sort is quadrupled. Recurrence relations for discrete hypergeometric functions R Álvarez-Nodarse, JL Cardoso Journal of Difference Equations and Applications 11 (9), 829-850 , 2005. JOURNAL OF COMBINATORIAL THEORY, Series B 48, 6-18 (1990) On Graph Invariants Given by Linear Recurrence Relations DAVID N. It is sometimes difficult to come up with a. Note that a recurrence relation of the form 'T(n) = rT(n - 1) + s' has solution 'T(n) = Crⁿ + s / (1 - r)'.
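The notes above are stitched together from several sources, but two of the claims that recur are easy to sanity-check with a few lines of code: the closed form T(n) = C*r^n + s/(1 - r) for the first-order recurrence T(n) = r*T(n-1) + s (valid only when r != 1, with C fixed by the initial condition), and the bubble-sort comparison count a_n = a_{n-1} + (n - 1), a_1 = 0, whose solution is n(n-1)/2. The sketch below is an added illustration, not taken from any of the quoted course pages; the helper names are invented for this example.

```python
import random

def unroll(r, s, t0, n):
    """Iterate T(k) = r*T(k-1) + s for k = 1..n starting from T(0) = t0."""
    t = t0
    for _ in range(n):
        t = r * t + s
    return t

def closed_form(r, s, t0, n):
    """T(n) = C*r**n + s/(1 - r), with C chosen so that T(0) = t0 (requires r != 1)."""
    c = t0 - s / (1 - r)
    return c * r**n + s / (1 - r)

# The closed form quoted above matches direct iteration (here with r=3, s=5, T(0)=2).
for n in range(8):
    assert abs(unroll(3, 5, 2, n) - closed_form(3, 5, 2, n)) < 1e-9

def bubble_sort_comparisons(xs):
    """Plain bubble sort (no early exit) on a copy of xs; returns the number of key comparisons."""
    a = list(xs)
    count = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            count += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return count

# The recurrence a_n = a_{n-1} + (n - 1), a_1 = 0, has solution n(n-1)/2,
# which is exactly how many comparisons the nested loops perform.
for n in (1, 2, 5, 10, 37):
    data = [random.random() for _ in range(n)]
    assert bubble_sort_comparisons(data) == n * (n - 1) // 2
```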
2020-08-05T05:11:36
{ "domain": "fnaarccuneo.it", "url": "http://fnaarccuneo.it/nieu/recurrence-relation-for-bubble-sort.html", "openwebmath_score": 0.5720400810241699, "openwebmath_perplexity": 1014.9281657158043, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9850429125286726, "lm_q2_score": 0.8577681122619885, "lm_q1q2_score": 0.8449383995767705 }
http://www.silentworld.com.au/jobseeker-asset-ernqvzr/51c3b6-geometric-interpretation-of-second-order-partial-derivatives
# Geometric interpretation of second-order partial derivatives
As we saw in Activity 10.2.5 , the wind chill $$w(v,T)\text{,}$$ in degrees Fahrenheit, is … So, the point will be. We’ve already computed the derivatives and their values at $$\left( {1,2} \right)$$ in the previous example and the point on each trace is. Evaluating Limits. Partial Derivatives and their Geometric Interpretation. Once again, you can click and drag the point to move it around. As we saw in the previous section, $${f_x}\left( {x,y} \right)$$ represents the rate of change of the function $$f\left( {x,y} \right)$$ as we change $$x$$ and hold $$y$$ fixed while $${f_y}\left( {x,y} \right)$$ represents the rate of change of $$f\left( {x,y} \right)$$ as we change $$y$$ and hold $$x$$ fixed. First of all , what is the goal differentiation? The first step in taking a directional derivative, is to specify the direction. The wire frame represents a surface, the graph of a function z=f(x,y), and the blue dot represents a point (a,b,f(a,b)).The colored curves are "cross sections" -- the points on the surface where x=a (green) and y=b (blue). The picture to the left is intended to show you the geometric interpretation of the partial derivative. The next interpretation was one of the standard interpretations in a Calculus I class. It describes the local curvature of a function of many variables. For the mixed partial, derivative in the x and then y direction (or vice versa by Clairaut's Theorem), would that be the slope in a diagonal direction? Resize; Like. Note that it is completely possible for a function to be increasing for a fixed $$y$$ and decreasing for a fixed $$x$$ at a point as this example has shown. The result is called the directional derivative . These are called second order partial delta derivatives. Featured. Activity 10.3.4 . Note as well that the order that we take the derivatives in is given by the notation for each these. Also, this expression is often written in terms of values of the function at fictitious interme-diate grid points: df xðÞ dx i ≈ 1 Δx f i+1=2−f i−1=2 +OðÞΔx 2; ðA:4Þ which provides also a second-order approximation to the derivative. In calculus, the second derivative, or the second order derivative, of a function f is the derivative of the derivative of f. You appear to be on a device with a "narrow" screen width (, Derivatives of Exponential and Logarithm Functions, L'Hospital's Rule and Indeterminate Forms, Substitution Rule for Indefinite Integrals, Volumes of Solids of Revolution / Method of Rings, Volumes of Solids of Revolution/Method of Cylinders, Parametric Equations and Polar Coordinates, Gradient Vector, Tangent Planes and Normal Lines, Triple Integrals in Cylindrical Coordinates, Triple Integrals in Spherical Coordinates, Linear Homogeneous Differential Equations, Periodic Functions & Orthogonal Functions, Heat Equation with Non-Zero Temperature Boundaries, Absolute Value Equations and Inequalities. The equation for the tangent line to traces with fixed $$y$$ is then. 67 DIFFERENTIALS. Afterwards, the instructor reviews the correct answers with the students in order to correct any misunderstandings concerning the process of finding partial derivatives. Example 1: … The second and third second order partial derivatives are often called mixed partial derivatives since we are taking derivatives with respect to more than one variable. It turns out that the mixed partial derivatives fxy and fyx are equal for most functions that one meets in practice. Here is the equation of the tangent line to the trace for the plane $$x = 1$$. 
Purpose The purpose of this lab is to acquaint you with using Maple to compute partial derivatives. if we allow $$y$$ to vary and hold $$x$$ fixed. Section 3 Second-order Partial Derivatives. (blue). So we go … if we allow $$x$$ to vary and hold $$y$$ fixed. First, the always important, rate of change of the function. If we differentiate with respect to $$x$$ we will get a tangent vector to traces for the plane $$y = b$$ (i.e. The partial derivative $${f_x}\left( {a,b} \right)$$ is the slope of the trace of $$f\left( {x,y} \right)$$ for the plane $$y = b$$ at the point $$\left( {a,b} \right)$$. 187 Views. Recall the meaning of the partial derivative; at a given point (a,b), the value of the partial with respect to x, i.e. We differentiated each component with respect to $$x$$. Just as with the first-order partial derivatives, we can approximate second-order partial derivatives in the situation where we have only partial information about the function. Likewise the partial derivative $${f_y}\left( {a,b} \right)$$ is the slope of the trace of $$f\left( {x,y} \right)$$ for the plane $$x = a$$ at the point $$\left( {a,b} \right)$$. Recall that the equation of a line in 3-D space is given by a vector equation. 2/21/20 Multivariate Calculus: Multivariable Functions Havens Figure 1. Author has 857 answers and 615K answer views Second derivative usually indicates a geometric property called concavity. If f … There really isn’t all that much to do with these other than plugging the values and function into the formulas above. Background For a function of a single real variable, the derivative gives information on whether the graph of is increasing or decreasing. Geometry of Differentiability. Finally, let’s briefly talk about getting the equations of the tangent line. The point is easy. Purpose The purpose of this lab is to acquaint you with using Maple to compute partial derivatives. Partial derivatives of order more than two can be defined in a similar manner. This is a useful fact if we're trying to find a parametric equation of The Hessian matrix was developed in the 19th century by the German mathematician Ludwig Otto Hesse and later named after him. ... Second Order Partial Differential Equations 1(2) 214 Views. We know from a Calculus I class that $$f'\left( a \right)$$ represents the slope of the tangent line to $$y = f\left( x \right)$$ at $$x = a$$. For reference purposes here are the graphs of the traces. Also the tangent line at $$\left( {1,2} \right)$$ for the trace to $$z = 10 - 4{x^2} - {y^2}$$ for the plane $$x = 1$$ has a slope of -4. The first interpretation we’ve already seen and is the more important of the two. For traces with fixed $$x$$ the tangent vector is. So that slope ends up looking like this, that's our blue line, and let's go ahead and evaluate the partial derivative of f with respect to y. In the next picture we'll show how you can use these vectors to find the tangent plane. So, here is the tangent vector for traces with fixed $$y$$. That's the slope of the line tangent to the green curve. Here the partial derivative with respect to $$y$$ is negative and so the function is decreasing at $$\left( {2,5} \right)$$ as we vary $$y$$ and hold $$x$$ fixed. Next, we’ll need the two partial derivatives so we can get the slopes. reviewed or approved by the University of Minnesota. 
The first derivative of a function of one variable can be interpreted graphically as the slope of a tangent line, and dynamically as the rate of change of the function with respect to the variable Figure $$\PageIndex{1}$$. Technically, the symmetry of second derivatives is not always true. We sketched the traces for the planes $$x = 1$$ and $$y = 2$$ in a previous section and these are the two traces for this point. For this part we will need $${f_y}\left( {x,y} \right)$$ and its value at the point. (CC … Put differently, the two vectors we described above. Figure $$\PageIndex{1}$$: Geometric interpretation of a derivative. Background For a function of a single real variable, the derivative gives information on whether the graph of is increasing or decreasing. ( y = 2\ ) derivatives to calculate the slope of tangent lines are drawn in the picture to trace. Tell where the partials are most positive and most negative solution of ODE of order... Give the concavity of the tangent line with a vector function of one.... Orders is proposed point to move it around variables partial derivatives for a function one! So when the applet first loads, the derivative gives information on the! Sections '' -- the points on the line and a vector that is parallel to left! X ; y ) = 4 1 4 ( x, y ) = 4 4... That they represent tangent lines to limit exists already seen and is the equation we need to do these... Blue dot to see how the vectors, a tangent plane is just. Equation of the partial derivative of f with respect to to be this! 4 ( x ; y ) = 4 1 4 ( x = 1\.... Planes x=a and y=b exactly like you ’ d expect: you simply take the derivatives in given... A derivative ad-mit a similar geometrical interpretation as for functions of one variable we can generalize the derivative. And is the change in height of the tangent vector by differentiating the vector function of two more! = 2\ ) to make it easier on our eyes mixed partial derivatives of functions of one variable can... Mixed second partials and are not equal in general standard interpretations in a similar manner Equations the... Total derivatives lies along the x-axis to be provided this limit exists lines are in the planes x=a y=b. Should never expect that the mixed derivative ( also called a mixed partial derivative is the same that... Are equal for most functions that one meets in practice, is a. The point to move it around put differently, the derivative gives information on whether graph. With respect to \ ( y\ ) move the blue cross section lies along the x-axis a couple important. Not been reviewed or approved by the University of Minnesota the same as for... Equal for most functions that they represent tangent lines to the green curve differentiating the vector.! Two variables ad-mit a similar geometrical interpretation as for functions of one variable we can the. Rate of change of the surface as a vector in the next interpretation was one of the Riemann-Liouville and derivatives... On our eyes let ’ s briefly talk about getting the Equations of the building blocks of is... Differentiated each component with respect to to be provided this limit exists how you can click and drag point... The derivative gives information on whether the graph of is increasing or decreasing But this is a second 1... Go … the second derivative itself has two or more variables the mixed derivative ( also called a partial! Derivative is the more important of the tangent line to traces with fixed (. 
) to vary and hold \ ( y = 2\ ) each these figure A.1 shows the geometric of! Single real variable, the derivative gives information geometric interpretation of second order partial derivatives whether the graph of is or. Or approved by the notation for each these in question geometrical interpretation as for functions two. Is a second order partial derivatives - second order 1 ( 2 )! … the second derivative usually indicates a geometric property called concavity blue cross section lies along the x-axis …... And Caputo derivatives of order more than two can be defined in a similar manner second-order derivatives... The more important of the functions as the variables change have a separate name for and... We should never expect that the mixed partial derivative ) is a second order 1 ( ). If we 're using the vectors are always in the section we will also see if can. Need to do with these other than plugging the values and function into formulas! Equation we need a point as each variable changes we know that if we 're trying to find tangent... Was developed in the picture to the trace for the tangent line to the sections! ; y ) = 4 1 4 ( x = 1\ ) Ludwig... Gives information on whether the graph of the surface as a vector that parallel... Multivariable functions Havens figure 1 vector function of a partial derivative of a curve, 2... Most negative four second order partials in the 19th century by the German Ludwig! Are strictly those of the function vector equation along the x-axis now have multiple ‘ directions ’ in the! Or approved by the notation for each these increment? z differentiating the vector function really just a linear to... Called mixed second partials and are not equal in general the Hessian matrix was developed the. Approximation to a function z = f ( x ; y ) 4! Line to the surface where x=a ( green ) and y=b, in red - second order in... How the vectors are always tangent to the line and a vector equation the matrix! What you mean by FOC and SOC will take a look at it from above to see how vectors! So, here is the functions as the variables change the line tangent to the trace the... To compute partial derivatives of functions of two variables ad-mit a similar manner fxy and are. Page Author ( x\ ) to vary and hold \ ( x\ ) is.... The vectors, a tangent plane: the equation of a partial derivative of the tangent line to with! General, ignoring the context, how do you interpret what the partial derivative lines to left... Section we will also see if you can tell where the partials are most and. In the next interpretation was one of the function traces of the tangent line with a in. X and y direction would give the slope of tangent lines to linear differential equation the. Can use these vectors to find the tangent plane is really just a linear approximation a... Graph of is increasing or decreasing the Equations of the partial derivatives Hesse and later named after him Author 857. Are in the planes x=a and y=b ( blue ) ) to vary and \! First Degree vector equation slopes all we need a point on the surface differential. You mean by FOC and SOC derivative, is to specify the direction Author! Two vectors we described above along with the plane tangent to the left includes these vectors along the!: the equation we need a point on the line of functions of single variables partial derivatives give the of! Trace for the plane \ ( x\ ) fixed turns out that the mixed partial derivative of a of... 
The mixed derivative ( also called a mixed partial derivatives represent the rates of change of the plane. Derivatives change and Caputo derivatives of functions of two variables ad-mit a similar.... One meets in practice go … the second order partial differential Equations 1 ( 2 ) 214 views tangent. The increment? z similar geometrical interpretation as for functions of single partial! A vector in the section we will take a look at the point in question later... Two or more variables more than two can be defined in a similar interpretation. B is zero, so when the applet first loads, the always important, rate of change the. Parallel ( or tangent ) vector is is really just a linear to! The section we will also see if you can click and drag the blue around! Parametric equation of the tangent vector is also just as easy partials in the planes x=a and y=b applet. As for functions of one variable we can get the slopes each component with respect to,. Derivative ( also called a mixed partial derivatives to specify the direction that we.... German mathematician Ludwig Otto Hesse and later named after him how the vectors are tangent. And it is called as differential Calculus ’ in which the function … Author has 857 answers and 615K views., rate of change of the Riemann-Liouville and Caputo derivatives of order more than two can be defined a! Itself has two or more variables a vector that is parallel to the trace for the?... To acquaint you with using Maple to compute partial derivatives of functions of variables. Calculus I ) next picture we 'll change things to make it easier our. Notation for each these blue point the standard interpretations in a similar geometrical interpretation as for functions of variables! 615K answer views second derivative itself has two or more variables the picture to surface! In 2 dimensions can click and geometric interpretation of second order partial derivatives the point to move it around: equation! Is intended to show you the geometric interpretation of the tangent plane is really just a approximation! That one meets in practice is evaluate the partial derivative of a partial derivative a! Briefly talk about getting the Equations of the partial derivative of f respect... Which the function … geometric interpretation of the page Author here are the graphs of second-order! Take a look at a given point a linear approximation to a function of \ ( ). Two or more variables exactly like you ’ d expect: you simply take the derivatives in is given the! Can use these vectors to find the tangent at a point depending upon the that..., rate of change of the building blocks of Calculus is finding.! Is in 2 dimensions? z that for an ordinary derivative 214 views to make it easier our! = 2\ ) concavity of the tangent line to traces with fixed (!
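The worked example above (z = 10 - 4x^2 - y^2 at the point (1,2)) is easy to check symbolically. The following minimal Python/SymPy sketch is illustrative only — the function and point are simply the ones used in the text — and it also confirms that the mixed second order partials agree, as Clairaut's theorem predicts.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 10 - 4*x**2 - y**2

# First order partials: slopes of the traces with y held fixed / x held fixed.
fx = sp.diff(f, x)                     # -8*x
fy = sp.diff(f, y)                     # -2*y
print(fx.subs({x: 1, y: 2}))           # -8, slope of the trace in the plane y = 2
print(fy.subs({x: 1, y: 2}))           # -4, slope of the trace in the plane x = 1

# The four second order partials collected in the Hessian matrix.
H = sp.hessian(f, (x, y))
print(H)                               # Matrix([[-8, 0], [0, -2]])

# Clairaut: the mixed partials agree for this smooth function.
print(sp.diff(f, x, y) == sp.diff(f, y, x))   # True
```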
2021-05-14T00:20:05
{ "domain": "com.au", "url": "http://www.silentworld.com.au/jobseeker-asset-ernqvzr/51c3b6-geometric-interpretation-of-second-order-partial-derivatives", "openwebmath_score": 0.8644579648971558, "openwebmath_perplexity": 290.58403056849437, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9850429107723176, "lm_q2_score": 0.8577681104440172, "lm_q1q2_score": 0.8449383962794456 }
https://math.stackexchange.com/questions/2283230/partial-derivative-of-fx-with-respect-to-x-1?noredirect=1
# Partial derivative of $f(x)$ with respect to $x_1$ Let $f: \Bbb R^n \to \Bbb R$ be a scalar field defined by $$f(x) = \sum_{i=1}^n \sum_{j=1}^n a_{ij} x_i x_j .$$ I want to calculate $\frac{\partial f}{\partial x_1}$. I found a brute force way of calculating $\frac{\partial f}{\partial x_1}$. It goes as follows: First, we eliminate all terms that do not contain $x_1$. This leaves \begin{align*} \frac{\partial f}{\partial x_1} &= \frac{\partial}{\partial x_1} \Big( a_{11} x_1 x_1 + \sum_{j=2}^n a_{1j} x_1 x_j + \sum_{i=2}^n a_{i1} x_i x_1 \Big)\\ &= 2a_{11}x_1 + \sum_{j=2}^n a_{1j} x_j + \sum_{i=2}^n a_{i1}x_i \\ &= \sum_{j=1}^n a_{1j} x_j + \sum_{i=1}^n a_{i1} x_i. \end{align*} This is a pretty nice result on its own. But then I realized that this problem is related to inner products. Specifically, if we rewrite the terms $f(x)$ and $\frac{\partial f}{\partial x_1}$ as inner products we get $$f(x) = \langle x, Ax \rangle$$ and $$\frac{\partial f}{\partial x_1} = \langle (A^T)^{(1)}, x\rangle + \langle A^{(1)}, x \rangle = \langle (A^T + A)^{(1)}, x \rangle$$ where $A^{(1)}$ denotes the first column of the matrix $A$. This suggests that there is a way to circumvent the explicit calculations with sums and instead use properties of the inner product to calculate $\frac{\partial}{\partial x_1}\langle x, Ax \rangle$. However, I wasn't able to find such a proof. If it's possible, how could I go about calculating the partial derivative of $f$ with respect to $x_1$ only using the properties of the inner product? • Isn't $f$ an ordinary quadratic form? Have a look at this answer. – StubbornAtom May 16 '17 at 10:13 The following could be something that you might accept as a "general rule". We just compute the derivative of $\langle x,Ax\rangle$ explicitly, using our knowledge about inner products. Choose some direction $v$, i.e. $v$ is a vector with $\|v\|=1$. Then $$\lim_{h\to 0} \frac{\langle x+hv,A(x+hv)\rangle-\color{blue}{\langle x,Ax\rangle}}{h}.$$ Because of the bilinear nature of the inner product we find $$\langle x+hv,A(x+hv)\rangle = \color{blue}{\langle x,Ax\rangle} + h\langle v,Ax\rangle+h\langle x,Av\rangle +\color{red}{h^2\langle v,Av\rangle}.$$ The blue terms cancel out, while the red term will vanish during the limit process. We are left with $$\langle v,Ax\rangle+\langle x,Av\rangle$$ which can be seen as the derivative of $\langle x,Ax\rangle$ in the direction $v$. Your special case of computing the partial derivative $\partial x_1$ is asking to derive $\langle x,Ax\rangle$ in the direction of $e_1$, which is the vector $(1,0,\cdots,0)^\top$. Plug it in to get $$(*)\qquad\langle e_1,Ax\rangle+\langle x,Ae_1\rangle.$$ Such "axis aligned vectors" like $e_1$ are good at extracting coordinates or rows/columns. So, the first term of $(*)$ gives you the first coordinate of $Ax$. This is what you wrote as $\langle (A^\top)^{(1)},x\rangle$. The second term gives you the inner product of $x$ with the first column of $A$. You wrote this as $\langle A^{(1)},x\rangle$. The partial derivative with respect to $x_1$ can be computed as a directional derivative: $$\frac{\partial f }{\partial x_1}(x) = \frac{d}{dt}(f(x+te_1))|_{t=0}$$ (where $e_1=(1,0,\dots,0)$.)
For $f:x\mapsto \langle x,Ax\rangle$, we obtain \begin{align}\frac{\partial f }{\partial x_1}(x) & = \frac{d}{dt}(f(x+te_1))|_{t=0}=\frac{d}{dt}\langle x+te_1,A(x+te_1)\rangle|_{t=0} \\ & = \frac{d}{dt}\left(\langle x,Ax \rangle + t\langle e_1, Ax\rangle + t \langle x,Ae_1\rangle +t^2 \langle e_1,Ae_1\rangle \right)|_{t=0} \\ & = \langle e_1, Ax\rangle + \langle x,Ae_1\rangle = \langle A^Te_1,x\rangle + \langle Ae_1,x\rangle = \langle (A^T+A)e_1,x\rangle, \end{align} which is what you had obtained, since $Ae_1$ is the first column of $A$ for any matrix $A$. The same proof works for the other partial derivatives (and more generally any directional derivative, if you replace $e_1$ by a vector $v$).
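Both answers arrive at the same closed form, so a quick numerical spot check is easy. The sketch below is a minimal illustration (the matrix $A$, the point $x$ and the step size are arbitrary choices, not from the thread): it compares $\langle (A^T+A)e_1, x\rangle$ with a central finite-difference approximation of $\frac{d}{dt}f(x+te_1)$ at $t=0$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
x = rng.normal(size=n)

f = lambda v: v @ A @ v            # f(x) = <x, Ax>

e1 = np.zeros(n)
e1[0] = 1.0

# Closed form from the answers: first component of (A^T + A) x.
exact = (A.T + A) @ e1 @ x

# Central difference of t -> f(x + t e1) at t = 0.
h = 1e-6
approx = (f(x + h * e1) - f(x - h * e1)) / (2 * h)

print(exact, approx)   # the two numbers agree up to floating-point error
```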
2020-10-26T22:33:54
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2283230/partial-derivative-of-fx-with-respect-to-x-1?noredirect=1", "openwebmath_score": 0.9989399313926697, "openwebmath_perplexity": 118.02447772096212, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9850429147241161, "lm_q2_score": 0.8577681068080748, "lm_q1q2_score": 0.8449383960876129 }
https://math.stackexchange.com/questions/3243470/find-the-change-of-basis-matrix-for-this-non-standard-basis-of-mathbbr2
# Find the change of basis matrix for this non standard basis of $\mathbb{R^2}$ I have a question on a revision problem sheet. Let $$e_1=(1,0)$$ and $$e_2=(0,1)$$ be the standard basis of $$\mathbb{R^2}$$. And let $$e_1'=(1,1)$$ and $$e_2'=(1,-1)$$ be a non-standard basis of $$\mathbb{R^2}$$. Find the change of basis matrix that converts from the standard basis $$\{e_1,e_2\}$$ to the non-standard basis $$\{e_1',e_2'\}$$. My answer was the following matrix: $$\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \quad$$ I got this from finding a matrix that maps each standard basis element to the non standard basis element. The answers on the problem sheet however say it should be: $$\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix} \quad$$ Am I wrong? This made me wonder: are bases ordered or unordered sets? Intuitively I would think that geometrically the order doesn't matter, but the orientation would, so I would assume that order does in fact matter? In fact, it’s neither. You can easily see that neither matrix is correct: applying the change of basis to $$e_1'$$ should produce $$(1,0)^T$$, but neither matrix gives this result. You’ve made a fairly common mistake here. The issue isn’t the order of the basis elements, but the direction in which you’re performing the coordinate mapping. Since a linear transformation is determined by its action on the basis vectors, the change-of-basis matrix $$B$$ is the solution to the equation $$\begin{bmatrix}1&0\\0&1\end{bmatrix} = B\begin{bmatrix}1&1\\1&-1\end{bmatrix}.$$ That is, it’s the inverse of the matrix that you constructed, which is easily found to be $$B = \frac12\begin{bmatrix}1&1\\1&-1\end{bmatrix}.$$ Another way to look at it is in terms of the inputs and outputs to the transformation represented by the matrix. The product of a matrix and vector is a linear combination of the columns of the matrix. In the matrix that you constructed, those columns are expressed relative to the standard basis, so the product is also expressed relative to the standard basis. For a change of basis, however, you want the product to be expressed relative to the new basis, not the old one. This means that columns of the change-of-basis matrix must also be expressed relative to this new basis, so constructing the matrix by writing down the coordinates of $$e_1'$$ and $$e_2'$$ relative to the standard basis can’t be right. The matrix you gave converts vectors from the non-standard basis to the standard basis. The inverse of that matrix would answer the question. I get: $$\begin{pmatrix}\dfrac 12&\dfrac12\\\dfrac 12&-\dfrac 12\end{pmatrix}$$.
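To see the accepted answer's point concretely, here is a tiny NumPy check (illustrative only): the change-of-basis matrix is the inverse of the matrix whose columns are $e_1'$ and $e_2'$, and it must send $e_1'$ to $(1,0)^T$ and $e_2'$ to $(0,1)^T$ — which neither matrix from the question does.

```python
import numpy as np

# Columns are the new basis vectors written in the standard basis.
P = np.array([[1.0,  1.0],
              [1.0, -1.0]])

# Change-of-basis matrix from the standard basis to {e1', e2'}.
B = np.linalg.inv(P)                 # equals 0.5 * [[1, 1], [1, -1]]
print(B)

print(B @ np.array([1.0,  1.0]))     # [1. 0.] -> coordinates of e1' in the new basis
print(B @ np.array([1.0, -1.0]))     # [0. 1.] -> coordinates of e2' in the new basis

# Neither candidate matrix from the problem sheet behaves this way:
M1 = np.array([[1, 1], [1, -1]])
M2 = np.array([[1, 1], [-1, 1]])
print(M1 @ np.array([1, 1]), M2 @ np.array([1, 1]))   # [2 0] [2 0]
```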
2019-08-20T13:11:42
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3243470/find-the-change-of-basis-matrix-for-this-non-standard-basis-of-mathbbr2", "openwebmath_score": 0.914162814617157, "openwebmath_perplexity": 90.68018319649502, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9850429138459387, "lm_q2_score": 0.8577681031721325, "lm_q1q2_score": 0.8449383917527812 }
https://math.stackexchange.com/questions/588802/what-steps-should-i-be-doing-to-determine-if-this-series-is-convergent-or-diverg?noredirect=1
# What steps should I be doing to determine if this series is convergent or divergent? The problem is: $\sum_{n=1}^{\infty} \frac{1}{n(n+3)}$ The first thing I did was use the divergence test which didn't help since the result of the limit was 0. If I multiply it through, the result is $\sum_{n=1}^{\infty} \frac{1}{n^2+3n}$ I'm wondering if I can consider this as a p-series and simply use the largest power. In this case the power would be 2 which would mean it converges. If this is the correct way to go about this, how do I find where it converges to. • Convergence can be dealt with with p-series. Are you wanting to find the sum? – Git Gud Dec 2 '13 at 0:10 • Yes, if it is convergent, I would like to find its sum. – ConfusingCalc Dec 2 '13 at 0:13 • @ConfusingCalc use Partial fractions! – Alec Teal Dec 2 '13 at 0:14 Bound it above! Note $n(n+3)=n^2+3n>n^2$ so $\frac{1}{n(n+3)}<\frac{1}{n^2}$ Each term is clearly > 0 btw. So! $\sum\frac{1}{n(n+3)}<\sum\frac{1}{n^2}$ which you ought to know (but can trivially show) converges. Finally a question I can answer here! • I'm currently reading up on using partial fractions to get my sum. Are you able to edit it into your answer so I can check my result against yours when I get done with reading about it and trying it? – ConfusingCalc Dec 2 '13 at 0:23 • @ConfusingCalc math.stackexchange.com/a/588808/66223 it's there. With partial fractions you'll get $\frac{1}{n(n+3)}$ the tripple-equal meaning "always equal to, an identity" $\frac{A}{n}+\frac{B}{n+3}$ - find A and B, it turns out N is negative, so you get a "telescoping series" where terms cancel out, almost every term cancels with another. – Alec Teal Dec 2 '13 at 0:25 • I think I incorrectly solved my partial sum. $\frac{1}{n(n+3)} = A(n+3) + B(n)$ How does n turn negative? I wish Hagen would have gone slightly more in depth on how he got what he did. – ConfusingCalc Dec 2 '13 at 1:08 • You should have gotten A=1/3 and B=-1/3, anyway you now need to click this link: lmgtfy.com/?q=telescoping%20sums, it'll help if you write the sum out as the first 5 terms then some ...s and the last 5 @ConfusingCalc – Alec Teal Dec 2 '13 at 1:10 • Thanks for the extra effort. Off to read that link! – ConfusingCalc Dec 2 '13 at 1:18 Note that $\frac1{n(n+3)}=\frac13\left(\frac1n-\frac1{n+3}\right)$ so this is a telescoping sum $$\sum_{n=1}^m \frac1{n(n+3)}=\frac13\left(1+\frac 12+\frac13-\frac1{m+1}-\frac1{m+2}-\frac1{m+3}\right)\to \frac{11}{18}.$$ • It took me awhile to get back to you, but after watching a video tutorial on this, I now understand where all this information came from. Thank you! – ConfusingCalc Dec 2 '13 at 18:05 First, use estimations $$n^2 + 3n \geq n^2 \implies \frac{1}{n^2 + 3n } \leq \frac{1}{n^2}$$ Secondly, show that $\sum \frac{1}{n^2}$ converges. In fact, it does. More generally, $$\sum \frac{1}{n^p} \; \; \text{converges when} \; \; p > 1$$ Third, use the comparison theorem: if $a_n \geq b_n$ for all $n$ and $\sum a_n$ converges, then $\sum b_n$ must converge as well (Proof?) Now, as an application of this theorem, with $a_n = \frac{1}{n^2}$ and $b_n = \frac{1}{n^2 + 3n}$, we notice that your series $$\sum \frac{1}{n^2 + 3n}$$ must converge. • Thank you for taking the time to answer :) after going through multiple tutorials your answer makes perfect sense! 
– ConfusingCalc Dec 2 '13 at 18:04 $$n^2 + 3n > n^2 \implies \frac{1}{n^2 +3n} < \frac{1}{n^2}$$ Use the Comparison Test which states that if $\sum a_n$ and $\sum b_n$ are such that $0 \le a_n \le b_n$, if $\sum b_n$ converges, then $\sum a_n$ converges. Since $0 < \sum \frac{1}{n^2 +3n} < \sum \frac{1}{n^2}$ and $\sum \frac{1}{n^2}$ converges, then $\sum \frac{1}{n^2 +3n}$ converges. Edit. Note: $\sum \frac{1}{n^2}$ converges since it is a p-series $$f(x) = \frac{1}{X^p}$$ with $p > 1$ and hence it converges. • Thank you for showing me this, I didn't realize what I was attempting to do was simply the comparison test. – ConfusingCalc Dec 2 '13 at 18:03 • No problem. My current professor is a stickler for theorems so I always try to provide all details in prep. for his tests. – Zhoe Dec 2 '13 at 18:21
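The telescoping closed form is easy to confirm with exact rational arithmetic; this short sketch (added for illustration, using Python's fractions module) checks the partial-sum formula from the accepted answer and the limit 11/18.

```python
from fractions import Fraction

def partial_sum(m):
    return sum(Fraction(1, n * (n + 3)) for n in range(1, m + 1))

def closed_form(m):
    # (1/3) * (1 + 1/2 + 1/3 - 1/(m+1) - 1/(m+2) - 1/(m+3))
    return Fraction(1, 3) * (Fraction(11, 6)
                             - Fraction(1, m + 1)
                             - Fraction(1, m + 2)
                             - Fraction(1, m + 3))

for m in (3, 10, 100):
    print(m, partial_sum(m), partial_sum(m) == closed_form(m))   # always True

print(float(partial_sum(5000)), 11 / 18)   # both are approximately 0.6111
```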
2019-09-23T09:05:48
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/588802/what-steps-should-i-be-doing-to-determine-if-this-series-is-convergent-or-diverg?noredirect=1", "openwebmath_score": 0.8085034489631653, "openwebmath_perplexity": 438.5763705497658, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9850429107723175, "lm_q2_score": 0.8577681049901037, "lm_q1q2_score": 0.8449383909071067 }
https://math.stackexchange.com/questions/202452/why-is-predicate-all-as-in-allset-true-if-the-set-is-empty
# Why is predicate “all” as in all(SET) true if the SET is empty? Can anyone explain why the predicate all is true for an empty set? If the set is empty, there are no elements in it, so there is not really any elements to apply the predicate on? So it feels to me it should be false rather than true. • Roughly: if $\forall x \in S (R(x))$ is false, then there must be $x \in S$ such that $R(x)$ is false. Since there is no such $x$ for empty $S$, the statement $\forall x \in S (R(x))$ is true. – Levon Haykazyan Sep 25 '12 at 21:23 • Okay... but you can also argue the other way around: if ∀x∈S(R(x)) is true, then all x∈S must have R(x) true. Since there is no such x for empty set S, the statement ∀x∈S(R(x)) is false. ... What happened? – bodacydo Sep 25 '12 at 21:27 • The statement $\forall x \in S (R(x))$ does not imply that there is an $x$ for which $R(x)$ holds. It just says that whenever you give me an $x$ from $S$, I can demonstrate that $R(x)$ holds. In the end, it is just a matter of convention. However presuming vacuous universal quantification to be true results in smoother theory. I am just explaining the logic behind it. – Levon Haykazyan Sep 25 '12 at 21:40 • I see. I don't think I understand but I'll nod. – bodacydo Sep 25 '12 at 21:49 • en.wikipedia.org/wiki/Vacuous_truth – sdcvvc Jan 3 '13 at 7:32 "All of my children are rock stars." "If we go through the list of my children, one at a time, you will never find one that is not a rock star." Do you want the above two sentences to mean the same thing? Also, do you want "Not all of my children are rock stars." to mean the same as "At least one of my children is not a rockstar"? Because in the situation that I have no children, the last statement is false, so we would want "all of my children are rock stars" to be true to preserve dichotomy. • Congratulations on your children's success! – Quinn Culver Sep 25 '12 at 22:11 • Thank you, they have all topped the charts and become pop icons! All of them. – alex.jordan Sep 26 '12 at 2:23 • I'd rather have a predicate on "all of my children" to have an undefined result since you have no children. And then I would extend set theory to have undefined-ness result in FALSE value because it only makes sense. You have no children, therefore you have no rockstar/scientist/redheaded/smart/awesome children. – pwned Aug 2 '13 at 11:06 • Since I wrote this, things have changed. None of my children are rock stars, but they might be some day. – alex.jordan Feb 26 '18 at 15:13 It hinges on the Law of the Excluded Middle. The claim itself is either TRUE or FALSE, one way or the other, not both, not neither. Pretend that I am asserting "For every $x\in S$, property $P(x)$ holds." How could you declare me to be a liar? You would have to produce an element of the set ($S=\varnothing$, in this case) that does not have the property $P(x)$. Only then can you declare my assertion FALSE. Since you cannot do that here, my assertion is TRUE. I essentially spoke the truth by NOT speaking a lie. • Okay that's pretty interesting argument. But why doesn't it work the other way around? -- Pretend that I'm asserting "There exists an x∈S such that P(x) doesn't hold." How could you declare me to be a liar? You would have to produce an element of the set (S=∅, in this case) that does have the property P(x). Since S is the empty set, you can't convince me that there is an x such that P(x) holds, therefore my assertion is true. ... I'm easily getting lost in logic. 
– bodacydo Sep 25 '12 at 21:48 • If you're asserting "there exists such-and-such" and I declare you to be a liar, it is your job to show me an actual something and defend the proposition that this particular something is a such-and-such. Someone claiming $\exists$ is a liar by default (just as someone who claims $\lor$ is), whereas someone claiming $\forall$ is right unless his opponent can find a counterexample (just as someone who claims $\land$ can relax until his opponent names one of the conjuncts that he claims is false). – Henning Makholm Sep 25 '12 at 21:53 • @HenningMakholm Suppose I claim that there is a well-ordering of the reals, and you declare me be to a liar ... – Peter Smith Sep 25 '12 at 22:03 • @PeterSmith: Then you make a call upon your good friend Axiom of Choice, and he goes out into his back room and does something magical whereupon he returns with a something that you then present to me. I have no idea how he does it. – Henning Makholm Sep 25 '12 at 22:12 • @HenningMakholm Indeed! :-) The serious point, though, is that issues pro or contra a constructivist reading of disjunction/existentials are one thing, and issues about the treatment of vacuous quantifiers surely something else. – Peter Smith Sep 25 '12 at 22:23 It could be taken the other way, but it's simpler this way. Say we believe that all rubies are red, and we consider some some collection of rubies, called $R$; say $R$ is all my rubies. We would like to conclude that all my rubies are red. This seems very reasonable, since all rubies are red. But with your idea, this conclusion might be false! At best we can say that all my rubies are red, if I have any rubies. This qualification doesn't add anything to the analysis. It doesn't illuminate any subtle point. It just complicates the discussion with an uninteresting special case. Since the purpose of formal logic is to model plausible reasoning as closely and as simply as possible, we agree to the convention that "all my rubies are red" is deemed to be true even when I have no rubies, so that we don't have to qualify a lot of claims with "… if there are any such rubies". It kind of makes sense. If I understand correctly, I think you want to prove: $\forall x (x\in \phi \rightarrow Q)$ where: $\forall x (x\notin \phi)$ Q is any proposition whatsoever. Proof: Suppose $y\in \phi$. We want to prove that $Q$ is true for any proposition $Q$ whatsoever. Suppose to the contrary that $Q$ is false. Applying the definition of $\phi$ to $y$, we obtain the contradiction $y\notin \phi$. Therefore, by contradiction, $Q$ must be true. We have: $y\in \phi \rightarrow Q$ Generalizing, we obtain, as required: $\forall x (x\in \phi \rightarrow Q)$ Another approach: the 'vacuous truth' for $\forall$ is roughly the logical equivalent of an empty product being defined as 1 or an empty sum being defined as 0. Just as we want $\sum_{i=1}^{n+1} a_i = a_{n+1} + \sum_{i=1}^{n} a_i$ (and want this to hold in every case, even the 'base case' where $n=0$) and want $\prod_{i=1}^{n+1} a_i = a_{n+1}\cdot \prod_{i=1}^{n} a_i$, so too we want $\forall x\in (S\cup \{z\})\ P(x) \Longleftrightarrow \bigl(\ (\forall x\in S\ P(x))\ \wedge P(z)\bigr)$ to hold even in the 'base case' where $S$ is empty. You should be able to convince yourself (through some relatively straightforward logical manipulation) that this is requires defining $\forall x\in\emptyset \ P(x)$ to be true for all predicates $P()$. 
An extension of the comment to bwsullivan's answer: Suppose for all elements in a set P(x) holds and P(x) doesn't hold, i.e. $\forall x \in A: P(x) \land \forall x \in A: \neg P(x)$ Suppose $y \in A$ then $P(y)\land \neg P(y)$ So $\forall y: \neg y \in A$ I.e. $A = \emptyset$ Now if you made $\forall x \in A: P(x)$ or $\forall x \in A: \neg P(x)$ not true for the empty set, you couldn't conclude this.
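Since the question is phrased in terms of a programming predicate, it is worth noting that this is exactly the convention built into mainstream languages; the snippet below shows Python's behaviour and the De Morgan duality that the answers rely on.

```python
# The universal quantifier over an empty collection is vacuously true...
print(all(x > 0 for x in []))    # True

# ...while the existential quantifier over an empty collection is false.
print(any(x > 0 for x in []))    # False

# "Not all elements satisfy P"  <=>  "some element fails P".
for S in ([1, 2, -3], [1, 2, 3], []):
    lhs = not all(x > 0 for x in S)
    rhs = any(not (x > 0) for x in S)
    print(S, lhs == rhs)         # True in every case, including the empty set
```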
2019-01-16T22:20:24
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/202452/why-is-predicate-all-as-in-allset-true-if-the-set-is-empty", "openwebmath_score": 0.7076988220214844, "openwebmath_perplexity": 463.01347840150584, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9850429138459387, "lm_q2_score": 0.8577681013541613, "lm_q1q2_score": 0.8449383899620014 }
http://math.stackexchange.com/questions/205707/finding-a-recursion-formula/205730
# Finding a recursion formula We have a sequence $$a_1=\sqrt{6}\\a_2=\sqrt{6+\sqrt{6}}\\a_3=\sqrt{6+\sqrt{6+\sqrt{6}}}\\...$$ a) Find a recursion formula for $a_{n+1}$ b) Find a limit Attempt: a) Tried finding the recursion formula: $$a_{n+1}=\sqrt{6+a_n}$$ I am not sure about it because the problem does not say where n starts. So if n starts at zero $a_0$ is not defined. Or is it implied that n starts at 1 since the sequence starts with $a_1$? b) Right away I assume that $a_{n}$ converges to some L which I will find. $n+1$ goes to infinity when n goes to infinity. So I assume that $a_{n+1}$ converges to the same L. As a result I obtain the following: $$L=\sqrt{6+L}\\-L^2+L+6=0$$ Solving for L I get $L=3,-2$. What would be an argument that the sequence converges to 3, but not to $-2$? Thanks. - It's generally considered implicit that your sequence starts with some specified boundary condition; in your case they would be $a_1=\sqrt{6}$, since you were given that. –  Steven Stadnicki Oct 2 '12 at 1:24 Detailed Hint: Note that $6<3^2$; therefore, $a_1<3$. Suppose that $a_k<3$, then $a_{k+1}=\sqrt{6+a_k}<\sqrt{6+3}=3$. Thus, we have shown that $a_k<3$ for all $k\ge1$ by induction. Define $f_0(x)=\sqrt{x}$ and $f_{n+1}(x)=\sqrt{6+f_n(x)}$. Note that each $f_n$ is monotonic increasing and that $a_n=f_n(0)$ and $a_{n+1}=f_n(6)$. Thus, $a_n<a_{n+1}$. Therefore, $a_n$ is an increasing sequence, bounded above, which means that $a=\lim\limits_{n\to\infty}a_n$ exists. Now that you know that the limit exists, you should be able to evaluate it. - First observe that the $a_n$ increase. Now you need to confect the right induction argument to show $a_n < 3$ for all $n$. To do this, you need $a_n < 3 - q_n$ for the right sequence $q_n$. -
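A quick numerical illustration of part (b) — not part of the original exchange — is to iterate the recursion from $a_1=\sqrt{6}$ and watch the terms increase toward 3; since every term is positive, the other root $-2$ of $L^2-L-6=0$ can never be the limit.

```python
import math

a = math.sqrt(6)                  # a_1
for n in range(1, 11):
    print(n, a)
    a = math.sqrt(6 + a)          # a_{n+1} = sqrt(6 + a_n)

# The candidate limits solve L = sqrt(6 + L), i.e. L^2 - L - 6 = 0,
# whose roots are 3 and -2; only 3 is consistent with a_n > 0.
```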
2014-10-23T22:22:16
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/205707/finding-a-recursion-formula/205730", "openwebmath_score": 0.9874078035354614, "openwebmath_perplexity": 136.43703115358593, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9828232899814556, "lm_q2_score": 0.8596637559030337, "lm_q1q2_score": 0.8448975608544346 }
https://math.stackexchange.com/questions/1634375/choosing-a-substitution-to-evaluate-int-fracx3-sqrtx2dx
# Choosing a substitution to evaluate $\int \frac{x+3}{\sqrt{x+2}}dx$ Is there any other value you can assign to the substitution variable to solve this integral? $$\int \frac{x+3}{\sqrt{x+2}}dx$$ Substituting $u = x + 2$: $$du = dx; u +1 = x+3 ,$$ and we get this new integral that we can then split into two different ones: $$\int \frac{u + 1}{\sqrt{u}}du = \int \frac{u}{\sqrt{u}}du + \int \frac{1}{\sqrt{u}}du .$$ We can substitute again $s = \sqrt u$ and get two immediate integrals: $$s = \sqrt{u}; \quad ds = \frac{1}{2\sqrt{u}}du; \quad 2s^2 =u .$$ Substituting back $u$ to $s$ and $x$ to $u$ we get this result, $$s^2 + \ln{\left | \sqrt{u} \right |} = u + \ln{\left | \sqrt{u} \right |} = x+2+\ln{\left | \sqrt{x+2} \right |},$$ which doesn't look quite to be right. What am I doing wrong? I'm pretty unsure about the second substitution, $2s^2 = u$. Is it correct? let's make it easier than that! Use this: $$x + 2 = t^2 ~~~~~~~~~~~ x+3 = t^2 + 1 ~~~~~~~ \text{d}x = 2t\ \text{d}t$$ Obtaining $$I = \int\frac{t^2 + 1}{t}\ 2t\ \text{d}t = 2\int t^2 + 1\ \text{d}t = \frac{2}{3}t^3 + 2t$$ Coming back to $x$, having $t = \sqrt{x+2}$ and you'll have $$I = \frac{2}{3}\sqrt{x+2}(x+5)$$ We don't need to apply the second substitution (in fact, it is circular): Using the general rule $\int u^m = \frac{1}{m + 1} u^{m + 1}$ (for $m \neq -1$), we have $$\int \sqrt{u} \,du = \int u^{1 / 2} du = \frac{2}{3} u^{3 / 2} + C$$ and likewise $$\int \frac{du}{\sqrt{u}} = 2 u^{1 / 2} + C'.$$ On the other hand, we could instead at the first step make the rationalizing substitution $$v = \sqrt{x + 2},$$ so that $x = v^2 - 2$ and hence $dx = 2 v \,dv$. This has the advantage that the resulting integral expression is rational (in fact, in this case, polynomial): $$\int \frac{(v^2 - 2) + 3}{v} (2v) \, dv = 2 \int (v^2 + 1) \,dv .$$ • How did you come up with this solution? To me, it looks just incredibly clever; is there a specific rule to identify this kind of cases? – Johnny Bueti Jan 31 '16 at 12:15 • Very often if an integrand involves a radical, substituting by setting a new variable to be the radical expression improves the situation, and often will turn algebraic integrands into rational ones, which can then be handled with the Method of Partial Fractions and standard integrals one already knows. For example, one can integrate $\int \sqrt{\frac{1-x}{1+x}} \,dx$ by first substituting $u = \sqrt{\frac{1-x}{1+x}}$. Rearranging gives that $x$ is a rational expression in $u$, and hence so is $dx$. (I recommend this as an exercise, by the way, as it illustrates the technique nicely.) – Travis Willse Jan 31 '16 at 12:38 • Perhaps the downvoter would explain their objection? – Travis Willse Jan 31 '16 at 12:39 • Thank you. I tried to substitute $u = \sqrt{\frac{1-x}{1+x}}$, differentiate and explicit the $x$ value in respect to $u$ to substitute it in the $u$ differential and get the new integral written entirely in respect to $u$, but it looks... well: $\int {-\frac{1}{1 + \left ( \frac{1-u^2}{1+u^2} \right )^2}du}$ – Johnny Bueti Jan 31 '16 at 12:58 • I think there may be an error in your simplification, but as it stands, we can multiply the numerator and denominator of the integrand by $(1 + u^2)^2$ and expand, giving $-\int\frac{(1 + u^2)^2}{(1 + u^2)^2 + (1 - u^2)^2} du$. 
– Travis Willse Jan 31 '16 at 15:14 An other way is to write $$\int\frac{x+3}{\sqrt{x+2}}dx=\int\frac{x+2+1}{\sqrt{x+2}}dx$$ $$=\int\frac{x+2}{\sqrt{x+2}}dx+\int\frac{1}{\sqrt{x+2}}dx=\int\sqrt{x+2}dx+\int\frac{1}{\sqrt{x+2}}dx$$ $$=I_1+I_2.$$ In $I_1$ we put the change of variable $u=x+2,\ du=dx$ and in $I_2$ we put $w=\sqrt{x+2},\ dw=\frac{1}{2\sqrt{x+2}}dx$, after calculations we obtain $$\int\frac{x+3}{\sqrt{x+2}}dx=\frac{2}{3}\left(x+2\right)^{1/2}(x+5)+C,$$ as mentioned in the above result. Let $\sqrt{x+2}=t\implies \frac{dx}{2\sqrt{x+2}}=dt$ or $dx=2t\ dt$ $$\int \frac{x+3}{\sqrt{x+2}}\ dx$$$$=\int \frac{t^2-2+3}{t}(2t\ dt)$$ $$=2\int (t^2+1)\ dt$$ $$=2\left(\frac{t^3}{3}+t\right)+C$$ $$=2\left(\frac{(x+2)^{3/2}}{3}+\sqrt{x+2}\right)+C$$ $$=\frac 23(x+5)\sqrt{x+2}+C$$ • Isn't $dt = \frac{1}{2\sqrt{x+2}}dx$ ? – Johnny Bueti Jan 31 '16 at 13:00 • yes, you are right, but substituting $\sqrt{x+2}=t$ one should get $dx=2t\ dt$ – Harish Chandra Rajpoot Jan 31 '16 at 13:22
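Any of these substitutions can be double-checked symbolically; the one-off SymPy sketch below (added for convenience) verifies that $\frac{2}{3}(x+5)\sqrt{x+2}$ is an antiderivative of the integrand.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
integrand = (x + 3) / sp.sqrt(x + 2)

claimed = sp.Rational(2, 3) * (x + 5) * sp.sqrt(x + 2)
print(sp.simplify(sp.diff(claimed, x) - integrand))   # 0, so claimed' = integrand

F = sp.integrate(integrand, x)
print(sp.simplify(F - claimed))    # differs from SymPy's antiderivative by a constant
```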
2020-05-28T17:51:40
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1634375/choosing-a-substitution-to-evaluate-int-fracx3-sqrtx2dx", "openwebmath_score": 0.9637373089790344, "openwebmath_perplexity": 283.0534016891102, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9828232874658904, "lm_q2_score": 0.8596637541053281, "lm_q1q2_score": 0.8448975569250674 }
https://math.stackexchange.com/questions/3993921/can-two-distinct-sets-of-n-numbers-between-1-and-1-have-the-same-sum-and-sum-of
# Can two distinct sets of N numbers between -1 and 1 have the same sum and sum of squares? Is is possible to find two different sets of numbers $$\{ a_1, a_2, \dots, a_N\}$$ and $$\{ b_1, b_2, \dots, b_N\}$$ with $$a_i,b_i\in[-1,1]$$ such that $$\sum a_i = \sum b_i$$ and $$\sum a_i^2 = \sum b_i^2$$ are both true at the same time? EDIT: Also, no number in each set may be $$0$$, i.e. $$a_i,b_i \neq 0$$. John Omielan has posted an example for $$N=3$$, before I edited the question. For $$N=1$$, it is obvious that this is impossible, because $$a_1 = b_1$$ (sum over elements in the set) and $$a_1 \neq b_1$$ (the sets must be different) cannot be true at the same time. Is there a minimum number of $$N$$ for which it is possible to find such sets, or is it never possible? • Try to find an example for $N=2$. The same example can be used for any $N>2$ by adding zeros. Jan 21, 2021 at 10:03 • Thank you for your suggestion, but I am unable to find an example for $N=2$. Do you have an example in mind? Jan 21, 2021 at 10:17 As you stated, it's not possible for $$N = 1$$. It's also not possible for $$N = 2$$, where I assume the order doesn't count. To see this, assume there is a solution to get $$a_1 + a_2 = b_1 + b_2 = c \tag{1}\label{eq1A}$$ $$a_1^2 + a_2^2 = b_1^2 + b_2^2 = d \tag{2}\label{eq2A}$$ Squaring the first $$2$$ sides of \eqref{eq1A} and subtracting \eqref{eq2A} gives $$2a_1 a_2 = 2b_1 b_2 \implies a_1 a_2 = b_1 b_2 = e \tag{3}\label{eq3A}$$ Using Vieta's formulas, or just simply expanding a quadratic, i.e., $$(x - r_1)(x - r_2) = x^2 - (r_1 + r_2)x + r_1 r_2$$, we have that $$a_1$$ and $$a_2$$, as well as $$b_1$$ and $$b_2$$, are the roots of $$x^2 - cx + e = 0 \tag{4}\label{eq4A}$$ Since \eqref{eq4A} has only $$2$$ roots (e.g., by seeing it's a parabola or, more formally, using the Fundamental theorem of algebra), this means $$a_1$$ and $$a_2$$ must be the same, up to order, as $$b_1$$ and $$b_2$$. However, for $$N = 3$$, we have $$\frac{1}{2} - \frac{1}{2} + 0 = \frac{1}{\sqrt{3}} - \frac{1}{2\sqrt{3}} - \frac{1}{2\sqrt{3}} = 0 \tag{5}\label{eq5A}$$ $$\left(\frac{1}{2}\right)^2 + \left(\frac{1}{2}\right)^2 + 0 = \left(\frac{1}{\sqrt{3}}\right)^2 + \left(\frac{1}{2\sqrt{3}}\right)^2 + \left(\frac{1}{2\sqrt{3}}\right)^2 = \frac{1}{2} \tag{6}\label{eq6A}$$ For any $$N \gt 3$$, as stated in Kavi Rama Murthy's question comment, examples can be constructed by just adding zeros. Update: Since the question now states $$0$$ are not allowed as values, then as dm63's comment suggests, we can just simply use values which sum to $$0$$ and with $$b_i = -a_i$$. For example, with $$N = 3$$, we could use $$\frac{1}{2} - \frac{1}{3} - \frac{1}{6} = -\frac{1}{2} + \frac{1}{3} + \frac{1}{6} = 0 \tag{7}\label{eq7A}$$ $$\left(\frac{1}{2}\right)^2 + \left(-\frac{1}{3}\right)^2 + \left(-\frac{1}{6}\right)^2 = \left(-\frac{1}{2}\right)^2 + \left(\frac{1}{3}\right)^2 + \left(\frac{1}{6}\right)^2 = \frac{7}{18} \tag{8}\label{eq8A}$$ To not have the values adding to $$0$$, have the $$a_i$$ add up to a value relatively close to $$0$$, with $$b_1$$ being equal to this (e.g., have $$|b_1| \lt |a_i|$$ for $$1 \le i \le 3$$). Also, have $$b_2 \gt 0$$ and $$b_3 = -b_2$$. 
Then, using that the sums of squares are equal, solving for $$b_2$$ gives $$b_2 = \sqrt{\frac{a_1^2 + a_2^2 + a_3^2 - b_1^2}{2}} \tag{9}\label{eq9A}$$ For example, we have $$-\frac{1}{2} + \frac{1}{3} + \frac{1}{4} = \frac{1}{12} + \frac{\sqrt{60}}{12\sqrt{2}} - \frac{\sqrt{60}}{12\sqrt{2}} = \frac{1}{12} \tag{10}\label{eq10A}$$ $$\left(-\frac{1}{2}\right)^2 + \left(\frac{1}{3}\right)^2 + \left(\frac{1}{4}\right)^2 = \left(\frac{1}{12}\right)^2 + \left(\frac{\sqrt{60}}{12\sqrt{2}}\right)^2 + \left(-\frac{\sqrt{60}}{12\sqrt{2}}\right)^2 = \frac{61}{144} \tag{11}\label{eq11A}$$ You can easily extend this to $$N \gt 3$$. You can also ensure that all $$|a_i|$$ and $$|b_i|$$ are unique, but the algebra becomes more complicated and messy. • Thank you very much! Your answer made me realize that there should be an additional requirement for the elements in each set, which I unfortunately did not realize when I first posted the question. No element in either set may be 0, I will update my original question. Jan 21, 2021 at 10:44 • Yes, I am certain that this was the only condition I had forgotten. Sorry! An example for $N=3$ without $0$ would be extremely helpful! Jan 21, 2021 at 10:50 • How about selecting any set of $a_i$ such that the average (and sum) of $a_i$ is zero and then pick $b_i = -a_i$ ? – dm63 Jan 21, 2021 at 10:52 • @dm63 Thanks for the feedback. I was working on a more interesting example, but decided to use your suggestion instead. Jan 21, 2021 at 11:02 • @sunoukami FYI, I've added more details to my answer, such as how to have the sums of the $a_i$ and $b_i$ not be $0$, along with an example. Jan 21, 2021 at 22:00 The equality condition on the sums and sums of squares is affine invariant, so you can look for solutions in integers and then move them to the required range, which can be any interval. $$(1,4,6,7) \text{ and } (2,3,5,8)$$ do the job for $$n=4$$. Dividing each by $$10$$ puts them all in $$(0,1)$$. For lots more along these lines see The Prouhet-Tarry-Escott problem and generalized Thue-Morse sequences at https://www.intlpress.com/site/pub/files/_fulltext/journals/joc/2016/0007/0001/JOC-2016-0007-0001-a005.pdf There you will see that for $$n=8$$ you can make the sums of cubes match too. Using the fact that $$9^2+2^2=6^2+7^2$$ it is easy to see that for $$N=4$$ the two sets $$\{a_i\} = \{-0.9, -0.2, 0.2, 0.9\} \\ \{b_i\} = \{-0.7, -0.6, 0.6, 0.7\} \\$$ satisfy all the requirements.
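For readers who want a concrete check, the following small sketch (illustrative only; it uses the $b_2$ formula from equation (9)) confirms that each proposed pair of sets has equal sums and equal sums of squares.

```python
from math import sqrt, isclose

def same_sum_and_squares(a, b):
    return (isclose(sum(a), sum(b)) and
            isclose(sum(x * x for x in a), sum(x * x for x in b)))

# Construction from the first answer: b = (b1, t, -t) with t from equation (9).
a = (-1/2, 1/3, 1/4)
b1 = sum(a)                                        # 1/12
t = sqrt((sum(x * x for x in a) - b1 ** 2) / 2)    # = sqrt(60) / (12 * sqrt(2))
print(same_sum_and_squares(a, (b1, t, -t)))        # True

# Integer example (1,4,6,7) vs (2,3,5,8), scaled into (0,1).
print(same_sum_and_squares([v / 10 for v in (1, 4, 6, 7)],
                           [v / 10 for v in (2, 3, 5, 8)]))          # True

# Example built from 9^2 + 2^2 = 6^2 + 7^2.
print(same_sum_and_squares((-0.9, -0.2, 0.2, 0.9),
                           (-0.7, -0.6, 0.6, 0.7)))                  # True
```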
2023-03-25T10:24:13
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3993921/can-two-distinct-sets-of-n-numbers-between-1-and-1-have-the-same-sum-and-sum-of", "openwebmath_score": 0.8782036900520325, "openwebmath_perplexity": 143.24730209228076, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9828232914907946, "lm_q2_score": 0.8596637487122111, "lm_q1q2_score": 0.8448975550846506 }
https://math.stackexchange.com/questions/3359178/show-that-a-vector-w-in-v-is-in-the-span-v-1-dots-v-m
# Show that a vector $w \in V$ is in the span $(v_1, \dots , v_m)$ $$(2.A.10)$$ Suppose $$v_1, \dots , v_m$$ is a linearly independent list in $$V$$ and $$w \in V$$. Prove that if $$v_1 + w, \dots v_m + w$$ is a linearly dependent list, then $$w \in$$ span$$(v_1, \dots , v_m)$$. Is $$w$$ being added to each vector in the linearly independent list? I don't see how that implies $$w \in$$ span$$(v_1, \dots , v_m)$$. edit: \begin{align*} a_1(v_1 + w) + a_2(v_2 + w) + \dots + a_m(v_m + w) &= 0 \\ a_1v_1 + a_2v_2 + \dots + a_mv_m + w(a_1 + a_2 + \dots + a_m) &= 0 \\ \end{align*} Thus $$w = -\frac{a_1v_1}{a_1} -\frac{a_2v_2}{a_2} - \dots - -\frac{a_mv_m}{a_m}$$ is a linear combination of the vectors $$v_1, \dots , v_m$$ and is in the span$$(v_1, \dots , v_m)$$. • Yes. Use the definition of linear dependence, and solve for $w$. – JDZ Sep 17 '19 at 1:00 • @JDZ I updated my post, I think I got it – Evan Kim Sep 17 '19 at 1:22 • There's an error in your update. The denominators should be $\sum_i a_i$. – Chris Custer Sep 17 '19 at 1:27 • I thought I was just dividing each side by $(a_1 + a_2 + \dots + a_m)$. I didn't write that correctly? – Evan Kim Sep 17 '19 at 1:37 • Thus $w = -\frac{a_1v_1}{k} -\frac{a_2v_2}{k} - \dots - -\frac{a_mv_m}{k}$ where $k = -\sum_{1}^{m} a_i$ is a linear combination of the vectors $v_1, \dots , v_m$ and is in the span$(v_1, \dots , v_m)$. – Evan Kim Sep 17 '19 at 1:46 $$v_i+w's$$ are dependent means there are scalars $$a_1,a_2, \cdots, a_m$$ not all zero such that $$a_1(v_1+w)+a_2(v_2+w)+\cdots+a_m(v_m+w)=0$$ That is $$a_1v_1+a_2v_2+\cdots+a_mv_m+w(a_1+a_2+\cdots+a_m)=0 \tag1$$ which implies $$w=\frac{a_1}{k}v_1+\frac{a_2}{k}v_2+\cdots+\frac{a_m}{k}v_m \in \text{span}(v_1,v_2,\cdots,v_m)$$ where $$k=-(\sum a_i)$$ Note that $$\sum a_i \neq 0$$. Otherwise, $$(1)$$ implies, using independence of $$v_i$$, all $$a_i's$$ are zero. Which contradict our assumption!
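A numerical illustration of the argument (the specific vectors below are arbitrary choices, not from the question): when the $v_i + w$ are dependent, the dependency coefficients recover $w$ as a combination of the $v_i$, exactly as in the displayed formula with $k = -\sum a_i$.

```python
import numpy as np

# Linearly independent v1, v2 in R^3, and a w chosen so that the v_i + w are dependent.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
w = -(1 * v1 + 2 * v2) / 3          # w lies in span(v1, v2)

M = np.column_stack([v1 + w, v2 + w])
print(np.linalg.matrix_rank(M))      # 1, so v1 + w and v2 + w are dependent

# Dependency coefficients a = (a1, a2) span the null space of M.
_, _, Vt = np.linalg.svd(M)
a = Vt[-1]                           # M @ a is (numerically) zero
k = -a.sum()                         # k = -(a1 + a2), nonzero here
w_recovered = (a[0] * v1 + a[1] * v2) / k
print(np.allclose(w_recovered, w))   # True: w is in span(v1, v2)
```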
2020-07-08T02:32:16
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3359178/show-that-a-vector-w-in-v-is-in-the-span-v-1-dots-v-m", "openwebmath_score": 0.9991831183433533, "openwebmath_perplexity": 159.57450115222684, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9828232889752296, "lm_q2_score": 0.8596637505099167, "lm_q1q2_score": 0.8448975546889376 }
https://math.stackexchange.com/questions/1602290/is-it-true-that-a-ring-has-no-zero-divisors-iff-the-right-and-left-cancellation
# Is it true that a ring has no zero divisors iff the right and left cancellation laws hold? This is the definition of zero divisor in Hungerford's Algebra: A zero divisor is an element of $R$ which is BOTH a left and a right zero divisor. It follows a statement: It is easy to verify that a ring $R$ has no zero divisors if and only if the right and left cancellation laws hold in $R$; that is, for all $a,b,c\in R$ with $a\neq 0$, $$ab=ac~~~\text{ or }~~~ba=ca~~~\Rightarrow~~~ b=c.$$ I think it is not true. But I can't find a counterexample. • Given that it is a "if and only if" statement, is there a direction where you think it is not true? And why do you think it is not true? – Thomas Andrews Jan 6 '16 at 18:00 • If $a$ is a zero divisor, then $ab=a\cdot 0$ for some non-zero $b$, so there is no cancellation. Cancellation implies no zero divisors. So does no zero-divisors imply cancellation? No cancellation implies left- and right- zero divisors, but not necessarily both. – Thomas Andrews Jan 6 '16 at 18:05 ## 2 Answers Lemma: A ring has a left (or right) zero-divisor if and only if it has a zero divisor. Proof: Assume $ab=0$ for $a,b\neq 0$. If $ba=0$, you are done - $a$ is both a left and right zero divisor. If $ba\neq 0$, then $a(ba)=(ab)a=0$ and $(ba)b=b(ab)=0$, so $ba$ is a left and right zero divisor. Now it is much easier to prove your theorem. If $ax=ay$ and $R$ has no zero-divisors, then $a(x-y)=0$. But, by the lemma, $R$ also has no left-zero divisors, so either $a=0$ or $x-y=0$. Similarly for $xa=ya$. On the other hand, if cancellation is true, then $a\cdot b=0=a\cdot 0$ means that either $a=0$ or $b=0$. So there can't be any left zero divisors, and thus no zero divisors. • Thanks! It is surprised the existence of one-sided zero divisor implies that two-sided. – bfhaha Jan 6 '16 at 18:52 • Yeah, surprised Hungerford leaves that out, unless this is later dealt with in an exercise. – Thomas Andrews Jan 6 '16 at 18:53 Suppose $ab = 0$ with $a, b \ne 0$. Either $ba = 0$ (which means $a$ and $b$ are zero-divisors), or $ba \ne 0$, in which case $ba$ is a zero-divisor because $a(ba) = 0$ and $(ba)b = 0$.
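The lemma in the accepted answer is easy to watch in action in a non-commutative ring; the sketch below (an illustration using 2×2 integer matrices, not from the thread) exhibits ab = 0 with ba ≠ 0 and then checks that ba is a zero divisor on both sides.

```python
import numpy as np

a = np.array([[0, 1],
              [0, 0]])
b = np.array([[1, 0],
              [0, 0]])

print(a @ b)    # zero matrix: a is a left zero divisor, b a right zero divisor
print(b @ a)    # nonzero, so the pair (a, b) is only one-sided

# As in the lemma: since ba != 0, the element ba is a two-sided zero divisor.
ba = b @ a
print(a @ ba)   # a(ba) = (ab)a = 0   -> ba is a right zero divisor
print(ba @ b)   # (ba)b = b(ab) = 0   -> ba is a left zero divisor
```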
2020-04-06T16:35:38
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1602290/is-it-true-that-a-ring-has-no-zero-divisors-iff-the-right-and-left-cancellation", "openwebmath_score": 0.9367731213569641, "openwebmath_perplexity": 116.7026444033103, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9828232879690036, "lm_q2_score": 0.8596637505099168, "lm_q1q2_score": 0.8448975538239217 }
http://mathhelpforum.com/trigonometry/121223-proving-trigonometric-identity.html
# Thread: Proving a Trigonometric Identity 1. ## Proving a Trigonometric Identity Hi everyone: Is there any way to prove the following identity: tan(A/2)=(sinA)/(1+cosA) without drawing a diagram? I know that it is a basic trig identity, but I don't know how to prove it by manipulating the formulas. Thanks! 2. ## use sin(x/2) / cos(x/2) $\tan{\frac{A}{2}}=\frac{\sin{A}}{1+\cos{A}}$ This is derived by using $\tan{\frac{A}{2}}=\frac{\sin\frac{A}{2}}{\cos\frac{A}{2}}$ $ \frac {\pm\sqrt{\frac{1-\cos{A}}{2}}} {\pm\sqrt{\frac{1+\cos{A}}{2}}} $ $=\pm\sqrt\frac{\left(1-\cos{A}\right)\times\left(1+\cos{A}\right)} {\left(1+\cos{A}\right)\times\left(1+\cos{A}\right)} $ $=\pm\sqrt\frac{1-\cos^2{A}} {\left(1+\cos{A}\right)^2} $ $=\pm\vert\frac{\sin{A}}{1+\cos{A}}\vert$ $1+\cos{A}$ is never negative, so the sign of the fractional expression depends only on the sign of $\sin{A}$ 3. Hello, Kelvin! We need these two identities: . $\begin{array}{ccc}\sin\frac{\theta}{2} &=& \sqrt{\dfrac{1-\cos\theta}{2}} \\ \\[-4mm]\cos\frac{\theta}{2} &=& \sqrt{\dfrac{1+\cos\theta}{2}} \end{array}$ Prove: . $\tan\frac{A}{2} \:=\:\frac{\sin A}{1+\cos A}$ $\tan\frac{A}{2} \;=\;\frac{\sin\frac{A}{2}}{\cos\frac{A}{2}} \;=\;\frac{\sqrt{\dfrac{1-\cos A}{2}}} {\sqrt{\dfrac{1 + \cos A}{2}}} \;=\; \sqrt{\frac{1-\cos A}{1+\cos A}}$ Multiply by $\frac{1+\cos A}{1 + \cos A}$ . . $\sqrt{\frac{1-\cos A}{1+\cos A}\cdot\frac{1+\cos A}{1 + \cos A}} \;=\;\sqrt{\frac{1-\cos^2\!A}{(1+\cos A)^2}} \;=\;\sqrt{\frac{\sin^2\!A}{(1+\cos A)^2}} \;=\; \frac{\sin A}{1 + \cos A}$ 4. $tan(\frac{A}{2})=\frac{sinA}{1+cosA}$ Manipulating R.H.S. $\frac{2sin(\frac{A}{2}){cos(\frac{A}{2})}}{1+cos^2(\frac{A}{2})-sin^2(\frac{A}{2})}$ Note that $1=sin^2(\frac{A}{2})+cos^2(\frac{A}{2})$ Introduce into the equation and you should solve your problem. Edit: In the time I took to write this I was beaten by 2 other forum users; I really need to take typing classes. On the other hand, at least I provided a different method. 5. Thank you so much! 6. Okay, Similar Question. Prove that: tan(A/2)=(1+sinA-cosA)/(1+sinA+cosA) I tried replacing tan(A/2) with sinA/(1+cosA), but I could not find a way to add the extra components to both the numerator and denominator. Gosh this is frustrating. 7. Hello, Kelvin! This one is tricky . . . Prove: . $\tan\frac{A}{2} \;=\;\frac{1+\sin A-\cos A}{1+\sin A+\cos A}$ Multiply by $\frac{1+\sin A - \cos A}{1 + \sin A - \cos A}$ . . $\frac{1 + \sin A - \cos A}{1 + \sin A + \cos A}\cdot{\color{blue}\frac{1 + \sin A - \cos A}{1 + \sin A - \cos A}} \;=\;\frac{(1+\sin A - \cos A)^2}{(1+\sin A)^2 - \cos^2\!A}$ . . $=\; \frac{1 + 2\sin A - 2\cos A + \sin^2\!A - 2\sin A\cos A + \cos^2\!A}{1 + 2\sin A + \sin^2\!A - \cos^2\!A}$ . . $=\; \frac{1 + \overbrace{\sin^2\!A + \cos^2\!A}^{\text{This is 1}} + 2\sin A - 2\cos A - 2\sin A\cos A}{2\sin A + \sin^2\!A + \underbrace{1 - \cos^2\!A}_{\text{This is }\sin^2\!A}}$ . . $=\; \frac{2 + 2\sin A - 2\cos A - 2\sin A\cos A}{2\sin A + 2\sin^2\!A} \;=\;\frac{2(1 + \sin A - \cos A - \sin A\cos A)}{2\sin A(1 + \sin A)} $ . . $=\; \frac{1 + \sin A - \cos A - \sin A\cos A}{\sin A(1 + \sin A)} \;=\;\frac{(1 + \sin A) - \cos A(1 + \sin A)}{\sin A(1 + \sin A)}$ . . $=\; \frac{(1+\sin A)(1 - \cos A)}{\sin A(1 + \sin A)} \;=\;\frac{1-\cos A}{\sin A}$ Multiply by $\frac{1+\cos A}{1 + \cos A}\!:\quad \frac{1-\cos A}{\sin A}\cdot{\color{blue}\frac{1+\cos A}{1+\cos A}} \;=\; \frac{1-\cos^2\!A}{\sin A(1 + \cos A)}$ . . $=\;\frac{\sin^2\!A}{\sin A(1 + \cos A)} \;=\;\frac{\sin A}{1 + \cos A}$ Finally: .
$\frac{\sin A}{1 + \cos A} \;=\;\tan\frac{A}{2} \quad \ldots\;There!$ 8. Be careful with what angle you're working on; those identities are not true for all angles.
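The two identities proved in this thread are easy to sanity-check numerically. A minimal Python sketch (an illustration added here, not part of the thread), sampling angles away from the singularities at $A=\pm\pi$:

```python
import math

# Check tan(A/2) = sin A / (1 + cos A) and
#       tan(A/2) = (1 + sin A - cos A) / (1 + sin A + cos A)
# on a grid of angles where both right-hand sides are defined.
for deg in range(-170, 171, 7):
    A = math.radians(deg)
    lhs = math.tan(A / 2)
    rhs1 = math.sin(A) / (1 + math.cos(A))
    rhs2 = (1 + math.sin(A) - math.cos(A)) / (1 + math.sin(A) + math.cos(A))
    assert math.isclose(lhs, rhs1, rel_tol=1e-12, abs_tol=1e-12)
    assert math.isclose(lhs, rhs2, rel_tol=1e-12, abs_tol=1e-12)
print("both identities hold on the sampled angles")
```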
2017-01-17T12:27:31
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/trigonometry/121223-proving-trigonometric-identity.html", "openwebmath_score": 0.9525760412216187, "openwebmath_perplexity": 1066.3951208349135, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9828232894783427, "lm_q2_score": 0.8596637469145054, "lm_q1q2_score": 0.8448975515877918 }
https://puzzling.stackexchange.com/questions/118964/can-you-eat-a-4-dimensional-rubiks-cube
# Can you eat a 4-dimensional Rubik's Cube? • Start by eating any piece except the central one • Next, eat a piece orthogonally adjacent to the previously eaten piece • (repeat) • The last piece to get eaten in this way must be the centre piece can you eat all the 81 pieces of a 3x3x3x3 Rubik's Hypercube? To make the task at least a little easier to visualise, here's an animated schematic showing what might happen to the stickers (which are actually 3d cubes glued to all the outwards-facing sides of the 4-dimensional cubelets) if you start by eating a corner piece: Original image (public domain) from Wikimedia Commons. (Due to the inadequacies of our low-dimensional universe, only 7 sides have their stickers shown in the picture: the eighth side is adjacent (on the "outside") to all the other sides except the blue one, so it would look pretty weird.) • Reading this question without a context feels very weird. I think "eat" word can be explained a little to not shock people like me :) Dec 13, 2022 at 20:25 • @Minot Well, it's my best attempt at conveying the concept of "find a Hamiltonian path on a 4-dimensional grid" without ever mentioning graphs, lattices, or "you can only visit each (hyper)cubelet once", AND it makes for a lovely question title; what's not to like :-) – Bass Dec 13, 2022 at 22:33 Yes. In fact we can say something much more general. First, for any simple graphs $$G = (\mathcal{V}_G, \mathcal{E}_G)$$ and $$H = (\mathcal{V}_H, \mathcal{E}_H)$$, define the Cartesian product $$G \operatorname{\square} H$$ of $$G$$ and $$H$$ to be the (simple) graph with • vertex set $$\mathcal{V}_{G \times H} := \mathcal{V}_G \times \mathcal{V}_H$$, and • an edge between vertices $$(v,w), (v', w') \in \mathcal{V}_{G \times H}$$ if and only if • $$v = v'$$ and there is an edge in $$\mathcal{E}_H$$ from $$w$$ to $$w'$$, or • $$w = w'$$ and there is an edge in $$\mathcal{E}_G$$ from $$v$$ to $$v'$$. If we denote by $$P$$ the path graph T 0 1 o---o---o on three vertices, then the graph $$\Gamma$$ • whose vertex set is the set of pieces in the 4D Rubik's cube, and • for which two vertices share an edge if and only if the pieces are adjacent, is just $$P^{\square 4} = P \operatorname{\square} P \operatorname{\square} P \operatorname{\square} P .$$ In this language, the problem is whether you can find a path in $$\Gamma$$ that visits every vertex exactly once, i.e., a Hamiltonian path, that ends (equivalently, starts) at the middle piece, $$(0, 0, 0, 0)$$. Consider any Hamiltonian path $$(v_1, \ldots, v_9)$$ of $$P \operatorname{\square} P$$, e.g., $$((1, 1), (0, 1), (T, 1), (T, 0), (T, T), (0, T), (1, T), (1, 0), (0, 0)).$$ Then, the path $$((v_1, v_1), \ldots, (v_1, v_9), (v_2, v_9), \ldots (v_2, v_1), (v_3, v_1), \ldots, (v_9, v_9))$$ is a Hamiltonian path for $$(P \operatorname{\square} P) \operatorname{\square} (P \operatorname{\square} P) = \Gamma$$, and if $$v_9 = (0, 0)$$, then the path in $$\Gamma$$ at $$(v_9, v_9) = (0, 0, 0, 0)$$, i.e., the center piece. (Remark: Deusovi's solution is not of this form for any Hamiltonian path on $$P \operatorname{\square} P$$.) 
If we identify $$(a, b, c, d)$$ with the integer whose balanced ternary representation is $$abcd_{\operatorname{bal}\!3}$$, the Hamiltonian path of $$\Gamma$$ determined by the above Hamiltonian path on $$P \operatorname{\square} P$$ is $$$$40, 37, 34, 33, 32, 35, 38, 39, 36,\\ 9, 12, 11, 8, 5, 6, 7, 10, 13,\\ -14, -17, -20, -21, -22, -19, -16, -15, -18,\\ -27, -24, -25, -28, -31, -30, -29, -26, -23,\\ -32, -35, -38, -39, -40, -37, -34, -33, -36,\\ -9, -6, -7, -10, -13, -12, -11, -8, -5,\\ 22, 19, 16, 15, 14, 17, 20, 21, 18,\\ 27, 30, 29, 26, 23, 24, 25, 28, 31, \\ 4, 1, -2, -3, -4, -1, 2, 3, 0 .$$$$ In this notation, pieces $$A$$ and $$B$$ are adjacent iff $$|A - B|$$ is a power of $$3$$, and the center piece is $$0$$. More generally: If $$G$$ and $$H$$ are simple graphs with respective Hamiltonian paths $$(v_1, \ldots, v_k)$$ and $$(w_1, \ldots, w_\ell)$$, then $$((v_1, w_1), \ldots, (v_1, w_\ell), (v_2, w_\ell), \ldots (v_2, w_1), (v_3, w_1), \ldots, (v_k, w_\bullet))$$ is a Hamiltonian path on $$G \operatorname{\square} H$$ starting (equivalently, by reversing the path, ending) at $$(v_1, w_1)$$. The path ends at $$(v_k, w_1)$$ if $$k$$ is even and $$(v_k, w_\ell)$$ if $$k$$ is odd. Example More generally, the analogous graph for the $$n$$D Rubik's cube is $$P^{\square n}$$. The $$2$$D Rubik's cube is plainly edible (via a path ending at the center), so induction shows that so is any even-dimensional Rubik's cube. An analogue of the usual negative solution for the $$3$$D case shows that any odd-dimensional Rubik's cube cannot be eaten ending at the center, i.e., precisely the even-dimensional Rubik's cubes are edible via a path ending at the center. • That interesting dimension parity factoid in the final spoiler block was actually my original motivation for posting this puzzle, glad you caught it! I realised only later I had unintentionally sacrificed generality on the altar of readability: by my wording of the rules, the 0-dimensional case doesn't follow the pattern anymore, because the center piece is the only piece, and therefore starting is impossible. Oh well. :-) – Bass Dec 9, 2022 at 23:54
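The boustrophedon construction in the accepted answer can be verified by brute force. A short Python sketch (illustrative only; it uses $-1$ in place of the balanced-ternary digit T):

```python
from itertools import product

# Hamiltonian path on the 3x3 grid {-1,0,1}^2 ending at the centre (0,0),
# exactly the 9-step path used in the answer above.
p2 = [(1,1),(0,1),(-1,1),(-1,0),(-1,-1),(0,-1),(1,-1),(1,0),(0,0)]

# Boustrophedon lift: walk p2 in the "outer" coordinates, sweeping the
# "inner" coordinates forwards and backwards alternately.
path4 = []
for i, v in enumerate(p2):
    inner = p2 if i % 2 == 0 else p2[::-1]
    path4.extend(v + w for w in inner)

# Checks: all 81 pieces visited once, consecutive pieces are orthogonally
# adjacent, and the last piece eaten is the centre.
assert sorted(path4) == sorted(product((-1, 0, 1), repeat=4))
def adjacent(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) == 1
assert all(adjacent(a, b) for a, b in zip(path4, path4[1:]))
assert path4[-1] == (0, 0, 0, 0)
print("ate all 81 pieces, finishing with the centre")
```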
2023-02-02T08:06:43
{ "domain": "stackexchange.com", "url": "https://puzzling.stackexchange.com/questions/118964/can-you-eat-a-4-dimensional-rubiks-cube", "openwebmath_score": 0.8006802201271057, "openwebmath_perplexity": 367.7858082358575, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. Yes\n2. Yes", "lm_q1_score": 0.9828232869627773, "lm_q2_score": 0.8596637487122111, "lm_q1q2_score": 0.8448975511920783 }
https://math.stackexchange.com/questions/1586153/proof-verification-another-convergent-sequence-proof
# Proof verification: another convergent sequence proof Note: Sorry, I posted this earlier with a glaringly obvious error - here's the improved version: The statement I'm trying to prove is: Let $(x_n)$ be a convergent sequence and $K \in \Bbb N$. Let $(y_n)$ be the sequence defined by $y_n = x_{n+K}$. Then $(y_n)$ is also convergent and we have $\lim_{n \to \infty}y_n=\lim_{n \to \infty}x_n.$ Proof: Let $\lim_{n \to \infty}x_n=x^*$. Since $(x_n)$ is convergent, by definition we have that given $\epsilon > 0 \quad \exists \quad N \in \Bbb N: \quad \left\lvert x_n - x^* \right\rvert < \epsilon \quad \forall \quad n \ge N$. We know that $K \in \Bbb N$, therefore, $n+K>N$. So we know for definite that $$\left\lvert x_{n+K} - x^* \right\rvert < \epsilon \quad \implies \quad \left\lvert y_n - x^* \right\rvert < \epsilon$$ Hence, we can conclude that by definition of convergent sequences, $(y_n)$ is convergent with the limit $x^*$ which is also the limit of $(x_n)$ as $n \to \infty$. So the original statement is true. $\square$ Any confirmation of correctness/corrections would be greatly appreciated. Thank you.
• Your proof seems to be correct. Basically, it's just applying the definition of convergence. Dec 22 '15 at 23:14
It's basically correct, but you need to add a little more wording to make your reasoning entirely clear. When you state what it means for the original sequence to converge to $x^*$, you're not actually picking an arbitrary $\epsilon$ and associated $N$: you're just saying that for each $\epsilon>0$ there is an $N$ with a certain property. At that point you might continue something like this: Now let $\epsilon>0$ be arbitrary, and let $N\in\Bbb N$ be such that $|x_n-x^*|<\epsilon$ whenever $n\ge N$. $K\in\Bbb N$, so $K\ge 0$, and therefore $n+K\ge n\ge N$ whenever $n\ge N$. In particular, this ensures that $|x_{n+K}-x^*|<\epsilon$ whenever $n\ge N$. Since $\epsilon>0$ was arbitrary, this implies that $\lim\limits_{n\to\infty}x_{n+K}=x^*$ as well.
Your work is correct. Notice that every subsequence of a convergent sequence converges to the same limit. Formally: if $(x_n)_n$ converges to $a$, then for every strictly increasing function $\varphi: \mathbb{N} \rightarrow \mathbb{N}$, the sequence $(x_{\varphi(n)})_n$ also converges to $a$.
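A small numerical illustration of the statement being proved (not from the thread): the same $N$ that works for $(x_n)$ at a given $\epsilon$ also works for the shifted sequence, exactly as the proof argues.

```python
# Illustration: x_n = 1/(n+1) -> 0, K = 5, y_n = x_{n+K}.
K, eps = 5, 1e-2
x = lambda n: 1.0 / (n + 1)
N = next(n for n in range(10**6) if x(n) < eps)   # a valid N, since x_n is decreasing
# n >= N implies n + K >= N, so the same N witnesses |y_n - 0| < eps:
assert all(x(n + K) < eps for n in range(N, N + 1000))
print("N =", N, "works for x_n and for y_n = x_{n+K}")
```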
2021-10-25T23:27:14
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1586153/proof-verification-another-convergent-sequence-proof", "openwebmath_score": 0.9919632077217102, "openwebmath_perplexity": 92.32732831108565, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9828232909876816, "lm_q2_score": 0.8596637451167995, "lm_q1q2_score": 0.8448975511184884 }
https://math.stackexchange.com/questions/312703/existence-of-solution-to-differential-equations
# Existence of Solution to Differential Equations. $f$ is locally Lipschitz in $y$ if for every $(t_0, y_0) \in (c,d) \times U$, there exists a neighborhood V of $(t_0, y_0)$, (i.e. $V = \{f(t,y) \in(c,d) \times U : ||t-t_0|<a \ \text{and} \ |y-y_0|\leq b\}$) and a constant K = K(V) such that $||f(t,x)-f(t,y)||\leq K||x-y||$ for any $(t,x),(t,y) \in V$ Existence Theorum: Consider the IVP $y'=f(t,y), \ y(t_0) = y_0$ (1) If f is continuous on $(c,d) \times U$ and locally Lipschitz then the IVP (1) had a unique local solution. More precisely, there exists a neighborhood $\Omega$ of $(t_0, y_0)$, that is $\Omega=\{(t,y)\in (c,d) \times U:|t-t_0|\leq a, |y-y_0|\leq b\}$ such that $f$ is Lipschitz in y with Lipschitz constant K on $\Omega$ and let M be a number such that $||f(t,y)||\leq M$ for $(t,y)\in \Omega$. Choose $0<\alpha<\min[\frac{1}{K},\frac{b}{M},a]$. Then there exists a unique solution of the IVP (1) valid on $[t_0-\alpha , t_0 +\alpha ]$. QUESTION) The function $f(y) = 1 + y^2$ is locally Lipschitzian. Consider the IVP $y' = 1 + y^2 ,y(0) = 0$: (a) Using the rectangle in the hypothesis of the local existence and uniqueness Theorem, compute $\alpha$ in terms of a, b, M, and the Lipschitz constant of $f$. (b) Is it possible for your found in part (a) to be greater than $\frac{\pi}{2}$? Justify your answer. [Do not compute explicitly the solution of the IVP. For this question, I was unable to start on part (a). Part (b), I realize that the solution is $y=tan(x)$, so crossing $\frac{\pi}{2}$ would mean that the function is not continuous because that is how the tan function behaves. But I do not know how to show that without computing IVP because I couldn't figure out part (a). • The steps in (a): First, find $M$, the maximum of $1+y^2$ on the interval $[-b,b]$. Can you do this? Then, find $K$ by using the fact that the Lipschitz constant of $1+y^2$ is just the maximum of the absolute value of its derivative on the interval $[-b,b]$. Can you find the value of this maximum? Then $\alpha$ comes out of the statement of the theorem. – user53153 Feb 24 '13 at 15:25 • Ok so I did this: $f(t,y)=1+y^2\leq 1+b^2=M$. Also, $|f(x)-f(y)|=|x^2-y^2|=|x+y|.|x-y|\leq 2b|x-y|$. So our function is locally lipschitz with K=2b. But our $\alpha$ must be less than $\min[\frac{1}{K},\frac{b}{M},a]$. This implies that $\alpha$ must be less than $\min[ \frac{1}{2b},\frac{b}{1+b^2},a]$. How do i move forward from here? Thanks for the reply. – user52932 Feb 24 '13 at 23:22 • Good. Notice we don't really get a "canonical" value of $\alpha$; we are just told by the Theorem that it's okay to use $\alpha$ as long as $0<\alpha<\min[ \frac{1}{2b},\frac{b}{1+b^2},a]$. This is the answer to (a): any $\alpha$ in this range works. // Now on to b): does our inequality for $\alpha$ allow it to be more than $\pi/2$? Well, neither $\alpha<\frac{1}{2b}$ nor $\alpha<a$ seem to prohibit such scenario outright, since we don't know what $a$ and $b$ are. But look at $\alpha<\frac{b}{1+b^2}$ ... the right side here is never very large, for any $b$. Try to make this precise. – user53153 Feb 24 '13 at 23:54 • So we can take derivative of $\frac{b}{1+b^2}$. This gives us $\frac{1-b^2}{(b^2+1)^2}$. This implies that the maximum value of this function is at b=1 and min is at b=-1 (when we set derivative = 0). Hence, our $\alpha$ must be between -1 and 1. And $\frac{\pi}{2}$ is greater than 1.5. Is this the correct answer? – user52932 Feb 25 '13 at 0:12 • Not quite. 
For one thing, $b$ is positive by the logic of the problem (you see $|y-y_0|\le b$ there), so there is no need to check negative values of $b$. More importantly, the relevant number here is the maximum of $\frac{b}{1+b^2}$, not where it is attained. This maximum is $\frac12$. Hence, the conclusion is that $\alpha<\frac12$. – user53153 Feb 25 '13 at 0:15 (a) By definition, $M = \max (1+y^2)$ on the interval $[−b,b]$. The Lipschitz constant $K$ is the maximum of $|(1+y^2)'|$ on the same interval. Hence, $\alpha<\min(1/(2b), b/(1+b^2),a)$. (b) Since $b/(1+b^2)\le 1/2$ for any $b\ge 0$, any $\alpha$ allowed to us by the theorem must be less than $1/2$. The value of $\alpha$ is the size of the interval in which the Picard-Lindelöf theorem guarantees the existence and uniqueness. The solution actually exists for $|x|<\pi/2$, beyond its warranty period. Like rovers on Mars.
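The bound in part (b) can also be seen numerically: scanning $\min(1/(2b),\,b/(1+b^2),\,a)$ over $b$ never gets past $1/2$. A small NumPy sketch (illustrative; the choice $a=10$ is arbitrary):

```python
import numpy as np

# For y' = 1 + y^2, y(0) = 0 on the rectangle |t| <= a, |y| <= b:
#   M = max(1 + y^2) = 1 + b^2,   K = max |d/dy (1 + y^2)| = 2b,
# and the theorem allows any alpha < min(1/K, b/M, a).
a = 10.0                               # the t half-width; any value works here
b = np.linspace(0.01, 50, 5000)        # scan over the y half-width
bound = np.minimum.reduce([1/(2*b), b/(1 + b**2), np.full_like(b, a)])
print("largest admissible bound over all b:", bound.max())   # about 0.5
print("pi/2 =", np.pi/2)                                      # never reachable
```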
2019-07-18T17:33:17
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/312703/existence-of-solution-to-differential-equations", "openwebmath_score": 0.9077156782150269, "openwebmath_perplexity": 122.09050381567063, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9828232899814556, "lm_q2_score": 0.8596637451167997, "lm_q1q2_score": 0.8448975502534726 }
http://mathhelpforum.com/pre-calculus/117597-cube-root-times-cube-root-print.html
# Cube Root times Cube Root • Nov 30th 2009, 10:03 AM sharkman Cube Root times Cube Root cube root of 16(m^2)(n) multiplied by cube root of 27(m^2)(n) I converted the problem to (16m^2n)^1/3 * (27m^2n)^/3. Is this the correct way to write the original problem another way? I finally got (432m^4n^2)^2/3. Is this correct? • Nov 30th 2009, 10:15 AM craig Quote: Originally Posted by sharkman cube root of 16(m^2)(n) multiplied by cube root of 27(m^2)(n) I converted the problem to (16m^2n)^1/3 * (27m^2n)^/3. Is this the correct way to write the original problem another way? I finally got (432m^4n^2)^2/3. Is this correct? Nearly there. Recall that $\sqrt[3]{a} \times \sqrt[3]{b} = \sqrt[3]{ab}$ In your question you have $\sqrt[3]{16(m^2)(n)} \times \sqrt[3]{27(m^2)(n)}$. Using the fact I gave first, this means than the answer is... Hint: Your multiplication of the $m$ and $n$ terms was not wrong, it was the index - $\frac{2}{3}$ - that you got wrong. • Nov 30th 2009, 10:26 AM sharkman Quote: Originally Posted by craig Nearly there. Recall that $\sqrt[3]{a} \times \sqrt[3]{b} = \sqrt[3]{ab}$ In your question you have $\sqrt[3]{16(m^2)(n)} \times \sqrt[3]{27(m^2)(n)}$. Using the fact I gave first, this means than the answer is... Hint: Your multiplication of the $m$ and $n$ terms was not wrong, it was the index - $\frac{2}{3}$ - that you got wrong. The answer should be cubert{432m^4n^2}, right? • Nov 30th 2009, 10:27 AM craig Correct :) • Nov 30th 2009, 10:28 AM sharkman great Quote: Originally Posted by craig Correct :) Great! Thanks! • Dec 1st 2009, 04:04 AM HallsofIvy Quote: Originally Posted by sharkman The answer should be cubert{432m^4n^2}, right? Don't be too shocked if your teacher marks it wrong because it has not been simplified as much as it could be. $m^4= m^3 m$ so $\sqrt[3]{m^4}= \sqrt[3]{m^3}\sqrt[3]{m}= m\sqrt[3]{m}$. Perhaps more importantly, $432= (16)(27)= (2^4)(3^3)= (2^3)(3^3)(2)$ so $\sqrt[3]{432}= \sqrt[3]{2^3}\sqrt[3]{3^3}\sqrt[3](2)= 6\sqrt[3]{2}$. The best way to write your answer is $6m\sqrt[3]{2mn^2}$. • Dec 1st 2009, 07:03 AM sharkman ok Quote: Originally Posted by HallsofIvy Don't be too shocked if your teacher marks it wrong because it has not been simplified as much as it could be. $m^4= m^3 m$ so $\sqrt[3]{m^4}= \sqrt[3]{m^3}\sqrt[3]{m}= m\sqrt[3]{m}$. Perhaps more importantly, $432= (16)(27)= (2^4)(3^3)= (2^3)(3^3)(2)$ so $\sqrt[3]{432}= \sqrt[3]{2^3}\sqrt[3]{3^3}\sqrt[3](2)= 6\sqrt[3]{2}$. The best way to write your answer is $6m\sqrt[3]{2mn^2}$. I understand. You decided to break the problem down a bit more.
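A quick numerical spot-check of the simplification (added for illustration; not part of the thread):

```python
import math, random

# Verify cbrt(16 m^2 n) * cbrt(27 m^2 n) = cbrt(432 m^4 n^2) = 6 m cbrt(2 m n^2)
# for random positive m, n.
for _ in range(1000):
    m, n = random.uniform(0.1, 10), random.uniform(0.1, 10)
    lhs = (16*m**2*n) ** (1/3) * (27*m**2*n) ** (1/3)
    mid = (432 * m**4 * n**2) ** (1/3)
    rhs = 6 * m * (2 * m * n**2) ** (1/3)
    assert math.isclose(lhs, mid) and math.isclose(mid, rhs)
print("all random checks passed")
```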
2017-11-22T21:10:35
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/pre-calculus/117597-cube-root-times-cube-root-print.html", "openwebmath_score": 0.9644086956977844, "openwebmath_perplexity": 1511.325290156585, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9828232919939075, "lm_q2_score": 0.8596637433190939, "lm_q1q2_score": 0.8448975502166773 }
https://math.stackexchange.com/questions/2140657/how-to-verify-almost-sure-convergence
# How to verify almost sure convergence? Let $X_n \sim \text{Exp}(c^n)$, with $c > 0$. Let $Y_n = \min\{X_1, \ldots, X_n\}$. Find the distribution of $Y_n$ and study its convergence in distribution. Finally, show that $(Y_n)_{n \in \mathbb N}$ converges almost surely to some random variable for every $c$. With the use of the CDF and some algebra, we obtain $$Y_n \sim \text{Exp}\left(-\sum_{i = 1}^n c^i\right) = \text{Exp}\left(-\frac{c - c^{n + 1}}{1 - c}\right)$$ where last equality holds for $c \neq 1$. For $0 < c < 1$, $$\lim_n F_{Y_n}(y) = 1 - \exp\left(-\frac{c}{1 - c}y\right)$$ and $c / (1 - c)$ is always positive, so $Y_n \xrightarrow{d} \text{Exp}(\frac{c}{1 - c})$. For $c \geq 1$, $Y_n \xrightarrow{d} 0$. It's when I have to show almost sure convergence that I get stuck. For example, let $c \geq 1$. Then $$P(Y_n \to 0) = P(\{\omega \in \Omega \mid \lim_n Y_n(\omega) = 0\})$$ How do I show that this evaluates to $1$? Intuitively I would guess that positive values are progressively less and less likely, and so at infinity $0$ is the only possible outcome. But since $Y_n$ is a continuous variable, I cannot use the probability of getting $0$ since it's $0$. I realize, though, that intuitively I'm thinking about the probability $P(Y_n > \varepsilon) \to 0$ for $\varepsilon > 0$, which is the convergence in probability. But the latter does not imply almost sure convergence... • The argument to show that $(Y_n)$ converges almost surely is much simpler: for each $n$, $Y_{n+1}=\min\{Y_n,X_{n+1}\}$ almost surely hence $0<Y_{n+1}\leqslant Y_n$ almost surely, which implies that $Y_n\to Y$ almost surely, for some finite random variable $Y\geqslant0$. – Did Feb 12 '17 at 12:33 • @Did haha ahh yeah monotone convergence, forgot about that. – spaceisdarkgreen Feb 12 '17 at 12:55 • @Did Thanks for an alternative, quicker way to prove it. I really didn't think of formulating the sequence that way. – rubik Feb 12 '17 at 21:32 For $c>1$ you are basically right. Let $\epsilon>0.$ Then we have $P(Y_n>\epsilon) = e^{-\lambda_n\epsilon}$ where $\lambda_n = (c^{n+1}-c)/(c-1).$ Since $c>1,$ $\sum_{n=1}^\infty P(Y_n>\epsilon) <\infty,$ and thus by Borel-Cantelli, the probability $Y_n>\epsilon$ infinitely often is zero so $\limsup_n Y_n\le\epsilon$ almost surely. Similarly for $c=1,$ $P(Y_n>\epsilon) = e^{-\lambda n}$ and the same thing follows. For $c<1$ it's a little trickier cause it's not immediately apparent how to reason about the random variable that $Y_n$ converges to. A good trick in these kinds of situations is to instead try and prove that the sequence is almost surely Cauchy, so you only need to refer to the $Y_n$. In fact we will be able to show the stronger fact that $Y_n$ is almost surely eventually constant. Let's think about what's going on here. We have the rate $c^n\to 0$ which means the mean of the random exponential is increasing to infinity. Thus as $n\to \infty$ it becomes much less likely that $X_n$ is a new minimum. Thus much more likely that the minimum doesn't change, i.e. that $Y_n=Y_{n-1}.$ The probability that the minimum $Y$ changes after time $m$ can be bounded by the union bound $$P(|Y_{m}-Y_{m+n}| > 0 \;\forall n) \leq \sum_{n=1}^\infty P(X_{m+n} < Y_m)$$ since the minumum only changes if one of the $X$'s is less than the prevailing minimum. The $X$'s are independent (I hope, anyway, you didn't say they were but you used the assumption) so the $X_{m+n}$'s and $Y_m$ are all independent exponentials. 
For two independent exponentials $X,Y$ with rates $\lambda_x,\lambda_y,$ the probability one is greater than the other is $$P(Y>X) = \frac{\lambda_x}{\lambda_x+\lambda_y}$$ So since $Y_m$ has rate $\lambda_y = \frac{c-c^{m+1}}{1-c}$ and $X_{m+n}$ has $\lambda_x = c^{m+n}$ so $$P(X_{m+n} <Y_m) = \frac{c^{m+n}}{c^{m+n} + \frac{c-c^{m+1}}{1-c}}$$ and then $$\sum_{n=1}^\infty P(X_{m+n} < Y_m) < \frac{1}{\frac{c-c^{m+1}}{1-c}} \sum_{n=1}^\infty c^{m+n}= \frac{c^m}{1-c^m}.$$ Now we can sum up $$\sum_m P(|Y_{n+m}-Y_m| > 0 \; \forall n) < \sum_m \frac{c^m}{1-c^m}< \infty$$ so by Borel Cantelli it is almost surely the case that the maximum changes after time $m$ for only finitely many $m$, in other words, $Y_n$ is almost surely eventually constant (and thus almost surely convergent). EDIT Or... you could not do what I did and--as Did said in the comments-- just use the fact that $Y_n$ is monotonically decreasing and bounded from below so it converges. I'll still leave this up, cause it shows that it's eventually constant and shows some strategies for handling these kinds of problems, in lieu of a one-sentence solution. • Very thorough answer, I appreciate it. Indeed, it showcases some general techniques that could be useful elsewhere. – rubik Feb 12 '17 at 21:32
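A Monte-Carlo sketch of the behaviour discussed above (illustrative only): for $c<1$ the running minimum freezes at a value distributed $\text{Exp}(c/(1-c))$, while for $c\ge1$ it collapses to $0$. Note that NumPy parametrises the exponential by the scale $1/\lambda$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Y_n = min(X_1, ..., X_n) with X_i ~ Exp(rate c^i).
def last_min(c, n=60, trials=20000):
    rates = c ** np.arange(1, n + 1)
    x = rng.exponential(1.0 / rates, size=(trials, n))   # scale = 1/rate
    return np.minimum.accumulate(x, axis=1)[:, -1]

for c in (0.5, 1.0, 2.0):
    print(f"c={c}: mean of Y_60 = {last_min(c).mean():.4f}")
# Expected: c=0.5 -> about (1-c)/c = 1.0;  c=1 and c=2 -> close to 0.
```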
2019-06-17T04:48:19
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2140657/how-to-verify-almost-sure-convergence", "openwebmath_score": 0.9822589159011841, "openwebmath_perplexity": 136.53211577474423, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9828232884721165, "lm_q2_score": 0.8596637451167995, "lm_q1q2_score": 0.8448975489559484 }
https://math.stackexchange.com/questions/1788489/how-to-graph-x2-4x
# How to graph $x^2 -4x$? I know about transformations and how to graph a function like $f(x) = x^2 - 2$. We just shift the graph 2 units down. But in this case, there's an $-4x$ in which the $x$ complicated everything for me. I understand that the graph will be a parabola for the degree of the function is 2, but I'm not exactly sure how I can graph it. I can take various values for $x$ and then calculate $f(x)$, but I don't wanna do that. So, how do I plot something like this? Hint: $x^2-4x=(x-2)^2-4$ So, shift the graph $4$ units down and $2$ units to the right. • so, just complete the square and then plot? – MathEnthusiast May 17 '16 at 4:04 • @user331377 Yes, can you plot it now? – Roby5 May 17 '16 at 4:05 • Absolutely! Thank you! – MathEnthusiast May 17 '16 at 4:05 You can also factor it to get $$y=x (x-4),$$ so the parabola crosses the $x$-axis at $x=0$ and $x=4$. The vertex of the parabola must lie halfway between the intercepts, so it is at $x=2$; when $x=2$, we have $y=2 (2-4)=-4$. In general: Let $f(x)=ax^2+bx+c$ $$ax^2+bx+c=a\left( x^2+\frac ba x+\frac ca\right)=a\left( x^2+2x\cdot\frac b{2a}+\frac{b^2}{4a^2}-\frac{b^2}{4a^2}+\frac ca\right)=$$ $$=a\left(\left(x+\frac b{2a}\right)^2+\frac{4ac-b^2}{4a^2}\right)=a\left(x+\frac b{2a}\right)^2+\frac{4ac-b^2}{4a}$$ Let $x_0=-\frac b{2a}; y_0=\frac{4ac-b^2}{4a}$. Then $$f(x)=a(x-x_0)^2+y_0$$ $$y=x^2-4x$$ $$x_0=-\frac b{2a}=2; y_0=f(2)=-4$$ • Can you take this to arbitrary polynomials? – Jacob Wakem Jul 3 '16 at 13:59
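For completeness, a short matplotlib sketch (illustrative, not from the thread) that plots the parabola together with its intercepts and vertex:

```python
import numpy as np
import matplotlib.pyplot as plt

# y = x^2 - 4x = (x - 2)^2 - 4: roots at x = 0 and x = 4, vertex at (2, -4).
x = np.linspace(-1, 5, 400)
plt.plot(x, x**2 - 4*x)
plt.scatter([0, 4, 2], [0, 0, -4])          # intercepts and vertex
plt.axhline(0, linewidth=0.5)
plt.annotate("vertex (2, -4)", (2, -4), textcoords="offset points", xytext=(5, -12))
plt.title(r"$y = x^2 - 4x$")
plt.show()
```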
2021-05-08T23:10:06
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1788489/how-to-graph-x2-4x", "openwebmath_score": 0.8373983502388, "openwebmath_perplexity": 242.01645061509402, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9828232904845685, "lm_q2_score": 0.8596637433190939, "lm_q1q2_score": 0.8448975489191534 }
https://mathhelpboards.com/threads/partial-fractions.4812/
# Partial fractions #### Petrus ##### Well-known member Hello MHB, I got stuck on this integrate $$\displaystyle \int_0^{\infty}\frac{2x-4}{(x^2+1)(2x+1)}$$ and my progress $$\displaystyle \int_0^{\infty} \frac{2x-4}{(x^2+1)(2x+1)} = \frac{ax+b}{x^2+1}+ \frac{c}{2x+1}$$ then I get these equation that I can't solve and I get these equation.. $$\displaystyle 2a+c=0$$ that is for $$\displaystyle x^2$$ $$\displaystyle 2b+a=2$$ that is for $$\displaystyle x$$ $$\displaystyle b+c=-4$$ that is for $$\displaystyle x^0$$ What have I done wrong? Regards, $$\displaystyle |\pi\rangle$$ #### MarkFL Staff member Re: partial fractions The only thing I see wrong (besides omitting the differential from your original integral) is the line: $$\displaystyle \int_0^{\infty} \frac{2x-4}{(x^2+1)(2x+1)} = \frac{ax+b}{x^2+1}+ \frac{c}{2x+1}$$ You should simply write: $$\displaystyle \frac{2x-4}{(x^2+1)(2x+1)} = \frac{ax+b}{x^2+1}+ \frac{c}{2x+1}$$ You have correctly determined the resulting linear system of equations. Can you choose and use a method with which to solve it? #### Petrus ##### Well-known member Re: partial fractions The only thing I see wrong (besides omitting the differential from your original integral) is the line: You should simply write: $$\displaystyle \frac{2x-4}{(x^2+1)(2x+1)} = \frac{ax+b}{x^2+1}+ \frac{c}{2x+1}$$ You have correctly determined the resulting linear system of equations. Can you choose and use a method with which to solve it? Thanks for pointing that! I have actually no clue how to solve it, I don't know what method I should use. Regards, $$\displaystyle |\pi\rangle$$ #### MarkFL Staff member Re: partial fractions My choice would be elimination. Try subtracting the third equation from the first, and this will eliminate $c$, then combine this result with the second equation and you have a 2X2 system in $a$ and $b$. Can you state this system? #### Petrus ##### Well-known member Re: partial fractions My choice would be elimination. Try subtracting the third equation from the first, and this will eliminate $c$, then combine this result with the second equation and you have a 2X2 system in $a$ and $b$. Can you state this system? I made it like a matrice and solved it Thanks for the help and sorry for not posting the progress but now I get $$\displaystyle b=0, c=-4, a=2$$ and that works fine when I put those value in the equation! Regards, $$\displaystyle |\pi\rangle$$ #### MarkFL Staff member Re: partial fractions Any valid method you choose is fine. Your solution is correct and now integration is a breeze. #### ZaidAlyafey ##### Well-known member MHB Math Helper Re: partial fractions $$\displaystyle 2x-4 = (ax+b)(2x+1)+c(x^2+1)$$ Try the following method to find the constants First Let $x = -{1 \over 2}$ then you can easily find $c$ Second Let $x =0$ you can find $b$ since you are given $c$ Finally find $a$ given $b$ and $c$ . ##### Active member Re: partial fractions The only thing I see wrong (besides omitting the differential from your original integral) is the line: You should simply write: $$\displaystyle \frac{2x-4}{(x^2+1)(2x+1)} = \frac{ax+b}{x^2+1}+ \frac{c}{2x+1}$$ You have correctly determined the resulting linear system of equations. Can you choose and use a method with which to solve it? Here's a fancy little trick (more commonly used in complex analysis; similar to Zaid's) to get through these sorts of problems. Rather than solving a system of equations, one can simply "plug in some numbers" to get to the answer. 
Let's begin at the point where we know that $$\displaystyle \frac{2x-4}{(x^2+1)(2x+1)} = \frac{ax+b}{x^2+1}+ \frac{c}{2x+1}$$ First of all, multiplying both sides by $$\displaystyle 2x+1$$, we have $$\displaystyle \frac{2x-4}{(x^2+1)} = (2x+1)\frac{ax+b}{x^2+1}+ c$$ Now, having multiplied by our choice of term in the denominator, we plug in a value of x that makes this term zero. $$\displaystyle 2x+1=0$$ when $$\displaystyle x=-\frac{1}{2}$$, so plug in that value. The first term on the right becomes zero after multiplying, leaving you with: $$\displaystyle \frac{2(-\frac{1}{2})-4}{((-\frac{1}{2})^2+1)} = c$$ Simply evaluate to find the answer (c = -4). We can do something similar with $$\displaystyle x^2+1$$. First of all, multiply both sides to get $$\displaystyle \frac{2x-4}{2x+1} = ax+b+ (x^2+1)\frac{c}{2x+1}$$ Now, we plug in a value of x that makes this term become zero. In this case, we note that $$\displaystyle x^2+1=0$$ when $$\displaystyle x=\pm i$$. Choosing $$\displaystyle x=i$$ and noting that the second term on the right multiplies to zero, this becomes $$\displaystyle \frac{2i-4}{2i+1} = b + a i$$ Which, after some complex-number algebra, gives you a real and imaginary part corresponding to b and a. That is, the above evaluates to $$\displaystyle 0 + 2i$$, telling you that a = 2 and b = 0. This method is particularly useful when you only want to find a particular term without solving for the rest. Note that this method does not work for irreducible terms in the denominator taken to powers greater than 1; that requires a more subtle approach.
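Both methods can be checked with a computer algebra system. A small SymPy sketch (illustrative; the expected outputs are stated in the comments):

```python
import sympy as sp

x = sp.symbols('x')
expr = (2*x - 4) / ((x**2 + 1) * (2*x + 1))

# Partial fraction decomposition: expect 2*x/(x**2 + 1) - 4/(2*x + 1),
# i.e. a = 2, b = 0, c = -4 in the thread's notation.
print(sp.apart(expr, x))

# The original improper integral from the first post: expect -2*log(2)
# (about -1.386), since an antiderivative is log(x**2 + 1) - 2*log(2*x + 1).
print(sp.integrate(expr, (x, 0, sp.oo)))
```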
2020-12-05T08:06:13
{ "domain": "mathhelpboards.com", "url": "https://mathhelpboards.com/threads/partial-fractions.4812/", "openwebmath_score": 0.8400549292564392, "openwebmath_perplexity": 418.1120737288165, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9871787868650146, "lm_q2_score": 0.8558511524823263, "lm_q1q2_score": 0.8448781024445275 }
https://math.stackexchange.com/questions/1551567/probability-of-rolling-to-dice
# Probability of rolling two dice If 2 fair dice are rolled together, what is the probability that the sum will be 9? 1) Is the probability 4/36 (1/9); as no. of favorable cases are {(3,6);(6,3);(4,5);(5,4)} ? 2) Or is it 2/36 (1/18); as no. of favorable cases are {(3,6);(4,5)} The reason I am confused is that the question does not state if the dice are distinguishable or not. If they are not distinguishable then the answer should be 1/18 as stated in case 2. Is my understanding correct? • Hint: in scenario 2, are there really 36 different cases in total? – Moyli Nov 29 '15 at 15:44 • Even if the dice are NOT distinguishable, the probability of $(4,5)$ is $\frac{2}{36}$. – barak manos Nov 29 '15 at 15:48
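A brute-force enumeration (added as an illustration) makes the reasoning in the comments concrete: the 36 ordered outcomes are equally likely whether or not the dice can be told apart, and 4 of them sum to 9:

```python
from itertools import product

# Ordered outcomes of two fair dice are equally likely, even if the physical
# dice are indistinguishable, so count ordered pairs.
outcomes = list(product(range(1, 7), repeat=2))
favourable = [p for p in outcomes if sum(p) == 9]
print(favourable)                            # (3,6), (4,5), (5,4), (6,3)
print(len(favourable), "/", len(outcomes))   # 4 / 36 = 1/9
```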
2020-01-27T03:03:53
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1551567/probability-of-rolling-to-dice", "openwebmath_score": 0.6072463989257812, "openwebmath_perplexity": 370.48851805647575, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9871787857334058, "lm_q2_score": 0.8558511506439708, "lm_q1q2_score": 0.8448780996612533 }
https://math.stackexchange.com/questions/1285609/a-set-s-has-n-elements-how-many-ways-we-can-choose-subsets-p-and-q-of
# A set $S$ has $n$ elements. How many ways we can choose subsets $P$ and $Q$ of $S$, so that $P \cap Q$ is $\emptyset$? This is how far I could go.. example: S = {1,2,3} Unique possible subset pairs such that the intersection is {} P = {} Q = {} P = {} Q = {1} P = {} Q = {2} P = {} Q = {3} P = {} Q = {1,2} P = {} Q = {1,3} P = {} Q = {2,3} P = {} Q = {1,2,3} P = {1} Q = {2} P = {1} Q = {3} P = {1} Q = {2,3} P = {2} Q = {1} P = {2} Q = {3} P = {2} Q = {1,3} P = {3} Q = {1} P = {3} Q = {2} P = {3} Q = {1,2} Total = 17 Subset X of $S$ with $0$ elements doesn't intersect with any subsets of $S$. So, total pairs = ${n \choose 0} \times 2^n$ Subset $X$ of $S$ with $1$ element doesn't intersect with any subsets of $S-X$. So, total pairs = ${n \choose 1} \times \left (2^{n-1} - 1 \right)$ $-1$ because the subset $\{\}$ of $S-X$ has already been counted. Subset $X$ of $S$ with $2$ element doesn't intersect with any subsets of $S-X$. So, total pairs = ${n \choose 2} \times \left (2^{n-2} - 1 \right)$ $\vdots$ while $|X| <= \frac{n}{2}$ So, total pairs are: $$\left \{ \sum_{r=0}^{\frac{n}{2}} {{n \choose r} \times 2^{n-r}} \right \} - \left \{ \sum_{r=1}^{\frac{n}{2}} {n \choose r} \right \}$$ which works. But how do I simplify it? Also, what would be a better approach for this problem? • Why didn't you count $P=\{1\},Q=\{\}$? – bof May 16, 2015 at 23:42 • @bof: Since I am trying to count unique subset pairs. The pair $\left \{ \{1\}, \{\} \right \}$ is same as the pair $\left \{ \{\}, \{1\} \right \}$ May 16, 2015 at 23:45 • It doesn't matter since if I count them as seperate, the final answer will just be double of what I get with this. May 16, 2015 at 23:47 • @PragyAgarwal: If one carries out your analysis, preferably for ordered pairs first, one will notice the binomial expansion of $(1+2)^n$. That's another way of getting to the $3^n$. May 17, 2015 at 0:01 • Yes, I noticed that inconsistency. Moreover, giving the two sets names, calling one $P$ and the other $Q$, really makes it seem like you are counting ordered pairs. – bof May 17, 2015 at 1:13 ## 3 Answers First we look at the number of ordered pairs $(P,Q)$ such that $P\cap Q=\emptyset$. Equivalently, we count the number of functions $f:S\to\{1,2,3\}$, where $f(s)=1$ if $s\in P$, $f(s)=2$ if $s\in Q$, and $f(s)=3$ if $s$ is in neither $P$ nor $Q$. There are $3^n$ such functions. If we want to count the unordered pairs (and the question seems to ask for that), note that in the count $3^n$, all unordered pairs appear twice, except $(\emptyset,\emptyset)$. So the number of unordered pairs is $\frac{3^n-1}{2}+1$. • What a nice answer! But when $S$ has $3$ elements, as Pragy has shown answer must be 17; while taking $n=3$ in this way gives $14$, so where is the problem? – Sry May 18, 2015 at 4:47 • In that list, $3$ items occur twice each. For example, one line (item 9) has $\{1\}$ then $\{2\}$. Three lines later the post lists $\{2\}$ then $\{1\}$. The count of $17$ is off by $3$. May 18, 2015 at 5:14 • oh! I see. Thanks for pointing. – Sry May 18, 2015 at 5:25 • You are welcome. Explicit listing can be kind of tricky. It is all too easy to repeat, or to leave out something. May 18, 2015 at 5:31 • Yes, you are right. The way you answered seems so good to me. – Sry May 18, 2015 at 5:43 Note that the set $\mathcal{S}$ of ordered pairs $(P,Q)$ with the property described can be described by the following sequence of choices. 
For $0\leq k\leq n$ first choose a $k$-subset $P\subseteq S$ (for which there are ${n\choose k}$ options), and then choose a subset $Q\subseteq S\setminus P$ (for which there are $2^{n-k}$ options). Thus we obtain all such pairs and we have the following count: $$|\mathcal{S}|=\sum_{k=0}^n{n\choose k}2^{n-k}$$ By the argument given in @Andre Nicolas' solution, we have $\sum_{k=0}^n{n\choose k}2^{n-k}=3^n$ (which is cool). Note that my solution above assumes that we count ordered pairs of such sets. If we seek to count unordered pairs, then there is a bit more we must do. Let $S(k,m)$ be the Stirling number of the $2^{nd}$ kind (i.e. $S(k,m)$ counts the number of partitions of $\{1,2,\cdots,k\}$ into exactly $m$ blocks). Then the set $\mathcal{S}^{\prime}$ you seek to count is described by the following sequence of choices. First there is unordered intersection $\phi\cap Q$ where $Q\subseteq S$ which contributes $2^n$ to the sum; note that this counts all unordered pairs with one set a singleton or at least one the empty set. For $2\leq k\leq n$ we first choose a $k$-subset $A\subseteq S$ (for which there are ${n\choose k}$ options), and then choose a partition into exactly $2$ parts (for which there are $S(k,2)$ options). Thus we obtain all such unordered pairs and $$|\mathcal{S}^\prime|=2^n+\sum_{k=2}^n{n\choose k}S(k,2)$$ It is possible to obtain a simple formula for $S(k,2)$. In particular it is easy to obtain the recursion $S(k,m)=S(k-1,m-1)+mS(k-1,m)$ for all $2\leq k\leq m$ (consider partitions which contain $\{k\}$ as a block and those that don't) and $S(k,1)=1$ for all $k\geq1$. By induction and the recursion we can show $S(k,2)=2^{k-1}-1$ for all $k\geq2$ (noting trivially that $S(m,m)=1$ for all $m\geq1$). Hence our formula reduces to the following: $$|\mathcal{S}^\prime|=2^n+\sum_{k=2}^n{n\choose k}S(k,2)=2^n+\sum_{k=2}^n{n\choose k}(2^{k-1}-1)$$ If you check this against your example, you'll notice that I count $14$ possible unordered pairs for a set of size $3$. Checking your example, you've counted the pairs $\{\{1\},\{2\}\}$, $\{\{2\},\{3\}\}$, and $\{\{1\},\{3\}\}$ two times each. Other than this discrepancy, our answers agree. Each of the n elements of set S will have 3 choices. 1.Either join subset P & don't join subset Q 2.Either join subset Q don't join subset P 3.Neither join P nor join Q. Therefore n elements with 3 choices gives us 3^n( 3 power n).
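A brute-force check of both formulas for small $n$ (illustrative only; subsets are encoded as bitmasks):

```python
from itertools import product

# Subsets of {0,...,n-1} as bitmasks; P and Q are disjoint iff P & Q == 0.
for n in range(1, 7):
    pairs = [(P, Q) for P, Q in product(range(2**n), repeat=2) if P & Q == 0]
    unordered = {frozenset(p) for p in pairs}
    assert len(pairs) == 3**n                       # one of 3 choices per element
    assert len(unordered) == (3**n - 1)//2 + 1      # only (empty, empty) is symmetric
    print(n, len(pairs), len(unordered))
```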
2022-05-25T22:41:51
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1285609/a-set-s-has-n-elements-how-many-ways-we-can-choose-subsets-p-and-q-of", "openwebmath_score": 0.8102799654006958, "openwebmath_perplexity": 340.4451917258159, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9871787853562028, "lm_q2_score": 0.8558511506439708, "lm_q1q2_score": 0.8448780993384236 }
https://math.stackexchange.com/questions/1718969/k-mathbbf-2-alpha-alpha-root-of-x4x1-in-mathbbf-2x-find-d?noredirect=1
$K= \mathbb{F}_2(\alpha)$ $\alpha$ root of $X^4+X+1 \in \mathbb{F}_2[X]$. Find degree and minimal polynomial Question 1: Find $[K:\mathbb{F_2}]$ Idea: I have tried looking at the irreducibility of the polynomial, $X^4+X+1$ and have so far been unsuccessful. Is there another way to do this apart from using Eisenstein's criterion, which I have already tried? Then if it were irreducible, and since $\alpha$ is a root of $P(X)= X^4+X+1$, then $[\mathbb{F_2}(\alpha):\mathbb{F_2}]$ would be equal to $\deg P(X)=4$ Question 2: Find the minimal polynomial for $\alpha^{-1}$ over $\mathbb{F_2}$ Idea: First find the minimal polynomial for $\alpha$ over $\mathbb{F_2}$, call this $M(X)=a_nX^n+a_{n-1}X^{n-1}+ ... + a_0$. Then the minimal polynomial for $\alpha^{-1}$ will be $N(X)=a_0 X^n+a_1 X^{n-1}+...+a_n$. Now, since $\alpha$ is a root of a degree 4 polynomial, I believe $\deg M(X)\leq 4$, although I have not managed to find it. Would really appreciate some help from you, thanks • I don't think this is the first time irreducibility of $x^4+x+1$ over $\Bbb{F}_2$ has been handled on our site. You could search :-) Eisenstein cannot be applied, because there are no primes in $\Bbb{F}_2$. Nor in any other field. Fields have no primes - only units. Eisenstein is useful only when the field is the field of fractions of a PID. Your idea for question 2 is a good one, and it works. Search for reciprocal polynomial. Mar 29 '16 at 17:07 • thanks.. can i search using latex notations etc? Mar 29 '16 at 20:03 • Unfortunately the local search engine strips LaTeX, and therefore fails. Google (restricted to the site) is a bit better, but not reliable in matching LaTeX strings. That is one of the major shortcomings of the search functionality. Good luck! As long as you make a good faith attempt at searching, no one has anything to complain! Mar 29 '16 at 20:16 • ah ok.. so how would i be able to search for the polynomial ? Mar 29 '16 at 20:17 • Try suitable general searches. For example, when I gave the site search engine the input degree four irreducible polynomials. It gives this list. And then this post is one of the first hits. Also, take a look at the sidebar and the lists of Linked and Related questions. Mar 29 '16 at 20:23 Clearly it has no roots (neither $0$ nor $1$ is a root), so it has no irreducible linear or cubic factors. The only irreducible degree $2$ polynomial over $\Bbb F_2$ is $x^2+x+1$, so the only possible quadratic factorization is as $(x^2+x+1)^2=x^4+x^2+1$ which clearly is not your polynomial. Hence it is irreducible. This means the degree of the extension is $4$. Eisenstein is a criterion for irreducibility over the rationals though, it doesn't apply in $\Bbb F_2$ If you are worried about negative powers, then let's reformulate this as $[\Bbb F[\alpha]:\Bbb F]$ which is clearly a vector space of dimension $4$ over $\Bbb F_2$. But then we know that $\Bbb F_2[\alpha]\cong \Bbb F_2[x]/(x^4+x+1)$ by the map $$\begin{cases} \Bbb F_2[x]\to \Bbb F_2[\alpha]\\ x\mapsto \alpha\end{cases}$$ The map is clearly surjective, and by definition of the minimal polynomial and the fact that $\Bbb F_2[x]$ is a PID, we see the kernel is exactly $(x^4+x+1)$. But then the first isomorphism theorem for rings says that $\Bbb F_2[\alpha]\cong \Bbb F_2[x]/(x^4+x+1)$ as desired. Since the ideal is generated by an irreducible element of a PID, it is a maximal ideal, so the quotient $\Bbb F_2[\alpha]$ is a field. But since $\Bbb F_2(\alpha)$ is the smallest field containing $\alpha$, it must be that $\Bbb F_2(\alpha)\subseteq\Bbb F_2[\alpha]$.
The inclusion in the other direction is trivial, hence the dimensions are equal. For your second question, note that $p(x^{-1})=x^{-4}+x^{-1}+1$ is something which is zero when you put in $\alpha^{-1}$, so if you multiply it by $x^4$ you get $$x^4p(x^{-1})=1+x^3+x^4$$ plugging in $\alpha^{-1}$ we get $\alpha^{-4}\cdot p(\alpha) = 0$, so this is the minimal polynomial for $\alpha^{-1}$, since you can get irreducibility from the same things we said for $\alpha$, namely that there are no roots, and it is not equal to $x^4+x^2+1$. • a dumb question : how do I jump from $\alpha$ is a root of some irreductible degree $4$ polynomial to $[\mathbb{F}_2(\alpha) : \mathbb{F}_2] = 4$ ? Mar 29 '16 at 17:21 • @user1952009 because the polynomial is irreducible, it means that $1,\alpha,\alpha^2,\alpha^3$ are linearly independent, so the space $\Bbb F_2(\alpha)$ is a $4$-dimensional vector space over $\Bbb F_2$, which is the definition of the field extension degree. Mar 29 '16 at 17:38 • ok, all the positive powers of $\alpha$ are in the $\mathbb{F}_2$ vector space generated by $1,\alpha,\alpha^2,\alpha^3$. (and if those powers of $\alpha$ were lying in some smaller vector space, that would give a lower degree minimal polynomial for $\alpha$). but what about the negative powers of $\alpha$ ? and all the other elements we added to the field ($\mathbb{F}_2$ is not a good example but you see what I mean) ? Mar 29 '16 at 17:49 • @user1952009 if you multiply $\alpha^4+\alpha+1=0$ by $\alpha^{-1}$ you'll see that $$\alpha^3+1+\alpha^{-1}=0\iff \alpha^{-1}=\alpha^3+1$$ so $\alpha^{-1}$ (and other negative powers) are all linearly dependent on the set I gave as well. Mar 29 '16 at 17:52 • and for the other elements, it is the same argument ? for each $u$ in the vector space, there is polynomial of degree $4$ whose $u$ is some root : $\sum_{k=0}^4 u^k c_k = 0$, and multiplying by $u^{-1}$ we see that $u^{-1}$ is linearly dependent of $1,u,u^2,u^3$, hence that vector space is indeed a field (hence it is $\mathbb{F}_2(\alpha)$) Mar 29 '16 at 17:57 if $\alpha$ is a root of $P(X)= X^4+X+1$ then it exist in $\mathbb{GF_{16}}$ and indeed $\alpha$ is primitive element of $\mathbb{GF_{16}}$ .it's minimal polynomial with respect to $\mathbb{GF_2}$ is $X^4+X+1$ itself. for the second part we have $\alpha^{-1}$ = $\alpha^{14}$ because $\alpha^{14} = \alpha^{3}+1$ and $\alpha \times \alpha^{14} =\alpha \times (\alpha^{3}+1)= \alpha^{4}+\alpha = 1$ because we know $\alpha^{4}+\alpha+1=0$. so the minimal polynomial with respect to $\mathbb{GF_2}$ containing $\alpha^{14}$ comes from producting conjugacy class $\{ \alpha^{7},\alpha^{11},\alpha^{13},\alpha^{14} \} \Rightarrow (x+\alpha^{7})(x+\alpha^{11})(x+\alpha^{13})(x+\alpha^{14})$ which after simplification yields in: $$x^4+x^3+1$$ • You should comment on irreducibility in your first statement: the op indicated that was one of his main areas of issue. Mar 29 '16 at 16:59 • $X^4+X+1$ is a Primitive Polynomial so there is no doubt in it's irreducibility. en.wikipedia.org/wiki/Finite_field under section GF(16) Mar 29 '16 at 17:06 • Yes, but you prove something is primitive by showing a root generates, so in particular you need to show irreducibility, so your logic is circular. Mar 29 '16 at 17:07 • you are right. but irreducibility can be shown only by inspection. so I think there is no point in doing the search yourself. it's better to use the results provided by tables. Mar 29 '16 at 17:14 • @K.K.McDonald thank you for your answer. how do we know that $\alpha^{-1}=\alpha^{14}$? 
Is this done by brute force multiplying out many powers of $\alpha$ as this would take a very long time, especially for an exam.. or is there another way you got this? Mar 29 '16 at 20:02
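One systematic way to answer the last comment — without multiplying powers of $\alpha$ by hand — is to do the arithmetic in $\mathbb{F}_2[x]/(x^4+x+1)$ directly. A small Python sketch (illustrative; elements are stored as bitmasks, bit $i$ standing for $x^i$):

```python
# GF(16) = F_2[x]/(x^4 + x + 1), elements as bitmasks: bit i <-> x^i.
MOD = 0b10011          # x^4 + x + 1

def mul(a, b):
    """Multiply two GF(16) elements: carry-less multiply with reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:    # degree reached 4: reduce by x^4 + x + 1
            a ^= MOD
        b >>= 1
    return r

alpha = 0b10            # the class of x
powers = [1]
for _ in range(15):
    powers.append(mul(powers[-1], alpha))

print(powers.index(0b1001))        # alpha^3 + 1 appears as alpha^14  -> prints 14
print(mul(alpha, powers[14]))      # alpha * alpha^14 = 1             -> prints 1
# The conjugates of alpha^-1 = alpha^14 are its repeated squares, i.e. the
# exponents {14, 13, 11, 7} (closed under doubling mod 15); expanding
# (x - alpha^7)(x - alpha^11)(x - alpha^13)(x - alpha^14) gives x^4 + x^3 + 1,
# matching the answer above.
```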
2021-12-06T22:04:43
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1718969/k-mathbbf-2-alpha-alpha-root-of-x4x1-in-mathbbf-2x-find-d?noredirect=1", "openwebmath_score": 0.8905219435691833, "openwebmath_perplexity": 159.7111273415104, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9744347920161259, "lm_q2_score": 0.8670357735451834, "lm_q1q2_score": 0.8448698236650416 }
https://math.stackexchange.com/questions/3690190/is-the-definition-of-the-riemann-sum-from-thomas-calculus-correct
# Is the definition of the Riemann sum from Thomas' Calculus correct? I'm having trouble with theoretical understanding of the Riemann sum with this explanation/definition from Thomas' Calculus. I checked Wikipedia and it seems to state virtually the same.: On each subinterval we form the product $$f(c_k)*∆x_k$$. This product is positive, negative, or zero, depending on the sign of $$f(c_k)$$. When $$f(c_k) > 0$$, the product $$f(c_k)*∆x_k$$ is the area of a rectangle with height $$f(c_k)$$ and width $$∆x_k$$. When $$f(c_k) < 0$$, the product $$f(c_k)*∆x_k$$ is a negative number, the negative of the area of a rectangle of width $$∆x_k$$ that drops from the x axis to the negative number $$ƒ(c_k)$$. Finally we sum all these products to get: $$S_p = \sum_{k=1}^{n}{f(c_k)}∆x_k$$ Any Riemann sum associated with a partition of a closed interval [a, b] defines rectangles that approximate the region between the graph of a continuous function ƒ and the x-axis. Partitions with norm approaching zero lead to collections of rectangles that approximate this region with increasing accuracy To illustrate the problem, suppose we want to approximate the area between $$f(x) = -x$$ and the x axis on the interval [-1; 1]. The area is 1, but the Riemann sum should give something close to 0: Is the statement that any Riemann sum with the norm approaching 0 approximates the area with increasing accuracy correct? It seems not, since in the example above the area tends to 0 as the norm approaches 0, which is not "increasing accuracy". Does it miss the part that one should take the absolute values of the rectangles' areas? Thank you. • the Riemann sum approaches the signed area, which is $-\frac12+\frac12=0$ in your example; cf. the Wikipedia integral page May 24, 2020 at 23:17 • @J.W.Tanner what is a signed area? Negative? I looked into an online app to calculate sums and take the screen shot from here. And it says the area is approximated to 0.16 (with my settings). So it's not like it took only the red or green shaded part May 24, 2020 at 23:21 • signed area in your picture would be the green area minus the red area May 24, 2020 at 23:22 • @J.W.Tanner so the definition is wrong. Since it doesn't always increase accuracy as one take more fine rectangles. In my example it actually decreases accuracy, since the area tends to 0. Am I right? May 24, 2020 at 23:24 • The true value is 0 here, which is the signed area mentioned by J. W. Tanner. I wouldn't say the book is wrong. It's just too much trouble to add "signed" every time you write something. And the reason for considering signed areas should be straightforward. May 24, 2020 at 23:31 The Riemann sum approaches the signed area. In your picture, the green area is positive, and the red area is negative. The Riemann sum should approach $$0$$, which is the accurate signed area for $$f(x)=-x$$ on the interval $$[-1,1]$$. If you don't like that, try $$f(x)=|x|$$.
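A quick numerical illustration (not part of the thread) of the distinction raised in the answer: Riemann sums of $f(x)=-x$ on $[-1,1]$ tend to the signed area $0$, while sums of $|f|$ tend to the geometric area $1$:

```python
import numpy as np

# Midpoint Riemann sums for f(x) = -x on [-1, 1].
f = lambda x: -x
for n in (4, 40, 400, 4000):
    x = np.linspace(-1, 1, n + 1)
    mid = (x[:-1] + x[1:]) / 2            # midpoint tags c_k
    dx = np.diff(x)
    signed = np.sum(f(mid) * dx)          # -> 0, the signed area
    unsigned = np.sum(np.abs(f(mid)) * dx)  # -> 1, the geometric area
    print(n, signed, unsigned)
```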
2022-07-04T16:19:28
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3690190/is-the-definition-of-the-riemann-sum-from-thomas-calculus-correct", "openwebmath_score": 0.8009864687919617, "openwebmath_perplexity": 142.997816969626, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9744347890464283, "lm_q2_score": 0.867035771827307, "lm_q1q2_score": 0.8448698194162491 }
https://quantumcomputing.stackexchange.com/questions/26355/solving-hamiltonian-eigenvalue-problem/26356
# Solving Hamiltonian eigenvalue problem I would like to solve an eigenvalue problem of a Hamiltonian. I was able to find the lowest eigenvalue by converting the Hamiltonian into a matrix and applying linear algebra eigenvalue techniques. But this method is extremely cumbersome and does not generalize to arbitrary-sized Hamiltonians. I was hoping somebody could point to a more general approach. Here is the definition of the problem: Let $$\vert \psi_N \rangle$$ denote the uniform superposition, $$\vert \psi_N \rangle = \frac{1}{\sqrt{N}}\sum^{N-1}_{i=0}\lvert i \rangle.$$ Then $$\vert \psi_N \rangle$$ is the ground state of the Hamiltonian $$H_0 = I - \lvert \psi_N \rangle \langle \psi_N \lvert$$ with the lowest eigenvalue $$0$$. Let $$\vert m \rangle = \vert 1 0...0 \rangle$$. Then it is the ground state of the Hamiltonian $$H_m = I - \vert m \rangle \langle m \vert$$. For $$s \in [0,1]$$ define the Hamiltonian $$H(s) = (1-s)H_0 + s H_m.$$ What would be the general approach to solving the following eigenvalue problem for an arbitrary $$N$$ \begin{align} H(s) \lvert E_k, s \rangle = E_k(s) \lvert E_k, s\rangle \end{align} where $$E_k(s)$$ is the $$k$$th eigenvalue at time $$s$$. I was able to solve the problem for $$N = 4$$ by converting the Hamiltonian into a matrix and then using computer algebra I got $$E_0(s) = \displaystyle \frac{1}{2} - \frac{\sqrt{3 s^{2} - 3 s + 1}}{2}.$$ The problem with this approach is that it is not general and requires conversion to matrices and then solving the eigenvalue problem. I suspect that it is possible to get the answers in terms of $$N$$ and $$s$$ without fixing the size $$N$$ and expressing the Hamiltonian as a matrix. You can solve this by referring to this question. To estimate the eigenvalues of $$H\left( s \right) =\left( 1-s \right) H_0+sH_m=I-\left( 1-s \right) |\psi _N\rangle \langle \psi _N|-s|m\rangle \langle m|$$, we can only calculate the eigenvalues of $$\left( 1-s \right) |\psi _N\rangle \langle \psi _N|+s|m\rangle \langle m|$$. Then, with the method of the link, this equals to calculate the eigenvalues of $$\left( \begin{matrix} 1-s& \frac{\sqrt{\left( 1-s \right) s}}{\sqrt{N}}\\ \frac{\sqrt{\left( 1-s \right) s}}{\sqrt{N}}& s\\ \end{matrix} \right) .$$ Solving this we get the eigenvalues should be $$\lambda =\frac{1\pm \sqrt{1-4\left[ s-s^2-\frac{\left( 1-s \right) s}{N} \right]}}{2}.$$ By replacing $$N=4$$, we get your special case. Above only gives two eigenvalues, other eigenvalues of $$H\left( s \right)$$ are all $$1$$ with eigenvectors orthogonal to the space spanned by $$|m\rangle$$ and $$|\psi_N\rangle$$. • thank you very much! May 13 at 1:41 Have you ever seen a derivation of Grover's search? The approach that you want is very similar. Start by defining two states, perhaps $$|a\rangle=|\psi_N\rangle, \qquad |b\rangle=|m\rangle-|a\rangle\langle a|m\rangle,$$ where I've only given $$|b\rangle$$ up to normalisation. The point is that these two states should be orthonormal and span the space spanned by $$|\psi_N\rangle$$ and $$|m\rangle$$. Any state $$|\phi\rangle$$ that is not in this span automatically satisfies $$H|\phi\rangle=(1-s)|\phi\rangle+s|\phi\rangle=|\phi\rangle$$ and is hence a $$+1$$ eigenstate. For any state in the span, you can think about a linear combination $$\alpha|a\rangle+\beta|b\rangle$$ and how $$H$$ acts on this. The outcome is always a state in the same span. Hence, we can talk about this as a two-dimensional subspace and just write out a $$2\times 2$$ matrix. 
It looks something like $$H_\text{sub}=\begin{bmatrix} s\frac{N-1}{N} & -s\frac{\sqrt{N-1}}{N} \\ -s\frac{\sqrt{N-1}}{N} & 1-s\frac{N-1}{N} \end{bmatrix}.$$ So, you should be able to evaluate the two eigenvalues of this matrix: $$\lambda^2-\lambda-s(s-1)\frac{N-1}{N}=0$$ and thus $$\lambda=\frac{1}{2}\left(1\pm\sqrt{1-s(s-1)\frac{N-1}{N}}\right).$$ The ground state energy is thus $$\frac{1}{2}\left(1-\sqrt{1-s(s-1)\frac{N-1}{N}}\right).$$ • Thank you very much for the explanation. Your answer is very close to the correct answer. The correct answer is given by the user narip May 13 at 1:40
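Both closed forms are easy to test against exact diagonalisation. A NumPy sketch (illustrative; it checks the accepted answer's formula for the ground-state energy):

```python
import numpy as np

# H(s) = (1-s)(I - |psi><psi|) + s(I - |m><m|) for N = 2^n; compare the
# smallest eigenvalue with the closed form from the accepted answer.
def ground_energy(N, s):
    psi = np.full(N, 1/np.sqrt(N))
    m = np.zeros(N); m[N // 2] = 1.0      # |m> = |10...0>
    H = (1-s)*(np.eye(N) - np.outer(psi, psi)) + s*(np.eye(N) - np.outer(m, m))
    return np.linalg.eigvalsh(H)[0]

def closed_form(N, s):
    return (1 - np.sqrt(1 - 4*(s - s**2 - (1 - s)*s/N))) / 2

for N in (4, 8, 16):
    for s in (0.0, 0.3, 0.5, 0.9, 1.0):
        assert np.isclose(ground_energy(N, s), closed_form(N, s))
print("closed form matches exact diagonalisation")
```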
2022-07-07T11:39:42
{ "domain": "stackexchange.com", "url": "https://quantumcomputing.stackexchange.com/questions/26355/solving-hamiltonian-eigenvalue-problem/26356", "openwebmath_score": 0.982545018196106, "openwebmath_perplexity": 108.88494371026168, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.974434786819155, "lm_q2_score": 0.8670357718273068, "lm_q1q2_score": 0.8448698174851232 }
https://math.stackexchange.com/questions/1710168/prove-there-are-no-prime-numbers-in-the-sequence-a-n-10017-100117-1001117-10011
Prove there are no prime numbers in the sequence $a_n=10017,100117,1001117,10011117, \dots$ Define a sequence as $a_n=10017,100117,1001117,10011117$. (The $nth$ term has $n$ ones after the two zeroes.) I conjecture that there are no prime numbers in the sequence. I used wolfram to find the first few factorisations: $10017=3^3 \cdot 7 \cdot 53$ $100117=53\cdot 1889$ $1001117=13 \cdot 53\cdot1453$ and so on. I've noticed the early terms all have a factor of $53$, so the problem can be restated as showing that all numbers of this form have a factor of $53$. However, I wouldn't know how to prove a statement like this. Nor am I sure that all of the terms do have a factor of $53$. I began by writing the $nth$ term of the sequence as $a_n=10^{n+3}+10^n+10^{n-1}+10^{n-2}+10^{n-3}+\cdots+10^3+10^2+10^1+7$ but cannot continue the proof. It is as simple as $$a_{n+1}=10a_n-53$$ • Can you elaborate a little further? – zz20s Mar 23 '16 at 13:05 • $1001117+53=1001170$, so if $100117$ is a multiple of $53$, $1001117$ must be too. – Empy2 Mar 23 '16 at 13:06 • Just to clarify: $a_{n+1}=10(a_n-7+1)+7=10a_n-53$. – lhf Mar 23 '16 at 13:55 Use induction in order to complete the (excellent) hint by @Michael. First, show that this is true for $n=1$: $a_1=53\cdot189$ Second, assume that this is true for $n$: $a_n=53k$ Third, prove that this is true for $n+1$: $a_{n+1}=$ $10\cdot\color\red{a_n}-53=$ $10\cdot\color\red{53k}-53=$ $530k-53=$ $53(10k-1)$ Please note that the assumption is used only in the part marked red. The sequence is given by $$a_n = 10^{n+3}+10\cdot \frac{10^n-1}{9}+7$$ Then $$9a_n = 9\cdot10^{n+3}+10\cdot (10^n-1)+63 = 9010\cdot 10^n+53 = 53\cdot(170 \cdot 10^n+1)$$ Therefore, $53$ divides $9a_n$. Since $53$ does not divide $9$, we have that $53$ divides $a_n$, by Euclid's lemma. (We don't even need to use that $53$ is prime, just that $9$ and $53$ are coprime.) • Can you explain the last line? – zz20s Mar 23 '16 at 13:55 • Thank you! Your answer makes a lot of sense. Can you explain why you decided to multiply by $9$? – zz20s Mar 23 '16 at 14:11 • @zz20s, to clear the denominators. – lhf Mar 23 '16 at 14:21 • When you have a number whose base-10 representation has long strings of repeated digits, multiplying by 9 tends to make them turn into long strings of 0s (or sometimes 9s for the obvious reason). Here are two ways to see why. (1) 1111111=9999999/9 = (10000000-1)/9, etc. (2) 9 = 10-1, so multiplying by 9 means shifting one place left and subtracting the original number, which makes those long strings of digits cancel out. – Gareth McCaughan Mar 23 '16 at 14:22 • Bottom line: $1001\cdots 17= 53 \cdot 18\cdots 89$. – lhf Mar 23 '16 at 22:34 Another way to find the inductive relationship already cited, from a character manipulation point of view: Consider any number in the sequence, $a_n$. To create the next number, you must: 1. Subtract $17$, leaving a number terminating in two zeroes; 2. Divide by $10$, dropping one of the terminal zeroes; 3. Add $1$, changing the remaining terminal zero to a $1$; 4. Multiply by $100$, sticking a terminal double zero back on; 5. Add $17$, converting the terminal double zero back to $17$ Expressing this procedure algebraically, and simplifying: $$a_{n+1}=\left (\frac{a_n-17}{10}+1 \right ) \times 100+17=10a_n-53$$
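A short computational check of the recurrence and the factor of 53 (added for illustration; the cofactor pattern is the one lhf pointed out in the comments):

```python
# a_n = 100 1...1 7 with n ones; check a_{n+1} = 10*a_n - 53 and 53 | a_n.
def a(n):
    return int("100" + "1"*n + "7")

for n in range(1, 40):
    assert a(n + 1) == 10*a(n) - 53
    assert a(n) % 53 == 0
    assert a(n) // 53 == int("18" + "8"*(n-1) + "9")   # 1001...17 = 53 * 188...89
print("a_n = 53 * 188...89 for n = 1..39")
```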
2019-07-21T02:28:11
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1710168/prove-there-are-no-prime-numbers-in-the-sequence-a-n-10017-100117-1001117-10011", "openwebmath_score": 0.7401050925254822, "openwebmath_perplexity": 317.69015171432295, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9744347838494567, "lm_q2_score": 0.8670357701094303, "lm_q1q2_score": 0.8448698132363299 }
https://help.scilab.org/docs/6.1.0/pt_BR/plotimplicit.html
Scilab Home page | Wiki | Bug tracker | Forge | Mailing list archives | ATOMS | File exchange Change language to: English - Français - 日本語 - Русский Ajuda do Scilab >> Biblioteca de Gráficos > 2d_plot > plotimplicit # plotimplicit Plots the (x,y) lines solving an implicit equation or Function(x,y)=0 ### Syntax plotimplicit(fun) plotimplicit(fun, x_grid) plotimplicit(fun, x_grid, y_grid) plotimplicit(fun, x_grid, y_grid, plotOptions) ### Arguments fun It may be one of the following: • A single Scilab-executable string expression of literal scalar variables "x" and "y" representing two scalar real numbers. Examples: "x^3 + 3*y^2 = 1/(2+x*y)", "(x-y)*(sin(x)-sin(y))" (implicitly = 0). • The identifier of an existing function of two variables x and y. Example: besselj (not "besselj"). • A list, gathering a Scilab or built-in function identifier, followed by the series of its parameters. Example: After function r = test(x,y,a), r = x.*(y-a), endfunction, fun can be list(test, 3.5) to consider and compute test(x, y, 3.5). x_grid, y_grid x_grid and y_grid define the cartesian grid of nodes where fun(x,y) must be computed. By default, x_grid = [-1,1] and y_grid = x_grid are used. To use default values, just specify nothing. Example skipping y_grid: plotimplicit(fun, x_grid, , plotOptions). Explicit x_grid and y_grid values can be specified as follow: • A vector of 2 real numbers = bounds of the x or y domain. Example: [-2, 3.5]. Then the given interval is sampled with 201 points. • A vector of more than 2 real numbers = values where the function is computed. Example: -1:0.1:2. • The colon :. Then the considered interval is given by the data bounds of the current or default axes. This allows to overplot solutions of multiple equations on a shared (x,y) domain, with as many call to plotimplicit(..) as required. The bounds of the 1st plot drawn by plotimplicit(..) are set according to the bounds of the solutions of fun. Most often they are (much) narrower than x_grid and y_grid bounds. plotOptions List of plot() line-styling options used when plotting the solutions curves. ### Description plotimplicit(fun, x_grid, y_grid) evaluates fun on the nodes (x_grid, y_grid), and then draws the (x,y) contours solving the equation fun or such that fun(x,y) = 0. When no root curve exists on the considered grid, plotimplicit yields a warning and plots an empty axes. plotimplicit(..) can be used in a subplot. plotimplicit(..) can be called several times for the same axes, to overplot the solutions of several implicit equations (and show their possible intersections). Before returning, plotimplicit bundles all plotted curves into a graphical compound addressable as gce().children. If no solution exists, gce() is set to gca(). 
### Examples With the literal expression of the cartesian equation to plot: // Draw a circle of radius 1 according to its cartesian equation: plotimplicit "x^2 + y^2 = 1" xgrid(color("grey"),1,7) isoview With the identifier of the function whose root lines must be plotted: clf // 1) With a function in Scilab language (macro) function z=F(x, y) z = x.*(x.^2 + y.^2) - 5*(x.^2 - y.^2); endfunction // Draw the curve in the [-3 6] x [-5 5] range subplot(1,2,1) plotimplicit(F, -3:0.1:6, -5:0.1:5) title("$\text{macro: }x.(x^2 + y^2) - 5(x^2 - y^2) = 0$", "fontsize",4) xgrid(color("grey"),1,7) // 2) With a native Scilab builtin subplot(1,2,2) plotimplicit(besselj, -15:0.1:15, 0.1:0.1:19.9) title("$\text{built-in: } besselj(x,y) = 0$", "fontsize",4) xgrid(color("grey"),1,7) Using the default x_grid, a plotting option, and some post-processing: equation = "3*x^2*exp(x) - x*y^2 + exp(y)/(y^2 + 1) = 1" plotimplicit(equation, , -10:0.1:10, "r--") // Increase the contours thickness afterwards: gce().children.thickness = 2; // Setting titles and grids title("$3x^2 e^x - x y^2 + {{e^y}\over{(y^2 + 1)}} - 1 = 0$", "fontsize",4) xgrid(color("grey"),1,7) Overplotting: clf plotimplicit("x*sin(x) = y^2*cos(y)", [-2,2]) t1 = gca().title.text; c1 = gce().children(1); title("") plotimplicit("y*sin(y) = x^2*cos(x)", [-2,2], ,"r") t2 = gca().title.text; c2 = gce().children(1); title("$plotimplicit()$") legend([c1 c2],[t1 t2]); gce().font_size = 3; xgrid(color("grey"),1,7) • fsolve — find a zero of a system of n nonlinear functions • contour2d — curvas de nível em uma superfície 3d • contour2di — Computa curvas de nível em um esboço 2d • contour2dm — compute level curves of a surface defined with a mesh • LineSpec — Customização rápida de linhas que aparecem em um esboço • GlobalProperty — Customização de aparência dos objetos (curvas, superfícies...) num comando plot ou surf. • plot — Esboço 2d ### History Version Description 6.1.0 Function introduced. Report an issue << plot2d4 2d_plot polarplot >> Scilab EnterprisesCopyright (c) 2011-2017 (Scilab Enterprises)Copyright (c) 1989-2012 (INRIA)Copyright (c) 1989-2007 (ENPC)with contributors Last updated:Tue Feb 25 08:52:31 CET 2020
https://undergroundmathematics.org/combining-functions/r5362/solution
Review question: Can we draw the graph of $\left| x + [x] \right|$? (Ref: R5362)

Solution

A function $f$ is defined on $\mathbb{R}$ by $\begin{equation*} f \colon x \to \left| x + [x] \right| \end{equation*}$ where $[x]$ indicates the greatest integer less than or equal to $x$, e.g. $[3] = 3$, $[2.4] = 2$, $[-3.6] = -4$.

Sketch the graph of the function for $-3 \le x \le 3$. What is the range of $f$? Is the mapping one-one?

The function $[x]$ defined here is called the floor function. Note that the value of our function $f(x)$ will change abruptly at each integer value of $x$. Let’s look at the behaviour of $f$ on intervals like $[0,1)$, $[1,2)$, etc.

When $0 \le x < 1$, we have that $[x] = 0$ and so $\begin{equation*} f(x) = \left| x + [x] \right| = |x| = x. \end{equation*}$

When $1 \le x < 2$, we have $[x] = 1$ so $\begin{equation*} f(x) = \left| x + [x] \right| = |x+1| = x + 1 \end{equation*}$ and similarly for all other intervals of the form $[n,n+1)$ where $n$ is a positive integer, $\begin{equation*} f(x) = x + n . \end{equation*}$

On the other hand, when $-1 \le x < 0$, we have $[x] = -1$ and so $\begin{equation*} f(x) = \left| x + [x] \right| = |x-1| = -x + 1. \end{equation*}$ Similarly, for all intervals of the form $[n,n+1)$ with $n < 0$, $\begin{equation*} f(x) = -x - n . \end{equation*}$

These considerations lead to the following graph. As can be seen from the graph, the range of $f$ is $f(x)\geq 0, f(x)\neq 1,3,5,\ldots.$ Notice that $f(-1)=f(1)=2$ and similarly for other non-zero integer values of $x$. Hence the mapping is not one-to-one.

The function $g$ is defined by $g \colon x \to \left| x + [x] \right|$, $x \in \mathbb{R}_+$, $x \notin \mathbb{Z}_+$. Find the rule and domain of the inverse function $g^{-1}$.

The notation $x \in \mathbb{R}_+$, $x \notin \mathbb{Z}_+$ means that the domain of $g$ is the real numbers greater than zero but excluding all the positive integers. Where the question asks for the ‘rule’ it means an algebraic definition of the function. Remember that an inverse function does not exist for a function such as $f$ that is not one-to-one. The restricted domain of $g$ means that it is one-to-one and does have an inverse. Notice also that with the restricted domain, $x+[x]$ is always positive, so the definition of $g$ can (more conveniently) be written without the modulus sign.

By definition, the domain of $g^{-1}$ is the same as the range of $g$. From the graph, we can see that this is $0<x<1\text{ and }2<x<3\text{ and }\ldots\text{ and }2n<x<2n+1\ldots$ where $n$ is a positive integer. This can be written as $(0,1) \cup (2,3) \cup (4,5) \ldots \cup (2n,2n+1) \cup \ldots .$

To find the rule that defines $g^{-1}$, let’s first think about what $g$ is doing. We can consider the number $x$ as being made up of two parts – the whole number part $[x]$ and its fractional part which we call $a$ (such that $0<a<1$). Then $x=[x]+a$ and $g(x)=x+[x] = 2[x]+a.$ In other words, the function $g$ doubles the whole number part and keeps the fractional part the same.

We can think about $g$ as a function machine. We have defined the operation ‘split’ to mean turn $x$ into a whole number and a fractional part between $0$ and $1$. The operation ‘join’ is its inverse which is actually just addition. Now we can draw the inverse function machine.
So now we see that $g^{-1}(x)=\frac{1}{2}[x]+a = \frac{1}{2}[x]+x-[x] = x-\frac{1}{2}[x].$ Alternatively, we are to find the function $g^{-1}$ such that $g^{-1}(x + [x]) = x$ whenever $x$ is in the domain of $g$. So we want to write $x$ as a function of $(x+[x])$. We note that $[x + [x]] = 2[x]$, so we can write $\begin{equation*} x = x + [x] - [x] = (x + [x]) - \frac{[x + [x]]}{2}. \end{equation*}$ Thus, the required rule is $g^{-1}(x) = x - \dfrac{[x]}{2}$, as before. The graph of $y = g(x)$ is in blue, the line $y = x$ is a dashed line, and $y=g^{-1}(x)$ is in red. Suppose someone objects that $g^{-1}(x) = x - \dfrac{[x]}{2}$ is not the inverse of $g$ as, for example, $\begin{equation*} g(g^{-1}(5.75)) = g\left( 5.75 - \frac{[5.75]}{2} \right) = g(5.75 - 2.5) = g(3.25) = 3.25 + [3.25] = 6.25. \end{equation*}$ We can reply by pointing out that $5.75$ does not lie in the domain of $g^{-1}$, even though the rule that defines $g^{-1}$ can be applied, in principle, to any real number.
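As a quick check of the rule $g^{-1}(x)=x-\tfrac{1}{2}[x]$, here is a short Python sketch that is not part of the original page (the function names `g` and `g_inv` are mine): it samples non-integer points $x>0$, applies $g$ and then the proposed inverse, and confirms the round trip returns the original input.

```python
import math

def g(x):
    # defined for x > 0 with x not a positive integer
    return x + math.floor(x)

def g_inv(y):
    return y - math.floor(y) / 2

# sample non-integer points x > 0 and check g_inv(g(x)) == x
for k in range(0, 50):
    x = k + 0.25          # positive and never an integer
    y = g(x)
    assert abs(g_inv(y) - x) < 1e-12, (x, y, g_inv(y))

print("round trip g_inv(g(x)) == x verified on sample points")
```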
http://mathhelpforum.com/calculus/128188-evaluating-infinite-sums.html
# Math Help - Evaluating Infinite Sums

1. ## Evaluating Infinite Sums

   Evaluate the following infinite sums. (In most cases they are f(a) where a is some obvious number and f(x) given by some power series. To evaluate the various power series, manipulate them until some well-known power series emerge.)

   $$\sum_{n=0}^{\infty} \frac{n}{2^n}$$

   Appreciate any help.

2. $$f\left( x \right) = \sum\limits_{n = 0}^\infty {x^n } = \frac{1}{1 - x}\quad \forall \left| x \right| < 1 \;\Rightarrow\; f'\left( x \right) = \sum\limits_{n = 0}^\infty {nx^{n - 1} } = \frac{1}{\left( 1 - x \right)^2 }$$

   Now put $x=\dfrac{1}{2}$ and rearrange.

3. Originally Posted by Nacho:
   $$f\left( x \right) = \sum\limits_{n = 0}^\infty {x^n } = \frac{1}{1 - x}\quad \forall \left| x \right| < 1 \;\Rightarrow\; f'\left( x \right) = \sum\limits_{{\color{red}{n = 1}}}^{\infty} {nx^{n - 1} } = \frac{1}{\left( 1 - x \right)^2 }$$
   Correction.

4. Hello, blorpinbloo!

   Evaluate: $\sum^{\infty}_{n=0} \dfrac{n}{2^n}$

   $\begin{array}{cccccc} \text{We have:} & S &=& \dfrac{1}{2} + \dfrac{2}{2^2} + \dfrac{3}{2^3} + \dfrac{4}{2^4} + \dfrac{5}{2^5} + \hdots \\ \text{Multiply by }\frac{1}{2}: & \dfrac{1}{2}S &=& \quad\;\; \dfrac{1}{2^2} + \dfrac{2}{2^3} + \dfrac{3}{2^4} + \dfrac{4}{2^5} + \hdots \end{array}$

   Subtract: $\frac{1}{2}S \;=\;\underbrace{\frac{1}{2} + \frac{1}{2^2} + \frac{1}{2^3} + \frac{1}{2^4} + \frac{1}{2^5} + \hdots}_{\text{geometric series}}$ [1]

   The geometric series has the sum $\frac{\frac{1}{2}}{1-\frac{1}{2}} \;=\;1$.

   Hence, [1] becomes $\frac{1}{2}S \:=\:1$.

   Therefore, $S \;=\;2$.

5. Originally Posted by General: Correction.

   Thanks, but I think that is not important, because the first term of the sum is zero, so it is the same whether the sum begins from zero or one.
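For anyone who wants to confirm the closed form independently, here is a small Python sketch using sympy; it is not part of the original thread, and the symbolic evaluation is only a cross-check of the value $2$ derived above.

```python
import sympy as sp

n = sp.symbols('n', integer=True, nonnegative=True)
s = sp.summation(n / 2**n, (n, 0, sp.oo))
print(s)            # 2, the closed form found in the thread

# the partial sums approach the same value
partial = sum(k / 2**k for k in range(1, 60))
print(partial)      # approximately 2.0
```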
https://mathematica.stackexchange.com/questions/151023/interpolate-on-log-scale
# Interpolate on log scale I have data in Mathematica that comes from y-log scale Data = {{5.0, 23.87548081003781}, {6.94392523364486, 0.511639358262082}, {8.925233644859812, 0.23397526329810545}, {10.962616822429906, 0.16190746961888203}, {12.906542056074766, 0.17751810380557045}, {14.925233644859812, 0.25653445869951874}}; These points should be connected by straight line on log scale like this ListLogPlot[Data, Joined -> True] I want to find interpolated function with straight lines on log plot(just like above code), however naive result gives me: LogPlot[Interpolation[Data, InterpolationOrder -> 1][x], {x, 5, 14}] which does not have straight lines on logPlot, it has straight lines on Plot. How can I interpolate data on log plot? • what if you use InterpolationOrder with ListLogPlot like ListLogPlot[Data, Joined -> True, InterpolationOrder -> 3] – Sumit Jul 9 '17 at 9:11 • Dear Sumit, thank you for the reply, this gives me a smoother plot, but I want an interpolated function which would look like ListLogPlot[Data, Joined -> True] – Wint Jul 9 '17 at 9:28 • It'll be due to the difference between interpolating in log space or linear space. This'll get you what you want: if = Exp@*Interpolation[{#1, Log[#2]} & @@@ Data, InterpolationOrder -> 1] then LogPlot[if[x], {x, 5, 14}] – Quantum_Oli Jul 9 '17 at 9:47 • Quantum_Oli thank you very much!! Please post as an answer and I mark it as solved. :) – Wint Jul 9 '17 at 10:02 Often one interpolates to avoid transcendental functions, but the OP's objective cannot be achieved with polynomial interpolation. So I assume something like the following, which reproduces ListLogPlot[Data, Joined -> True], is desired: ClearAll[logIF];
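The same idea (interpolate $\log y$ piecewise-linearly in $x$, then exponentiate) can be sketched outside Mathematica as well. Below is a rough NumPy equivalent of my own (the name `log_interp` is mine), using `numpy.interp` for the linear step; it only illustrates the log-space trick and is not meant to reproduce Mathematica's `InterpolatingFunction` object.

```python
import numpy as np

data = np.array([
    [5.0, 23.87548081003781],
    [6.94392523364486, 0.511639358262082],
    [8.925233644859812, 0.23397526329810545],
    [10.962616822429906, 0.16190746961888203],
    [12.906542056074766, 0.17751810380557045],
    [14.925233644859812, 0.25653445869951874],
])
xs, ys = data[:, 0], data[:, 1]

def log_interp(x):
    # linear interpolation in log-y space, then map back with exp
    return np.exp(np.interp(x, xs, np.log(ys)))

print(log_interp(6.0))  # lies on the straight segment seen in the log plot
```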
https://mathematica.stackexchange.com/questions/16574/first-positive-root
# First positive root Simple question but problem with NSolve. I need help how to extract first positive root? For example: eq = -70.5 + 450.33 x^2 - 25 x^4; NSolve[ eq == 0, x] If I have an equation eq that I am not sure of polynomial order and I need to define all positive roots x[1], ..., x[2]. The most straightforward way would be FindRoot[ eq, {x, 0}] but since this specific polynomial eq has a singular Jacobian at x == 0 (evaluate e.g. Reduce[ D[ eq, x] == 0, x]) one would rather use FindRoot[ eq, {x, x0}] for small x0 > 0. The argument x0 depends on a case by case basis, but for the problem at hand an appropriate value might be 0 < x0 <= 2.4, e.g. : FindRoot[ eq, {x, 0.5}] {x -> 0.397412} More general considerations must include huge order polynomials, which might have many real roots. So finding every root may be very inefficient. In such cases there is a handy function RootInterval which can be much more efficient than any NSolve approach and especially handy when using it together with FindRoot. First @ RootIntervals[eq] {{-(273/64), -(263/64)}, {-(33/64), -(23/64)}, {47/128, 57/128}, {263/64, 273/64}} It shows where one can find the first positive root , i.e. in this interval : {47/128, 57/128}. To choose only intervals where positive roots may be found we use e.g. : intervals = First @ RootIntervals[eq] ~ DeleteCases ~ {_, _?Negative} {{47/128, 57/128}, {263/64, 273/64}} Then we use it for finding roots numerically with Brent's method -- the most powerful algorithm available for FindRoot : roots = FindRoot[ eq, {x, #[[1]], #[[2]]}, Method -> "Brent"][[All, 2]]& /@ intervals // Flatten {0.397412, 4.22555} The last step is standard : Min @ roots 0.397412 The first root might be negative therefore we could use e.g. : RankedMin[roots, #]& /@ {1, 2} Now for the sake of completeness we demonstrate small intervals and roots on the plot of the polynomial over the positive reals : Plot[ eq, {x, -4.5, 4.5}, PlotStyle -> Thick, AspectRatio -> 1/3, Epilog -> { Darker @ Red, Thickness[0.005], Line[{{#1, 0}, {#2, 0}}]& @@@ intervals, Green, PointSize[0.007], Point[{#, 0}] & /@ roots } ] the same plot in pieces : GraphicsRow[ Plot[ eq, {x, #1 - 0.3, #2 + 0.3}, PlotStyle -> Thickness[0.01], Epilog -> { Red, Thickness[0.011], Line[{{#1, 0}, {#2, 0}}] & @@@ intervals, Green, PointSize[0.015], Point[{#, 0}] & /@ roots}] & @@@ intervals ] – Pipe Dec 19, 2012 at 15:20 • @Artes: How can you generalize your Code given above to a system of nonlinear equations? I am struggling to find real solutions to the NL system. Can you give me an example, for example, for a system of 5 nonlinear (NL) equations? and find the equilibrium limited to only real values. Apr 19, 2019 at 10:25 • @TugrulTemel eq==0 is a nonlinear equation, to solve a system you should use something like FindRoot[{eq1==0,...,eq5==0},{{x1,y0},...,{x5,y5}}] possibly with an appropriate method. For a specific problem you should better ask another question. Sometimes you can get solutions with Solve or NSolve with domain specification, eg. NSolve[{eq1==0,...eq5==0},{x1,...,x5},Reals]. Apr 19, 2019 at 15:32 • @Artes: Thank you very much. I cannot figure out why NSolve is very slow for a system of 30 equations (mixed: linear and nonlinear). Regards. Apr 19, 2019 at 18:37 • @TugrulTemel It is not surprising, that solving systems of 30 equations is very slow. In such a case more reasonable is to exploit FindRoot. Apr 19, 2019 at 19:23 eq = -70.5 + 450.33 x^2 - 25 x^4; roots = x /. 
NSolve[eq == 0, x] Min@Select[roots, Positive] 0.397412 • yes, but how to extract second, third ... separately? – Pipe Dec 18, 2012 at 21:35 • @Pipe you can use Sort instead of Min to get the ordered positive roots – ssch Dec 18, 2012 at 21:35 • @Nasser it is working but how to extract just second for example – Pipe Dec 18, 2012 at 21:39 • @Nasser Thank you very much, it is clear now – Pipe Dec 18, 2012 at 23:14 • @Pipe: Had you asked for how to extract the 2nd or 3rd, etc., I would have indeed have used Sort instead of Min. But you asked just for the 1st! Dec 20, 2012 at 15:54 Well, if you wanted to give NSolve your conditions directly, you can do NSolve[-70.5 + 450.33 x^2 - 25 x^4 == 0 && x > 0, x] And then if you want, you can pick the Min solution.
https://mathematica.stackexchange.com/questions/240360/how-to-correctly-enumerate-all-the-schemes-of-this-cube-coloring-problem/240404
# How to correctly enumerate all the schemes of this cube coloring problem? This problem is the fifth question of 1996 Chinese High School Mathematics League or Chinese Mathematical Olympiad in Senior: Choose several colors from the given six different colors to dye six faces of a cube, and dye each two faces with common edges into different colors. How many different dyeing schemes are there? Note: if we dye two identical cubes, we can make the six corresponding faces of the two cubes dyed the same by proper flipping, then we say that the two cubes have the same dyeing scheme. Show[Graphics3D[ Rotate[Cuboid[{0, 0, 0}, {1, 1, 1}], 0 Degree, {0, 0, 1}], Axes -> True], i = 1; Graphics3D[ Table[Text[Style[ToString[i++], 15], {x, y, z}], {x, 0, 1}, {y, 0, 1}, {z, 0, 1}]]] g0 = Graph[(Sort /@ {{1, {2, 3, 5}}, {2, {1, 4, 6}}, {3, {1, 4, 7}}, {4, {2, 3, 8}}, {5, {1, 6, 7}}, {6, {2, 5, 8}}, {7, {3, 5, 8}}, {8, {4, 6, 7}}}]]) // DeleteDuplicates, ChromaticPolynomial[g0, 6] poly = CycleIndexPolynomial[DihedralGroup[8], Array[Subscript[a, ##] &, 6]] g1 = Graph[(Sort /@ {{1, {2, 3, 4, 5}}, {2, {1, 3, 5, 6}}, {3, {1, 4, 2, 6}}, {4, {1, 3, 5, 6}}, {5, {1, 2, 4, 6}}, {6, {2, 3, 4, 5}}}]]) // DeleteDuplicates, ChromaticPolynomial[g1, 6] The above method may not consider the restriction that the color of adjacent faces can not be the same, and does not eliminate the same dyeing situation after rotation, so there are many unreasonable schemes. f = Table[{i, Delete[Range[6], {{i}, {7 - i}}]}, {i, 6}];(*A face and its adjacent 4 faces*) g = Table[{i, 7 - i}, {i, 3}]; DeleteDuplicatesBy[ Select[MapThread[Rule[#1, #2] &, {Range[6], #}] & /@ Tuples[{Black, White, Red, Green, Yellow, Cyan}, {6}], Cases[f /. #, {x_, {___, x_, ___}}] == {} &(*Detect whether a face has the same color as its four adjacent faces*)], Sort[Sort /@ (g /. #)] &(*Remove duplication*)] // Length The results of the above codes are 198030 , 4080 and 215, but the reference answer is 230 (Maybe I didn't effectively exclude the same dyeing scheme after rotation). How to correctly list all the solutions to this problem? f = Table[{i, Delete[Range[6], {{i}, {7 - i}}]}, {i, 6}]; g = Table[{i, 7 - i}, {i, 3}]; sol = Values /@ DeleteDuplicatesBy[ Select[MapThread[Rule[#1, #2] &, {Range[6], #}] & /@ Tuples[{Black, White, Red, Green, Yellow, Cyan}, {6}], Cases[f /. #, {x_, {___, x_, ___}}] == {} &], Sort[Sort /@ (g /. #)] &] ; newsol = Map[#[[{1, 3, 2, 4, 5, 6}]] &, sol];(*Adjust the display order of faces*) newsol // Length (Graphics3D[{Specularity[0, 10], MeshPrimitives[Cuboid[{0, 0, 0}, {1, 1, 1}], 2]}], 0 Degree, {0, 0, 1}]}, Lighting -> ({"Directional", White, #} & /@ Tuples[{-1, 1}, 3])(*Diffuse light sources are arranged at four corners Or use a white scattering light source: Lighting -> {{"Ambient", White}}*)] & \ /@ newsol[[1 ;; 9]]) // Multicolumn octahedralgroup=MatrixForm /@ FiniteGroupData["Octahedral", "MatrixRepresentation"] Det /@ FiniteGroupData["Octahedral", "MatrixRepresentation"] Acknowledgements: Thank you very much for the detailed answers provided by thorimur. I hope community members can provide more and more ingenious methods (additional reward). • Are you working through all of the combinatorial exercises in a textbook? – JimB Feb 21 at 18:28 • @JimB Yes, but it's an exercise problem that I can't solve. It's different from the conventional problem without different color restrictions. I don't know how to solve this problem effectively at present. 
– A little mouse on the pampas Feb 21 at 20:16 • I got the third answer—215—so I'm wondering if there's some peculiarity in the question statement or the notion of proper flipping that I'm missing. Unfortunately, the webpage you linked isn't loading for me. Do they explain how they got it? – thorimur Feb 21 at 23:17 • Thinking about your code, Sort[Sort /@ (g /. #)] & is a really cool invariant. What was your reasoning behind it? I think it could be interesting to consider it in its general mathematical context. – thorimur Feb 22 at 0:13 • Ok, I figured it out! Either the competition is incorrect, or there's been a translation error. Assuming the latter and guessing at the "right" translation, I modified my code and recovered 230, and updated my answer to explain it. – thorimur Feb 22 at 3:54 Be warned: this is a long answer, because I'm trying to be sufficiently general to treat basic graph colorings in Mathematica and maximally explanatory for anyone reading. tl;dr: Define graph colorings; create functions that identify generate colorings; then quotient the set of colorings by the graph automorphisms, by creating literal equivalence classes of colorings. Count the number of resulting equivalence classes. Get 215 instead of 230; find that the reference answer has double-counted the number of 6-colorings by accident—or that the question is actually slightly different than as translated, and recover 230 in that case! (Note: code presented in full near the bottom.) ## Intro Encoding it as a graph and looking at colorings is a good strategy! However, we need to take into account two things: 1. ChromaticPolynomial[g, k] gives colorings using exactly k colors, whereas you need to choose up to k = 6 colors 2. ChromaticPolynomial[g, k] considers graphs to be labeled, and so, for example, there are, according to ChromaticPolynomial, 2 colorings of the graph 1 •-• 2. We could do this by "standard" combinatorial methods, like counting how many possibilities there are for the placement of successive colors, but I want to try to stick with your graph strategy. The second graph g1, encoding faces as graph vertices and edges as connections, is the relevant one. Unfortunately, Mathematica doesn't have built-in graph coloring utilities beyond ChromaticPolynomial. So, we'll need to build our own. ## Building a solution ### Defining and checking graph colorings Let's choose a form to represent graph colorings with. A(n unrestricted) graph coloring is an assignment from each vertex in a graph to a color. So let's encode a coloring as an association on graph vertices, e.g.: <| v1 -> color1, v2 -> color2, ..., vn -> colorn |> This is not the most efficient way to do this. A more efficient way would be to simply use a list of colors, with the color in the nth position indicating the color of the nth vertex in VertexList[g]. But that's okay. So, let's write a function that tests if a given coloring is even a well-formed assignment of colors to a given graph's vertex set, not even requiring adjacent vertices are differently colored yet: UnrestrictedColoringQ[g_, coloring_Association] := ContainsExactly[VertexList[g], Keys[coloring]] Ok. Now let's test if it's an actual graph coloring, i.e. that no two adjacent vertices have the same color. We'll do this by mapping the association over the edges, which will replace each vertex with its color (here c is our function/association)—we do this by mapping over the edge list at the 2nd level. 
For example, written out stylistically instead of with \[UndirectedEdge], just for showing the result: In[1]:= Map[c, {1 •-• 2, 2 •-• 3}, {2}] Out[1]:= {c[1] •-• c[2], c[2] •-• c[3]} The question is then whether we wind up with a color connected to a color n the output. If so, then two adjacent vertices have been assigned the same color by c. We want to check that this is avoided. That is, we want to check that that self-loops, loops of the kind a •-• a, do not appear. We'll do this with FreeQ[result, v_ \[UndirectedEdge] v_]. (Note: This assumes undirected edges; we could include directed edges by providing a couple alternatives to the pattern via |.) So, putting this all together, ColoringQ[g_, c_Association] := FreeQ[Map[c, EdgeList[g], {2}], v_ \[UndirectedEdge] v_, 1] /; UnrestrictedColoringQ[g, c] where the /; checks that c is at least an unrestricted coloring first. (If we were really building a package, we'd probably want to return an error message in that case instead.) Also note that the 1 in FreeQ just restricts us to testing the first level for safety. ### Generating colorings Okay, now let's build our colorings that select from a set of 6 colors. There are much better algorithms for doing this, but we're going to do it by brute force, since we only need to consider 6^6 == 46656 colorings. We can get all lists of 6 elements drawn from the 6 colors {1,2,3,4,5,6} via Tuples[{1,2,3,4,5,6}, 6], or in general, Tuples[Table[i, {i, Length @ VertexList[g]}], Length @ VertexList[g]]. We then want to make these into unrestricted colorings, i.e. associations; we can do this with AssociationThread, e.g. AssociationThread[VertexList[g], {4,6,2,2,1,2}] produces the association we want it to. So, AllUnrestrictedColorings[g_] := With[{vs = VertexList[g]}, AssociationThread[vs, #] & /@ Tuples[Table[i, {i, Length[vs]}], Length[vs]]] We can then select the ones that are colorings. This considers isomorphic colorings inequivalent if the color labels and vertex labels are different, so we'll reflect that in the name: AllLabeledColorings[g_] := Select[AllUnrestrictedColorings[g], ColoringQ[g, #] & ] ### Modding out by vertex relabeling Now comes the interesting part. We want to consider the action under reflections and rotations of the cube. Mathematically, we're modding out by the action of that symmetry group. Usually this is done by creating equivalence classes, and while there are more efficient ways to do it computationally, let's reflect the typical mathematical procedure. Now, it happens that reflections and rotations of the cube correspond exactly to graph automorphisms of g1. Mathematica has a function to produce the automorphism group of a graph, namely GraphAutomorphismGroup. We can get the list of group elements with GroupElements, and then apply these to a list of vertices by Permute[list, groupelement] or for a single element by PermutationReplace. We'll map over the keys in each association in this implementation; if we were taking colorings to be lists instead of associations, the first strategy might be relevant. Note that this does not account for isomorphic colorings up to relabeling of colors; for example, on the graph 1 •-• 2 •-• 3, if our colors are R, G, B, then this considers R-G-R to be inequivalent to R-B-R and B-R-B (etc.) This is what you want, though. So, if AutG is the list of group elements, a single equivalence class for a coloring c is Function[h, KeyMap[Function[v, PermutationReplace[v, h]], c]] /@ AutG Note: This assumes that our vertices are integers. 
In general, we'd need to use VertexIndex to turn it into an integer, permute, then extract the right vertex from VertexList. (Or permute the VertexList directly via Permute.) Now, for implementation reasons (namely that <| a -> x, b -> y |> is not equal to <| b-> y, a-> x |>) we'll want to sort the resulting associations by the keys. So, instead, we want, Function[h, KeySort @ KeyMap[Function[v, PermutationReplace[v, h]], c]] /@ AutG We're going to package this into a function with parameter c then map over the list of colorings. Once we do, we want to delete equivalent, uh, equivalence classes (i.e. equivalence classes with the same elements) by DeleteDuplicates with function ContainsExactly. Putting this all together, for a list of colorings clist, we can write AutMod[g_, clist : {___Association}] := With[{AutG = GroupElements[GraphAutomorphismGroup[g]]}, DeleteDuplicates[ Function[c, Function[h, KeySort @ KeyMap[Function[v, PermutationReplace[v, h]], c]] /@ AutG ] /@ clist, ContainsExactly] ] Now AutMod[g1, AllLabeledColorings[g1]] should give us all inequivalent (in the context of this problem) colorings. The length of this should be the number of dyeing schemes. ## The result ### Our result Now. This works. It takes a while to run. Your computation, which was posted after I began writing this, is much more efficient, but this reflects the underlying math more readably in my opinion, and is therefore easier to trust (for me, at least); and it's generalizable (at least to other small graphs!). However, after consideration, I believe your approach, which appears to use neighborhoods, might be generalizable too, and is certainly nicer computationally. If we wanted to make the above more efficient while using the same strategies, e.g. by encoding colorings differently, I think we could, and we might end up with something similar to what you have. The answer this produces, though, is 215. The given answer is 230. I'm pretty confident in the above determination of 215 because of the underlying mathematics, and from testing some smaller graphs. ### Why the competition is wrong Further, let's examine the reference answer. They count 30 configurations using all 6 colors, arguing roughly as follows: Fix a certain color on the top, leaving 5 options for the bottom, and $$(4-1)! = 6$$ colors for the remaining 4 sides, totaling 30 methods. However, they have double-counted the configurations for the remaining 4 sides, as they have forgotten to account for the reflection that identifies two of the 4 sides. The fact that we may fix one color on the top and have 5 choices for the bottom is correct. When considering how many options there are for the four remaining sides spoken of, we must imagine rotating the cube to fix one of the remaining 4 colors, on, say, the North face (so no choice has been made); then the choice of color for the South face is among all 3 remaining colors. The remaining two possible assignments of colors to the East and West faces are equivalent, by considering thee reflection that exchanges the East and West axis, so there is only actually 1 choice remaining. So the total number of possibilities is 5 times 3 times 1 (15), not 30. Hence, we conclude that the reference answer is in error, and 215 is the correct answer! ### Why the competition is right (and checking it) However, this whole computation might be predicated on a translation error. 
I've assuming that "proper flipping" means a flipping that is nontrivial, i.e., is actually a flipping operation (has determinant $$-1$$). But it strikes me that if "flipping" actually means something more like "orthogonal transformation" or "rotation", and "proper" means a member of the special orthogonal group, then this means the opposite—that we only allow things with determinant 1! Indeed, in that case, the competition's answer is correct. Let's verify that by generalizing our code for AutMod to allow arbitrary automorphism groups: AutMod[g_, clist : {___Association}, autg_List : Null] := With[{AutG = Replace[autg, Null :> GroupElements[GraphAutomorphismGroup[g]]]}, DeleteDuplicates[ Function[c, Function[h, KeySort @ KeyMap[Function[v, PermutationReplace[v, h]], c]] /@ AutG ] /@ clist, ContainsExactly] ] (If we were being more precise, we'd probably check if it were a subgroup of the graph automorphism group.) Then realize that the group of proper rotations may be generated by two 90 degree rotations, which here may be realized as the cycles Cycles[{{2, 3, 4, 5}}] and Cycles[{{1, 2, 6, 4}}] upon examining the specific form of g1 given. Then take H = GroupElements @ PermutationGroup[{Cycles[{{2, 3, 4, 5}}], Cycles[{{1, 2, 6, 4}}]}] and we indeed find that AutMod[g1, AllLabeledColorings[g1], H] has Length equal to 230. ## The code Here's all of the code presented in full: UnrestrictedColoringQ[g_, coloring_Association] := ContainsExactly[VertexList[g], Keys[coloring]]; ColoringQ[g_, c_Association] := FreeQ[Map[c, EdgeList[g], {2}], v_ \[UndirectedEdge] v_, 1] /; UnrestrictedColoringQ[g, c]; AllUnrestrictedColorings[g_] := With[{vs = VertexList[g]}, AssociationThread[vs, #] & /@ Tuples[Table[i, {i, Length[vs]}], Length[vs]]]; AllLabeledColorings[g_] := Select[AllUnrestrictedColorings[g], ColoringQ[g, #] & ]; AutMod[g_, clist : {___Association}, autg_List : Null] := With[{AutG = Replace[autg, Null :> GroupElements[GraphAutomorphismGroup[g]]]}, DeleteDuplicates[ Function[c, Function[h, KeySort @ KeyMap[Function[v, PermutationReplace[v, h]], c]] /@ AutG ] /@ clist, ContainsExactly] ] (* With g1 as above: *) H = GroupElements @ PermutationGroup[{Cycles[{{2, 3, 4, 5}}], Cycles[{{1, 2, 6, 4}}]}]; AutMod[g1, AllLabeledColorings[g1]] // Length AutMod[g1, AllLabeledColorings[g1], H] // Length ## Another approach There's also another way we could do this: by procedural choices in a manner paralleling the competition. Order your colors 1 through 6. Up to rotation + flipping (i.e. isometry), we can demand that the least-ranked color appearing be on the bottom. Now, up to isometry, there are 2 choices for the second-least-ranked color (which might be the same color!): opposite the least or adjacent to it. If it's adjacent, it cannot be the same color. Now take the third-least ranked color—etc. It's a big tree of case analysis. We can get Mathematica to do that too! I think this is essentially what you achieve in your third code snippet. The key here is that after we choose some particular vertices to color, the symmetry group reduces to the stabilizer of those vertices (i.e. the elements of the automorphism group that preserve it). Given a current symmetry group, our choice lies only in what orbit to place the color in, as all choices within a given orbit are the same up to that symmetry (practically by definition). When I have the chance I'll update this answer with a description of how to do this in Mathematica. • You'd better attach a complete code at the end to facilitate debugging. 
Besides, I didn't find the definition of function AutG. – A little mouse on the pampas Feb 22 at 1:04 • Ok, cool, added. Also, by the way, AutG is the list of group elements in the graph automorphism group; it's only used in AutMod, which defines it via a With statement. – thorimur Feb 22 at 3:05 • Your second piece of code (230) seems to be missing a symbol ]. – A little mouse on the pampas Feb 22 at 4:48 • ah, thank you; fixed. – thorimur Feb 22 at 5:32 • And indeed, we can check that that group and the graph automorphism group have the same elements: ContainsExactly[GroupElements@PermutationGroup[{Cycles[{{2, 3, 4, 5}}], Cycles[{{1, 2, 6, 4}}], Cycles[{{1, 6}}]}], GroupElements@GraphAutomorphismGroup[g1]] gives True. – thorimur Feb 22 at 17:52 It's not an original answer, it's just a supplement to the answer of thorimur. g1 = Graph[(Sort /@ Flatten[Map[ Thread[#[[1]] \[UndirectedEdge] #[[2]]] &, {{1, {2, 3, 4, 5}}, {2, {1, 3, 5, 6}}, {3, {1, 4, 2, 6}}, {4, {1, 3, 5, 6}}, {5, {1, 2, 4, 6}}, {6, {2, 3, 4, 5}}}]]) // DeleteDuplicates, UnrestrictedColoringQ[g_, coloring_Association] := ContainsExactly[VertexList[g], Keys[coloring]]; ColoringQ[g_, c_Association] := FreeQ[Map[c, EdgeList[g], {2}], v_ \[UndirectedEdge] v_, 1] /; UnrestrictedColoringQ[g, c]; AllUnrestrictedColorings[g_] := With[{vs = VertexList[g]}, Tuples[Table[i, {i, Length[vs]}], Length[vs]]]; AllLabeledColorings[g_] := Select[AllUnrestrictedColorings[g], ColoringQ[g, #] &]; AutMod[g_, clist : {___Association}, autg_List : Null] := With[{AutG = Replace[autg, Null :> GroupElements[GraphAutomorphismGroup[g]]]}, DeleteDuplicates[ Function[c, Function[h, KeySort@KeyMap[Function[v, PermutationReplace[v, h]], c]] /@ AutG] /@ clist, ContainsExactly]] (*With g1 as above:*) H = GroupElements@ PermutationGroup[{Cycles[{{2, 3, 4, 5}}], Cycles[{{1, 2, 6, 4}}]}]; G1 = GroupElements@ FiniteGroupData["Octahedral", "PermutationGroupRepresentation"]; G2 = GroupElements@ PermutationGroup[{Cycles[{{2, 3, 4, 5}}], Cycles[{{1, 2, 6, 4}}], Cycles[{{1, 6}}]}]; num1 = AutMod[g1, AllLabeledColorings[g1]] // Length num2 = AutMod[g1, AllLabeledColorings[g1], G1] // Length num3 = AutMod[g1, AllLabeledColorings[g1], G2] // Length num4 = AutMod[g1, AllLabeledColorings[g1], H] // Length GraphAutomorphismGroup[g1] // GroupOrder(*It is shown that graph G1 is isomorphic to its rotated and flipped graphs*) The above code takes about 800 seconds to calculate num2. And the results of the above codes are 215, 1860, 215, 230 , 48. Where num1 = num3, this conclusion is very useful. But one thing I'm confused about is that groups G1 and G2 are both groups of order 48, representing regular hexahedral groups. Why are num2 and num3 not equal? I want to know the underlying reasons for their different results. Comparison with the results of standard answers: AutG = GroupElements[GraphAutomorphismGroup[g1]];(*正六面体旋转或反射后的48个同构*) clist = AllLabeledColorings[g1];(*先找到4080个两个共棱面颜色不同的染色方案*) sol = Tally[ Function[c, Function[h, KeySort@KeyMap[Function[v, PermutationReplace[v, h]], c]] /@ AutG] /@ clist, ContainsExactly];(*This code takes about 50 seconds to run*) (*找到这4080个方案的每一个的图的48个同构;然后判断4080个48同构子集之间是否重复,去重*) sol[[All, 2]](*List of the number of schemes repeated with each feasible dyeing scheme*) CountDistinct /@ Values /@ sol[[All, 1, 1]](*Number of colors used for each scheme*) Tally[CountDistinct /@ Values /@ sol[[All, 1, 1]]] (3 20 4 90 5 90 6 15) It can be seen that there are 15 schemes using 6 colors, which is different from the result of the reference answer. 
• So, one reason num2 ≠ num3 is: despite being an isomorphic group to our graph automorphism group, FiniteGroupData["Octahedral", "PermutationGroupRepresentation"], knows nothing about how we've numbered our graph vertices. That information is particular to g1 and the groups derived from it. As such, when it tries to, say, "exchange opposite faces", it might actually be exchanging adjacent faces, because it could be using diifferent names for the vertices. Also... – thorimur Feb 24 at 0:29 • ...and more fundamentally, permutation group representations of a given group are not unique, even in properties of the permutations. For example, we know that our representation of this group, which is represented as acting on faces, includes a cycle that only exchanges two elements: Cycles[{{1,6}}] for example. But upon inspecting the GroupElements of the above, you'll find no such permutation among them. Further, you'll find this group is represented via permutations on 8 elements, whereas we use only 6. Nonetheless, as groups, they are still isomorphic in the abstract. – thorimur Feb 24 at 0:37 • @thorimur Thank you very much for your kind help. – A little mouse on the pampas Feb 24 at 1:55
https://math.stackexchange.com/questions/4288362/meaning-of-exists-in-y-in-y-exists-x-in-x-text-such-that-fx
# Meaning of "$\exists$" in "$\{y \in Y : \exists x \in X \text{ such that }f(x) = y\}$" I came across this definition of the range of a function: For a function $$f : X → Y$$, the range of $$f$$ is $$\{y \in Y : \exists x \in X \text{ such that }f(x) = y\},$$ i.e., the set of $$y$$-values such that $$y = f (x)$$ for some $$x \in X.$$ I have a little doubt regarding the use of “$$\exists x \in X$$”. Shouldn't it be “$$\forall x \in X$$” instead, since the range of a function is the set of all values that $$y$$ can acquire, which is by mapping all $$x$$'s in $$X$$ to $$Y?$$ • You might, in the future, find slightly different uses of this word, the range of $f:X\to Y$ could also be defined as all of $Y$, and the set you wrote is then the image of $f$. Oct 27 at 6:57 For a function $$f : X → Y$$, the range of $$f$$ is $$\{y \in Y : \exists x \in X \text{ such that }f(x) = y\},$$ i.e., the set of $$y$$-values such that $$y = f (x)$$ for some $$x \in X.$$ Th given set is more accurately read “the set of elements of $$Y$$ such that each one, for some element $$x$$ of $$X,$$ equals $$f(x)$$” or, more simply, “the set of elements of $$Y$$ such that each one equals some output of $$f$$”. I have a little doubt regarding the use of “$$\exists x \in X$$”. Shouldn't it be “$$\forall x \in X$$” instead On the other hand, your suggested set $$\{y \in Y : \forall x \in X,\;\, f(x) = y\}$$ is read “the set of elements of $$Y$$ such that each one equals every output of $$f$$”. Here's a different example providing a similar contrast: 1. $$A=\{n\in\mathbb Z: \exists a\in\mathbb Z\;\,n=2a\}\\ =\text{the set of integers such that each one is double }\textit{some }\text{ integer}\\ =\{\ldots,-6,-4,-2,0,2,4,6,8,\ldots\}\\ =2\mathbb Z.$$ Set $$A$$ is populated precisely with the even integers: • take some (any) integer, then double it; the result is a member of set $$A$$; • repeat infinitely. 2. $$B=\{n \in\mathbb Z: \forall a\in\mathbb Z\;\,n=2a\}\\ =\text{the set of integers such that each one is double }\textit{every }\text{ integer}\\ =\emptyset.$$ Since no integer is simultaneously twice of $$-5,$$ twice of $$0,$$ twice of $$71,$$ etc., the set $$B$$ has no member. • I was putting too much belief in the 'language'(which isn't precise) of the statement and not the mathematical notation(which is always precise). Thanks @ryang ! I was confused about how to read a set, which was very trivial. Oct 27 at 2:39 • @Prakhar 1. To be fair, the natural-language description of the set can be made precise, as shown. 2. Describing a set (translating logical and set symbols into natural language) is not always trivial, and your confusion was very understandable. Oct 27 at 6:47 The last line of your writing is very true, but that doesn't mean it's okay to write $$\forall$$. Let me explain the reason with one very simple and specific example. Let $$f:\{0,1\}\to\mathbb R$$ as $$f(x)=x$$. Then $$\{y \mid \forall x \in \{0,1\}$$ such that $$f(x) = y\}$$ is $$\emptyset$$, because $$y$$ cannot be 1 at the same time as 0. • I don't understand how the set you defined is ∅?! How does $\forall$ implies that y is simultaneously 0 and 1?? Oct 26 at 21:12 • No, never. @Prakhar. $\{y \in Y : \exists x \in X$ such that $f(x) = y\}$ is $\mathrm{Im}f$(range of $f$), but $\{y \in Y : \forall x \in X$ such that $f(x) = y\}$ is generally $\emptyset$. Oct 26 at 21:14 • i'm unable to grasp this. But thanks @Nightflight Oct 26 at 21:31 • The set $\{y∈Y: f(x)=y \ ∀x∈X\}$ is the set of elements of $Y$ that ALL of $X$ maps to. 
This would only be non-empty if f was a constant map. i.e. mapping all elements to a single element of Y. – Lev Oct 26 at 21:51 • What @Lev said. That set is empty unless $f$ is constant, in which case it's a singleton containing the constant value. There can't be more than one $y\in Y$ such that $\forall x\in X, f(x) = y$ (unless $X$ is empty!) Oct 27 at 5:37
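The contrast between the two set definitions can be made concrete with a tiny Python illustration (my own addition, not from the thread), where `any` plays the role of $\exists$ and `all` plays the role of $\forall$; the sets `X`, `Y` and the map `f` are chosen just for demonstration.

```python
X = {0, 1}
Y = set(range(-3, 4))
f = lambda x: x          # the identity map, as in the answer's example

image  = {y for y in Y if any(f(x) == y for x in X)}   # exists x: f(x) = y
forall = {y for y in Y if all(f(x) == y for x in X)}   # for all x: f(x) = y

print(image)    # {0, 1} -- the range of f
print(forall)   # set() -- empty, since no y equals f(0) and f(1) at once
```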
http://math.stackexchange.com/questions/77063/how-do-i-get-this-matrix-in-smith-normal-form-and-is-smith-normal-form-unique
How do I get this matrix in Smith Normal Form? And, is Smith Normal Form unique? As part of a larger problem, I want to compute the Smith Normal Form of $xI-B$ over $\mathbb{Q}[x]$ where $$B=\begin{pmatrix} 5 & 2 & -8 & -8 \\ -6 & -3 & 8 & 8 \\ -3 & -1 & 3 & 4 \\ 3 & 1 & -4 & -5\end{pmatrix}.$$ So I do some elementary row and column operations and get to $$\begin{pmatrix} 1+x & -2 & 0 & 0 \\ -3(x+1) & x+3 & 0 & 0 \\ 0 & 1 & x+1 & 0 \\ 0 & 0 & 0 & x+1\end{pmatrix}.$$ Then I work with the upper left 3x3 matrix, and ultimately get: $$\begin{pmatrix} x-3 & 0 & 0 & 0 \\ 0 & x+1 & 0 & 0 \\ 0 & 0 & x+1 & 0 \\ 0 & 0 & 0 & x+1\end{pmatrix}.$$ So now I have a diagonal matrix (and I'm pretty sure I didn't mess anything up in performing row and column operations), except according to http://mathworld.wolfram.com/SmithNormalForm.html, the diagonal entries are supposed to divide each other, but obviously x-3 does not divide x+1. This means that: either I did something wrong, or diagonal matrix is not unique. Any ideas for how to transform my final matrix into a matrix whose diagonal entries divide each other? - Add column 2 to column 1. Subtract row 2 from row 1. Now you have a scalar in the (1,1) position -- rescale to 1. Wipe out everything in its row and column. Now your diagonal is 1,x+1,x+1,x+1. –  Bill Cook Oct 30 '11 at 1:14 Also, Smith Normal Form is unique (if you rescale all polynomials to monic polynomials at the end). –  Bill Cook Oct 30 '11 at 1:16 Thank you so much, bill! –  Alison Oct 30 '11 at 3:01 Wait, do you meant $(x+1)(x+5)$? –  Alison Oct 30 '11 at 3:37 @BillCook Please consider converting your comment into an answer, so that this question gets removed from the unanswered tab. If you do so, it is helpful to post it to this chat room to make people aware of it (and attract some upvotes). For further reading upon the issue of too many unanswered questions, see here, here or here. –  Julian Kuelshammer Jun 11 '13 at 20:26 To expand my comment...Add column 2 to column 1. Subtract row 2 from row 1. Now you have a scalar in the (1,1) position -- rescale to 1. $$\begin{pmatrix} x-3 & 0 & 0 & 0 \\ 0 & x+1 & 0 & 0 \\ 0 & 0 & x+1 & 0 \\ 0 & 0 & 0 & x+1\end{pmatrix} \sim \begin{pmatrix} x-3 & 0 & 0 & 0 \\ x+1 & x+1 & 0 & 0 \\ 0 & 0 & x+1 & 0 \\ 0 & 0 & 0 & x+1\end{pmatrix} \sim$$ $$\begin{pmatrix} -4 & -x-1 & 0 & 0 \\ x+1 & x+1 & 0 & 0 \\ 0 & 0 & x+1 & 0 \\ 0 & 0 & 0 & x+1\end{pmatrix} \sim \begin{pmatrix} 1 & (1/4)(x+1) & 0 & 0 \\ x+1 & x+1 & 0 & 0 \\ 0 & 0 & x+1 & 0 \\ 0 & 0 & 0 & x+1\end{pmatrix} \sim$$ Now add $(-1/4)(x+1)$ times column 1 to column 2 (to clear everything beside 1). $$\begin{pmatrix} 1 & 0 & 0 & 0 \\ x+1 & x+1-(1/4)(x+1)^2 & 0 & 0 \\ 0 & 0 & x+1 & 0 \\ 0 & 0 & 0 & x+1\end{pmatrix} \sim$$ Add $-(x+1)$ times row 1 to row 2 (to clear everything below 1) & simplify the (2,2)-entry. Then rescale row 2 (so the polynomial is monic). $$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -(1/4)(x+1)(x-3) & 0 & 0 \\ 0 & 0 & x+1 & 0 \\ 0 & 0 & 0 & x+1\end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & (x+1)(x-3) & 0 & 0 \\ 0 & 0 & x+1 & 0 \\ 0 & 0 & 0 & x+1\end{pmatrix} \sim$$ Finally swap columns 2 and 4 and then rows 2 and 4 to switch the positions of $(x+1)(x-3)$ and $x+1$. We are left with the Smith normal form. $$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & x+1 & 0 & 0 \\ 0 & 0 & x+1 & 0 \\ 0 & 0 & 0 & (x+1)(x-3)\end{pmatrix}$$ -
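One way to cross-check the final form is to note that the product of the invariant factors must equal the characteristic polynomial of $B$, while the last invariant factor must be the minimal polynomial. The following Python/sympy sketch is my own consistency check (sympy is not used in the original thread): it verifies that $\det(xI-B)=(x+1)^3(x-3)$ and that $(B+I)(B-3I)=0$.

```python
import sympy as sp

x = sp.symbols('x')
B = sp.Matrix([
    [ 5,  2, -8, -8],
    [-6, -3,  8,  8],
    [-3, -1,  3,  4],
    [ 3,  1, -4, -5],
])

# product of the invariant factors 1, x+1, x+1, (x+1)(x-3)
char_from_snf = sp.expand((x + 1)**3 * (x - 3))
print(sp.expand(B.charpoly(x).as_expr()) == char_from_snf)   # True

# the last invariant factor (x+1)(x-3) should annihilate B (minimal polynomial)
I4 = sp.eye(4)
print((B + I4) * (B - 3 * I4) == sp.zeros(4, 4))             # True
```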
https://math.stackexchange.com/questions/3253240/are-polynomials-with-the-same-roots-identical
# Are polynomials with the same roots identical?

I know that polynomials can be refactored in terms of their roots. However, this must imply that two different polynomials have different roots (this is just what I think). So my question is: Are polynomials with the same roots identical? If so, why?

A follow-up question that is also about the uniqueness of roots and polynomials can be found here: Is the set of roots unique for each $g(x)$ in $a_n x^n + g(x)$?

• The polynomials $f(x)=1$, $g(x)=2$ and $h(x)=x^2+1$, $k(x)=x^2+x+1$ have the same roots over $\Bbb R$. – ajotatxe Jun 6 '19 at 16:15

No, they are not. For instance, $2x^2-2$ and $x^2-1$ have the same roots, yet they are not identical. And, depending on what you mean by "the same roots", we have that $x^2-2x+1$ and $x-1$ have the same roots, yet they are not identical. Again, depending on what you mean by "the same roots", $x^3+x$ and $x^3+2x$ both only have one real root, yet they are not the same.

However, if two monic polynomials have the same roots, with the same multiplicities, over some algebraically closed field (like the complex numbers $\Bbb C$) then yes, they are identical.

• Aha okay, thanks. But then how come you can write a polynomial in terms of its roots? Like $(\lambda - a)(\lambda - d)-bc = 0$ can be written in terms of its roots $(\lambda - \lambda_1)(\lambda - \lambda_2) = 0$? Since having the same roots apparently does not imply that two polynomials are identical, using the roots as a way to write a unique polynomial then seems confusing to me – Fac Pam Jun 6 '19 at 16:36
• @FacPam Those polynomials are monic, and since they are quadratic, there are always exactly two (possibly complex) roots when counted with multiplicity. And since they both have the two roots $\lambda_1$ and $\lambda_2$, they do turn out to be the same polynomial. Wasn't this addressed in your previous question? – Arthur Jun 6 '19 at 16:40
• Ahh, no it was not addressed - at least I do not think so. Possibly because I do not know the definition of "monic". I will look that up now. Thanks again – Fac Pam Jun 6 '19 at 16:48
• @FacPam "Monic" just means that the highest-order term has coefficient $1$. That requirement is there to stop things like the first counterexample in my answer: $x^2-1$ is monic (as the coefficient of $x^2$ is $1$) while $2x^2-2$ is not (as the coefficient of $x^2$ is not $1$). – Arthur Jun 6 '19 at 17:14
• @FacPam No, the "same multiplicities" is to resolve the issue of $x-1$ versus $x^2 - 2x + 1$ (or, as others have pointed out, $x$ versus $x^2$). They have the same roots, but are clearly not the same. That's because these quadratic example polynomials have a double root, a root with multiplicity $2$. – Arthur Jun 6 '19 at 18:47
Still the same roots, and it would show that we're talking about more than just trivial constant factors. – Andras Deak Jun 7 '19 at 15:38 • Remind your students that linear functions are polynomial, and show them $f(x)=x,\, g(x)=-x,$ and $h(x)=2x.$ – DanielWainfleet Jun 10 '19 at 6:28 For polynomials over $$\mathbb{R}$$, the answer is no; for example, $$f(x)=x$$ and $$g(x) = x(x^2+1)$$ have the same roots over $$\mathbb{R}$$—with the same multiplicities—but they are not equal. For polynomials over $$\mathbb{C}$$, the answer is almost. The fundamental theorem of algebra says that every polynomial over $$\mathbb{C}$$ of degree $$n \ge 1$$ splits uniquely into $$n$$ linear factors. So if $$f$$ and $$g$$ have the same roots $$\alpha_1,\alpha_2,\dots,\alpha_n$$, listed with multiplicity, then $$f(x) = \lambda (x-\alpha_1)\cdots(x-\alpha_n) \text{ and } g(x) = \mu(x-\alpha_1)\cdots(x-\alpha_n)$$ for some $$0 \ne \lambda,\mu \in \mathbb{C}$$. So roots (with multiplicity) determine polynomials over $$\mathbb{C}$$ up to a multiplicative constant and, in particular, monic polynomials over $$\mathbb{C}$$ are uniquely determined by their roots. For polynomials over finite fields, the answer is very much no. There are polynomials that don't just have the same roots, but they have all the same values for every input. For example, the polynomials $$f(x) = x$$ and $$g(x)=x^3$$ over $$\mathbb{F}_2$$ satisfy $$f(x)=g(x)$$ for all $$x \in \mathbb{F}_2$$, and yet $$f \ne g$$. • Of course for finite fields $\mathbb{F}$, the pigeonhole principle alone can say that there will be distinct polynomials which induce the same map $\mathbb{F}\to\mathbb{F}$. Because the number of such maps is finite, while the number of polynomials is infinite. – Jeppe Stig Nielsen Jun 8 '19 at 8:27 No, they aren't: $$f_1(x)=(x+1)(x-2)$$ and $$f_2(x)=5(x+1)(x-2)$$ have the same roots. But they don't even need to have same degree to have the same roots: $$f_3(x)=x^2$$ has the same root as $$f_4(x)=x$$. • What do you mean by "up to a constant" - $f_1(x)=(x+1)(x-2)$ and $f_2(x)=5(x+1)(x-2)$ are not identical? – Fac Pam Jun 6 '19 at 20:29 • @FacPam Well, $f_2(x) = 5 f_1(x)$ so we can hardly say they are identical except at their roots – zdimension Jun 7 '19 at 6:51 • @FacPam It means that if $f(x_o)=0$ for some $x_0$, then also $\lambda f(x_0)=0$ for any scalar $\lambda$ – Tesla Jun 7 '19 at 8:31 • if you don't consider multiplicity you first stmt is incorrect, consider x(x-1)^2 and x^2(x-1). If you consider multiplicity, then your second stmt incorrect – RiaD Jun 7 '19 at 9:15 • yea thanks all, didnt think about it for more than two seconds. – Tesla Jun 8 '19 at 6:08 The multiplicity counts too: for example $$x$$ and $$x^2$$ have the same roots, but are different polynomials. If two polynomials have all the same roots and all the same multiplicities, then even then they are not equal: $$2x$$ and $$x$$ for example. So all you can conclude is that one is a scalar multiple of another. However, this statement needs to be interpreted correctly: you need to work over $$\mathbb{C}$$ (or some other algebraically closed field). For example, over $$\mathbb{R}$$, the polynomials $$x^2+1$$ and$$(x^2+1)^2$$ have the same real roots (namely, they have no roots!) but are clearly not the same. So: you have to count the roots with multiplicity in the algebraic closure. No they are not, and it's easy to see why that is the case. You probably wouldn't consider $$f(x)=x$$ and $$f(x)=10x$$ to be identical even though they have the same root. 
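The finite-field example above is easy to verify by brute force. The short Python sketch below is my own illustration: it confirms that $x$ and $x^3$ define the same function on $\mathbb{F}_2$ even though they are different polynomials.

```python
# F_2 = {0, 1} with arithmetic mod 2
F2 = [0, 1]

f = lambda x: x % 2
g = lambda x: (x ** 3) % 2

print(all(f(x) == g(x) for x in F2))   # True: same function on F_2
# ...but as polynomials they differ: deg f = 1, deg g = 3.
```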
Let's start by considering polynomials together with all their roots, real and complex. This allows us to fully answer the question first for complex, and then for real, polynomials and roots. This approach will not only let us get all the answers, but also prove that these are all the answers, and the only answers.** It's also easy to see why that's so.

# Fundamental principle: Over the complex numbers, all nonconstant polynomials can be uniquely factored into linear terms and a multiplier

See Wikipedia "Irreducible polynomial - over the complex numbers" and the fundamental theorem of algebra: any nonconstant polynomial can, over the complex numbers, be uniquely factored into something like

A.(x-B).(x-C).(x-D)... = 0

where A <> 0 and B, C, D... are the roots. B, C, D can of course be complex or real numbers. Also some of the B, C, D... may repeat, in which case we have one or more repeated roots, but the polynomial will still factorise this way. We can rewrite this in terms of distinct roots, as follows:

A . [(x-B)^P] . [(x-C)^Q] . [(x-D)^R] . [...] ... = 0

where A <> 0 and B, C, D... are now all distinct complex numbers, and are the roots of the polynomial, and P, Q, R... are all integers >= 1 that account for any repeated roots. The fundamental theorem of algebra guarantees we can factor all polynomials this way, and that this factorisation is unique for each polynomial. It's also evident from inspection that B, C, D... are the roots, and all the roots, and no other roots exist.

The answer to the original question is now quite simple. Suppose 2 non-constant polynomials have identical roots. Then they must be identical, other than possibly:

• a different non-zero multiplier (A is different between the polynomials, when factored)
• repeated roots (one or more of P, Q, R will differ between the polynomials, when factored)

# What if we only allow real roots?

The polynomial can still only be factored one way as above. The only difference is, any B, C, D that isn't a real number won't ever equal a value of X we can choose, so it can't be a solution. So as well as the 2 types of change above, we can also change the powers of any existing complex linear factors to any integer >= 0, or multiply by new complex linear factors (to any integer power > 0); provided the factor we multiply by has a complex (non-real) parameter, it won't ever affect the real roots. We can't divide by new complex linear factors, though, because the result wouldn't be a polynomial.

This is easiest explained by example. Example: suppose our equation is a polynomial that factors into a mix of real and complex linear factors, some repeated:

4 . (X - 7)^2 . (X + 4.5) . (X + 2i) . (X - 2i) = 0

Then any polynomial with identical real roots must be formed by some combination of these changes (I'll give an example of each):

• (-6) . (X - 7)^2 . (X + 4.5) . (X + 2i) . (X - 2i) = 0
We have multiplied A by some real value <> 0 (in this case, -1.5).

• 4 . (X - 7)^8 . (X + 4.5) . (X + 2i) . (X - 2i) = 0
4 . (X - 7) . (X + 4.5)^5 . (X + 2i) . (X - 2i) = 0
We have changed the powers of some of the real linear factors (up or down, keeping each power >= 1).

• 4 . (X - 7)^2 . (X + 4.5) . (X + 2i) . (X - 2i) . (X - [3+7i])^3 = 0
4 . (X - 7)^2 . (X + 4.5) . (X + 2i)^17 . (X - 2i) = 0
4 . (X - 7)^2 . (X + 4.5) . (X + 2i) = 0
4 . (X - 7)^2 . (X + 4.5) = 0
We have changed the powers for some of the complex roots (up or down), or removed them (equivalent to changing their power to 0), or introduced new complex linear factors.
Note that this last transformation might or might not change some of the coefficients in the equation from real to complex coefficients or vice-versa, depending what you do (see especially the last example where they don't). It may well change the complex roots of the polynomial. But it will not change, add or remove any real roots of the polynomial. If you restrict yourself to changes of this kind that don't change any real coefficients to complex coefficients, you'll achieve all real coefficient polynomials with the same roots this way. ** Note - For quintics and higher, we may not be able to factorise to simple algebraically expressed roots, because not all 5th and higher order polynomials allow for neat expressions of their roots this way. But - even if inexpressible - the roots do exist, the limitation is in our ability to calculate them exactly, or write them concisely, not in their existence. The same method will work and be valid, and the same other types of polynomials will have identical complex (or real) roots. We just wouldn't be able to calculate or write the linear expressions, transformative equations, or roots, neatly, in the same way. • please learn to use MathJax – qwr Jun 7 '19 at 14:03 • This answer should really be upvoted – klutt Jun 9 '19 at 18:57 Arthur answered your question very nicely, but I'd like to tell you a much more general result that might pique your interest in a field of math called "algebraic geometry". So – if we are working in an algebraically closed field, say the complex numbers $$\mathbb{C}$$, then every polynomial in one variable splits completely into linear factors. As the other answers say, this is enough to show that one variable complex polynomials are uniquely determined by their roots, up to multiplicity and multiplication by a constant: if the roots of a polynomial $$p(t)$$ are some complex numbers $$\lambda_1,...,\lambda_k\in\mathbb{C}$$, then that polynomial must be $$\lambda(t-\lambda_1)^{l_1}...(t-\lambda_k)^{l_k}$$ for some non-zero complex number $$\lambda$$ and some non-zero natural numbers $$l_1,...l_k$$. However, what happens if we want to consider polynomials in multiple variables? This is a very natural thing if you want to study geometry – for instance, the unit circle in the real plane is cut out by an equation of the form $$t_1^2+t_2^2-1=0$$. This polynomial has more than one variable, and in general we won't be able to factor such polynomials the same way we can polynomials in one variable. However, we can get a beautiful analog of the one-variable result using some more advanced algebraic machinery. In particular, there's an important result in commutative algebra called Hilbert's Nullstellensatz, which I won't state in full generality here. But one corollary of it is that, if the roots of a complex polynomial $$p(t_1, ..., t_n)\in\mathbb{C}[t_1, ..., t_n]$$ are also roots of another complex polynomial $$q(t_1, ..., t_n)\in\mathbb{C}[t_1, ..., t_n]$$, then there exist a natural number $$k$$ and a third polynomial $$r(t_1, ..., t_n)\in\mathbb{C}[t_1, ..., t_n]$$ such that $$q^k=rp$$. 
We can use this to prove the following lovely result: if $$p(t_1, ..., t_n),q(t_1, ..., t_n)\in\mathbb{C}[t_1, ..., t_n]$$ are non-zero and share the same roots, and also have no repeated factors (ie, if a non-constant polynomial $$r$$ divides $$p$$, then $$r^2$$ does not divide $$p$$, and likewise for $$q$$), there there is a complex number $$\lambda$$ such that $$p=\lambda q$$ – ie, $$p$$ and $$q$$ differ by only a scalar multiple, and so a polynomial with no repeated factors is uniquely determined (up to a scalar multiple) by its roots. I give a proof of this below; you need one other piece of machinery from algebra, which is that any non-constant polynomial in $$\mathbb{C}[t_1, ..., t_n]$$ has a unique factorization into irreducible polynomials, up to reordering and multiplication by constants. (Recall that an irreducible polynomial is one that has no non-constant divisors other than constant multiples of itself.) The term for this is that $$\mathbb{C}[t_1, ..., t_n]$$ is a "unique factorization domain" (ufd), which is a much more general phenomenon, but you don't need that here. Given these two facts that I've mentioned, you can prove the result we want. I do this below, but first I recommend trying to prove this yourself!! It's a nice exercise. Proof: let $$p$$ and $$q$$ be as above: non-zero complex polynomials in $$n$$ variables with no repeated factors and which share the same roots. In particular, the roots of $$p$$ are also roots of $$q$$, so by the corollary to the nullstellensatz there is some $$k\in\mathbb{N}$$ and $$r\in\mathbb{C}[t_1,...,t_n]$$ such that $$q^k=rp$$. I claim that we can assume $$k=1$$. Indeed, because of unique factorization in $$\mathbb{C}[t_1, ..., t_n]$$, we can write $$q=q_1*...*q_m$$ for some $$m\in\mathbb{N}$$, where each $$q_i\in\mathbb{C}[t_1,...,t_n]$$ is irreducible. Note that, if $$i\neq j$$, then $$q_i\neq \lambda q_j$$ for any $$\lambda\in\mathbb{C}$$, or else $$q_i^2$$ would divide $$q$$, contradicting the fact that $$q$$ has no repeated factors. Now, the fact that $$q^k=rp$$ means that $$q_1^k...q_m^k=rp$$. In particular, $$q_i^k$$ divides $$rp$$ for every $$i$$ – ie $$q_i$$ (or some scalar multiples of it) appears $$k$$ times in the unique (up to constant multiples) factorization of $$rp$$ into irreducible polynomials. But a factorization of $$rp$$ into irreducible polynomials is the same thing as a factorization of $$r$$ into irreducibles multiplied with a factorization of $$p$$ into irreducibles. In particular, this means that – if $$l_1$$ and $$l_2$$ are the largest numbers such that $$q_i^{l_1}$$ divides $$r$$ and $$q_i^{l_2}$$ divides $$p$$ – then $$l_1+l_2=k$$. (Note that $$l_1$$ and $$l_2$$ are not necessarily non-zero.) However, we know that $$q_i^l$$ does not divide $$p$$ for any $$l>1$$, since $$p$$ has no repeated factors, and so by the pigeonhole principle we must have that $$q_i^{k-1}$$ divides $$r$$. In particular, each $$q_i$$ appears at least $$k-1$$ times in the factorization of $$r$$ into irreducibles, so $$q^{k-1}=q_1^{k-1}*...*q_m^{k-1}$$ divides $$r$$; say $$r=r'q^{k-1}$$ for some other other polynomial $$r'\in\mathbb{C}[t_1,...,t_n]$$. Putting this together with the fact that $$q^k=rp$$ gives us $$q^k=q^{k-1}r'p$$, and dividing out gives $$q=r'p$$. Now, on the other hand, the roots of $$q$$ are also roots of $$p$$, and so we can go through exactly the same arguments as above to show that there is some polynomial $$s\in\mathbb{C}[t_1,...,t_n]$$ such that $$p=sq$$. 
Hence, combining these two equations, $$q=r'sq$$, and dividing out by $$q$$ gives $$r's=1$$. But no non-constant polynomial is invertible, so this means that $$r'$$ and $$s$$ are actually constant polynomials – ie complex numbers – and so $$\lambda=s\in\mathbb{C}$$ gives $$p=\lambda q$$, exactly the result we desired. Hopefully this argument was all clear; let me know if there's any confusion on your end. And hopefully this seems like a nice result!! It's a vast generalization of the the question you asked, and shows that some of our intuition for one-variable polynomials carries over very nicely to multi-variable polynomials. In particular, when we want to do some geometry and think about curves defined by multi-variable polynomials, we can use some of the same ideas and tools that we use for one-variable polynomials. These multi-variable polynomials and the curves they cut out are some of the central objects of study in classical algebraic geometry. Now, the algebraic results that we had to use – in particular the nullstellensatz – are non-elementary, and there's a decent amount of algebra you'd have to learn before you could prove it in full generality, but hopefully this gives you some motivation to study some higher math in the future!! It's full of beautiful results like this one. Of course NOT. A simple multiplication by a constant works. More interestingly define an equivalence relation where p1~p2 iff they share exactly the same roots!
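As a rough computational companion to the result above (my own sketch, not part of the answer), SymPy's factorization can make the square-free caveat tangible in two variables: polynomials that share a zero set need not be proportional once repeated factors are allowed.

```python
# Sketch: shared zero set vs. proportionality in C[x, y].
from sympy import symbols, factor_list, simplify

x, y = symbols('x y')
p = x*y          # square-free
q = 3*x*y        # same zero set as p
r = x**2*y       # same zero set as p, but with a repeated factor

print(factor_list(p))   # (1, [(x, 1), (y, 1)])   (factor order may vary)
print(factor_list(r))   # (1, [(x, 2), (y, 1)])   -- x appears twice
print(simplify(q / p))  # 3   -- square-free and same zeros: a constant multiple
print(simplify(r / p))  # x   -- not constant, so r is not a scalar multiple of p
```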
http://mathhelpforum.com/advanced-algebra/162408-prove-disprove-direct-sum-question.html
# Math Help - prove or disprove direct sum question

1. ## prove or disprove direct sum question

Prove or disprove: $W, U_1, U_2$ are subspaces of $V$. If $V=U_1 \oplus U_2$ then $W = (U_1 \cap W) \oplus (U_2 \cap W)$

Attempt: False. Let's say $\dim V = 8$, $\dim U_1 = \dim U_2 = 4$ (so $\dim V = \dim U_1 + \dim U_2$) and $\dim W = 5$. Then $\dim(U_1 \cap W)$ may equal 4 and $\dim(U_2 \cap W)$ may equal 4, and then we get $5 \neq 4+4$

2. **Is the question understandable? Please let me know if there is something I need to explain better. Thanks!

3. Originally Posted by jayshizwiz
Prove or disprove: $W, U_1, U_2$ are subspaces of $V$. If $V=U_1 \oplus U_2$ then $W = (U_1 \cap W) \oplus (U_2 \cap W)$ Attempt: False. Let's say $\dim V = 8$, $\dim U_1 = \dim U_2 = 4$ (so $\dim V = \dim U_1 + \dim U_2$) and $\dim W = 5$. Then $\dim(U_1 \cap W)$ may equal 4 and $\dim(U_2 \cap W)$ may equal 4, and then we get $5 \neq 4+4$
You haven't disproved anything: "may" is not existence. You have to come up with a particular example that shows clearly that the claim is false. Hint: Take a look at $\mathbb{R}^2\,,\,\,U_1=\left\{\,\binom{x}{x}\in\mathbb{R}^2\right\}\,,\,U_2=\left\{\,\binom{x}{-x}\in\mathbb{R}^2\right\}\,,\,\,W=\left\{\binom{x}{0}\in\mathbb{R}^2\right\}$
Tonio

4. Originally Posted by tonio
You haven't disproved anything: "may" is not existence. You have to come up with a particular example that shows clearly that the claim is false. Hint: Take a look at $\mathbb{R}^2\,,\,\,U_1=\left\{\,\binom{x}{x}\in\mathbb{R}^2\right\}\,,\,U_2=\left\{\,\binom{x}{-x}\in\mathbb{R}^2\right\}\,,\,\,W=\left\{\binom{x}{0}\in\mathbb{R}^2\right\}$
Tonio
I truly hope you aren't grading my exam (; I should get at least partial credit

5. The misconception that some students have, and that this problem is attempting to dispel, is that the direct sum is like a (set) union. But in fact, the direct sum of U1 and U2 contains a whole lot of elements that are not in U1 or U2. In case Tonio's excellent example wasn't clear, think about this and read it again.
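To complement the discussion (this is my own numerical sketch, not part of the thread), one can check Tonio's counterexample with the rank identity dim(A ∩ B) = dim A + dim B − dim(A + B) for subspaces:

```python
# Sketch: Tonio's counterexample in R^2, checked via ranks.
import numpy as np

def dim(vectors):
    """Dimension of the span of the given row vectors."""
    return np.linalg.matrix_rank(np.array(vectors))

U1 = [[1, 1]]    # the line y = x
U2 = [[1, -1]]   # the line y = -x
W  = [[1, 0]]    # the x-axis

def dim_intersection(A, B):
    # dim(A ∩ B) = dim A + dim B - dim(A + B); list concatenation spans the sum A + B
    return dim(A) + dim(B) - dim(A + B)

print(dim(U1 + U2))             # 2 -> U1 + U2 is all of R^2, so V = U1 ⊕ U2
print(dim_intersection(U1, W))  # 0 -> U1 ∩ W = {0}
print(dim_intersection(U2, W))  # 0 -> U2 ∩ W = {0}
# Hence (U1 ∩ W) ⊕ (U2 ∩ W) = {0} is not W, so the claimed identity fails.
```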
https://mathematica.stackexchange.com/questions/192821/error-in-transformedfield/192831
# Error in TransformedField I am using TransformedField to convert a system of ODEs from Cartesian to polar coordinates: TransformedField[ "Cartesian" -> "Polar", {μ x1 - x2 - σ x1 (x1^2 + x2^2), x1 + μ x2 - σ x2 (x1^2 + x2^2)}, {x1, x2} -> {r, θ} ] // Simplify and I get the result {r μ - r^3 σ, r} but I am pretty sure that the right answer should be {r μ - r^3 σ, 1} Where is the error? Mathematica's answer is correct and consistent with your expectations, but you are not accounting for the basis of the vector field. TransformedField transforms a vector field between two coordinate systems and bases. In this case, it is converting from $$f(x,y)\hat x+g(x,y)\hat y$$ to the same geometrical vector field expressed as $$u(r,\theta)\hat r + v(r,\theta) \hat \theta$$. Mathematica's answer can therefore be interpreted as saying $$\left(μ x_1 - x_2 - σ x_1 (x_1^2 + x_2^2)\right)\hat x + \left( x_1 + μ x_2 - σ x_2 (x_1^2 + x_2^2)\right)\hat y = \left(r μ - r^3 σ\right)\hat r + r \hat\theta$$ Notice that the expressions $$r'$$ and $$\theta'$$ don't appear anyhwere. Those are dynamical quantities, not geometrical ones (unless working in the jet bundle, but let's not go there). Also notice the hats! As stated in the documentation, TransformedField assumes inputs are in an orthonormal basis, and returns outputs in the same basis. That will be important for later on. Now, you are dealing with a differential equation, and based on your expected answer I'll assume what you have is a first-order system and you are transforming the associated vector field (AKA the "right-hand side"). Finding solutions means find the integral curves of the vector field. This gives as a nice relationship between the geometrical variables and the dynamical ones, except that this relationship is of necessity expressed in the so called coordinate basis, written $$(r',\theta') = a \frac{\partial}{\partial r} + b \frac{\partial}{\partial \theta}$$. So to get the answer expressed in your desired basis, we need the relationship between the coordinate and orthonormal basis vectors. As is covered in books on vector calculus (and elsewhere), the relationsip is $$\hat r = \frac{\partial}{\partial r}$$ and $$\hat \theta = \frac{1}{r}\frac{\partial}{\partial \theta}$$. Substituting this into the answer Mathematica gave above, we get $$\left(r μ - r^3 σ\right)\hat r + r \hat\theta = \left(r μ - r^3 σ\right) \frac{\partial}{\partial r} + (1) \frac{\partial}{\partial \theta},$$ which is the answer you expected. • Thank you! That makes sense. The documentation for TransformedField does not provide sufficient detail about what the function is actually doing. – rpa Mar 8 at 2:38 We can define our own functions. 
From $$x',y'$$ to $$r',\theta'$$, we derive: $$r' = \left(\sqrt{x^2 +y^2} \right)' = \frac{(x^2 +y^2)'}{2 \sqrt{x^2 +y^2}}=\frac{xx' +yy'}{r}$$ and $$\theta' = \left(\arctan \frac{y}{x} \right)' = \frac{(y/x)'}{1+(y/x)^2} = \frac{y' x -x' y}{r^2}.$$

First, we define

rdot[x1_, x2_] := (x1 (μ x1 - x2 - σ x1 (x1^2 + x2^2)) + x2 (x1 + μ x2 - σ x2 (x1^2 + x2^2)))/r

We now make the substitution and simplify

rdot[r Cos[t], r Sin[t]] // FullSimplify

This yields (matches Mathematica) $$r' = \mu r-r^3 \sigma$$

We now do the same for the other component:

thetadot[x1_,x2_]:=(x1 (x1+μ x2-σ x2 (x1^2+x2^2)) - x2(μ x1-x2-σ x1 (x1^2+x2^2)))/r^2

We now make the substitution and simplify

thetadot[r Cos[t], r Sin[t]] // FullSimplify

This yields (does not match Mathematica, but see accepted answer) $$\theta'= 1$$

I have asked this question before on this site in two different ways and have never gotten an answer that resolves the matter, but that could just be my denseness as the accepted answer now shows!

Update I have received a response from Wolfram Support and wanted to post it as others may find it as helpful as I did.

Thank you for contacting Wolfram Technical Support. I want to highlight a couple more pieces of information that you might find useful. There is a more comprehensive tutorial on how Mathematica handles coordinate transformations, and particularly how it handles basis transformations for vectors, available at https://reference.wolfram.com/language/tutorial/ChangingCoordinateSystems.html Under the section "Relating Orthonormal Bases", the tutorial highlights that the transformation of vectors is given by an orthonormal rotation matrix. In particular, this guarantees that a vector will have the same norm in any coordinate system. So, the vector {0,1,0} in the {r, th, phi} coordinate system must have a norm 1 in the {x,y,z} coordinate system. In the question you posted on StackExchange, the norm of the original vector is r Sqrt[1 + (\[Mu] - r^2 \[Sigma])^2] (after the change of variables from {x1,x2} to {r,theta} has been made). This highlights that the proposed solution {r \[Mu] - r^3 \[Sigma], 1} cannot be correct, as it has a different norm. On the other hand, {r \[Mu] - r^3 \[Sigma], r} has the same norm. Please let me know if you have any further questions. Sincerely, Wolfram Technology Group http://www.wolfram.com/support/

• Have you reported it to the Wolfram tech support? – Alexey Popkov Mar 7 at 22:48
• @AlexeyPopkov: I have not. I have had many issues with it when transforming between different methods. These days, I don't trust it and just create my own transformation rules to do it. – Moo Mar 7 at 22:52
• It is worth writing to them about it in order to get it finally fixed. You can even write a short letter to [email protected] with a link to this post. – Alexey Popkov Mar 7 at 22:55
• @AlexeyPopkov: I sent them an email per your suggestion. – Moo Mar 7 at 23:01

My slightly different method matches Mathematica.

aCartToCyl[{ax_, ay_}] := {ax Cos[ϕ] + ay Sin[ϕ], ay Cos[ϕ] - ax Sin[ϕ]}
aCartToCyl[{μ x1 - x2 - σ x1 (x1^2 + x2^2), x1 + μ x2 - σ x2 (x1^2 + x2^2)}] // Simplify;
% /. {x1 -> r Cos[ϕ], x2 -> r Sin[ϕ]} // Simplify
(*{μ r - r^3 σ, r}*)
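For readers without Mathematica, here is a small SymPy double-check (my own, mirroring the hand derivation above) that the same vector field, written against the coordinate basis, gives r' = μr − σr³ and θ' = 1:

```python
# Sketch: Cartesian -> polar for the dynamical system, using r' = (x x' + y y')/r
# and theta' = (x y' - y x')/r^2 from the derivation above.
from sympy import symbols, cos, sin, simplify

r, t, mu, sigma = symbols('r theta mu sigma', positive=True)

x1, x2 = r*cos(t), r*sin(t)
f1 = mu*x1 - x2 - sigma*x1*(x1**2 + x2**2)   # x1'
f2 = x1 + mu*x2 - sigma*x2*(x1**2 + x2**2)   # x2'

r_dot     = simplify((x1*f1 + x2*f2) / r)
theta_dot = simplify((x1*f2 - x2*f1) / r**2)

print(r_dot)      # r*(mu - r**2*sigma), i.e. mu*r - sigma*r**3 (SymPy may factor it)
print(theta_dot)  # 1
```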
https://brilliant.org/discussions/thread/probability-misconception/
# Probability Misconception

Hey, buddies :) Recently people had discussion in Brilliant-Lounge on a probability problem which is: In a family of 3 children, what is the probability that at least one will be a boy? Some of them believe that $\frac 34$ is the correct answer while the others believe that the correct answer is $\frac 78$. Everyone is invited to come up with their response along with the explanation. It will be fun and help us a lot to upgrade our knowledge engine further. Thanks!

Note by Sandeep Bhardwaj 3 years, 7 months ago

Let $B$ represent a boy and $G$ represent a girl. Then, Sample Space , $S =\{BBB,BBG,BGB,GBB,BGG,GBG,GGB,GGG\}$ (Probability of having at least $1$ boy) $= 1 -$(Probability of having only girls (no boys))=$1-\frac{1}{8}$(only $1$ case out of $8$ cases)=$\frac{7}{8}$ So According to me, correct answer is $\boxed {\frac{7}{8}}$. Note:- $BBG,BGB,GBB$ are different cases because their relative ages (order of birth) are different in each case. (Same for $BGG,GBG,GGB$) Alternate Thinking Process:- $P(B)=P(G)=\frac{1}{2}$ (Probability of having at least $1$ boy) $= 1 -$(Probability of having only girls (no boys))$=1-\frac{1}{2}\times \frac{1}{2}\times \frac{1}{2}=\boxed{\frac{7}{8}}$ - 3 years, 7 months ago

Yes you are right. This is the same approach what I had. - 3 years, 7 months ago

Nice approach @Yash Dev Lamba - 3 years, 6 months ago

it's an easy one :P let the birth of boy be success and girl be failure ( i'm not being an anti-feminist :P) the answer would be 3c1+3c2+3c3/(2^3)=7/8 ! easy enough :P - 2 years, 7 months ago

lol. - 1 year ago

I agree with @Yash Dev Lamba Probability can lead to amazing paradoxes. Here is a very well known probability question often misunderstood: A family has two children. What is the probability that they are both sons, given that a) At least one of them is a son? b) the elder child is a son?
Staff - 3 years, 7 months ago Ya , parodoxes created by probability are probably the best. (a)1/3 (b)1/2 - 3 years, 7 months ago Simple conceptual learning condtitonal probability Probability Rocks By- YDL - 3 years, 7 months ago What do you guys think? I would be great if you participate in this discussion as you were playing a major role in the slack discussion. Thanks! - 3 years, 7 months ago The Family cares about getting Boy, not about getting a young boy or an old boy. So, how does having two younger daughters and an elder son different from having a younger son and two elder daughters ? - 3 years, 7 months ago How 3/4 will come? - 3 years, 7 months ago if blindly consider BBG,BGB,GBB same and BGG,GBG,GGB also same then prob. is 3/4 which is incorrect. - 3 years, 7 months ago Thanks for explaining. A wrong answer is more important than a right. - 3 years, 7 months ago Take the complementary probability, the chance that no boys are picked. For this to happen, all girls must be picked, so the probability is (1/2)^3 = 1/8. Every other case has at least one boy, so the probability that at least one boy is chosen is 1 - 1/8 = 7/8. - 3 years, 7 months ago Ans should be 7/8 as if we remove the case of no boys then the remaining will be atleast 1 boy i.e 1-(1/2)^3= 7/8 ☺ - 3 years, 6 months ago But there is a large assumption that boys and girls are given birth to with 0.5 probability each! That's quite huge an assumption, and it would allow any answer to be correct... - 3 years, 6 months ago The Family cares about getting Boy, not about getting a young boy or an old boy. So, how does having two younger daughters and an elder son different from having a younger son and two elder daughters ? - 3 years, 6 months ago I feel like the answer is 1/5 - 3 years, 7 months ago - 3 years, 7 months ago Duh!! It is 4/5 because in a family of 5 (3 children 2 parents) then there is at least 1 women so at least 1 boy is 4/5 PS: I am weak in probability - 3 years, 7 months ago It's only about the children, not the parents. So consider a family of 3 children (assuming total members as 3) and then find out the probability that at least one of them is a boy. Thanks! Don't worry. Keep practicing. You will soon be a master in combinatorics. $\ddot \smile$ - 3 years, 7 months ago - 3 years, 7 months ago There are 4 possibilities - 1 boy , 2 boys , 3 boys and no boy . so at least 1 boy so the answer is 3/4 Am I correct ? Sandeep sir - 3 years, 7 months ago What I think is that BGG , GBG, GGB would be same so I count them as 1. Are we considering order of birth as well ? - 3 years, 7 months ago If we replace chilren with coins, the answer remains the same but that maybe a better way to tell you why order is necessary. - 3 years, 7 months ago yes, we are considering although it is not mentioned in question but it is understood (I think) to consider order of birth. - 3 years, 7 months ago No. The correct answer is 7/8. - 3 years, 7 months ago Ohkk sir I thought order won't matter . - 3 years, 7 months ago
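Not part of the original discussion: a quick Monte Carlo sanity check of the 7/8 answer, assuming each of the three children is independently a boy with probability 1/2.

```python
# Sketch: estimate P(at least one boy among 3 children).
import random

random.seed(0)
trials = 1_000_000
hits = sum(
    any(random.random() < 0.5 for _ in range(3))   # at least one "boy" in this family
    for _ in range(trials)
)
print(hits / trials)   # ≈ 0.875 = 7/8, i.e. 1 - (1/2)**3
```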
https://www.physicsforums.com/threads/why-tan-x-x-as-x-approaches-0.943522/
# Why tan x=x as x approaches 0? • I Hi! In one of my textbook i saw the relation tan(x) = x where x is very small value and expressed in radians. I want to know why its true and how it actually works. I would appreciate someone's help mfb Mentor Did you draw a sketch? A small part of the circle can be approximated by a straight line. It is similar to the approximations ##\cos(x) \approx 1## and ##\sin(x) \approx x## for small x, and you can derive the approximation of the tangent that way as well. fresh_42 Mentor Hi! In one of my textbook i saw the relation tan(x) = x where x is very small value and expressed in radians. I want to know why its true and how it actually works. I would appreciate someone's help One way to look at it is to take it as the linear approximation by the first derivative. We have ##\left. \dfrac{d}{dx}\right|_{x=0}\tan(x)=(1+\tan^2(x))|_{x=0}=1## which means that the tangent function is locally approximated by ##x \longmapsto (\tan(x))_0' \cdot x =1 \cdot x##. Another way is to use the Taylor series at ##x=0## which is ##\tan(x) = x + O(x^3)\,.## QuantumQuest Did you draw a sketch? A small part of the circle can be approximated by a straight line. It is similar to the approximations ##\cos(x) \approx 1## and ##\sin(x) \approx x## for small x, and you can derive the approximation of the tangent that way as well. But why is it true for radians only? I solved on my calculator and this is what i saw : Tan (0.12) = 0.12 (where 0.12 is radians) But when x is in degrees Tan(0.12) = 0.00209 Last edited by a moderator: fresh_42 Mentor But why is it true for radians only? I solved on my calculator and this is what i saw : Tan (0.12) = 0.12 (where 0.12 is radians) But when x is in degrees Tan(0.12) = 0.00209 ##\tan 0.12 \approx \tan 7° \approx [x + O(x^3)]_{at \, 0} = 0.12 \pm 0.002## which is close to ##0.12##. ##\tan 0.12° \approx \tan 0° \approx [x + O(x^3)]_{at \, 0} = 0 \pm 0## which is close to ##0.00209##. You simply compare a value ##0.12## with its ##60-##fold value. But both are still in a very good approximation to ##\tan (x) \approx x##. Radians are the natural unit here, degree more because of historical reasons, habit and clarity for humans. The approximation ##\tan (x) \approx x## requires radians if taken numerically disregarding the units. rishi kesh TeethWhitener Gold Member But why is it true for radians only? The series expansion of ##\tan x## is ##\tan x = x + \frac{1}{3}x^3 + \frac{2}{15}x^5+\cdots##, where the powers of ##x## get bigger and bigger. This means for any value of ##x > 1##, the higher power terms in the series will contribute proportionally more than the lower terms. So the approximation only really works well when ##x\ll 1##, because then the higher order terms die out quickly. If you use degrees instead of radians, then you're effectively using ##d = \frac{180}{\pi}x## and calculating ##\tan d## instead of ##\tan x##. Since ##\frac{180}{\pi}\approx 57.2##, you should expect to get ##\tan d \approx \frac{1}{57.2}d##. Doing a quick calculation: $$0.12°\times \frac{1}{57.2} \approx 0.002098$$ in line with what you would expect. Mark44 Mentor Hi! In one of my textbook i saw the relation tan(x) = x where x is very small value and expressed in radians. No reputable book would say that tan(x) = x. The proper relationship is that ##\tan(x) \approx x## for values of x near 0 (and in radians). Khashishi But why is it true for radians only? Radians are the natural units for angles. 
If you take a circle with radius 1, then the circumference is ##2\pi##. And there are also ##2\pi## radians in a revolution. So, the arc length of a circle radius 1 is equal to the angle in radians. The tangent function is defined as the ratio of lengths of a right triangle ##\frac{opposite}{adjacent}##. If you take a right triangle with a small angle, the length of the opposite leg is very close to the arc length of the circle next to it, and the length of the adjacent leg is very close to the radius of the circle. So, the tangent is very close to the angle in radians. Chestermiller, FactChecker and olivermsun One can show using the L'Hopital's rule that $$\lim_{x\to 0} \frac{\tan x}{x} = \lim _{x\to 0} \frac{1}{\cos ^2x} = 1$$ This immediately implies ##\tan x = x + \alpha (x) ##, where ##\alpha (x)\to 0 ## as ##x\to 0 ##. In other words, the closer you get to ##0 ##, the smaller the difference between ##\tan x ## and ##x ## becomes. mfb Mentor This immediately implies ##\tan x = x + \alpha (x) ##, where ##\alpha (x)\to 0 ## as ##x\to 0 ##. In other words, the closer you get to ##0 ##, the smaller the difference between ##\tan x ## and ##x ## becomes. That is a much weaker statement than the ratio. It would also apply for ##2x=x+\alpha(x)##, for example, but approximating 2x as x is usually a bad idea. It implies ##\tan x = x(1 + \alpha (x)) ##, where ##\alpha (x)\to 0 ## as ##x\to 0 ## QuantumQuest and nuuskur Our statements are equivalent, although I like yours more as it is more explicit in a way. mfb Mentor The statements are not equivalent. Your second statement just says tan(x) and x have the same limit for x->0, that is a much weaker statement. The ratio is a strong statement, my reply was commenting on the remark afterwards only. fresh_42 If ##\tan x = x + \alpha (x)##, then ##\frac{\tan x}{x} = 1 + o(x) ##. Conversely, if ##\lim_{x\to 0} \frac{\tan x}{x} = 1 ##, then ##\tan x = x(1+\alpha (x)) =: x + \hat{\alpha} (x) ##. We just label things differently, it seems. Besides, if their limits are the same in the viewed process, the ratio statement follows (in this case). mfb Mentor If ##\tan x = x + \alpha (x)##, then ##\frac{\tan x}{x} = 1 + o(x) ##. That argument does not work in general, see my example with 2x instead of tan(x). Conversely, if ##\lim_{x\to 0} \frac{\tan x}{x} = 1 ##, then ##\tan x = x(1+\alpha (x)) =: x + \hat{\alpha} (x) ##. We just label things differently, it seems. That direction is fine but that is a much weaker statement on the right side if you just require ##\hat \alpha(x)## to go to 0. What you need to make the two statements equivalent is the condition that ##\displaystyle \frac{\alpha(x)}{x} \to 0## instead of ##\alpha(x) \to 0##. nuuskur You are correct. I should have made my remainder term more explicit, hence my remark why I like yours more. fresh_42 Mentor You are correct. I should have made my remainder term more explicit, hence my remark why I like yours more. Your mistake goes deeper than sloppiness, because your concept of a (linear) approximation by the first derivative missed the point. It is essential that the normed direction tends towards zero not just the remainder term. We have ##\tan(0+v) = \tan(0) + \tan'(0)\cdot v + r(v)## and ##\lim_{v \to 0}\dfrac{r(v)}{||v||} =0##. This is the reason why @mfb's counterexample works, if this is not the case. You cannot skip the denominator. nuuskur Radians are the natural units for angles. If you take a circle with radius 1, then the circumference is ##2\pi##. 
And there are also ##2\pi## radians in a revolution. So, the arc length of a circle radius 1 is equal to the angle in radians. The tangent function is defined as the ratio of lengths of a right triangle ##\frac{opposite}{adjacent}##. If you take a right triangle with a small angle, the length of the opposite leg is very close to the arc length of the circle next to it, and the length of the adjacent leg is very close to the radius of the circle. So, the tangent is very close to the angle in radians. This is the explanation i actually needed. But i want you to extend it a little bit, hopefully you will clear my doubt(please check my attachment). I will appreciate further reply from you :) #### Attachments • 1522847373700-707240553.jpg 28.4 KB · Views: 429 Mark44 Mentor This is the explanation i actually needed. But i want you to extend it a little bit, hopefully you will clear my doubt(please check my attachment). I will appreciate further reply from you :) The photo you posted is the reason that we discourage images of work. 1. The image is unreadable because it is so small. 2. The image is rotated, making it difficult to read even if it were larger. Khashishi
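As a numerical footnote (mine, not from the thread): the ratio tan(x)/x tends to 1 only when x is in radians; reading the same number as degrees introduces the factor π/180 ≈ 1/57.3 that the answers above describe.

```python
# Sketch: small-angle behaviour of tan in radians vs. degrees.
import math

for x in (0.5, 0.12, 0.01, 0.001):
    print(x, math.tan(x) / x)          # ratio -> 1 as x -> 0 (x in radians)

deg = 0.12                              # the same digits, but interpreted as degrees
print(math.tan(math.radians(deg)))      # ≈ 0.00209, as reported in the question
print(deg * math.pi / 180)              # ≈ 0.00209, i.e. tan(d°) ≈ d * pi/180 for small d
```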
https://math.stackexchange.com/questions/2077981/alternating-series-first-term-is-0-do-i-have-a-problem/2077983
# Alternating series; first term is 0. Do I have a problem?

I have an alternating series which I want to test for convergence or divergence. The series is as follows: $$\sum_{n=1}^\infty (-1)^n \frac{n^2-1}{n^3+1}$$ I know how to test this for convergence, but the first term is $0$ and so the "$(n+1)$-th" term is not always smaller than the $n$-th term. I have seen the answer and the series is convergent (although not absolutely, but I knew that from testing $\sum_{n=1}^\infty \frac{n^2-1}{n^3+1}$ in a previous exercise), can I just "throw out" the $0$ and say it doesn't matter in the grand scheme of things? The terms of the series tend to $0$, so the conditions for convergence of alternating series are satisfied except for that nasty $0$.

• "The terms of the series tend to 0, so the conditions for convergence of alternating series are satisfied except for that nasty 0" And except for the crucial condition that the terms are decreasing in absolute value. Did you check they are? – Did Dec 31 '16 at 23:14
• ((FWIW, the votes on this page seem rather irrational.)) – Did Dec 31 '16 at 23:18

You can remove a finite number of terms and not affect convergence.

• So, from what I understand, what matters is that the series is ultimately convergent? – AstlyDichrar Dec 31 '16 at 0:50
• Exactly, convergence is determined by what ultimately happens. – Oscar Lanzi Dec 31 '16 at 0:54
• I love it when you can answer questions with tautologies. :) – user541686 Dec 31 '16 at 7:55
• Indeed, questions with answers which are the answers to the questions themselves are best. – Mateen Ulhaq Jan 1 '17 at 1:53
• google.com/url?sa=t&source=web&rct=j&url=https://… – Oscar Lanzi Jul 17 '17 at 19:30

Observe that your series just rewrites $$\sum_{n=1}^\infty (-1)^n \frac{n^2-1}{n^3+1}=\sum_{n=\color{red}{2}}^\infty (-1)^n \frac{n^2-1}{n^3+1}.$$

• I have no idea how I didn't think of this, it's the exact same series. – AstlyDichrar Dec 31 '16 at 0:49
• @AstlyDichrar Yes, it is the exact same series ;) – Olivier Oloa Dec 31 '16 at 0:51
• @AstlyDichrar The series is convergent by the alternating series test of convergence: en.wikipedia.org/wiki/Alternating_series_test – Olivier Oloa Dec 31 '16 at 0:52
• Indeed, and you could as well just shift indices if you wanted them to start at $1$: $$\sum_{n=1}^\infty (-1)^n \frac{n^2-1}{n^3+1}=\sum_{m=1}^\infty (-1)^{m+1} \frac{(m+1)^2-1}{(m+1)^3+1}.$$ – Ruslan Dec 31 '16 at 9:40

If your sequence is $a_n$, you could test the series for the sequence with $b_1 = 1$ (or $-1$) and $b_n = a_n$ for $n > 1$ for convergence. You can "cleanly" apply the convergence test to $b_n$, and I will leave as an exercise relating that to the series for $a_n$ (it is not hard). Because this hack is so trivial we usually just apply it "sloppily," but you are definitely doing the right thing by asking how to do it properly. In general, when testing any series for convergence you can do any arbitrary manipulation to the first $N$ terms you want (i.e. you "ignore" them, whatever that needs to mean).
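A short numerical check (my own sketch) of the partial sums: they settle down, and since the n = 1 term is 0, starting the sum at n = 2 gives exactly the same values.

```python
# Sketch: partial sums of sum_{n>=1} (-1)^n (n^2-1)/(n^3+1).
from math import fsum

def term(n):
    return (-1)**n * (n**2 - 1) / (n**3 + 1)

for N in (10, 100, 1000, 10000):
    print(N, fsum(term(n) for n in range(1, N + 1)))
# The values oscillate within an ever-narrowing band, as the alternating series
# test predicts once the (eventually) decreasing terms are taken from n = 2 on.
```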
https://math.stackexchange.com/questions/2571938/limit-of-average-of-decimal-digits-lim-frac1n-x-1-dots-x-n-consta
# Limit of average of decimal digits: $\lim\frac{1}{n} (x_1 + \dots +x_n) = constant$

I have a problem to solve in Ergodic Theory, but I am stuck and have no idea how to proceed. The problem is the following. Prove that there exists a constant α such that for Lebesgue a.e. x∈[0,1] $\lim_{n\to\infty} \frac{1}{n} (x_1 + \dots + x_n) = \alpha$ where $x_1 ,...,x_n$ are digits of the decimal expansion of x, meaning $x_i \in$ {0,...,9}. I have, that if $x \in Q$, $\alpha$ is obviously 0. So if $x \in$ R\Q we can bound the limit from above by 9 and from below by 1, e.g. $\lim_{n\to\infty} \frac{1}{n} (x_1 + \dots + x_n) \leq \lim_{n\to\infty} \frac{9n}{n} = 9$. Right? But now I still have to prove it exists, how can I do that? Thanks a lot already.

• Why is $a$ obviously $0$? Informally it looks like you are asking for the average value of a digit in a randomly chosen decimal. As each decimal is equally probable I'd have thought that was $\frac {0+1+2+\cdots +9}{10}=4.5$ – lulu Dec 18 '17 at 16:34
• For $x \in Q$ the number of $x_i$ unequal to 0 is finite. Since we look at the limit in n this must be 0, right? – Andreas Wicher Dec 18 '17 at 16:38
• In other words, $\lim_{n\to\infty} \frac{constant}{n}$ = 0 right? – Andreas Wicher Dec 18 '17 at 16:41
• Oh, but the rationals have measure $0$. – lulu Dec 18 '17 at 16:50
• @AndreasWicher : 1/3 = 0.3333333.... – Michael Dec 18 '17 at 17:00

Hint: If you have been following a course on Ergodic theory you have most certainly encountered the map $x\mapsto 2 x$ (mod 1) and the fact that it preserves and is ergodic with respect to Lebesgue measure? If you consider the indicator function on $[1/2,1)$ as an observable then the sum along an orbit of a number $x$ corresponds to the number of 1's among the binary digits in the expansion of $x$. For Lebesgue a.e. point the average therefore converges to the integral of the observable, i.e. 1/2. Redo this exercise but for the map $x\mapsto 10 x$ (mod 1) and figure out the right observable to use.

• Okey thank you very much!! I will try :) – Andreas Wicher Dec 18 '17 at 16:46
• So, pretty sure $x \mapsto 10x$ is also ergodic and Lebesgue measure preserving. And I guess I should use the interval [0.1, 0.2) but i don't understand right now, why the orbit should correspond to the number of digits? And do i need to use ergodicity somewhere? – Andreas Wicher Dec 18 '17 at 17:23
• Hint2: Try with an observable that equals 0 on [0,0.1), 1 on [0.1,0.2), ... 9 on [0.9,1) – H. H. Rugh Dec 18 '17 at 17:32
• What is an observable here exactly, it's a function right? The only definition i find is, that it's a property on a non-zero-measure set... – Andreas Wicher Dec 18 '17 at 17:43
• It is a function. You know the Birkhoff ergodic theorem, I presume? It deals with averages of a function (or observable) $A$ along the orbit of $x$ under an ergodic transformation $T$, i.e. of the (possible) limit of $(A(x)+A(Tx)+...+A(T^{n-1}x))/n$ as $n$ goes to infinity. – H. H. Rugh Dec 18 '17 at 18:12

Finally I understood :D So I take the MPS $[0,1) \to [0,1)$, $x \mapsto 10x \pmod 1$ with the Lebesgue measure. This is ergodic. I take as my function $f(x) = 0$ on $[0,0.1)$, $1$ on $[0.1,0.2)$, .... Then the sum along an orbit of a number $x$ corresponds to the number of decimal digits. So we get by Birkhoff $\frac{1}{n} \sum_{i=0}^{n-1} f\left(10^{i}x \bmod 1\right) \to \int_{[0,1)} f \,d\mu =4.5$. Right? Thank you so much!!

• Yes correct, except that the sum along the orbit is the sum of digits (not just the number) but this is certainly also what you meant. This holds for Lebesgue a.e. point. – H. H.
Rugh Dec 20 '17 at 14:25
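To make the conclusion tangible (my own sketch; sqrt(2) is chosen as a convenient concrete point, and it is only conjectured to behave like a Lebesgue-typical number), one can average the first decimal digits of sqrt(2) and watch the mean sit near 4.5, the integral of the digit observable:

```python
# Sketch: average of the first N decimal digits of sqrt(2).
import math

N = 10_000
# math.isqrt(2 * 10**(2*N)) = floor(sqrt(2) * 10**N); its decimal string is
# "1" followed by the first N digits of sqrt(2) after the decimal point.
digits = str(math.isqrt(2 * 10**(2 * N)))[1:N + 1]
print(sum(map(int, digits)) / N)   # ≈ 4.5 for this N
```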
https://math.stackexchange.com/questions/1806205/two-points-are-randomly-selected-on-a-line-of-length-1
# Two points are randomly selected on a line of length $1$ Two points are randomly selected on a line of length $1$. What is the probability that one of the segments is greater than $\frac{1}{2}$? Points can be placed anywhere between [0, 1], for example. Thanks! • Notice that a line segment will only be greater than 1/2 if both chosen points are either both to the left or to the right of the middle point. So knowing this I guess you could work out the details – Tom Ultramelonman May 30 '16 at 21:07 • @TomUltramelonman What if the first point is at $.1$ and the second at $.9$? Then the segment between them has length $.8$. – lulu May 30 '16 at 21:09 • Oh damn, how could I overlook this ^^ Yea sorry – Tom Ultramelonman May 30 '16 at 21:10 • @TomUltramelonman That's not correct. Two points divide a segment into three parts and any of them may be longer than $\frac 12$... – CiaPan May 30 '16 at 21:10 Ok suppose you take some point $$x\in[0,1/2]$$. Now taking a second point $$y\in[0,1]$$, there are two situations where you obtain a segment of length at least $$1/2$$. Firstly if $$y\le 1/2$$. Secondly if $$y\ge x+1/2$$. So for taking a first point $$x$$, the chance that you have a segment of desired length is $$1/2+(1-1/2-x)=1 - x$$. Now integrating this over $$[0,1/2]$$ you get $$3/8$$. Yu can do the same for $$x\in[1/2,1]$$. So this gives you a chance of $$6/8$$. • Thanks to lulu and CiaPan for the comments. – Tom Ultramelonman May 30 '16 at 21:28 • This is solid (+1). My variant, below, does the same thing geometrically...without referring to an integral. But the underlying principle is the same. – lulu May 30 '16 at 21:31 Probability is $3/4$ if points $x$ and $y$ are chosen with uniform probability. That corresponds to the area in color in the picture below. With probability $\frac 12$ both points are on the same side of the midpoint, so we are guaranteed success. If the points are on opposite sides of the midpoint(a probability $\frac 12$ event, with $P<\frac 12< Q$ say, then again with probability $\frac 12$ we have $Q$ is nearer $1$ than $P$ is near $\frac 12$,so the segment between them has length greater than $\frac 12$. Thus the total probability Is $$\frac 12+\frac 12\times \frac 12=\frac 34$$ Note: this is equivalent to asking how probable it is that the three segments formed by the two points can form a triangle (the above shows that the answer is $\frac 14$). Many proofs for that can be found e.g. here
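A Monte Carlo cross-check (my own, not from the answers): the longest of the three pieces exceeds 1/2 in about three quarters of the trials.

```python
# Sketch: estimate P(one of the three segments is longer than 1/2).
import random

random.seed(1)
trials = 1_000_000
hits = 0
for _ in range(trials):
    a, b = sorted((random.random(), random.random()))
    if max(a, b - a, 1 - b) > 0.5:   # lengths of the three pieces of [0, 1]
        hits += 1
print(hits / trials)   # ≈ 0.75
```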
https://en.wikipedia.org/wiki/Floor_and_ceiling_functions
# Floor and ceiling functions

(Graphs of the floor function and the ceiling function.)

In mathematics and computer science, the floor function is the function that takes as input a real number x, and gives as output the greatest integer less than or equal to x, denoted floor(x) or ⌊x⌋. Similarly, the ceiling function maps x to the least integer greater than or equal to x, denoted ceil(x) or ⌈x⌉.[1]

For example, ⌊2.4⌋ = 2, ⌊−2.4⌋ = −3, ⌈2.4⌉ = 3, and ⌈−2.4⌉ = −2.

The integral part or integer part of x, often denoted [x], is usually defined as ⌊x⌋ if x is nonnegative, and ⌈x⌉ otherwise. For example, [2.4] = 2 and [−2.4] = −2. The operation of truncation generalizes this to a specified number of digits: truncation to zero significant digits is the same as the integer part. Some authors define the integer part as the floor regardless of the sign of x, using a variety of notations for this.[2]

For n an integer, ⌊n⌋ = ⌈n⌉ = [n] = n.

## Notation

The integral part or integer part of a number (partie entière in the original) was first defined in 1798 by Adrien-Marie Legendre in his proof of Legendre's formula. Carl Friedrich Gauss introduced the square bracket notation ${\displaystyle [x]}$ in his third proof of quadratic reciprocity (1808).[3] This remained the standard[4] in mathematics until Kenneth E. Iverson introduced, in his 1962 book A Programming Language, the names "floor" and "ceiling" and the corresponding notations ${\displaystyle \lfloor x\rfloor }$ and ${\displaystyle \lceil x\rceil }$.[5][6] Both notations are now used in mathematics,[7] although Iverson's notation will be followed in this article.

In some sources, boldface or double brackets ${\displaystyle [\![x]\!]}$ are used for floor, and reversed brackets ${\displaystyle ]\!]x[\![}$ or ]x[ for ceiling.[8][9] Sometimes ${\displaystyle [x]}$ is taken to mean the round-toward-zero function.[citation needed]

The fractional part is the sawtooth function, denoted by ${\displaystyle \{x\}}$ for real x and defined by the formula[10] ${\displaystyle \{x\}=x-\lfloor x\rfloor .}$ For all x, ${\displaystyle 0\leq \{x\}<1.}$

### Examples

| x | Floor ⌊x⌋ | Ceiling ⌈x⌉ | Fractional part {x} |
|------|------|------|------|
| 2 | 2 | 2 | 0 |
| 2.4 | 2 | 3 | 0.4 |
| 2.9 | 2 | 3 | 0.9 |
| −2.7 | −3 | −2 | 0.3 |
| −2 | −2 | −2 | 0 |

### Typesetting

The floor and ceiling functions are usually typeset with left and right square brackets, where the upper (for floor function) or lower (for ceiling function) horizontal bars are missing (${\displaystyle \lfloor \,\rfloor }$ for floor and ${\displaystyle \lceil \,\rceil }$ for ceiling). These characters are provided in Unicode:

• U+2308 LEFT CEILING (HTML &#8968; · &lceil;, &LeftCeiling;)
• U+2309 RIGHT CEILING (HTML &#8969; · &rceil;, &RightCeiling;)
• U+230A LEFT FLOOR (HTML &#8970; · &LeftFloor;, &lfloor;)
• U+230B RIGHT FLOOR (HTML &#8971; · &rfloor;, &RightFloor;)

In the LaTeX typesetting system, these symbols can be specified with the \lfloor, \rfloor, \lceil and \rceil commands in math mode, and extended in size using \left\lfloor, \right\rfloor, \left\lceil and \right\rceil as needed.
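(Not part of the article: a small Python illustration of how floor, ceiling and the truncating "integer part" differ on negative inputs, matching the worked examples above.)

```python
# Sketch: floor vs. ceiling vs. truncation ("integer part") vs. fractional part.
import math

for x in (2.0, 2.4, 2.9, -2.7, -2.0):
    print(x, math.floor(x), math.ceil(x), math.trunc(x), x - math.floor(x))
# For -2.7: floor -3, ceiling -2, trunc -2 (round toward zero, i.e. [x]),
# and fractional part x - floor(x) = 0.3 (up to floating-point rounding).
```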
## Definition and properties

Given real numbers x and y, integers k, m, n and the set of integers ${\displaystyle \mathbb {Z} }$, floor and ceiling may be defined by the equations ${\displaystyle \lfloor x\rfloor =\max\{m\in \mathbb {Z} \mid m\leq x\},}$ ${\displaystyle \lceil x\rceil =\min\{n\in \mathbb {Z} \mid n\geq x\}.}$

Since there is exactly one integer in a half-open interval of length one, for any real number x, there are unique integers m and n satisfying the equation ${\displaystyle x-1<m\leq x\leq n<x+1,}$ where ${\displaystyle \lfloor x\rfloor =m}$ and ${\displaystyle \lceil x\rceil =n}$ may also be taken as the definition of floor and ceiling.

### Equivalences

These formulas can be used to simplify expressions involving floors and ceilings.[11]

{\displaystyle {\begin{aligned}\lfloor x\rfloor =m&\;\;{\mbox{ if and only if }}&m&\leq x<m+1,\\\lceil x\rceil =n&\;\;{\mbox{ if and only if }}&n-1&<x\leq n,\\\lfloor x\rfloor =m&\;\;{\mbox{ if and only if }}&x-1&<m\leq x,\\\lceil x\rceil =n&\;\;{\mbox{ if and only if }}&x&\leq n<x+1.\end{aligned}}}

In the language of order theory, the floor function is a residuated mapping, that is, part of a Galois connection: it is the upper adjoint of the function that embeds the integers into the reals.

{\displaystyle {\begin{aligned}x<n&\;\;{\mbox{ if and only if }}&\lfloor x\rfloor &<n,\\n<x&\;\;{\mbox{ if and only if }}&n&<\lceil x\rceil ,\\x\leq n&\;\;{\mbox{ if and only if }}&\lceil x\rceil &\leq n,\\n\leq x&\;\;{\mbox{ if and only if }}&n&\leq \lfloor x\rfloor .\end{aligned}}}

These formulas show how adding integers to the arguments affects the functions:

{\displaystyle {\begin{aligned}\lfloor x+n\rfloor &=\lfloor x\rfloor +n,\\\lceil x+n\rceil &=\lceil x\rceil +n,\\\{x+n\}&=\{x\}.\end{aligned}}}

The above are never true if n is not an integer; however, for every x and y, the following inequalities hold:

{\displaystyle {\begin{aligned}\lfloor x\rfloor +\lfloor y\rfloor &\leq \lfloor x+y\rfloor \leq \lfloor x\rfloor +\lfloor y\rfloor +1,\\\lceil x\rceil +\lceil y\rceil -1&\leq \lceil x+y\rceil \leq \lceil x\rceil +\lceil y\rceil .\end{aligned}}}

### Relations among the functions

It is clear from the definitions that ${\displaystyle \lfloor x\rfloor \leq \lceil x\rceil ,}$ with equality if and only if x is an integer, i.e. ${\displaystyle \lceil x\rceil -\lfloor x\rfloor ={\begin{cases}0&{\mbox{ if }}x\in \mathbb {Z} \\1&{\mbox{ if }}x\not \in \mathbb {Z} \end{cases}}}$

In fact, for integers n, both floor and ceiling functions are the identity: ${\displaystyle \lfloor n\rfloor =\lceil n\rceil =n.}$

Negating the argument switches floor and ceiling and changes the sign: {\displaystyle {\begin{aligned}\lfloor x\rfloor +\lceil -x\rceil &=0\\-\lfloor x\rfloor &=\lceil -x\rceil \\-\lceil x\rceil &=\lfloor -x\rfloor \end{aligned}}}

and: ${\displaystyle \lfloor x\rfloor +\lfloor -x\rfloor ={\begin{cases}0&{\text{if }}x\in \mathbb {Z} \\-1&{\text{if }}x\not \in \mathbb {Z} ,\end{cases}}}$ ${\displaystyle \lceil x\rceil +\lceil -x\rceil ={\begin{cases}0&{\text{if }}x\in \mathbb {Z} \\1&{\text{if }}x\not \in \mathbb {Z} .\end{cases}}}$

Negating the argument complements the fractional part: ${\displaystyle \{x\}+\{-x\}={\begin{cases}0&{\text{if }}x\in \mathbb {Z} \\1&{\text{if }}x\not \in \mathbb {Z} .\end{cases}}}$

The floor, ceiling, and fractional part functions are idempotent: {\displaystyle {\begin{aligned}{\Big \lfloor }\lfloor x\rfloor {\Big \rfloor }&=\lfloor x\rfloor ,\\{\Big \lceil }\lceil x\rceil {\Big \rceil }&=\lceil x\rceil ,\\{\Big \{}\{x\}{\Big \}}&=\{x\}.\end{aligned}}}

The result of nested floor or ceiling functions is the innermost function: {\displaystyle {\begin{aligned}{\Big \lfloor }\lceil x\rceil {\Big \rfloor }&=\lceil x\rceil ,\\{\Big \lceil }\lfloor x\rfloor {\Big \rceil }&=\lfloor x\rfloor \end{aligned}}}

due to the identity property for integers.
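(Not part of the article: a quick property check of several of the identities above on a grid of rational sample points, using exact arithmetic.)

```python
# Sketch: spot-check a few floor/ceiling identities.
import math
from fractions import Fraction

samples = [Fraction(n, 7) for n in range(-30, 31)]   # mixes integers and non-integers

for x in samples:
    assert math.floor(x) + math.ceil(-x) == 0        # negation swaps floor and ceiling
    assert math.floor(math.ceil(x)) == math.ceil(x)  # nesting returns the inner value
    assert math.floor(x + 5) == math.floor(x) + 5    # integer shifts pass through
    assert math.ceil(x) - math.floor(x) == (0 if x.denominator == 1 else 1)
print("all sampled identities hold")
```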
### Quotients If m and n are integers and n ≠ 0, ${\displaystyle 0\leq \left\{{\frac {m}{n}}\right\}\leq 1-{\frac {1}{|n|}}.}$ If n is a positive integer[12] ${\displaystyle \left\lfloor {\frac {x+m}{n}}\right\rfloor =\left\lfloor {\frac {\lfloor x\rfloor +m}{n}}\right\rfloor ,}$ ${\displaystyle \left\lceil {\frac {x+m}{n}}\right\rceil =\left\lceil {\frac {\lceil x\rceil +m}{n}}\right\rceil .}$ If m is positive[13] ${\displaystyle n=\left\lceil {\frac {n}{m}}\right\rceil +\left\lceil {\frac {n-1}{m}}\right\rceil +\dots +\left\lceil {\frac {n-m+1}{m}}\right\rceil ,}$ ${\displaystyle n=\left\lfloor {\frac {n}{m}}\right\rfloor +\left\lfloor {\frac {n+1}{m}}\right\rfloor +\dots +\left\lfloor {\frac {n+m-1}{m}}\right\rfloor .}$ For m = 2 these imply ${\displaystyle n=\left\lfloor {\frac {n}{2}}\right\rfloor +\left\lceil {\frac {n}{2}}\right\rceil .}$ More generally,[14] for positive m (See Hermite's identity) ${\displaystyle \lceil mx\rceil =\left\lceil x\right\rceil +\left\lceil x-{\frac {1}{m}}\right\rceil +\dots +\left\lceil x-{\frac {m-1}{m}}\right\rceil ,}$ ${\displaystyle \lfloor mx\rfloor =\left\lfloor x\right\rfloor +\left\lfloor x+{\frac {1}{m}}\right\rfloor +\dots +\left\lfloor x+{\frac {m-1}{m}}\right\rfloor .}$ The following can be used to convert floors to ceilings and vice versa (m positive)[15] ${\displaystyle \left\lceil {\frac {n}{m}}\right\rceil =\left\lfloor {\frac {n+m-1}{m}}\right\rfloor =\left\lfloor {\frac {n-1}{m}}\right\rfloor +1,}$ ${\displaystyle \left\lfloor {\frac {n}{m}}\right\rfloor =\left\lceil {\frac {n-m+1}{m}}\right\rceil =\left\lceil {\frac {n+1}{m}}\right\rceil -1,}$ For all m and n strictly positive integers:[16][better source needed] ${\displaystyle \sum _{k=1}^{n-1}\left\lfloor {\frac {km}{n}}\right\rfloor ={\frac {(m-1)(n-1)+\gcd(m,n)-1}{2}},}$ which, for positive and coprime m and n, reduces to ${\displaystyle \sum _{k=1}^{n-1}\left\lfloor {\frac {km}{n}}\right\rfloor ={\frac {1}{2}}(m-1)(n-1).}$ Since the right-hand side of the general case is symmetrical in m and n, this implies that ${\displaystyle \left\lfloor {\frac {m}{n}}\right\rfloor +\left\lfloor {\frac {2m}{n}}\right\rfloor +\dots +\left\lfloor {\frac {(n-1)m}{n}}\right\rfloor =\left\lfloor {\frac {n}{m}}\right\rfloor +\left\lfloor {\frac {2n}{m}}\right\rfloor +\dots +\left\lfloor {\frac {(m-1)n}{m}}\right\rfloor .}$ More generally, if m and n are positive, {\displaystyle {\begin{aligned}&\left\lfloor {\frac {x}{n}}\right\rfloor +\left\lfloor {\frac {m+x}{n}}\right\rfloor +\left\lfloor {\frac {2m+x}{n}}\right\rfloor +\dots +\left\lfloor {\frac {(n-1)m+x}{n}}\right\rfloor \\=&\left\lfloor {\frac {x}{m}}\right\rfloor +\left\lfloor {\frac {n+x}{m}}\right\rfloor +\left\lfloor {\frac {2n+x}{m}}\right\rfloor +\cdots +\left\lfloor {\frac {(m-1)n+x}{m}}\right\rfloor .\end{aligned}}} This is sometimes called a reciprocity law.[17] ### Nested divisions For positive integer n, and arbitrary real numbers m,x:[18] ${\displaystyle \left\lfloor {\frac {\lfloor x/m\rfloor }{n}}\right\rfloor =\left\lfloor {\frac {x}{mn}}\right\rfloor }$ ${\displaystyle \left\lceil {\frac {\lceil x/m\rceil }{n}}\right\rceil =\left\lceil {\frac {x}{mn}}\right\rceil .}$ ### Continuity and series expansions None of the functions discussed in this article are continuous, but all are piecewise linear: the functions ${\displaystyle \lfloor x\rfloor }$, ${\displaystyle \lceil x\rceil }$, and ${\displaystyle \{x\}}$ have discontinuities at the integers. 
${\displaystyle \lfloor x\rfloor }$ is upper semi-continuous and ${\displaystyle \lceil x\rceil }$ and ${\displaystyle \{x\}}$ are lower semi-continuous.

Since none of the functions discussed in this article are continuous, none of them have a power series expansion. Since floor and ceiling are not periodic, they do not have uniformly convergent Fourier series expansions. The fractional part function has Fourier series expansion[19]

${\displaystyle \{x\}={\frac {1}{2}}-{\frac {1}{\pi }}\sum _{k=1}^{\infty }{\frac {\sin(2\pi kx)}{k}}}$

for x not an integer. At points of discontinuity, a Fourier series converges to a value that is the average of its limits on the left and the right, unlike the floor, ceiling and fractional part functions: for y fixed and x a multiple of y the Fourier series given converges to y/2, rather than to x mod y = 0. At points of continuity the series converges to the true value. Using the formula floor(x) = x − {x} gives

${\displaystyle \lfloor x\rfloor =x-{\frac {1}{2}}+{\frac {1}{\pi }}\sum _{k=1}^{\infty }{\frac {\sin(2\pi kx)}{k}}}$

for x not an integer.

## Applications

### Mod operator

For an integer x and a positive integer y, the modulo operation, denoted by x mod y, gives the value of the remainder when x is divided by y. This definition can be extended to real x and y, y ≠ 0, by the formula

${\displaystyle x{\bmod {y}}=x-y\left\lfloor {\frac {x}{y}}\right\rfloor .}$

Then it follows from the definition of floor function that this extended operation satisfies many natural properties. Notably, x mod y is always between 0 and y, i.e., if y is positive,

${\displaystyle 0\leq x{\bmod {y}}<y,}$

and if y is negative,

${\displaystyle 0\geq x{\bmod {y}}>y.}$

### Quadratic reciprocity

Gauss's third proof of quadratic reciprocity, as modified by Eisenstein, has two basic steps.[20][21] Let p and q be distinct positive odd prime numbers, and let ${\displaystyle m={\frac {p-1}{2}},}$ ${\displaystyle n={\frac {q-1}{2}}.}$

First, Gauss's lemma is used to show that the Legendre symbols are given by

${\displaystyle \left({\frac {q}{p}}\right)=(-1)^{\left\lfloor {\frac {q}{p}}\right\rfloor +\left\lfloor {\frac {2q}{p}}\right\rfloor +\dots +\left\lfloor {\frac {mq}{p}}\right\rfloor }}$

and

${\displaystyle \left({\frac {p}{q}}\right)=(-1)^{\left\lfloor {\frac {p}{q}}\right\rfloor +\left\lfloor {\frac {2p}{q}}\right\rfloor +\dots +\left\lfloor {\frac {np}{q}}\right\rfloor }.}$

The second step is to use a geometric argument to show that

${\displaystyle \left\lfloor {\frac {q}{p}}\right\rfloor +\left\lfloor {\frac {2q}{p}}\right\rfloor +\dots +\left\lfloor {\frac {mq}{p}}\right\rfloor +\left\lfloor {\frac {p}{q}}\right\rfloor +\left\lfloor {\frac {2p}{q}}\right\rfloor +\dots +\left\lfloor {\frac {np}{q}}\right\rfloor =mn.}$

Combining these formulas gives quadratic reciprocity in the form

${\displaystyle \left({\frac {p}{q}}\right)\left({\frac {q}{p}}\right)=(-1)^{mn}=(-1)^{{\frac {p-1}{2}}{\frac {q-1}{2}}}.}$

There are formulas that use floor to express the quadratic character of small numbers mod odd primes p:[22]

${\displaystyle \left({\frac {2}{p}}\right)=(-1)^{\left\lfloor {\frac {p+1}{4}}\right\rfloor },}$

${\displaystyle \left({\frac {3}{p}}\right)=(-1)^{\left\lfloor {\frac {p+1}{6}}\right\rfloor }.}$
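The floor-based extension of mod defined above is exactly the convention Python's `%` operator follows, whereas C-family `%` truncates toward zero. The sketch below contrasts the two; `mod_floored` and `mod_truncated` are helper names introduced here for illustration, not names from the article.

```python
import math

def mod_floored(x, y):
    # x mod y = x - y*floor(x/y); the result has the sign of y (the article's definition)
    return x - y * math.floor(x / y)

def mod_truncated(x, y):
    # C-style remainder: x - y*trunc(x/y); the result has the sign of x
    return x - y * math.trunc(x / y)

for x, y in [(7, 3), (-7, 3), (7, -3), (-7, -3), (7.5, 2.5), (-7.5, 2.5)]:
    fm, tm = mod_floored(x, y), mod_truncated(x, y)
    # floored mod satisfies 0 <= r < y for y > 0 and y < r <= 0 for y < 0
    assert (0 <= fm < y) if y > 0 else (y < fm <= 0)
    print(f"x={x:5}, y={y:5}:  floored mod = {fm:5}, truncated mod = {tm:5}")
```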
### Rounding

For an arbitrary real number ${\displaystyle x}$, rounding ${\displaystyle x}$ to the nearest integer with tie breaking towards positive infinity is given by ${\displaystyle {\text{rpi}}(x)=\left\lfloor x+{\tfrac {1}{2}}\right\rfloor =\left\lceil {\tfrac {\lfloor 2x\rfloor }{2}}\right\rceil }$; rounding towards negative infinity is given as ${\displaystyle {\text{rni}}(x)=\left\lceil x-{\tfrac {1}{2}}\right\rceil =\left\lfloor {\tfrac {\lceil 2x\rceil }{2}}\right\rfloor }$.

If tie-breaking is away from 0, then the rounding function is ${\displaystyle {\text{ri}}(x)=\operatorname {sgn}(x)\left\lfloor |x|+{\tfrac {1}{2}}\right\rfloor }$, and rounding towards even can be expressed with the more cumbersome ${\displaystyle \lfloor x\rceil =\left\lfloor x+{\tfrac {1}{2}}\right\rfloor +\left\lceil {\tfrac {2x-1}{4}}\right\rceil -\left\lfloor {\tfrac {2x-1}{4}}\right\rfloor -1}$, which is the above expression for rounding towards positive infinity ${\displaystyle {\text{rpi}}(x)}$ minus an integrality indicator for ${\displaystyle {\tfrac {2x-1}{4}}}$.

### Number of digits

The number of digits in base b of a positive integer k is

${\displaystyle \lfloor \log _{b}{k}\rfloor +1=\lceil \log _{b}{(k+1)}\rceil .}$

### Factors of factorials

Let n be a positive integer and p a positive prime number. The exponent of the highest power of p that divides n! is given by a version of Legendre's formula[23]

${\displaystyle \left\lfloor {\frac {n}{p}}\right\rfloor +\left\lfloor {\frac {n}{p^{2}}}\right\rfloor +\left\lfloor {\frac {n}{p^{3}}}\right\rfloor +\dots ={\frac {n-\sum _{k}a_{k}}{p-1}}}$

where ${\textstyle n=\sum _{k}a_{k}p^{k}}$ is the way of writing n in base p. This is a finite sum, since the floors are zero when ${\displaystyle p^{k}>n}$.

### Beatty sequence

The Beatty sequence shows how every positive irrational number gives rise to a partition of the natural numbers into two sequences via the floor function.[24]

### Euler's constant (γ)

There are formulas for Euler's constant γ = 0.57721 56649 ... that involve the floor and ceiling, e.g.[25]

${\displaystyle \gamma =\int _{1}^{\infty }\left({1 \over \lfloor x\rfloor }-{1 \over x}\right)\,dx,}$

${\displaystyle \gamma =\lim _{n\to \infty }{\frac {1}{n}}\sum _{k=1}^{n}\left(\left\lceil {\frac {n}{k}}\right\rceil -{\frac {n}{k}}\right),}$

and

${\displaystyle \gamma =\sum _{k=2}^{\infty }(-1)^{k}{\frac {\left\lfloor \log _{2}k\right\rfloor }{k}}={\tfrac {1}{2}}-{\tfrac {1}{3}}+2\left({\tfrac {1}{4}}-{\tfrac {1}{5}}+{\tfrac {1}{6}}-{\tfrac {1}{7}}\right)+3\left({\tfrac {1}{8}}-\cdots -{\tfrac {1}{15}}\right)+\cdots }$
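Two formulas from this part of the article, the digit-count identity and Legendre's formula for the exponent of a prime in n!, translate directly into short code. The sketch below is illustrative only; note that it computes ⌊log_b k⌋ with integer arithmetic, since a naive floating-point logarithm can be off by one at exact powers of the base. The helper names are ours.

```python
import math

def ilog(k, b):
    # floor(log_b k) for positive integers, using exact integer arithmetic
    e, q = 0, b
    while q <= k:
        e += 1
        q *= b
    return e

def num_digits(k, b):
    # number of base-b digits of a positive integer k: floor(log_b k) + 1
    return ilog(k, b) + 1

def legendre(n, p):
    # exponent of the prime p in n!: sum of floor(n / p^i)
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

for k in (1, 9, 10, 255, 256, 1000):
    assert num_digits(k, 10) == len(str(k))
    assert num_digits(k, 2) == k.bit_length()

for n in (10, 25, 100):
    for p in (2, 3, 5, 7):
        m, direct = math.factorial(n), 0
        while m % p == 0:
            m //= p
            direct += 1
        assert legendre(n, p) == direct

print("digit-count and Legendre's formula agree with direct computation")
```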
### Riemann zeta function (ζ)

The fractional part function also shows up in integral representations of the Riemann zeta function. It is straightforward to prove (using integration by parts)[26] that if ${\displaystyle \phi (x)}$ is any function with a continuous derivative in the closed interval [a, b],

${\displaystyle \sum _{a<n\leq b}\phi (n)=\int _{a}^{b}\phi (x)\,dx+\int _{a}^{b}\left(\{x\}-{\tfrac {1}{2}}\right)\phi '(x)\,dx+\left(\{a\}-{\tfrac {1}{2}}\right)\phi (a)-\left(\{b\}-{\tfrac {1}{2}}\right)\phi (b).}$

Letting ${\displaystyle \phi (n)={n}^{-s}}$ for real part of s greater than 1 and letting a and b be integers, and letting b approach infinity gives

${\displaystyle \zeta (s)=s\int _{1}^{\infty }{\frac {{\frac {1}{2}}-\{x\}}{x^{s+1}}}\,dx+{\frac {1}{s-1}}+{\frac {1}{2}}.}$

This formula is valid for all s with real part greater than −1 (except s = 1, where there is a pole), and combined with the Fourier expansion for {x} can be used to extend the zeta function to the entire complex plane and to prove its functional equation.[27]

For s = σ + it in the critical strip 0 < σ < 1,

${\displaystyle \zeta (s)=s\int _{-\infty }^{\infty }e^{-\sigma \omega }(\lfloor e^{\omega }\rfloor -e^{\omega })e^{-it\omega }\,d\omega .}$

In 1947 van der Pol used this representation to construct an analogue computer for finding roots of the zeta function.[28]

### Formulas for prime numbers

The floor function appears in several formulas characterizing prime numbers. For example, since ${\displaystyle \left\lfloor {\frac {n}{m}}\right\rfloor -\left\lfloor {\frac {n-1}{m}}\right\rfloor }$ is equal to 1 if m divides n, and to 0 otherwise, it follows that a positive integer n is a prime if and only if[29]

${\displaystyle \sum _{m=1}^{\infty }\left(\left\lfloor {\frac {n}{m}}\right\rfloor -\left\lfloor {\frac {n-1}{m}}\right\rfloor \right)=2.}$

One may also give formulas for producing the prime numbers. For example, let ${\displaystyle p_{n}}$ be the n-th prime, and for any integer r > 1, define the real number α by the sum

${\displaystyle \alpha =\sum _{m=1}^{\infty }p_{m}r^{-m^{2}}.}$

Then[30]

${\displaystyle p_{n}=\left\lfloor r^{n^{2}}\alpha \right\rfloor -r^{2n-1}\left\lfloor r^{(n-1)^{2}}\alpha \right\rfloor .}$

A similar result is that there is a number θ = 1.3064... (Mills' constant) with the property that

${\displaystyle \left\lfloor \theta ^{3}\right\rfloor ,\left\lfloor \theta ^{9}\right\rfloor ,\left\lfloor \theta ^{27}\right\rfloor ,\dots }$

are all prime.[31]

There is also a number ω = 1.9287800... with the property that

${\displaystyle \left\lfloor 2^{\omega }\right\rfloor ,\left\lfloor 2^{2^{\omega }}\right\rfloor ,\left\lfloor 2^{2^{2^{\omega }}}\right\rfloor ,\dots }$

are all prime.[31]

Let π(x) be the number of primes less than or equal to x. It is a straightforward deduction from Wilson's theorem that[32]

${\displaystyle \pi (n)=\sum _{j=2}^{n}\left\lfloor {\frac {(j-1)!+1}{j}}-\left\lfloor {\frac {(j-1)!}{j}}\right\rfloor \right\rfloor .}$

Also, if n ≥ 2,[33]

${\displaystyle \pi (n)=\sum _{j=2}^{n}\left\lfloor {\frac {1}{\sum _{k=2}^{j}\left\lfloor \left\lfloor {\frac {j}{k}}\right\rfloor {\frac {k}{j}}\right\rfloor }}\right\rfloor .}$

None of the formulas in this section are of any practical use.[34][35]
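As the article says, these prime-number formulas have no practical value, but the divisor-counting characterization is easy to test for small n. The sketch below uses the finite form of the sum (upper limit n, as mentioned in note 29) and compares it against trial division; the function names are ours, introduced for illustration.

```python
def divisor_count(n):
    # sum_{m=1}^{n} (floor(n/m) - floor((n-1)/m)) counts the divisors of n
    return sum(n // m - (n - 1) // m for m in range(1, n + 1))

def is_prime_by_formula(n):
    # n >= 2 is prime iff the sum above equals 2 (only the divisors 1 and n)
    return n >= 2 and divisor_count(n) == 2

def is_prime_naive(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

assert all(is_prime_by_formula(n) == is_prime_naive(n) for n in range(2, 500))
print("floor-function primality criterion agrees with trial division up to 500")
```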
### Solved problems

Ramanujan submitted these problems to the Journal of the Indian Mathematical Society.[36] If n is a positive integer, prove that

1. ${\displaystyle \left\lfloor {\tfrac {n}{3}}\right\rfloor +\left\lfloor {\tfrac {n+2}{6}}\right\rfloor +\left\lfloor {\tfrac {n+4}{6}}\right\rfloor =\left\lfloor {\tfrac {n}{2}}\right\rfloor +\left\lfloor {\tfrac {n+3}{6}}\right\rfloor ,}$
2. ${\displaystyle \left\lfloor {\tfrac {1}{2}}+{\sqrt {n+{\tfrac {1}{2}}}}\right\rfloor =\left\lfloor {\tfrac {1}{2}}+{\sqrt {n+{\tfrac {1}{4}}}}\right\rfloor ,}$
3. ${\displaystyle \left\lfloor {\sqrt {n}}+{\sqrt {n+1}}\right\rfloor =\left\lfloor {\sqrt {4n+2}}\right\rfloor .}$

### Unsolved problem

The study of Waring's problem has led to an unsolved problem: Are there any positive integers k ≥ 6 such that[37]

${\displaystyle 3^{k}-2^{k}\left\lfloor \left({\tfrac {3}{2}}\right)^{k}\right\rfloor >2^{k}-\left\lfloor \left({\tfrac {3}{2}}\right)^{k}\right\rfloor -2}$ ?

Mahler[38] has proved there can only be a finite number of such k; none are known.

## Computer implementations

Int function from floating-point conversion in C

In most programming languages, the simplest method to convert a floating point number to an integer does not do floor or ceiling, but truncation. The reason for this is historical, as the first machines used ones' complement and truncation was simpler to implement (floor is simpler in two's complement). FORTRAN was defined to require this behavior and thus almost all processors implement conversion this way. Some consider this to be an unfortunate historical design decision that has led to bugs handling negative offsets and graphics on the negative side of the origin.[citation needed]

A bit-wise right-shift of a signed integer ${\displaystyle x}$ by ${\displaystyle n}$ is the same as ${\displaystyle \left\lfloor {\frac {x}{2^{n}}}\right\rfloor }$. Division by a power of 2 is often written as a right-shift, not for optimization as might be assumed, but because the floor of negative results is required. Assuming such shifts are "premature optimization" and replacing them with division can break software.[citation needed]

Many programming languages (including C, C++,[39][40] C#,[41][42] Java,[43][44] PHP,[45][46] R,[47] and Python[48]) provide standard functions for floor and ceiling, usually called floor and ceil, or less commonly ceiling.[49] The language APL uses ⌊x for floor. The J Programming Language, a follow-on to APL that is designed to use standard keyboard symbols, uses <. for floor and >. for ceiling.[50] ALGOL uses entier for floor.

Most spreadsheet programs support some form of a ceiling function. Although the details differ between programs, most implementations support a second parameter—a multiple of which the given number is to be rounded to. For example, ceiling(2, 3) rounds 2 up to the nearest multiple of 3, giving 3. The definition of what "round up" means, however, differs from program to program. Microsoft Excel used almost exactly the opposite of standard notation, with INT for floor, and FLOOR meaning round-toward-zero, and CEILING meaning round-away-from-zero.[51] This has followed through to the Office Open XML file format. Excel 2010 now follows the standard definition.[52] The OpenDocument file format, as used by OpenOffice.org, Libreoffice and others, follows the mathematical definition of ceiling for its ceiling function, with an optional parameter for Excel compatibility. For example, CEILING(-4.5) returns −4.

## Notes

1. ^ Graham, Knuth, & Patashnik, Ch. 3.1 2. ^ 1) Luke Heaton, A Brief History of Mathematical Thought, 2015, ISBN 1472117158 (n.p.) 2) Albert A. Blank et al., Calculus: Differential Calculus, 1968, p. 259 3) John W. Warris, Horst Stocker, Handbook of mathematics and computational science, 1998, ISBN 0387947469, p. 151 3. ^ Lemmermeyer, pp. 10, 23. 4. ^ e.g. Cassels, Hardy & Wright, and Ribenboim use Gauss's notation, Graham, Knuth & Patashnik, and Crandall & Pomerance use Iverson's. 5. ^ Iverson, p. 12. 6. ^ Higham, p. 25. 7. ^ See the Wolfram MathWorld article. 8. ^ 9.
^ Mathwords: Ceiling Function 10. ^ Graham, Knuth, & Patashnik, p. 70. 11. ^ Graham, Knuth, & Patashink, Ch. 3 12. ^ Graham, Knuth, & Patashnik, p. 73 13. ^ Graham, Knuth, & Patashnik, p. 85 14. ^ Graham, Knuth, & Patashnik, p. 85 and Ex. 3.15 15. ^ Graham, Knuth, & Patashnik, Ex. 3.12 16. ^ J.E.blazek, Combinatoire de N-modules de Catalan, Master's thesis, page 17. 17. ^ Graham, Knuth, & Patashnik, p. 94 18. ^ Graham, Knuth, & Patashnik, p. 71, apply theorem 3.10 with x/m as input and the division by n as function 19. ^ Titchmarsh, p. 15, Eq. 2.1.7 20. ^ Lemmermeyer, § 1.4, Ex. 1.32–1.33 21. ^ Hardy & Wright, §§ 6.11–6.13 22. ^ Lemmermeyer, p. 25 23. ^ Hardy & Wright, Th. 416 24. ^ Graham, Knuth, & Patashnik, pp. 77–78 25. ^ These formulas are from the Wikipedia article Euler's constant, which has many more. 26. ^ Titchmarsh, p. 13 27. ^ Titchmarsh, pp.14–15 28. ^ Crandall & Pomerance, p. 391 29. ^ Crandall & Pomerance, Ex. 1.3, p. 46. The infinite upper limit of the sum can be replaced with n. An equivalent condition is n > 1 is prime if and only if ${\displaystyle \sum _{m=1}^{\lfloor {\sqrt {n}}\rfloor }\left(\left\lfloor {\frac {n}{m}}\right\rfloor -\left\lfloor {\frac {n-1}{m}}\right\rfloor \right)=1}$ . 30. ^ Hardy & Wright, § 22.3 31. ^ a b Ribenboim, p. 186 32. ^ Ribenboim, p. 181 33. ^ Crandall & Pomerance, Ex. 1.4, p. 46 34. ^ Ribenboim, p.180 says that "Despite the nil practical value of the formulas ... [they] may have some relevance to logicians who wish to understand clearly how various parts of arithmetic may be deduced from different axiomatzations ... " 35. ^ Hardy & Wright, pp.344—345 "Any one of these formulas (or any similar one) would attain a different status if the exact value of the number α ... could be expressed independently of the primes. There seems no likelihood of this, but it cannot be ruled out as entirely impossible." 36. ^ Ramanujan, Question 723, Papers p. 332 37. ^ Hardy & Wright, p. 337 38. ^ Mahler, K. On the fractional parts of the powers of a rational number II, 1957, Mathematika, 4, pages 122–124 39. ^ "C++ reference of floor function". Retrieved 5 December 2010. 40. ^ "C++ reference of ceil function". Retrieved 5 December 2010. 41. ^ dotnet-bot. "Math.Floor Method (System)". docs.microsoft.com. Retrieved 28 November 2019. 42. ^ dotnet-bot. "Math.Ceiling Method (System)". docs.microsoft.com. Retrieved 28 November 2019. 43. ^ "Math (Java SE 9 & JDK 9 )". docs.oracle.com. Retrieved 20 November 2018. 44. ^ "Math (Java SE 9 & JDK 9 )". docs.oracle.com. Retrieved 20 November 2018. 45. ^ "PHP manual for ceil function". Retrieved 18 July 2013. 46. ^ "PHP manual for floor function". Retrieved 18 July 2013. 47. ^ 48. ^ "Python manual for math module". Retrieved 18 July 2013. 49. ^ Sullivan, p. 86. 50. ^ "Vocabulary". J Language. Retrieved 6 September 2011. 51. ^ 52. ^ But the online help provided in 2010 does not reflect this behavior.
2021-10-21T07:02:06
{ "domain": "wikipedia.org", "url": "https://en.wikipedia.org/wiki/Floor_and_ceiling_functions", "openwebmath_score": 0.9981468915939331, "openwebmath_perplexity": 14025.1043888957, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.989347487903923, "lm_q2_score": 0.8539127510928476, "lm_q1q2_score": 0.8448164351828367 }
https://www.physicsforums.com/threads/probability-of-multiples.759687/
# Probability of multiples

1. Jun 27, 2014

1. The problem statement, all variables and given/known data
An integer is chosen at random from the first 100 positive integers. What is the probability that the integer is divisible by 6 or 8?

2. Relevant equations
NaN

3. The attempt at a solution
The answer is 24/100 if I ignore one of the multiples of 6 AND 8, which occur 4 times in 100.
The answer is 28/100 if I include the numbers which occur twice.
What is correct here?

Last edited: Jun 27, 2014

2. Jun 27, 2014

### HallsofIvy Staff Emeritus

What do you mean by "ignore" and "count"? You certainly cannot "ignore" multiples of 24 (smallest number divisible by both 6 and 8), you just don't want to count them twice. 24/100 is correct. 6 divides into 100 16 times. 8 divides into 100 12 times. 24 divides into 100 4 times. There are 16 + 12 - 4 = 24 numbers less than 100 which are divisible by 6 or 8 or both.

3. Jun 27, 2014

Oh, thanks. I have another question too. The answer is 233168 according to them. But I think this is wrong. This is the code which was used to get this answer:

Code (Text):
int result = 0;
for (int i = 1; i < 1000; i++) {
    if (((i % 3) == 0) || ((i % 5) == 0)) {
        result += i;
    }
}

Obviously, if a number is a multiple of both 3 and 5, then we should add it to the result twice. However, this code does not do that. That means this is wrong. 1000/5=200. The question asks numbers below 1000, so it should be 200-5=195. $\to$ 195 multiples of 5 are below 1000. 1000/3=333.333... so 333 multiples of 3 are below 1000. Then we should sum the arithmetic sequences of multiples of 3 and 5: $$\frac{n}{2}(a+l)$$ $$\frac{333}{2}(3+999)+ \frac{195}{2}(5+995)=264333$$ The answer is $264333$. Who is right? Me or Project Euler?

4. Jun 27, 2014

### Pranav-Arora

If a number is a multiple of both 3 and 5, it gets added to the result twice. So you should subtract the numbers which are a multiple of $\text{lcm}(3,5)=15$ because those numbers will be added twice in the sum and you get an erroneous result. There are $199$ multiples of $5$. Replace $195$ with $199$ and remember to subtract the sum of numbers which are a multiple of $15$.

5. Jun 27, 2014

Oh, thanks for that 199. I still do not understand. The question asks to find the sum of all the multiples of 3 or 5 below 1000. Why should we not add the multiples of 15 simply because they occur twice?

6. Jun 27, 2014

### Pranav-Arora

When you add the multiples of 3, you also add the multiples of 15, right? When you add the multiples of 5, you again add the multiples of 15. So now there are two instances of the same number getting added to the required sum. Do you see now?

7. Jun 27, 2014

Oh, I see that the "OR" in the question is an exclusive or. But how do we determine it from the question?

8. Jun 27, 2014

### Pranav-Arora

Isn't that obvious? We need to count the numbers which are either a multiple of 3 or 5. So if the question asked about the numbers less than 20, then the required set of numbers would be 3,5,6,9,10,12,15,18 instead of 3,5,6,9,10,12,15,15,18.

9. Jun 28, 2014

### haruspex

No, it's inclusive. If it were exclusive you would not count multiples of 15 at all. It is inclusive, so you count multiples of 15 once. In neither case would you count them twice.

10. Jun 28, 2014

Why? This is an example of inclusive or. This becomes true even if p and q comes twice.

(Attached file: ss.PNG)

11. Jun 28, 2014

### Orodruin Staff Emeritus

P or Q is 1 if P and Q is 1. You are treating it as if it was 2.

12.
Jun 28, 2014 ### Orodruin Staff Emeritus Well, in logic terms what you are doing is first summing over P=1 and then adding the sum over Q=1. This is not the same as summing over (P or Q). The sum over (P or Q) is exactly what is in the code in #3. 13. Jun 28, 2014
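The double-counting point made by Pranav-Arora and Orodruin in this thread can be seen directly in code: summing the multiples of 3 and the multiples of 5 separately counts every multiple of 15 twice, and subtracting the multiples of 15 once recovers the brute-force answer 233168. A quick sketch (ours, not from the thread):

```python
def sum_multiples_below(k, limit):
    # sum of the multiples of k strictly below limit, via the arithmetic-series formula
    n = (limit - 1) // k          # number of multiples of k below limit
    return k * n * (n + 1) // 2

limit = 1000
brute = sum(i for i in range(1, limit) if i % 3 == 0 or i % 5 == 0)
by_inclusion_exclusion = (sum_multiples_below(3, limit)
                          + sum_multiples_below(5, limit)
                          - sum_multiples_below(15, limit))
double_counted = sum_multiples_below(3, limit) + sum_multiples_below(5, limit)

print(brute, by_inclusion_exclusion, double_counted)
# 233168 233168 266333  -- the last figure counts every multiple of 15 twice
```

The 266333 figure is what one gets by ignoring the overlap entirely; the thread's 264333 differs from it only because 195 was used instead of 199 for the number of multiples of 5.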
2017-08-24T11:50:12
{ "domain": "physicsforums.com", "url": "https://www.physicsforums.com/threads/probability-of-multiples.759687/", "openwebmath_score": 0.6629159450531006, "openwebmath_perplexity": 634.0763683373375, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9489172659321807, "lm_q2_score": 0.8902942144788076, "lm_q1q2_score": 0.8448155518784687 }
https://physics.stackexchange.com/questions/367717/what-is-the-state-of-the-equilibrium-for-a-second-derivative-equal-to-zero
What is the state of the equilibrium for a second derivative equal to zero?

Considering a potential energy of $U$, and a displacement of $x$, the force is given by $F=-\frac{\partial U}{\partial x}$. Since equilibrium is defined as the point at which $F=0$, we can express this as $\frac{\partial U}{\partial x}=0$. This is clear to see on the following graph;

It is also clear that some equilibria are stable and some are not; given a small displacement at $x_2$ the system will return to equilibrium, whereas this would not happen at $x_3$. Hence, we can say that for $\frac{\partial ^2U}{\partial x^2}>0$ the equilibrium is stable, whereas for $\frac{\partial ^2U}{\partial x^2}<0$ the equilibrium is unstable. Is there a general solution to this case, or does each have to be considered individually?

What is not clear to me is the case where $\frac{\partial ^2U}{\partial x^2}=0$. Does this simply mean that the equilibrium is stable given a displacement in one direction and not the other, or is it more complicated - for example if a particle were to oscillate about a stable equilibrium point, its motion would be dampened until it were at rest, but this would not be possible at a point where $\frac{\partial ^2U}{\partial x^2}=0$; if the particle were to move to the side where $\frac{\partial ^2U}{\partial x^2}<0$, it would not return to the equilibrium point. Is there a general solution to this case, or does each case have to be considered by inspection?

• possibly related to physics.stackexchange.com/q/362641 – ZeroTheHero Nov 8 '17 at 15:39
• I believe you'd just go to the third derivative since to find out behavior around equilibrium in the first place we take a Taylor series about that point (and normally throw away the third and higher derivatives). – Señor O Nov 8 '17 at 15:39
• The case the OP described is called "intrinsically nonlinear". See Mohazzabi, Pirooz. "Theory and examples of intrinsically nonlinear oscillators." American Journal of Physics 72.4 (2004): 492-498. – ZeroTheHero Nov 8 '17 at 15:47
• @SeñorO the third derivative is likely zero also since $\frac{\partial U}{\partial x}=0$. It might be the fourth derivative that makes the case. – ja72 Nov 8 '17 at 17:57

Consider the following potentials: \begin{align} U(x) &= x^4 \\ U(x) &= x^6 - x^4 \\ U(x) &= x^4 + x^3 \end{align} All three of these potentials have an equilibrium point at $x = 0$. All three of these potentials are such that the second derivative of $U(x)$ at this equilibrium point is zero. However, you should convince yourself (perhaps by plotting these potentials) that in the first case the equilibrium is stable, in the second case it is unstable, and in the third case the equilibrium is, as you put it, "stable in one direction but unstable in the other". The moral is: knowing only that the second derivative is zero tells us nothing about stability. We need to look at higher derivatives if we want to know more.
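One way to make "look at higher derivatives" concrete is to expand U about the equilibrium and find the lowest-order nonvanishing derivative, as the three example potentials above illustrate. The sketch below uses SymPy (assuming it is available) and applies the standard classification rule: lowest nonvanishing derivative of even order and positive sign means a local minimum (stable), even order and negative sign a maximum (unstable), and odd order an inflection-type point that is unstable overall.

```python
import sympy as sp

x = sp.symbols('x')

def classify(U, x0=0, max_order=8):
    # find the lowest-order nonvanishing derivative of U at x0 beyond the first
    for k in range(2, max_order + 1):
        c = sp.diff(U, x, k).subs(x, x0)
        if c != 0:
            if k % 2 == 0:
                return "stable (local minimum)" if c > 0 else "unstable (local maximum)"
            return "unstable (inflection: restoring on one side only)"
    return "undetermined up to order %d" % max_order

for U in (x**4, x**6 - x**4, x**4 + x**3):
    print(U, "->", classify(U))
# x**4         -> stable (local minimum)
# x**6 - x**4  -> unstable (local maximum)
# x**4 + x**3  -> unstable (inflection: restoring on one side only)
```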
First of all you have an imprecise idea about stability: small velocities matter as well, not only small displacements from the equilibrium. An equilibrium $x_0$ is stable if the motion is confined around $x_0$ and its velocity is confined around the vanishing velocity for every positive time, for every initial condition close to $x_0$ and every initial velocity close to $0$ at time $t=0$.

In other words, according to the general theory of stability (e.g., see Arnold's or Fasano-Marmi's textbooks) the equilibrium $x_0$ is stable (in the future) if, fixing a neighborhood $U$ of $(x_0,0)$, there exists a second neighborhood $V\subset U$ of $(x_0,0)$ such that every pair of initial conditions $x(0)=y_0$ and $\dot{x}(0) = \dot{y}_0$ with $(y_0, \dot{y}_0) \in V$ gives rise to a motion $x=x(t)$ such that $(x(t), \dot{x}(t)) \in U$ for every $t\in (0, +\infty)$.

A theorem (as above I restrict myself to the one-dimensional case) proves that if all forces are conservative then

(a) a configuration $x_0$ is an equilibrium if and only if $\frac{dU}{dx}|_{x_0}=0$,

(b) an equilibrium $x_0$ is stable if $U$ has a strict minimum at $x_0$ (i.e. $U(x)>U(x_0)$ for $x\neq x_0$ in a neighborhood of $x_0$),

(c) an equilibrium $x_0$ is unstable if $\frac{d^2U}{dx^2}|_{x_0}<0$.

The condition in (b) is satisfied if $\frac{d^2U}{dx^2}|_{x_0}>0$, but this is just a sufficient condition (think of $U(x)=x^4$ with $x_0=0$: it is evidently stable and satisfies (b), but $\frac{d^2U}{dx^2}|_{x_0}=0$).

The case $\frac{d^2U}{dx^2}|_{x_0}=0$ remains open; it has to be studied case-by-case. However certain cases are easy. In particular consider any point $x_0 > x_5$ in your picture. It is clear that the condition in (a) is true, so that $x_0$ is an equilibrium and also $\frac{d^2U}{dx^2}|_{x_0}=0$. However, perhaps contrarily to the naive idea, $x_0>x_5$ is unstable. Indeed if you start with an initial condition $x(0) = y_0$ arbitrarily close to $x_0$ and a speed $\dot{x}(0) = \dot{y}_0 > 0$ arbitrarily close to $0$, the arising motion is $x(t) = \dot{y}_0 t + y_0$ and, waiting a sufficiently large time $t>0$, $x(t)$ exits from every neighborhood of $x_0$ initially fixed.

(Statement (b) is nowadays an elementary subcase of a famous theorem due to Lyapunov but a proof was already known by Lagrange and Dirichlet. As a matter of fact, the total energy $E(x, \dot{x})$ is a Lyapunov function for the system for the critical point $(x_0,0)$ when $U$ has a strict minimum at $x_0$.)

• Didn't you define a sort of Lyapunov stability? I think most textbooks define stability in weaker sense, just in terms of restoring force or just in terms of the motion in the configuration space (see Goldstein, chapter about oscillations, for instance). Regarding the region $x>x_5$ Goldstein calls it neither stable or unstable, it is said to be neutral. – Diracology Nov 8 '17 at 18:02
• Actually I do not know, what I can say is that here in Italy the notion of stability is just that I defined (it is the subject of some of my lectures for undergrads). The general theory, with many results (like the stability or instability of permanent rotations of a body) relies on that definition and on Lyapunov's theorems (there are many also on asymptotic stability and delicate issues). – Valter Moretti Nov 8 '17 at 18:13
• Nice answer, by the way! – Diracology Nov 8 '17 at 18:16

Taylor expand the force $F(x) = -U'(x)$ about $x = x_0$: $$F(x_0 + \Delta x) = -U'(x_0) - U''(x_0)\Delta x - \frac{1}{2}U'''(x_0)(\Delta x)^2 + \cdots$$ Stipulate that $F(x_0) = 0$ and then $$F(x_0 + \Delta x) = -U''(x_0)\Delta x - \frac{1}{2}U'''(x_0)(\Delta x)^2 + \cdots$$ In the case that $U''(x_0) \gt 0$, then for $\Delta x$ small, the force is approximately a linear restoring force. However, in the case that $U''(x_0) = 0$ (and at least one higher order derivative is non-zero), then for $\Delta x$ small, the force is non-linear and not necessarily a restoring force.
For example, if $U'''(x_0) \ne 0$, then the sign of the force does not change as $\Delta x$ goes through zero; the force is opposite the displacement in one direction and with the displacement in the other direction (which will drive the particle away from $x=x_0$). For the force to be restoring in the case that $U''(x_0) = 0$ requires that the lowest order non-zero derivative (higher than the 2nd) be even order and positive.

• That's a very straightforward explanation. Great answer, thanks! – ExtremeRaider Sep 29 at 16:41
2019-10-17T06:24:39
{ "domain": "stackexchange.com", "url": "https://physics.stackexchange.com/questions/367717/what-is-the-state-of-the-equilibrium-for-a-second-derivative-equal-to-zero", "openwebmath_score": 0.9404358863830566, "openwebmath_perplexity": 216.61011038008067, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9805806518175515, "lm_q2_score": 0.8615382147637196, "lm_q1q2_score": 0.8448077041987379 }
http://math.stackexchange.com/questions/684946/how-to-find-the-limit-of-this-recurrence-relation/685013
# How to find the limit of this recurrence relation? $a_n$ is a sequence where $a_1=0$ and $a_2=100$, and for $n \geq 2$: $$a_{n+1}=a_n+\frac{a_n-1}{(n)^2-1}$$ I have a basic understanding of sequences. I wasn't sure how to deal with this recurrence relation since there is $n$ in the equation. By using an excel sheet, I know the limit is 199. And I confirmed this with Wolfram Alpha, which showed that the "Recurrence equation solution" is: $f(x)=199-\frac{198}{x}$ My question: Is it possible to find the limit of this sequence or even the "recurrence equation solution" without using an excel sheet or Wolfram Alpha? If so, can you clearly explain how this is done? - Is it $$a_n-1\text{ or }a_{n-1}?$$ –  lab bhattacharjee Feb 21 at 15:41 You only need to know with $a_1$ or $a_{100}$ to find the limit since it is a recurrence relation that only depends on the last element. So hopefully there is no contradiction between $a_1$ and $a_{100}$. That being said, I would doubt the limit is 99 because $a_{101} = 100 + \frac{99}{9,999} > 99$ and this is an increasing sequence. –  Squirtle Feb 21 at 15:44 To "lab bhattacharjee": It is correct the way it is. It is: $a_n-1$ –  FiBO Feb 21 at 15:47 To "Squirtle": The limit is 199, not 99. I confirmed this. –  FiBO Feb 21 at 15:48 I think what's confusing is that you have a first-order diff eq'n yet have two initial values. –  Ron Gordon Feb 21 at 16:02 You have: $$(n^2-1)\,a_{n+1} = n^2 a_n - 1,$$ that by putting $b_n = n a_n$ becomes: $$(n-1) b_{n+1} = n b_n - 1,$$ or: $$\frac{b_{n+1}}{n}-\frac{b_n}{n-1}=-\frac{1}{n(n-1)}=\frac{1}{n}-\frac{1}{n-1},$$ so if we set $c_n=\frac{b_n}{n-1}=\frac{n}{n-1}a_n$, we end with: $$c_{n+1}-c_{n} = \frac{1}{n}-\frac{1}{n-1}.\tag{1}$$ If $c_2=2a_2=200$ (notice that only one starting value is needed), by summing both sides of $(1)$ with $n$ going from $2$ to $N-1$ you get: $$c_{N}-c_2 = \sum_{n=2}^{N-1}\left(\frac{1}{n}-\frac{1}{n-1}\right)=\frac{1}{N-1}-1,$$ then: $$c_{N} = \frac{1}{N-1}+199$$ and: $$a_{N} = \frac{1}{N}+199\cdot\frac{N-1}{N} = 199 - \frac{198}{N}$$ as claimed by Wolfram Alpha. - I like your solution, but you can add more explanation to how you reached this solution exactly? I kind of got lost with c and summation. –  FiBO Feb 21 at 16:37 This kind of technique is the discrete analogue of integration through a change of variable: we define ausiliary sequences in order to have a recursion like $d_{n+1}-d_{n}=f(n)$. Doing so, $d_n$ is the $n$-th partial sum of $f(n)$. If $\sum f(n)$ is a telescoping sum, we get a "closed"-expression for $d_n$, then for our starting sequence. –  Jack D'Aurizio Feb 21 at 16:43 To find $a_n$ for every $n\geqslant2$, one can use the trick of centering a recursion around its fixed point. Here $a_n=1$ would imply $a_{n+1}=1$, hence one can consider the sequence $b_n=a_n-1$, and, see what happens! one gets $$b_{n+1}=\frac{n^2}{n^2-1}b_n.$$ Thus, for every $n\geqslant2$, $$a_n=1+A_n\cdot(a_2-1),\qquad A_n=\prod_{k=2}^{n-1}\frac{k^2}{k^2-1}.$$ Now, $k^2-1=(k+1)(k-1)$ hence $$A_n=\frac{2\cdot3\cdots(n-1)}{1\cdot2\cdots(n-2)}\cdot\frac{2\cdot3\cdots(n-1)}{3\cdot4\cdots n}=\frac{2(n-1)}n=2-\frac2n.$$ Finally, $$a_n=2a_2-1-(a_2-1)\frac2n.$$ This confirms the formula you indicate in your post when $a_2=100$ and shows that, in the general case, $$\lim\limits_{n\to\infty}a_n=2a_2-1.$$ - I didn't understand what you meant with "centering a recursion around its fixed point" and why did you a product of the sequence? 
–  FiBO Feb 21 at 16:50 "Centering around $1$" is, as explained one line below, to consider $b_n=a_n-1$. And if $b_{n+1}=c_{n+1}b_n$ for every $n\geqslant2$, then $b_n=c_nc_{n-1}\cdots c_3b_2$, that is, $a_n-1=c_nc_{n-1}\cdots c_3(a_2-1)$, that is, $a_n=1+c_nc_{n-1}\cdots c_3(a_2-1)$. –  Did Feb 21 at 16:55 The recurrence $$a_{n+1}=a_n+\frac{a_n-1}{n^2-1}$$ is a discretization of the differential equation $$\frac{dy}{dx} = \frac{y-1}{x^2-1}.$$ This equation is separable and has solution $$y(x) = 1 + C \sqrt{1 - \frac{2}{x+2}}.$$ Now, for large $x$ we have $$y(x) \approx 1 + C \left(1 - \frac{1}{x+2}\right) \approx 1 + C \left(1 - \frac{1}{x}\right) = 1+C - \frac{C}{x},$$ by the binomial theorem, which suggests checking for a solution of the form $a_n = 1+C - \frac{C}{n}$ in the recurrence relation. - The recurrence relation can be rewritten as $${n+1\over n}a_{n+1}=\left({1\over n}-{1\over n-1}\right)+{n\over n-1}a_n$$ Now let $$b_k={k\over k-1}a_k$$ to obtain \begin{align} b_{n+1}&=\left({1\over n}-{1\over n-1}\right)+b_n\\ &=\left({1\over n}-{1\over n-1}\right)+\left({1\over n-1}-{1\over n-2}\right)+b_{n-1}\\ &\vdots\\ &=\left({1\over n}-{1\over3-2}\right)+b_{3-1}\\ &=\left({1\over n}-1\right)+2a_2\\ &={1\over n}+199 \end{align} It follows that $\lim_{n\to\infty}a_n=\lim_{n\to\infty}{n-1\over n}b_n=199$. Having written all this up, I see it's essentially the same answer as Jack D'Aurizio's, just organized in a somewhat different fashion. - Once you have the clue that $a_n = b + c/n$ for some constants $b$ and $c$, it's easy to plug this in to the equation and see that this works if $b+c=1$. Then take $n=2$ to match the value there. EDIT: So, how could you guess the form $a_n = b + c/n$? Well, if you look for solutions to $f(z+1) = f(z) + \dfrac{f(z) - 1}{n^2 - 1}$ where $a_n = f(n)$ is a rational function of $n$, if $f(z)$ has a pole of order $k$ at $z=p$ then $f(z+1)$ has a pole of the same order at $z=p-1$. This rapidly leads to the conclusion that the only possible pole of $f(z)$ is at $z=0$ (and that of order at most $2$). For example, if there was a pole at $z = \infty$, i.e. $f(z) = a z^d + O(z^{d-1})$ with $d \ge 1$ and $a \ne 0$, then $$f(z+1) - f(z) - \dfrac{f(z)-1}{z^2 - 1} = a d z^{d-1} + O(z^{d-2}) \ne 0$$ - Hint: clearly, your sequence is increasing. To prove it converges, an idea would be to find an upper bound on $a_n$ and prove it holds via a recurrence relation. Cheating by looking at the limit computed by Mathematica (which, to generalize a bit, is $2\alpha -1$ when $a_2=\alpha$), you can try to prove $$a_n < 2\alpha - 1 - \frac{C}{n}\qquad \forall n\geq 2$$ for some "convenient" constant $C$ (I tried quickly, unless I made a mistake $C\stackrel{\rm{}def}{=}6(\alpha-1)$ should work). You will then, by monotone convergence, have $a_n\xrightarrow[n\to\infty]{}\ell \leq 2\alpha-1$. To show the limit is actually $2\alpha-1$, I suppose (this is very hazy) that a similar approach, but with a convenient lower bound this time, should work. - But the solutions of Did and Jack D'Aurizio above are definitely better (that is, "clean" and elegant -- this is not.) –  Clement C. Feb 21 at 16:21
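The closed form derived in the answers is easy to confirm by just iterating the recurrence with exact rational arithmetic (a quick sketch, not from the thread):

```python
from fractions import Fraction

a = Fraction(100)                # a_2 = 100
for n in range(2, 50):           # apply a_{n+1} = a_n + (a_n - 1)/(n^2 - 1)
    a = a + (a - 1) / (n * n - 1)
    assert a == 199 - Fraction(198, n + 1)   # matches a_N = 199 - 198/N

print(float(a))   # 195.04, creeping up toward the limit 199
```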
2014-09-20T20:18:31
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/684946/how-to-find-the-limit-of-this-recurrence-relation/685013", "openwebmath_score": 0.9684588313102722, "openwebmath_perplexity": 278.83536778450224, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9805806546550656, "lm_q2_score": 0.8615382094310357, "lm_q1q2_score": 0.844807701414238 }
https://mathoverflow.net/questions/231179/does-there-exist-some-c-independent-of-n-and-f-such-that-f-p-geq
# Does there exist some $C$ independent of $n$ and $f$ such that $\|f''\|_p \geq Cn^2 \| f \|_p$, where $1 \leq p\leq \infty$? Let $f$ be a trigonometric polynomial on the circle $\mathbb{T}$ with $\hat{f}(j) = 0$ for all $j \in \mathbb{Z}$ with $\lvert j \rvert < n$. Does there exist some $C$ independent of $n$ and $f$ such that $$\|f''\|_p \geq Cn^2 \| f \|_p,$$ where $1 \leq p\leq \infty$? • Can C depend on p? Feb 15, 2016 at 13:30 • The more general inequality $\| f' \|_p \geq Cn \| f \|_p$ should in fact hold. This is problem 1.8 in the first volume of Classical and Multilinear Harmonic Analysis by C. Muscalu and W. Schlag. I have a thread about this on Math StackExchange which I will update accordingly. Feb 22, 2016 at 6:46 • @ChristianRemling If if helps with justification of the edit, I would be eager to see the extension of your argument to the more general inequality. I have encountered an obstacle in the extension of my argument, as the sequence $a_{n,j}$ is no longer even. Feb 23, 2016 at 4:15 • @EricThoma: Actually, I've now discovered a problem with my answer and I've deleted (wasted too much time on this already). I'm not finding an easy argument why approximations of $\sum e^{ikx}/k$ should have an $L^1$ error not worse asymptotically than approximations of characteristic functions (though "philosophically" it's clear this has to be right). Feb 23, 2016 at 4:30 I wish to add another proof based on the following result. If $(a_n)_{n \in \mathbb{Z}}$ is an even sequence of nonnegative numbers with $$a_{n+1} + a_{n-1} - 2a_n \geq 0 \quad \forall n > 0,$$ then there exists $g \in L^1(\mathbb{T})$ with $g \geq 0$ and $\hat{g}(n) = a_n$. This is lemma 1.12 in Classical and Multilinear Harmonic Analysis Vol 1 by C. Muscalu and W. Schlag. The desired function is $$g = \sum_{n=1}^\infty n (a_{n+1} + a_{n-1} - 2a_n) K_n$$ where $K_n$ is the Fejér kernel. Define the sequences $(a_{n,j})_{j=0}^\infty$ by $$a_{n,j} = \begin{cases} \frac{1}{n^2} + \frac{2(n-j)}{n^3},& \text{if } j < n\\ \frac{1}{j^2}, & \text{if } j \geq n \end{cases}$$ for each $n \in \mathbb{N}$. Then (extending to $j \in \mathbb{Z}$ by $a_{n,(-j)} = a_{n,j}$) we can use the lemma to find $g_n \in L^1(\mathbb{T})$ with $g_n \geq 0$ and $\hat{g}_n(j) = a_{n,j}$. By the monotone convergence theorem, we have $$\|g_n \|_1 = \sum_{j=1}^\infty j(a_{n,(j+1)} + a_{n,(j-1)} - 2 a_{n,j}).$$ A computation will show that $\| g_n \|_1$ is dominated by $n^{-2}$. Furthermore, for any trigonometric polynomial $f$ with $\hat{f}(j) = 0$ for all $| j | < n$, we have $$f = g_n \ast f''$$ so that Young's inequality finishes the proof. • [deleted comment, I didn't see your comment to the main question] Feb 22, 2016 at 18:53 Here is more pedestrian argument. If $$T_{n}$$ is a trigonometric polynomial of degree at most $$n$$ with total mass $$\frac{1}{2\pi}\int_{-\pi}^{\pi}T_{n}=1$$ then $$f*T_{n}=0$$, and in particular, $$f(x) =\frac{1}{2\pi}\int_{-\pi}^{\pi}(f(x)-f(x-s))T_{n}(s)ds.$$ Therefore, by the triangle inequality $$\|f\|_{p} \leq \frac{1}{2\pi}\int_{-\pi}^{\pi}\|f(x)-f(x-s)\|_{L^{p}(dx)}|T_{n}(t)|dt \leq \|f'\|_{p}\frac{1}{2\pi}\int_{-\pi}^{\pi}|s||T_{n}(s)|ds,$$ where the inequality $$\|f(x)-f(x-s)\|_{L^{p}(dx)} \leq |s| \|f'\|_{p}$$ follows, for instance from Schur test applied to the operator $$(Af')(x)=\int f'(t) 1_{[x-s,x]}(t)dt$$. Now how can we make $$\frac{1}{2\pi}\int_{-\pi}^{\pi}|s||T_{n}(s)|ds$$ of order $$\frac{1}{n}$$? 
Let us be not too demanding and seek for $$T_{n}$$ among even nonnegative trigonometric polynomials to reduce the matters to $$\int_{0}^{\pi}sT_{n}(s)ds$$. One immediate choice is Fejer kernel $$k_{n}(s) = \frac{1}{n}\left(\frac{\sin(\frac{ns}{2})}{\sin(\frac{s}{2})}\right)^{2}.$$ Now $$k_{n}(s) \asymp n$$ on $$[0,\frac{1}{n}]$$, and $$k_{n}(s) on $$[\frac{1}{n}, \pi]$$, therefore $$\int_{0}^{\pi}sk_{n}(s)ds \leq C' \frac{\log(n)}{n}$$. Well, not too bad but not exactly what was requested. What else can we do? Let us look at $$k_{n}^{2}(s)$$. It is even nonnegative trigonometric polynomial of degree $$2n$$. A small jump in degree is okay (we can just start from $$f$$ of degree $$\geq 2n$$). Since $$k_{n}^{2}(s) \asymp n^{2}$$ on $$[0, \frac{1}{n}]$$ its total mass is at least $$\geq n$$. Then $$\frac{k_{n}^{2}}{n} \asymp n \quad \text{on}\quad [0, \frac{1}{n}], \quad \text{and} \quad \frac{k_{n}^{2}}{n} \leq C\frac{1}{n^{3}} \frac{1}{s^{4}} \quad \text{on} \quad [\frac{1}{n}, \pi]$$ therefore $$\int_{0}^{\pi} s \frac{k^{2}_{n}}{n}\leq C' \frac{1}{n}$$ voila!
2023-04-01T04:50:20
{ "domain": "mathoverflow.net", "url": "https://mathoverflow.net/questions/231179/does-there-exist-some-c-independent-of-n-and-f-such-that-f-p-geq", "openwebmath_score": 0.9720077514648438, "openwebmath_perplexity": 236.31530025529176, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9805806472775278, "lm_q2_score": 0.861538211208597, "lm_q1q2_score": 0.8448076968012495 }
http://math.stackexchange.com/questions/180246/is-there-notation-for-some-two-of-the-three-statements-are-true
# Is there notation for “some two of the three statements are true”? There are three propositions A, B, C and another condition "some two of these propositions are true and the third one is false", or, in other words, "exactly 2 of 3 propositions are true". Using truth tables and a Karnaugh map (as discussed at How to find the logical formula for a given truth table?) i deducted the Boolean expression for this: ABC' + AB'C + A'BC. Is there any more succinct notation for this expression in any branch of logic? Edit: Obviously using proposition calculus notation the above statement may be represented as: $(A \wedge B \wedge \neg C) \vee (A \wedge \neg B \wedge C) \vee (\neg A \wedge B \wedge C)$. I am sorry if that misguided you. I'm still interested, if any more succinct notation is possible. - I have never seen such a notation. Don't forget, if you are writing up something where you need this a lot, you can always invent ad hoc notation of your own. Just don't forget to explain it, and also beware that too much ad hoc notation may hinder rather than help communication, especially if it's not well thought out. –  Harald Hanche-Olsen Aug 8 '12 at 11:22 @HaraldHanche-Olsen I think the notation is similar to the boolean algebra operation of $\cdot$ and $+$ with the $\cdot$ implicitily used between letters. –  William Aug 8 '12 at 14:00 If you are using the symbol for your own purposes (taking notes, studying, etc.), then invent any symbol you like. If you are planning to use this symbol on a manuscript you are expecting anyone else to read, then I strongly suggest you do not use such a symbol. I personally believe that one should do everything possible to make their own papers as easy to read as possible. In my opinion, if you are not willing to put in the work to make your paper as easy to digest as possible, then why should others put in the work to read the paper? I would highly recommend simply using words. –  JavaMan Aug 8 '12 at 20:23 Using the Iverson bracket, $$[A]+[B]+[C]=2$$ - The most symmetric definition of 'exactly one of three' I know is $$\text{exactly one of } P, Q, R \text{ is true} \;\equiv\; (P \equiv Q \equiv R) \land \lnot (P \land Q \land R)$$ This uses the fact that equivalence ($\;\equiv\;$) is associative. Using this, we can write \begin{align} & \text{exactly two of } A, B, C \text{ are true} \\ = & \;\;\;\;\;\text{"invert the count"} \\ & \text{exactly one of } A, B, C \text{ is false} \\ = & \;\;\;\;\;\text{"$\;P \equiv \text{false}\;$ is the same as $\;\lnot P \equiv \text{true}\;$ (three times)"} \\ & \text{exactly one of } \lnot A, \lnot B, \lnot C \text{ is true} \\ = & \;\;\;\;\;\text{"the above definition"} \\ & (\lnot A \equiv \lnot B \equiv \lnot C) \land \lnot(\lnot A \land \lnot B \land \lnot C) \\ = & \;\;\;\;\;\text{"simplify"} \\ & \lnot(A \equiv B \equiv C) \land (A \lor B \lor C) \\ \end{align} - The double use of $\equiv$ as a binary and ternary operatior is hideous and confusing. –  Lord_Farin Nov 18 '13 at 18:14 @Lord_Farin Obviously we have different tastes at this point. :-) If $\;(P \equiv Q) \equiv R\;$ is equivalent to $\;P \equiv (Q \equiv R)\;$, as it is in classical logic, why write the parentheses? Also, the associativity and symmetry of $\;\equiv\;$ make a formula like $$P \land Q \;\equiv\; P \lor Q \;\equiv\; P \;\equiv\; Q$$ a very useful tool in logical calculations. As to confusion: that depends on what you are used to. If that downvote is yours: I don't really see how this answer deserves it. 
It is correct and it answers the question... –  Marnix Klooster Nov 18 '13 at 18:19 @Lord_Farin I changed the calculation to use $\;=\;$ throughout. That is actually what Dijkstra/Scholten and Gries/Schneider do in their work (EWD1300 and A Logical Approach to Discrete Math), if I recall correctly. –  Marnix Klooster Nov 18 '13 at 18:23 I see now. It is even more confusing than I thought at first, because I read $P\equiv Q\equiv R$ as "$P \equiv Q$ and $Q\equiv R$" until now. $\equiv$ looks too much like $=$ for tricks like these. Compare $P = Q = R$. // I downvoted because I consider this answer to be "not useful". –  Lord_Farin Nov 18 '13 at 18:26 Indeed, I disagree with that practice. The notation is used commonly in two different ways (yours, apparently, and mine). That makes it a source of confusion, and I would never recommend someone to use such a thing. I'm not saying you didn't give an answer, but I consider it not to be a useful one. My downvote firmly stands. –  Lord_Farin Nov 18 '13 at 18:34
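All three formulations that appear in this thread — the sum-of-products expression from the question, the Iverson-bracket count [A]+[B]+[C]=2, and the ≡-based formula — can be checked against one another by brute force over the eight truth assignments. A quick sketch (note the explicit parentheses: Python's `==` chains as a comparison, not as an associative XNOR):

```python
from itertools import product

for A, B, C in product((False, True), repeat=3):
    # ABC' + AB'C + A'BC from the question
    sop = (A and B and not C) or (A and not B and C) or (not A and B and C)
    # Iverson-bracket count: [A] + [B] + [C] = 2
    iverson = (int(A) + int(B) + int(C)) == 2
    # not (A == B == C as XNOR) and (A or B or C), with explicit grouping
    equiv = (not ((A == B) == C)) and (A or B or C)
    assert sop == iverson == equiv

print("all three formulations agree on every truth assignment")
```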
2014-10-26T01:15:32
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/180246/is-there-notation-for-some-two-of-the-three-statements-are-true", "openwebmath_score": 0.9940757751464844, "openwebmath_perplexity": 588.732639416477, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9805806512500486, "lm_q2_score": 0.8615382040983515, "lm_q1q2_score": 0.8448076932515587 }