Columns: url (string), text (string), date (timestamp[s]), meta (dict)
http://math.stackexchange.com/questions/248710/conditional-probability-concept-question
# Conditional Probability Concept Question

The organizers of a cycling competition know that about 8% of the racers use steroids. They decided to employ a test that will help them identify steroid users. The following is known about the test: when a person uses steroids, the person will test positive 96% of the time; on the other hand, when a person does not use steroids, the person will test positive only 9% of the time. The test seems reasonable enough to the organizers. The one last thing they want to find out is this: suppose a cyclist does test positive; what is the probability that the cyclist is really a steroid user? Let S be the event that a randomly selected cyclist is a steroid user and P be the event that a randomly selected cyclist tests positive.

**My question is:** can someone please translate and explain $P(P|S)$ and $P(S|P)$?

- $\Pr(P|S)$ is the probability that the person tests positive, given that she uses steroids. We are told explicitly that this is $0.96$. $\Pr(S|P)$ is the probability that she is a steroid user, given that she tests positive. That is what we are asked to find. Informally, if we confine attention to the people who test positive, $\Pr(S|P)$ measures the proportion of them that really are steroid users. Since the problem says that the proportion of steroid users is not high (sure!), many of the positives will be false positives. Thus I would expect that $\Pr(S|P)$ will not be very high: the test is not as good as it looks at first sight. For computing, there are two ways I would suggest, the first very informal and probably not acceptable to your grader, and the second more formal. $(1)$: Imagine $1000$ cyclists. About how many of them will test positive? About how many of these will be steroid users? Divide the second number by the first, since $\Pr(S|P)$ asks us to confine attention to the subpopulation of people who tested positive.
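The informal "1000 cyclists" argument can be sketched in a few lines of Python; treating the stated percentages as exact expected counts is the usual simplification:

```python
# Expected-frequency version of the informal argument: out of 1000
# cyclists, count expected positives and expected true positives.
n = 1000
users = n * 0.08              # about 80 steroid users
non_users = n * 0.92          # about 920 clean cyclists

true_pos = users * 0.96       # users testing positive: 76.8
false_pos = non_users * 0.09  # clean cyclists testing positive: 82.8

p_s_given_p = true_pos / (true_pos + false_pos)
print(round(p_s_given_p, 3))  # 0.481
```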
$(2)$: Use the defining formula $$\Pr(S|P)=\frac{\Pr(S\cap P)}{\Pr(P)}.$$ The two numbers on the right are not hard to compute. I can give further help if they pose difficulty.

- We are given that $P(S)=0.08$ (hence $P(\neg S)=0.92$), $P(P|S)=0.96$ and $P(P|\neg S)=0.09$. What we want to know is $P(S|P)$. Note that $P(S\cap P)=P(S|P)\cdot P(P)$ as well as $P(S\cap P)=P(P|S)\cdot P(S)$, therefore $$P(S|P) = \frac{P(P|S)\cdot P(S)}{P(P)}.$$ Thus we first need $P(P)$, which we get from $P(P)=P(P|S)P(S)+P(P|\neg S)P(\neg S)$. Now everything is reduced to the given values.

- Here is a relatively plain-English explanation using Bayes' theorem. Here you have evidence $E$ (the cyclist tested positive), and the hypothesis $H$ you want to test is whether steroids were used: $$P(H|E) = \frac{P(E|H)\,P(H)}{P(E|H)\,P(H) + P(E|\neg H)\,P(\neg H)}.$$ From your numbers, $P(H) = 8\%$ so $P(\neg H) = 92\%$, $P(E|H) = 96\%$ and $P(E|\neg H) = 9\%$. Plug in all these numbers and you get about $48\%$, which is quite low.
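The formal route in the answers above is a one-liner to verify numerically; this sketch expands $P(P)$ by the law of total probability and then applies Bayes' formula with the problem's numbers:

```python
# P(S|P) = P(P|S) P(S) / P(P), with P(P) from the law of total probability.
p_s = 0.08                # prior: fraction of steroid users
p_pos_given_s = 0.96      # test sensitivity
p_pos_given_not_s = 0.09  # false-positive rate

p_pos = p_pos_given_s * p_s + p_pos_given_not_s * (1 - p_s)
p_s_given_pos = p_pos_given_s * p_s / p_pos
print(round(p_s_given_pos, 2))  # 0.48, i.e. about 48%
```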
2014-12-21T16:09:34
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/248710/conditional-probability-concept-question", "openwebmath_score": 0.7164428234100342, "openwebmath_perplexity": 626.9631275217126, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9833429560855736, "lm_q2_score": 0.8558511488056151, "lm_q1q2_score": 0.8415951986357477 }
https://math.stackexchange.com/questions/3059698/greatest-common-divisor-of-two-elements
# greatest common divisor of two elements

Find all possible values of GCD(4n + 4, 6n + 3) for natural numbers n, and prove that there are no others.

3·(4n + 4) − 2·(6n + 3) = 6, whence the desired GCD is a divisor of 6. But 6n + 3 is odd, so only 1 and 3 remain. n = 1 and n = 2 are examples for GCD = 1 and GCD = 3.

Is the solution correct? Any other way to solve this?

This is correct. A slightly different way to solve it is by observing that $$(4n+4,6n+3) = (4n+4,2n-1) = (6,2n-1) = (3,2n-1).$$

• thanks for answering – Mustafa Azzurri Jan 2 at 17:50

Other way. $$\gcd(4n+4, 6n+3) = \gcd(4n+4, (6n+3) - (4n+4)) =$$ $$\gcd(4n+4, 2n-1) = \gcd(4n+4 - 2(2n-1), 2n-1) =$$ $$\gcd(6, 2n-1) =$$ ... Now two things should be apparent. $$2n-1$$ is odd and $$6$$ is even, so the prime factor $$2$$ of $$6$$ will not be a factor of $$2n-1$$. And Lemma: if $$\gcd(j,b) = 1$$ then $$\gcd(ja, b) = \gcd(a,b)$$. That can be easily proven many ways. So $$\gcd(2\cdot 3, 2n-1) = \gcd(3,2n-1)$$, which is equal to $$3$$ if $$3\mid 2n-1$$, which happens when $$2n-1 \equiv 0 \pmod 3$$, i.e. $$n\equiv 2 \pmod 3$$; and is equal to $$1$$ if $$3\nmid 2n-1$$, which happens when $$n\equiv 0, 1 \pmod 3$$.

And another way: $$\gcd(4n+4, 6n+3) = \gcd(4(n+1), 3(2n+1))$$. ... as $$3(2n+1)$$ is odd ... $$\gcd(n+1, 3(2n+1))$$. Now $$\gcd(n+1, 2n+1) = \gcd(n+1, (2n+1)-(n+1)) = \gcd(n+1, n) = \gcd(n+1 - n, n) = \gcd(1, n) = 1$$. So $$\gcd(n+1, 3(2n+1)) = \gcd(n+1, 3)$$, which is $$3$$ if $$3\mid n+1$$ and $$1$$ if not.

Perhaps we can retrofit this as: $$\gcd(3,n+1) \in \{1,3\}$$; $$\gcd(2n+1, n+1) = 1$$, so $$\gcd(3(2n+1), n+1) = \gcd(3,n+1)$$; $$\gcd(3(2n+1), 2) = 1$$, so $$\gcd(3(2n+1), 2^2(n+1)) = \gcd(3,n+1)$$. All comes down to "casting out" relatively prime factors.
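A brute-force check over the first few thousand naturals is not a proof, but it is a quick sanity test of both the value set {1, 3} and the mod-3 condition derived above:

```python
# Check that gcd(4n+4, 6n+3) only takes the values 1 and 3, and that it
# equals 3 exactly when n ≡ 2 (mod 3), as the answers derive.
from math import gcd

values = {gcd(4 * n + 4, 6 * n + 3) for n in range(1, 5000)}
print(sorted(values))  # [1, 3]

assert all(
    gcd(4 * n + 4, 6 * n + 3) == (3 if n % 3 == 2 else 1)
    for n in range(1, 5000)
)
```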
\begin{align} (\color{#c00}4(n\!+\!1),\,3(2n\!+\!1))\, &=\, (n\!+\!1,\,3(\color{#0a0}{2n\!+\!1}))\ \ \ {\rm by}\ \ \ (\color{#c00}4,3)=1=(\color{#c00}4,2n\!+\!1)\\[.2em] &=\, (n\!+\!1,3)\ \ {\rm by} \ \bmod n\!+\!1\!:\ n\equiv -1\,\Rightarrow\, \color{#0a0}{2n\!+\!1\equiv -1} \end{align} Remark Your argument that the gcd $$\,d\mid \color{#c00}3$$ is correct, but we can take it further as follows $$d = (4n\!+\!4,6n\!+\!3) = (\underbrace{4n\!+\!4,6n\!+\!3}_{\large{\rm reduce}\ \bmod \color{#c00}3},\color{#c00}3) = (n\!+\!1,0,3)\qquad$$ Therefore $$\, d = 3\,$$ if $$\,3\mid n\!+\!1,\,$$ else $$\,d= 1$$
2019-07-22T04:18:35
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3059698/greatest-common-divisor-of-two-elements", "openwebmath_score": 0.9550416469573975, "openwebmath_perplexity": 725.6734418598273, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9833429599907709, "lm_q2_score": 0.855851143290548, "lm_q1q2_score": 0.841595196554813 }
http://mathhelpforum.com/calculus/4044-find-limit-print.html
# Find the limit • Jul 8th 2006, 05:47 PM Nichelle14 Find the limit Limit as n approaches infinity [(3^n + 5^n)/(3^n+1 + 5^n+1)] I tried to divide the numerator and denominator by 3^n. Was not successful. What should I do next? • Jul 8th 2006, 07:03 PM ThePerfectHacker Quote: Originally Posted by Nichelle14 Limit as n approaches infinity [(3^n + 5^n)/(3^n+1 + 5^n+1)] I tried to divide the numerator and denominator by 3^n. Was not successful. What should I do next? Did you consider to divide by $3^n+5^n$ Thus, $\frac{1}{1+\frac{2}{3^n+5^n}}\to 1$ as $n\to\infty$ • Jul 8th 2006, 07:07 PM ThePerfectHacker Also, $1-\frac{1}{n}\leq \frac{3^n+5^n}{3^n+1+5^n+1} \leq 1+\frac{1}{n}$ Since, $\lim_{n\to\infty} 1-\frac{1}{n}=\lim_{n\to\infty}1+\frac{1}{n}=1$ Thus, $\lim_{n\to\infty}\frac{3^n+5^n}{3^n+1+5^n+1}=1$ by the squeeze theorem. • Jul 8th 2006, 09:29 PM Soroban Hello, Nichelle14! Quote: $\lim_{n\to\infty}\frac{3^n + 5^n}{3^{n+1} + 5^{n+1}}$ I tried to divide the numerator and denominator by $3^n$ . . .Was not successful. What should I do next? Divide top and bottom by $5^{n+1}.$ The numerator is: . $\frac{3^n}{5^{n+1}} + \frac{5^n}{5^{n+1}} \;= \;\frac{1}{5}\cdot\frac{3^n}{5^n} + \frac{1}{5}\;=$ $\frac{1}{5}\left(\frac{3}{5}\right)^n + \frac{1}{5}$ The denominator is: . $\frac{3^{n+1}}{5^{n+1}} + \frac{5^{n+1}}{5^{n+1}} \;=\;\left(\frac{3}{5}\right)^{n+1} + 1$ Recall that: if $|a| < 1$, then $\lim_{n\to\infty} a^n\:=\:0$ Therefore, the limit is: . $\lim_{n\to\infty}\,\frac{\frac{1}{5}\left(\frac{3} {5}\right)^n + \frac{1}{5}} {\left(\frac{3}{5}\right)^n + 1} \;=\;\frac{\frac{1}{5}\cdot0 + \frac{1}{5}}{0 + 1}\;=\;\frac{1}{5} $ • Jul 9th 2006, 07:19 PM malaygoel Quote: Originally Posted by Soroban Hello, Nichelle14! Divide top and bottom by $5^{n+1}.$ The numerator is: . $\frac{3^n}{5^{n+1}} + \frac{5^n}{5^{n+1}} \;= \;\frac{1}{5}\cdot\frac{3^n}{5^n} + \frac{1}{5}\;=$ $\frac{1}{5}\left(\frac{3}{5}\right)^n + \frac{1}{5}$ The denominator is: . 
$\frac{3^{n+1}}{5^{n+1}} + \frac{5^{n+1}}{5^{n+1}} \;=\;\left(\frac{3}{5}\right)^{n+1} + 1$ Recall that if $|a| < 1$, then $\lim_{n\to\infty} a^n = 0$. Therefore, the limit is $\lim_{n\to\infty}\,\frac{\frac{1}{5}\left(\frac{3}{5}\right)^n + \frac{1}{5}}{\left(\frac{3}{5}\right)^n + 1} \;=\;\frac{\frac{1}{5}\cdot 0 + \frac{1}{5}}{0 + 1}\;=\;\frac{1}{5}$

I think the trick here is to divide all the terms by the largest term in the expression (this is useful when $n$ tends to infinity). Keep Smiling, Malay
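Soroban's reading of the problem, $\lim_{n\to\infty} (3^n+5^n)/(3^{n+1}+5^{n+1}) = 1/5$, is easy to check numerically; Python's exact big integers make the terms cheap to evaluate before the final float division:

```python
# The ratio approaches 1/5 because (3/5)^n -> 0 in both numerator
# and denominator after dividing through by 5^(n+1).
def f(n):
    return (3**n + 5**n) / (3**(n + 1) + 5**(n + 1))

for n in (1, 5, 20, 60):
    print(n, f(n))  # the values approach 0.2
```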
2017-08-22T17:28:07
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/calculus/4044-find-limit-print.html", "openwebmath_score": 0.9974018335342407, "openwebmath_perplexity": 2959.1010301254755, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9833429590144717, "lm_q2_score": 0.855851143290548, "lm_q1q2_score": 0.841595195719246 }
http://hotelgalileo-padova.it/kqrf/basis-of-symmetric-matrix.html
# Basis Of Symmetric Matrix The matrix having $1$ at the place $(1,2)$ and $(2,1)$ and $0$ elsewhere is symmetric, for instance. Symmetric matrices, quadratic forms, matrix norm, and SVD 15-19. Note that AT = A, so Ais. For a real matrix A there could be both the problem of finding the eigenvalues and the problem of finding the eigenvalues and eigenvectors. Then $$D$$ is the diagonalized form of $$M$$ and $$P$$ the associated change-of-basis matrix from the standard basis to the basis of eigenvectors. Lady Let A be an n n matrix and suppose there exists a basis v1;:::;vn for Rn such that for each i, Avi = ivi for some scalar. (1) The product of two orthogonal n × n matrices is orthogonal. The Symmetry Way is how we do business – it governs every client engagement and every decision we make, from our team to our processes to our technology. But what if A is not symmetric? Well, then is not diagonalizable (in general), but instead we can use the singular value decomposition. 1 p x has the same symmetry as B. looking at the Jacobi Method for finding eigenvalues of a of basis to the rest of the matrix. The matrix Q is called orthogonal if it is invertible and Q 1 = Q>. Most snowflakes have hexagonal symmetry (Figure 4. The diagonalization of symmetric matrices. 3 will have the same character; all mirror planes σ v, σ′ v, σ″ v will have the same character, etc. Now lets use the quadratic equation to solve for. Show that the set of all skew-symmetric matrices in 𝑀𝑛(ℝ) is a subspace of 𝑀𝑛(ℝ) and determine its dimension (in term of n ). This implies that M= MT. symmetry p x transforms as B. We know from the first section that the. These algorithms need a way to quantify the "size" of a matrix or the "distance" between two matrices. If a matrix has some special property (e. If eigenvectors of an nxn matrix A are basis for Rn, the A is diagonalizable TRUE( - If vectors are basis for Rn, then they must be linearly independent in which case A is diagonalizable. 
Standard basis of : the set of vectors , where is defined as the 0 vector having a 1 in the position. By induction we can choose an orthonormal basis in consisting of eigenvectors of. If you have an n×k matrix, A, and a k×m matrix, B, then you can matrix multiply them together to form an n×m matrix denoted AB. A real $(n\times n)$-matrix is symmetric if and only if the associated operator $\mathbf R^n\to\mathbf R^n$ (with respect to the standard basis) is self-adjoint (with respect to the standard inner product). Finally, section 8 brings an example of a. Use MathJax to format. Recall that if V is a vector space with basis v1,,v n, then its dual space V∗ has a dual basis α 1,,α n. In terms of the matrix elements, this means that a i , j = − a j , i. If a matrix A is reduced to an identity matrix by a succession of elementary row operations, the. orthonormal basis and note that the matrix representation of a C-symmetric op-erator with respect to such a basis is symmetric (see [6, Prop. The following theorem. Recall that a square matrix A is symmetric if A = A T. It turns out that this property implies several key geometric facts. x T Mx>0 for any. To prove this we need merely observe that (1) since the eigenvectors are nontrivial (i. 9 Symmetric Matrices and Eigenvectors In this we prove that for a symmetric matrix A ∈ Rn×n, all the eigenvalues are real, and that the eigenvectors of A form an orthonormal basis of Rn. Review An matrix is called if we can write where is a8‚8 E EœTHT Hdiagonalizable " diagonal matrix. If Ais an n nsym-metric matrix then (1)All eigenvalues of Aare real. In this case, B is the inverse matrix of A, denoted by A −1. So far, symmetry operations represented by real orthogonal transformation matrices R of coordinates Since the matrix R is real and also holds. FALSE: There are also "degenerate" cases where the solution set of xT Ax = c can be a single point, two intersecting lines, or no points at all. 
The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. In this paper, we study various properties of symmetric tensors in relation to a decomposition into a symmetric sum of outer product of vectors. If v1 and v2 are eigenvectors of A. Jacobi Method for finding eigenvalues of symmetric matrix. 1; 1/ are perpendicular. Also, we will…. The matrix Q is called orthogonal if it is invertible and Q 1 = Q>. Invert a Matrix. (d)The eigenvector matrix Sof a symmetrix matrix is symmetric. Symmetric matrices, quadratic forms, matrix norm, and SVD 15-19. Complex Symmetric Matrices David Bindel UC Berkeley, CS Division Complex Symmetric Matrices - p. Interpretation as symmetric group. A matrix Ais symmetric if AT = A. 3 Alternate characterization of eigenvalues of a symmetric matrix The eigenvalues of a symmetric matrix M2L(V) (n n) are real. Let v 1, v 2, , v n be the promised orthogonal basis of eigenvectors for A. Note that we have used the fact that. Secondly, based on interpolated integrated similarity matrix, we utilized Kronecker regularized least square (KronRLS) method to obtained disease-miRNA association score matrix. This result is remarkable: any real symmetric matrix is diagonal when rotated into an appropriate basis. bilinear forms on vector spaces. A square matrix, A, is skew-symmetric if it is equal to the negation of its nonconjugate transpose, A = -A. Proof: Since has an eigenspace decomposition, we can choose a basis of consisting of eigenvectors only. Visit Stack Exchange. In particular, an operator T is complex symmetric if and only if it is unitarily Work partially supported by National Science Foundation Grant DMS-0638789. it's a Markov matrix), its eigenvalues and eigenvectors are likely to have special properties as well. One point more is to be. Richard Anstee An n nmatrix Qis orthogonal if QT = Q 1. 
In mathematics, particularly linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). A square matrix is invertible if and only if it is row equivalent to an identity matrix, if and only if it is a product of elementary matrices, and also if and only if its row vectors form a basis of Fn. Example Determine if the following matrices are diagonalizable. By induction we can choose an orthonormal basis in consisting of eigenvectors of. In that case $\mathcal{T}^2=-1$. (1) A is similar to A. A basis for S 3x3 ( R ) consists of the six 3 by 3 matrices. It turns out that this property implies several key geometric facts. The set of matrix pencils congruent to a skew-symmetric matrix pencil A− B forms a manifold in the complex n2 −ndimensional space (Ahas n(n−1)~2. Orthogonal matrices and isometries of Rn. It is clear that the characteristic polynomial is an nth degree polynomial in λ and det(A−λI) = 0 will have n (not necessarily distinct) solutions for λ. Quandt Princeton University Definition 1. Determining the eigenvalues of a 3x3 matrix. APPLICATIONS Example 2. When I use [U E] = eig(A), to find the eigenvectors of the matrix. Definition 1 A real matrix A is a symmetric matrix if it equals to its own transpose, that is A = AT. A symmetric matrix is symmetric across the main diagonal. It remains to consider symmetric matrices with repeated eigenvalues. The eigenvalues still represent the variance magnitude in the direction of the largest spread of the data, and the variance components of the covariance matrix still represent the variance magnitude in the direction of the x-axis and y-axis. Symmetric matrices have an orthonormal basis of eigenvectors. The matrix U is called an orthogonal matrix if UTU= I. Therefore, there are only 3 + 2 + 1 = 6 degrees of freedom in the selection of the nine entries in a 3 by 3 symmetric matrix. 
Yu 3 4 1Machine Learning, 2Center for the Neural Basis of Cognition, 3Biomedical Engineering, 4Electrical and Computer Engineering Carnegie Mellon University fwbishop, [email protected] Then, it is clear that is a diagonal. 3 Alternate characterization of eigenvalues of a symmetric matrix The eigenvalues of a symmetric matrix M2L(V) (n n) are real. None of the other answers. We can define an orthonormal basis as a basis consisting only of unit vectors (vectors with magnitude $1$) so that any two distinct vectors in the basis are perpendicular to one another (to put it another way, the inner product between any two vectors is $0$). From Theorem 2. Recall some basic de nitions. MATH 340: EIGENVECTORS, SYMMETRIC MATRICES, AND ORTHOGONALIZATION Let A be an n n real matrix. Note that AT = A, so Ais. Suppose A is an n n matrix such that AA = kA for some k 2R. More specifically, we will learn how to determine if a matrix is positive definite or not. If nl and nu are 1, then the matrix is tridiagonal and treated with specialized code. In this problem, we will get three eigen values and eigen vectors since it's a symmetric matrix. The above matrix is skew-symmetric. Every symmetric matrix is congruent to a diagonal matrix, and hence every quadratic form can be changed to a form of type ∑k i x i 2 (its simplest canonical form) by a change of basis. The first step into solving for eigenvalues, is adding in a along the main diagonal. Note on symmetry. If Ais a symmetric real matrix A, then maxfxTAx: kxk= 1g is the largest eigenvalue of A. Symmetric matrices have an orthonormal basis of eigenvectors. Let A= 2 6 4 3 2 4 2 6 2 4 2 3 3 7 5. Interpretation as symmetric group. If v1 and v2 are eigenvectors of A. In characteristic not 2, every bilinear form Bis uniquely expressible as a sum B 1 +B 2, where B 1 is symmetric and B 2 is alternating (equivalently, skew-symmetric). 
§Example 2: Make a change of variable that transforms the quadratic form into a quadratic form with no cross-product term. When you have a non-symmetric matrix you do not have such a combination. The primary goal in this paper is to build a new basis, the “immaculate basis,” of NSym and to develop its theory. matrices and (most important) symmetric matrices. Recommended books:-http://amzn. This implies that UUT = I, by uniqueness of inverses. Now lets FOIL, and solve for. The identity matrix In is the classical example of a positive definite symmetric matrix, since for any v ∈ Rn, vTInv = vTv = v·v 0, and v·v = 0 only if v is the zero vector. Every square complex matrix is similar to a symmetric matrix. 1, applies to square symmetric matrices and is the basis of the singular value decomposition described in Theorem 18. The first step is to create an augmented matrix having a column of zeros. Consider the matrix that takes the standard basis to this eigenbasis. Recall that congruence preserves skew symmetry. De nition 1 Let U be a d dmatrix. 2] or [5, Sect. n ×n matrix Q and a real diagonal matrix Λ such that QTAQ = Λ, and the n eigenvalues of A are the diagonal entries of Λ. Moreover, the number of basis eigenvectors corresponding to an eigenvalue is equal to the number of times occurs as a root of. Let v 1, v 2, , v n be the promised orthogonal basis of eigenvectors for A. Theorem 18. Ais orthogonally diagonalizable), where Dis the diagonal matrix of eigenvalues i of A, and by assumption i >0 for all i. The matrix 1 1 0 2 has real eigenvalues 1 and 2, but it is not symmetric. If Ais an m nmatrix, then its transpose is an n m matrix, so if these are equal, we must have m= n. Any power A n of a symmetric matrix A (n is any positive integer) is a. The Spectral Theorem: If Ais a symmetric real matrix, then the eigenvalues of Aare real and Rn has an orthonormal basis of eigenvectors for A. 
So far, symmetry operations represented by real orthogonal transformation matrices R of coordinates Since the matrix R is real and also holds. [email protected] point group p x has B 1. Using a, b, c, and d as variables, I find that the row reduced matrix says Thus, Therefore, is a basis for the null space. The identity matrix In is the classical example of a positive definite symmetric matrix, since for any v ∈ Rn, vTInv = vTv = v·v 0, and v·v = 0 only if v is the zero vector. I want to find an eigendecomposition of a symmetric matrix, which looks for example like this: 0 2 2 0 2 0 0 2 2 0 0 2 0 2 2 0 It has a degenerate eigenspace in which you obviously have a certain freedom to chose the eigenvectors. When you have a non-symmetric matrix you do not have such a combination. A square matrix is symmetric if for all indices and , entry , equals entry ,. As we learned. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Quandt Princeton University Definition 1. The transpose of the orthogonal matrix is also orthogonal. (e)A complex symmetric matrix has real eigenvalues. 1 Vector-Vector Products Given two vectors x,y ∈ Rn, the quantity xTy, sometimes called the inner product or dot product of the vectors, is a real number given by xTy ∈ R = x1 x2 ··· xn y1 x2 yn Xn i=1 xiyi. The size of a matrix is given in the form of a dimension, much as a room might be referred to as "a ten-by-twelve room". In particular, if. Toeplitz A matrix A is a Toeplitz if its diagonals are constant; that is, a ij = f j-i for some vector f. Since , it follows that is a symmetric matrix; to verify this point compute It follows that where is a symmetric matrix. Therefore, there are only 3 + 2 + 1 = 6 degrees of freedom in the selection of the nine entries in a 3 by 3 symmetric matrix. 
Fact 7 If M2R n is a symmetric real matrix, and 1;:::; n are its eigenvalues with multiplicities, and v. Symmetric matrices have an orthonormal basis of eigenvectors. metric Matrix Vector product (SYMV) for dense linear al-gebra. 3 Alternate characterization of eigenvalues of a symmetric matrix The eigenvalues of a symmetric matrix M2L(V) (n n) are real. At Symmetry, our SAP Basis consultants who fulfill the SAP Basis Administrator duties not only run all installation, upgrade and support stacks of SAP software, but they also have thousands of hours of experience in doing so. The thing about positive definite matrices is xTAx is always positive, for any non-zerovector x, not just for an eigenvector. Every symmetric matrix is thus, up to choice of an orthonormal basis, a diagonal matrix. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. References. Given any complex matrix A, define A∗ to be the matrix whose (i,j)th entry is a ji; in other words, A∗ is formed by taking the complex conjugate of each element of the transpose of A. Matrix norm the maximum gain max x6=0 kAxk kxk is called the matrix norm or spectral norm of A and is denoted kAk max. First, we prove that the eigenvalues are real. When the kernel function in form of the radial basis function is strictly positive definite, the interpolation matrix is a positive definite matrix and non-singular (positive definite functions were considered in the classical paper Schoenberg 1938 for example). References. Such complex symmetric matrices arise naturally in the study of damped vibrations of linear systems. If matrix A of size NxN is symmetric, it has N eigenvalues (not necessarily distinctive) and N corresponding. , v1 ¢v2 =1(¡1)+1(1. This process is then repeated for each of the remaining eigenvalues. Classifying 2£2 Orthogonal Matrices Suppose that A is a 2 £ 2 orthogonal matrix. 
Find a basis for the space of symmetric 3 × 3 {\displaystyle 3\!\times \!3} matrices. That these columns are orthonormal is confirmed by checking that Q T Q = I by using the array formula =MMULT(TRANSPOSE(I4:K7),I4:K7) and noticing that the result is the 3 × 3 identity matrix. For any scalars a,b,c: a b b c = a 1 0 0 0 +b 0 1 1 0 +c 0 0 0 1 ; hence any symmetric matrix is a linear combination of. bilinear forms on vector spaces. This book describes an easier method for generating symmetry-adapted basis sets automatically with computer techniques. P =[v1v2:::vn]. The Gram-Schmidt process starts with any basis and produces an orthonormal ba­ sis that spans the same space as the original basis. In this Letter, a symmetric matrix (SM), which is the sum of a symmetric TM and Hankel matrix, is proposed. Show that the skew symmetric matrices are a subspace of Rn×n. An indefinite quadratic form will notlie completely above or below the plane but will lie above for somevalues of x and belowfor other values of x. Notice that a. The next result gives us sufficient conditions for a matrix to be diagonalizable. Symmetric matrices have an orthonormal basis of eigenvectors. Let Sbe the matrix which takes the standard basis vector e i to v i; explicitly, the columns of Sare the v i. Any value of λ for which this equation has a solution is known as an eigenvalue of the matrix A. A symmetric matrix is a square matrix that equals its transpose: A = A T. If we use the "flip" or "fold" description above, we can immediately see that nothing changes. Therefore, there are only 3 + 2 + 1 = 6 degrees of freedom in the selection of the nine entries in a 3 by 3 symmetric matrix. 368 A is called an orthogonal matrix if A−1 =AT. A symmetric matrix A is a square matrix with the property that A_ij=A_ji for all i and j. That's minus 4/9. 
If we futher choose an orthogonal basis of eigenvectors for each eigenspace (which is possible via the Gram-Schmidt procedure), then we can construct an orthogonal basis of eigenvectors for $$\R^n\text{. For any scalars a,b,c: a b b c = a 1 0 0 0 +b 0 1 1 0 +c 0 0 0 1 ; hence any symmetric matrix is a linear combination of. Now lets FOIL, and solve for. The transpose of the orthogonal matrix is also orthogonal. Let Sbe the matrix which takes the standard basis vector e i to v i; explicitly, the columns of Sare the v i. [V,D,W] = eig(A,B) also returns full matrix W whose columns are the corresponding left eigenvectors, so that W'*A = D*W'*B. Symmetry of the inner product implies that the matrix A is symmetric. Orthogonalization of a symmetric matrix: Let A be a symmetric real \( n\times n$$ matrix. 2 Hat Matrix as Orthogonal Projection The matrix of a projection, which is also symmetric is an orthogonal projection. A matrix is a rectangular array of numbers, and it's symmetric if it's, well, symmetric. As with linear functionals, the matrix representation will depend on the bases used. negative-definite quadratic form. symmetry p x transforms as B. Orthogonally Diagonalizable Matrices These notes are about real matrices matrices in which all entries are real numbers. Then det(A−λI) is called the characteristic polynomial of A. (We sometimes use A. (1,2,3,3), (1,2,3,3), this is a symmetric matrix. 1, applies to square symmetric matrices and is the basis of the singular value decomposition described in Theorem 18. The Spectral Theorem: If Ais a symmetric real matrix, then the eigenvalues of Aare real and Rn has an orthonormal basis of eigenvectors for A. APPLICATIONS Example 2. 5), a simple Jacobi-Trudi formula. We claim that S is the required basis. The sum of two skew-symmetric matrices is skew-symmetric. 
1 Vector-Vector Products Given two vectors x,y ∈ Rn, the quantity xTy, sometimes called the inner product or dot product of the vectors, is a real number given by xTy ∈ R = x1 x2 ··· xn y1 x2 yn Xn i=1 xiyi. So, if a matrix Mhas an orthonormal set of eigenvectors, then it can be written as UDUT. 2 Given a symmetric bilinear form f on V, the associated. If a matrix A is reduced to an identity matrix by a succession of elementary row operations, the. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Eigenvalues and eigenvectors of a real square matrix by Rutishauser's method and inverse iteration method Find Eigenvalues and Eigenvectors of a symmetric real matrix using Householder reduction and QL method Module used by program below Eigenvalues of a non symmetric real matrix by HQR algorithm. Calculate a Basis for the Column Space of a Matrix Step 1: To Begin, select the number of rows and columns in your Matrix, and press the "Create Matrix" button. The Gram-Schmidt process starts with any basis and produces an orthonormal ba­ sis that spans the same space as the original basis. We then use row reduction to get this matrix in reduced row echelon form, for. 2, it follows that if the symmetric matrix A ∈ Mn(R) has distinct eigenvalues, then A = P−1AP (or PTAP) for some orthogonal matrix P. A matrix with real entries is skewsymmetric. , v1 ¢v2 =1(¡1)+1(1. Every symmetric matrix is congruent to a diagonal matrix, and hence every quadratic form can be changed to a form of type ∑k i x i 2 (its simplest canonical form) by a change of basis. Definition is mentioned in passing on page 87 in. Totally Positive/Negative A matrix is totally positive (or negative, or non-negative) if the determinant of every submatrix is positive (or. 
Symmetry Properties of Rotational Wave functions and Direction Cosines It is in the determination of symmetry properties of functions of the Eulerian angles, and in particular in the question of how to apply sense-reversing point-group operations to these functions, that the principal differences arise in group-theoretical discussions of methane. If $$A$$ is symmetric, we know that eigenvectors from different eigenspaces will be orthogonal to each other. A basis for S 3x3 ( R ) consists of the six 3 by 3 matrices. This means that for a matrix to be skew symmetric, A’=-A. To summarize, the symmetry/non-symmetry in the FEM stiffness matrix depends, both, on the underyling weak form and the selection (linear combinantion of basis functions) of the trial and test functions in the FE approach. Active 1 month ago. Theorem 3 Any real symmetric matrix is diagonalisable. is the projection operator onto the range of. And if I have some subspace, let's say that B is equal to the span of v1 and v2, then we can say that the basis for v, or we could say that B is an orthonormal basis. Let’s translate diagoinalizability into the language of eigenvectors rather than matrices. However, there is something special about it: The matrix U is not only an orthogonal matrix; it is a rotation matrix, and in D, the eigenvalues are listed in decreasing order along the diagonal. Symmetric Matrix By Paul A. In this Letter, a symmetric matrix (SM), which is the sum of a symmetric TM and Hankel matrix, is proposed. A = 1 2 (A+AT)+ 1 2 (A−AT). (Note that this result implies the trace of an idempotent matrix is equal. This book describes an easier method for generating symmetry-adapted basis sets automatically with computer techniques. Now the next step to take the determinant. Determining the eigenvalues of a 3x3 matrix. For any symmetric matrix A: The eigenvalues of Aall exist and are all real. (1,2,3,3), (1,2,3,3), this is a symmetric matrix. 
A square matrix A is a projection if it is idempotent, 2. Interpretation as symmetric group. This should be easy. (2018) The number of real eigenvectors of a real polynomial. Definition 1 Let U be a $d \times d$ matrix. Can you go on? Just take as model the standard basis for the space of all matrices (those with only one $1$ and all other entries $0$). The matrix $\begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$ is an example of a matrix that is not positive semidefinite, since $\begin{pmatrix} -1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix} \begin{pmatrix} -1 \\ 1 \end{pmatrix} = -2$. It is shown in this paper that a complex symmetric matrix can be diagonalised by a (complex) orthogonal transformation, when and only when each eigenspace of the matrix has an orthonormal basis; this. The wave-functions, which do not all share the symmetry of the Hamiltonian,. Let $A = \begin{bmatrix} 3 & 2 & 4 \\ 2 & 6 & 2 \\ 4 & 2 & 3 \end{bmatrix}$. Well, let's try this course format: Teach concepts like Row/Column order with mnemonics instead of explaining the reasoning. Say the eigenvectors are $v_1, \ldots, v_n$, where $v_i$ is the eigenvector with eigenvalue $\lambda_i$. Show that the skew symmetric matrices are a subspace of Rn×n. • Transition are classified as either 1st order (latent heat) or 2nd order (or continuous) • A simple example: Paramagnetic -> Ferromagnetic transition "Time-reversal" is lost. (2) A symmetric matrix is always square. That's minus 4/9. If matrix A of size NxN is symmetric, it has N eigenvalues (not necessarily distinct) and N corresponding. A square matrix is invertible if and only if it is row equivalent to an identity matrix, if and only if it is a product of elementary matrices, and also if and only if its row vectors form a basis of Fn. The asterisks in the matrix are where "stuff" happens; this extra information is denoted by $$\hat{M}$$ in the final expression. Eigenvalues and Eigenvectors. 3 Diagonalization of Symmetric Matrices DEF→p.
The basis vectors for symmetric irreducible representations of the can easily be constructed from those of U(2 l + 1) U(2 l - 1. For a real matrix A there could be both the problem of finding the eigenvalues and the problem of finding the eigenvalues and eigenvectors. The columns of Q would form an orthonormal basis for Rn. 2, and matrix R= 1 j0 0 j1. The initial vector is submitted to a symmetry operation and thereby transformed into some resulting vector defined by the coordinates x', y' and z'. Then there exists an eigen decomposition. Therefore $A = VDV^T$. If we use the "flip" or "fold" description above, we can immediately see that nothing changes. Theorem: Any symmetric matrix 1) has only real eigenvalues; 2) is always diagonalizable; 3) has orthogonal eigenvectors. Now we need to write this as a linear combination. Find a basis for the 3 × 3 skew symmetric matrices. Since they appear quite often in both application and theory, let's take a look at symmetric matrices in light of eigenvalues and eigenvectors. We now will consider the problem of finding a basis for which the matrix is diagonal. Notice that a. Here, then, are the crucial properties of symmetric matrices: Fact. Groups of matrices: Linear algebra and symmetry in various geometries Lecture 14 a. The first step in solving for eigenvalues is adding $-\lambda$ along the main diagonal. Answer: $0^T = -0$ so 0 is skew symmetric. In particular, if. Basis Functions. If A is an $m \times n$ matrix, then its transpose is an $n \times m$ matrix, so if these are equal, we must have $m = n$. Finite-dimensional space: a space which has a finite basis. Symmetry of the inner product implies that the matrix A is symmetric. To find the basis of a vector space, start by taking the vectors in it and turning them into columns of a matrix. In addition the matrix can be marked as probably a positive definite.
Ais orthogonal diagonalizable if and only if Ais symmetric(i. Symmetric Matrix By Paul A. If you're seeing this message, it means we're having trouble loading external resources on our website. Eigenvectors and Diagonalizing Matrices E. The eigenvalues of a symmetric matrix are always real. The diagonalization of symmetric matrices. 1 p x forms a basis for the B 1. I have a symmetric matrix which I modified a bit: The above matrix is a symmetric matrix except the fact that I have added values in diagonal too (will tell the purpose going forward) This matrix graph visualization in R basis symmetric matrix having values in diagonal. The matrix 1 2 2 1 is an example of a matrix that is not positive semidefinite, since −1 1 1 2 2 1 −1 1 = −2. M is positive definite. When you have a non-symmetric matrix you do not have such a combination. nis the symmetric group, the set of permutations on nobjects. Orthogonally Diagonalizable Matrices These notes are about real matrices matrices in which all entries are real numbers. Note that AT = A, so Ais. A symmetric tensor is a higher order generalization of a symmetric matrix. Solves the linear equation set a * x = b for the unknown x for square a matrix. In linear algebra, a symmetric real matrix is said to be positive definite if the scalar is strictly positive for every non-zero column vector of real numbers. So what we've done in this video is look at the summation convention, which is a compact and computationally useful, but not very visual way to write down matrix operations. A matrix with real entries is skewsymmetric. This implies that UUT = I, by uniqueness of inverses. We claim that S is the required basis. So these guys are indeed orthogonal. Step 1: Find an ordered orthonormal basis B for $$\mathbb{R}^n ;$$ you can use the standard basis for $$\mathbb{R}^n. Introduction. If nl and nu are 1, then the matrix is tridiagonal and treated with specialized code. 
In fact if you take any square matrix A (symmetric or not), adding it to its transpose (A + A T) creates a symmetric matrix. (5) For any matrix A, rank(A) = rank(AT). Making statements based on opinion; back them up with references or personal experience. Symmetric matrix: a matrix satisfying for each Basis: a linearly independent set of vectors of a space which spans the entire space. If $A$ is a real skew-symmetric matrix and $\lambda$ is a real eigenvalue, then $\lambda = 0$, i. Let v 1, v 2, , v n be the promised orthogonal basis of eigenvectors for A. 2 Given a symmetric bilinear form f on V, the associated. (2018) Symmetric orthogonal approximation to symmetric tensors with applications to image reconstruction. The last equality follows since \(P^{T}MP$$ is symmetric. Another way of stating the real spectral theorem is that the eigenvector s of a symmetric matrix are orthogonal. A projection A is orthogonal if it is also symmetric. Today, we are continuing to study the Positive Definite Matrix a little bit more in-depth. Then det(A−λI) is called the characteristic polynomial of A. If we multiply a symmetric matrix by a scalar, the result will be a symmetric matrix. edu Abstract. So B is an orthonormal set. We will do these separately. Complex Symmetric Matrices David Bindel Every matrix is similar to a complex symmetric matrix. Theorem 3 If Ais a symmetric matrix. T (20) If A is a symmetric matrix, then its singular values coincide with its eigenvalues. 369 A is orthogonal if and only if the column vectors. Definition 3. The last part is immediate. linalg may offer more or slightly differing functionality. We call such matrices symmetric. To prove this we need merely observe that (1) since the eigenvectors are nontrivial (i. The Geometrical Basis of PT Symmetry. Note that we have used the fact that. bilinear forms on vector spaces. 
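The decomposition into symmetric and skew-symmetric parts described above ($A = \frac{1}{2}(A+A^T) + \frac{1}{2}(A-A^T)$, with $A + A^T$ symmetric) is easy to check numerically. The following pure-Python sketch is added for illustration and is not taken from any of the quoted sources:

```python
# Check the identity A = (1/2)(A + A^T) + (1/2)(A - A^T) on a concrete matrix:
# the first term is symmetric, the second is skew-symmetric, and they sum to A.

def transpose(A):
    return [list(row) for row in zip(*A)]

A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]

At = transpose(A)
S = [[0.5 * (A[i][j] + At[i][j]) for j in range(3)] for i in range(3)]  # symmetric part
K = [[0.5 * (A[i][j] - At[i][j]) for j in range(3)] for i in range(3)]  # skew part

assert S == transpose(S)                                  # S^T = S
assert K == [[-x for x in row] for row in transpose(K)]   # K^T = -K
assert all(S[i][j] + K[i][j] == A[i][j] for i in range(3) for j in range(3))
```

All entries here are exact halves of integers, so the float comparisons are exact.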
[V,D,W] = eig(A,B) also returns full matrix W whose columns are the corresponding left eigenvectors, so that W'*A = D*W'*B. 9 Symmetric Matrices and Eigenvectors In this we prove that for a symmetric matrix A ∈ Rn×n, all the eigenvalues are real, and that the eigenvectors of A form an orthonormal basis of Rn. More precisely, a matrix is symmetric if and only if it has an orthonormal basis of eigenvectors. Thus, the answer is $3 \cdot 2/2 = 3$. Find a basis of the subspace and determine the dimension. Symmetry under reversal of the electric current High symmetry phase, Group G0 Low symmetry phase, Group G1. 3 Alternate characterization of eigenvalues of a symmetric matrix The eigenvalues of a symmetric matrix $M \in L(V)$ ($n \times n$) are real. It is a beautiful story which carries the beautiful name the spectral theorem: Theorem 1 (The spectral theorem). Some Basic Matrix Theorems Richard E. This result is remarkable: any real symmetric matrix is diagonal when rotated into an appropriate basis. linalg imports most of them, identically named functions from scipy. The matrix for H A with respect to the standard basis is A itself. Therefore, there are only 3 + 2 + 1 = 6 degrees of freedom in the selection of the nine entries in a 3 by 3 symmetric matrix.
Now since A is symmetric, A is normal (you will see that later), and hence there exists an invertible matrix P with $P^{-1} = P^T$, such that $A = PDP^T$ (you will learn that later too, i. The conventional method for generating symmetry-adapted basis sets is through the application of group theory, but this can be difficult. (Matrix diagonalization theorem) Let be a square real-valued matrix with linearly independent eigenvectors. It follows that is an orthonormal basis for consisting of eigenvectors of. In order to determine the eigenvectors of a matrix, you must first determine the eigenvalues. Since , it follows that is a symmetric matrix; to verify this point compute It follows that where is a symmetric matrix. By induction we can choose an orthonormal basis in consisting of eigenvectors of. 5), a simple Jacobi–Trudi formula. We make a stronger definition. Let A be an $n \times n$ matrix over a field F. A matrix A is symmetric if $A^T = A$. Every real symmetric matrix is Hermitian, and therefore all its eigenvalues are real. Letting $V = [x_1, \ldots, x_N]$, we have from the fact that $Ax_j = \lambda_j x_j$ that $AV = VD$, where $D = \mathrm{diag}(\lambda_1, \ldots, \lambda_N)$ and where the eigenvalues are repeated according to their multiplicities. Recall some basic definitions. For proof, use the standard basis. The second important property of real symmetric matrices is that they are always diagonalizable, that is, there is always a basis for Rn consisting of eigenvectors for the matrix.
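The orthogonal diagonalization $A = QDQ^T$ claimed above can be verified on a hand-worked 2×2 instance. The matrix and eigenvectors below are my own illustrative choice (not from the quoted sources): $A = \begin{pmatrix}2&1\\1&2\end{pmatrix}$ has eigenvalues 3 and 1 with orthonormal eigenvectors $(1,1)/\sqrt{2}$ and $(1,-1)/\sqrt{2}$.

```python
import math

# Rebuild A = Q D Q^T from its eigenvalues and orthonormal eigenvectors,
# and check that Q is orthogonal (Q Q^T = I), all in pure Python.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

s = 1 / math.sqrt(2)
Q = [[s, s], [s, -s]]    # columns are the unit eigenvectors
D = [[3, 0], [0, 1]]     # real eigenvalues on the diagonal
Qt = [list(r) for r in zip(*Q)]

A = matmul(matmul(Q, D), Qt)
assert all(abs(A[i][j] - [[2, 1], [1, 2]][i][j]) < 1e-12
           for i in range(2) for j in range(2))

I = matmul(Q, Qt)        # should be the 2x2 identity
assert all(abs(I[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```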
Keywords—Community Detection,Non-negative Matrix Factoriza-tion,Symmetric Matrix,Semi-supervised Learning,Pairwise Constraints I. Symmetric matrices have useful characteristics: if two matrices are similar to each other, then they have the same eigenvalues; the eigenvectors of a symmetric matrix form an orthonormal basis; symmetric matrices are diagonalizable. Is there a library for c++ which I can force to find the Orthogonal Basis such that H = UDU^{T}?. To begin, consider A and U in (1). Taking the first and third columns of the original matrix, I find that is a basis for the column space. Yu 3 4 1Machine Learning, 2Center for the Neural Basis of Cognition, 3Biomedical Engineering, 4Electrical and Computer Engineering Carnegie Mellon University fwbishop, [email protected] These are the numbers of. We'll see that there are certain cases when a matrix is always diagonalizable. (a) Prove that any symmetric or skew-symmetric matrix is square. 3 Recall that a matrix is symmetric if A = At. The elements on the diagonal of a skew-symmetric matrix are zero, and therefore its trace equals zero. Eigenvectors and Diagonalizing Matrices E. metric Toeplitz matrix T of order n, there exists an orthonormal basis for IRn, composed of nbn= 2 c symmetric and bn= 2 c skew-symmetric eigenvectors of T , where b c denotes the integral part of. 369 A is orthogonal if and only if the column vectors. 1 Basics Definition 2. The form chosen for the matrix elements is one which is particularly convenient for transformation to an asymmetric rotator basis either by means of a high-speed digital computer or by means of a desk calculator. Eigenvalues and Eigenvectors. It follows that is an orthonormal basis for consisting of eigenvectors of. A symmetric matrix A is a square matrix with the property that A_ij=A_ji for all i and j. Theorem An nxn matrix A is symmetric if and only if there is an orthonormal basis of R n consisting of eigenvectors of A. 
Perhaps the most important and useful property of symmetric matrices is that their eigenvalues behave very nicely. Since , it follows that is a symmetric matrix; to verify this point compute It follows that where is a symmetric matrix. DECOMPOSING A SYMMETRIC MATRIX BY WAY OF ITS EXPONENTIAL FUNCTION MALIHA BAHRI, WILLIAM COLE, BRAD JOHNSTON, AND MADELINE MAYES Abstract. We say a matrix A is symmetric if it equals it's tranpose, so A = A T. Recall that a square matrix A is symmetric if A = A T. 3 Alternate characterization of eigenvalues of a symmetric matrix The eigenvalues of a symmetric matrix M2L(V) (n n) are real. The discriminant of a symmetric matrix AT = A = [x ij] in inde-terminates x ij is a sum of squares of polynomials in Z[x ij: 1 ≤ i ≤ j ≤ n]. This is often referred to as a “spectral theorem” in physics. Symmetric Matrices There is a very important class of matrices called symmetric matrices that have quite nice properties concerning eigenvalues and eigenvectors. What are some ways for determining whether a set of vectors forms a basis for a certain vector space? Diagonalization of a Matrix [12/10/1998] Diagonalize a 3x3 real matrix A (find P, D, and P^(-1) so that A = P D P^(-1)). The Geometrical Basis of PT Symmetry. The matrices are symmetric matrices. Thus, all the eigenvalues are. This course contains 47 short video lectures by Dr. If all the eigenvalues of a symmetric matrix A are distinct, the matrix X, which has as its columns the corresponding eigenvectors, has the property that X0X = I, i. \begingroup The covariance matrix is symmetric, and symmetric matrices always have real eigenvalues and orthogonal eigenvectors. Is there a library for c++ which I can force to find the Orthogonal Basis such that H = UDU^{T}?. De nition 1. Fact 7 If M2R n is a symmetric real matrix, and 1;:::; n are its eigenvalues with multiplicities, and v. In characteristic 2, the alternating bilinear forms are a subset of the symmetric bilinear forms. 
The basic idea of symmetry analysis is that any basis of orbitals, displacements, rotations, etc. If this is the case, then there is an orthogonal matrix Q, and a diagonal matrix D, such that $A = QDQ^T$. Since each basis submatrix of a symmetric idempotent matrix is a symmetric nonsingular idempotent matrix, it follows by Lemma 1 and Theorem 17 that each tropical matrix group containing a symmetric idempotent matrix is isomorphic to some direct products of some wreath products. Find Eigenvalues, Orthonormal eigenvectors, Diagonalizable - Linear Algebra Orthogonal diagonalisation of symmetric 3x3 matrix using eigenvalues Orthogonal and Orthonormal Basis Example. In the latter, it does a computation using universal coefficients, again distinguishing the case when it is able to compute the "corresponding" basis of the symmetric function algebra over $$\QQ$$ (using the corresponding_basis_over hack) from the case when it isn't (in which case it transforms everything into the Schur basis, which is slow). This implies that $M = M^T$. Symmetric matrices A symmetric matrix is one for which A = AT. A matrix is a rectangular array of numbers, and it's symmetric if it's, well, symmetric. In mathematics, particularly linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). A basis of the vector space of n x n skew symmetric matrices is given by {A_ik: 1 ≤ i < k ≤ n, a_ik = 1, a_ki = -1, and all other entries are 0}.
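The basis $\{A_{ik}\}$ of skew-symmetric matrices just described can be constructed directly. This sketch (my own, following that description) builds it and confirms the dimension count $n(n-1)/2$:

```python
# Build the standard basis of n x n skew-symmetric matrices: for each pair
# i < k, put 1 at (i, k) and -1 at (k, i). There are n(n-1)/2 such matrices.

def skew_basis(n):
    basis = []
    for i in range(n):
        for k in range(i + 1, n):
            M = [[0] * n for _ in range(n)]
            M[i][k], M[k][i] = 1, -1
            basis.append(M)
    return basis

for n in range(1, 6):
    B = skew_basis(n)
    assert len(B) == n * (n - 1) // 2
    for M in B:  # each basis element really satisfies M^T = -M
        assert all(M[i][j] == -M[j][i] for i in range(n) for j in range(n))
```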
For any scalars a,b,c: $\begin{pmatrix} a & b \\ b & c \end{pmatrix} = a \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + b \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} + c \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$; hence any symmetric matrix is a linear combination of. So it equals 0. 2 In fact, this is an equivalent definition of a matrix being positive definite. Diagonalization of Symmetric Matrices We have seen already that it is quite time intensive to determine whether a matrix is diagonalizable. and define. Then the elementary symmetric function corresponding to is defined to be the product. Note on symmetry. The discriminant of a symmetric matrix $A^T = A = [x_{ij}]$ in indeterminates $x_{ij}$ is a sum of squares of polynomials in $\mathbb{Z}[x_{ij} : 1 \le i \le j \le n]$. int gsl_linalg_symmtd_decomp (gsl_matrix * A, gsl_vector * tau) This function factorizes the symmetric square matrix A into the symmetric tridiagonal decomposition. Find a basis for the space of symmetric 3 × 3 matrices. Let V be the real vector space of symmetric 2x2 matrices. Theory The SVD is intimately related to the familiar theory of diagonalizing a symmetric matrix. So, we have $w_1 = \frac{v_1}{\|v_1\|} = \frac{1}{\sqrt{1^2+1^2}}$. of Non-symmetric Matrices The situation is more complex when the transformation is represented by a non-symmetric matrix, P.
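The displayed decomposition of a symmetric 2×2 matrix into the three basis matrices can be checked directly. This is my own illustrative sketch of that computation:

```python
# Every symmetric 2x2 matrix [[a, b], [b, c]] is a*E11 + b*(E12 + E21) + c*E22,
# so these three matrices span the space, which has dimension 3 = 2*3/2.

E11 = [[1, 0], [0, 0]]
E12_21 = [[0, 1], [1, 0]]
E22 = [[0, 0], [0, 1]]

def combo(a, b, c):
    return [[a * E11[i][j] + b * E12_21[i][j] + c * E22[i][j]
             for j in range(2)] for i in range(2)]

M = combo(5, -2, 7)
assert M == [[5, -2], [-2, 7]]          # the expected symmetric matrix
assert M == [list(r) for r in zip(*M)]  # M equals its own transpose
```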
Since A is symmetric, it is possible to select an orthonormal basis $\{x_j\}_{j=1}^N$ of $\mathbb{R}^N$ given by eigenvectors of A. b) Find a basis for V. Rank Theorem: If a matrix "A" has "n" columns, then dim Col A + dim Nul A = n and Rank A = dim Col A. Write down a basis in the space of symmetric 2×2 matrices. All the eigenvalues of M are. To emphasize the connection with the SVD, we will refer. Show that the set of all skew-symmetric matrices in 𝑀𝑛(ℝ) is a subspace of 𝑀𝑛(ℝ) and determine its dimension (in terms of n). [Solution] To get an orthonormal basis of W, we use Gram-Schmidt process for v1 and v2. The scalar matrix $I_n = [d_{ij}]$, where $d_{ii} = 1$ and $d_{ij} = 0$ for $i \neq j$, is called the $n \times n$ identity matrix. $n \times n$ matrix Q and a real diagonal matrix Λ such that $Q^T A Q = \Lambda$, and the n eigenvalues of A are the diagonal entries of Λ. A square matrix, A, is skew-symmetric if it is equal to the negation of its nonconjugate transpose, $A = -A^T$. That is, we show that the eigenvalues of A are real and that there exists an orthonormal basis of eigenvectors. In this video You know about matrix representation of various symmetry elements by Prof. The matrix having $1$ at the place $(1,2)$ and $(2,1)$ and $0$ elsewhere is symmetric, for instance. All the eigenvalues are real.
7 - Inner product An inner product on a real vector space V is a bilinear form which is. Let $v_1, v_2, \ldots, v_n$ be the promised orthogonal basis of eigenvectors for A. DISCRIMINANTS OF SYMMETRIC MATRICES Abstract. It turns out that this property implies several key geometric facts. Optimizing the SYMV kernel is important because it forms the basis of fundamental algorithms such as linear solvers and eigenvalue solvers on symmetric matrices. If A is an $n \times n$ matrix such that $A = PDP^{-1}$ with D diagonal and P invertible, then the columns of P must be the eigenvectors of A. Our optimized SYMV in single. In this equation A is an n-by-n matrix, v is a non-zero n-by-1 vector and λ is a scalar (which may be either real or complex). Now, we will start off with a very, very interesting theorem. On the basis of 2-way splitting method, the recursive formula of SMVP is presented. Any value of λ for which this equation has a solution is known as an eigenvalue of the matrix A. To compare those methods for computing the eigenvalues of a real symmetric matrix for which programs are readily available. Consider again the symmetric matrix $A = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}$, and its eigenvectors $v_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$, $v_2 = \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}$, $v_3 = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}$. For example, suppose an algorithm only works well with full-rank, n ×n matrices, and it produces.
The spectral theorem implies that there is a change of variables which. Strang makes it seem; it requires the fact that the Vandermonde matrix is invertible (see Strang, p. Thus, all the eigenvalues are real. However, sometimes it is necessary to use a lower symmetry or a different orientation than obtained by the default, and this can be achieved by explicit specification of the symmetry elements to be used, as described below. The Eigenvalues I. $x^T M x > 0$ for any. The leading coefficients occur in columns 1 and 3. Consequently, there exists an orthogonal matrix Q such that. These eigenvectors must be orthogonal, i. We shall not prove the multiplicity statement (that is always true for a symmetric matrix), but a convincing exercise follows. The new form is the symmetric analogue of the power form, because it can be regarded as an "Hermite two-point expansion" instead. Multiply Two Matrices. In other words, the entries above the main diagonal are reflected into equal (for symmetric) or opposite (for skew-symmetric) entries below the diagonal. applications of symmetry in condensed matter physics are concerned with the determination of the symmetry of fields (functions of x, y, z, and t, although we will mostly consider static fields), which can be defined either on discrete points (e. Classifying 2×2 Orthogonal Matrices Suppose that A is a 2 × 2 orthogonal matrix. Find the matrix of the orthogonal projection onto W. The transpose of the orthogonal matrix is also orthogonal. If A is a square-symmetric matrix, then a useful decomposition is based on its eigenvalues and eigenvectors.
Its eigenvalues are all real, therefore there is a basis (the eigenvectors) which transforms in into a real symmetric (in fact, diagonal) matrix. Strang makes it seem; it requires the fact that the Vandermonde matrix is invertible (see Strang, p. This representation will in general be reducible. , X is an orthogonal matrix. 9: A matrix A with real enties is symmetric if AT = A. The first thing we note is that for a matrix A to be symmetric A must be a square matrix, namely, A must have the same number of rows and columns. On output the diagonal and subdiagonal part of the input matrix A contain the tridiagonal matrix. transforms either as one of the irreducible representations or as a direct sum (reducible) representation. In section 7 we indicate the relations of the obtained basis with that of Gel fand Tsetlin. Let A= 2 6 4 3 2 4 2 6 2 4 2 3 3 7 5. Recall some basic de nitions. We recall that a scalar l Î F is said to be an eigenvalue (characteristic value, or a latent root) of A, if there exists a nonzero vector x such that Ax = l x, and that such an x is called an eigen-vector (characteristic vector, or a latent vector) of A corresponding to the eigenvalue l and that the pair (l, x) is called an. There is no inverse of skew symmetric matrix in the form used to represent cross multiplication (or any odd dimension skew symmetric matrix), if there were then we would be able to get an inverse for the vector cross product but this is not possible. (6) If v and w are two column vectors in Rn, then. Diagonalization of Symmetric Matrices We have seen already that it is quite time intensive to determine whether a matrix is diagonalizable. Banded matrix with the band size of nl below the diagonal and nu above it. If $A$ is a real skew-symmetric matrix and $\lambda$ is a real eigenvalue, then $\lambda = 0$, i. Then det(A−λI) is called the characteristic polynomial of A. 
of Non-symmetric Matrices The situation is more complexwhen the transformation is represented by a non-symmetric matrix, P. 2), and have collinear C6, C3, and C 2 axes, six perpendicular C 2 axes, and a horizontal mirror plane. Now lets FOIL, and solve for. The identity matrix In is the classical example of a positive definite symmetric matrix, since for any v ∈ Rn, vTInv = vTv = v·v 0, and v·v = 0 only if v is the zero vector. Numerical Linear Algebra with Applications 25 :5, e2180. Let v 1, v 2, , v n be the promised orthogonal basis of eigenvectors for A. Theorem 3 If Ais a symmetric matrix. In particular, the rank of is even, and. Now, and so A is similar to C. Theorem 1 (Spectral Decomposition): Let A be a symmetric n × n matrix, then A has a spectral decomposition A = CDC T where C is a n × n matrix whose columns are unit eigenvectors C 1, …, C n corresponding to the eigenvalues λ 1, …, λ n of A and D is then × n diagonal matrix whose main diagonal consists of λ 1, …, λ n. Also, since B is similar to C, there exists an invertible matrix R so that. The eigenvalues still represent the variance magnitude in the direction of the largest spread of the data, and the variance components of the covariance matrix still represent the variance magnitude in the direction of the x-axis and y-axis. Example: If square matrices Aand Bsatisfy that AB= BA, then (AB)p= ApBp. Any value of λ for which this equation has a solution is known as an eigenvalue of the matrix A. These eigenvectors must be orthogonal, i. ifolds, and serves as a potential basis for many extensions and applications. where and is the identity matrix of order. ) Rank of a matrix is the dimension of the column space. Calculate a Basis for the Column Space of a Matrix Step 1: To Begin, select the number of rows and columns in your Matrix, and press the "Create Matrix" button. 
Hence both are the zero matrix. and define. let M2,2 be the vector space of all 2 x 2 matrices with real entries; this has a basis given by B = { (1 1) , (0 1) , (0 0) , (0 0) }. a) Explain why V is a subspace of the space $M_2(\mathbb{R})$ of 2x2 matrices with real entries. On the other hand, the concept of symmetry for a linear operator is basis independent. That is, $AX = X\Lambda$ (1). The set of matrix pencils congruent to a skew-symmetric matrix pencil $A - \lambda B$ forms a manifold in the complex $n^2 - n$ dimensional space ($A$ has $n(n-1)/2$. I To show these two properties, we need to consider. Calculate the Null Space of the following Matrix. Let A be a real, symmetric matrix of size $d \times d$ and let I denote the $d \times d$ identity matrix. All the element pairs that trade places were already identical. Find the dimension of the collection of all symmetric 2x2 matrices. they have a complete basis worth of eigenvectors, which can be chosen to be orthonormal. QR decomposition for general matrix; SVD decomposition (single value decomposition) for symmetric matrix and non-symmetric matrix (Jacobi method) Linear solver.
Let the symmetric group permute the basis vectors, and consider the induced action of the symmetric group on the vector space. Let v 1, v 2, , v n be the promised orthogonal basis of eigenvectors for A. Point Group Symmetry. (4) If A is invertible then so is AT, and (AT) − 1 = (A − 1)T. , X is an orthogonal matrix. int gsl_linalg_symmtd_decomp (gsl_matrix * A, gsl_vector * tau) ¶ This function factorizes the symmetric square matrix A into the symmetric tridiagonal decomposition. When you have a non-symmetric matrix you do not have such a combination. A symmetric matrix A is a square matrix with the property that A_ij=A_ji for all i and j. and define. Defining the M N matrix A with elements Aij = a(fi,yj), we recognize that a(u,v) = uTAv. Diagonalization of Symmetric Matrices We have seen already that it is quite time intensive to determine whether a matrix is diagonalizable.
2020-05-25T13:20:39
{ "domain": "hotelgalileo-padova.it", "url": "http://hotelgalileo-padova.it/kqrf/basis-of-symmetric-matrix.html", "openwebmath_score": 0.8374430537223816, "openwebmath_perplexity": 338.47806095546264, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9901401455693091, "lm_q2_score": 0.84997116805678, "lm_q1q2_score": 0.8415905760694558 }
https://math.stackexchange.com/questions/3048559/are-all-highly-composite-numbers-even
# Are all highly composite numbers even? A highly composite number is a positive integer with more divisors than any smaller positive integer. Are all highly composite numbers even (excluding 1 of course)? I can't find anything about this question online, so I can only assume that they obviously are. But I cannot see why. Yes. Given an odd number $$n$$, choose any prime factor $$p$$, and let $$k\geq 1$$ be the number such that $$p^k\mid n$$ but $$p^{k+1}\not\mid n$$. Then $$n\times\frac{2^k}{p^k}$$ has the same number of factors, and is smaller. • The same idea extends to show the primes dividing a highly composite number must be the smallest primes and the exponents must decrease as the primes get larger. – Ross Millikan Dec 21 '18 at 14:57 • The associated OEIS sequence is A025487. – Charles Dec 21 '18 at 15:35 • Furthermore, from 6 on, they are all multiples of 3. From 12 on, they are all multiples of 4. From 60 on, they are all multiples of 5. I believe that for any given factor N, there is a point after which all numbers in the sequence are multiples of N. I don't have a proof for this, but it seems like it's probably the case. – Darrel Hoffman Dec 21 '18 at 15:35 • @DarrelHoffman I would love to have proofs of these. – Charles Dec 21 '18 at 15:36 • @Charles: isn't the relevant sequence A002182 (oeis.org/A002182)? – Michael Lugo Dec 21 '18 at 16:39
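As a numerical sanity check (an editorial addition, not part of the original thread), the short Python sketch below brute-forces the highly composite numbers up to a bound and confirms every one past 1 is even; `swap_smaller` is a name chosen here for the answer's $n\times\frac{2^k}{p^k}$ trick.

```python
def num_divisors(n):
    """Count divisors of n by trial division up to sqrt(n)."""
    count = 0
    for d in range(1, int(n**0.5) + 1):
        if n % d == 0:
            count += 1 if d * d == n else 2
    return count

def highly_composite(limit):
    """Numbers with more divisors than any smaller positive integer."""
    best, out = 0, []
    for n in range(1, limit + 1):
        d = num_divisors(n)
        if d > best:
            best, out = d, out + [n]
    return out

def swap_smaller(n):
    """For odd n > 1: swap p^k (p the smallest prime factor) for 2^k.
    The divisor count is unchanged but the number shrinks."""
    p = next(d for d in range(3, n + 1, 2) if n % d == 0)
    k, m = 0, n
    while m % p == 0:
        m //= p
        k += 1
    return m * 2**k

hcn = highly_composite(10000)
print(hcn[:8])                              # [1, 2, 4, 6, 12, 24, 36, 48]
print(all(n % 2 == 0 for n in hcn[1:]))     # True
print(swap_smaller(45), num_divisors(45), num_divisors(20))   # 20 6 6
```

Since every odd number above 1 admits such a strictly smaller companion with the same divisor count, no odd number above 1 can be highly composite, which is exactly the accepted answer's argument.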
https://math.stackexchange.com/questions/4190318/difference-between-hypotenuse-and-larger-leg-in-a-pythagorean-triple
# Difference between hypotenuse and larger leg in a Pythagorean triple

I've been number crunching irreducible Pythagorean triples and this pattern came up: the difference between the hypotenuse and the larger leg seems to always be n² or 2n² for some integer n. Moreover, every integer of the form n² or 2n² is the difference between the hypotenuse and the larger leg for some irreducible Pythagorean triple. Is there a simple proof for that? There is a result listed on Wikipedia that looks kinda, sorta related: that the area of a Pythagorean triangle can not be the square or twice the square of a natural number.

EDIT: Actually, the statements are correct only for odd n² (but also for any 2n², as stated) as John Omielan demonstrated below.

• Are you aware of the formula $(p^2-q^2, 2pq, p^2+q^2)$ where $p,q$ are coprime and have opposite parity? Jul 5, 2021 at 2:20
• By the way, what is an example of an irreducible Pythagorean triple in which the difference between the hypotenuse and larger leg is $2^2$? Jul 5, 2021 at 2:41
• @DavidK The difference of 4 is impossible as proved in the answers below. Jul 7, 2021 at 6:11
• Yes, that was a hint that the $n^2$ differences would occur only for odd $n.$ Jul 7, 2021 at 11:45

As explained in Pythagorean triple, for integers $$m \gt n \gt 0$$, Euclid's formula of $$a = m^2 - n^2, \; \; b = 2mn, \; \; c = m^2 + n^2 \tag{1}\label{eq1A}$$ generates all primitive (i.e., irreducible) Pythagorean triples, specifically

Every primitive triple arises (after the exchange of $$a$$ and $$b$$, if $$a$$ is even) from a unique pair of coprime numbers $$m$$, $$n$$, one of which is even.

Note \eqref{eq1A} results in $$c - a = (m^2 + n^2) - (m^2 - n^2) = 2n^2 \tag{2}\label{eq2A}$$ $$c - b = (m^2 + n^2) - 2mn = (m - n)^2 \tag{3}\label{eq3A}$$ Regarding your second part, to help avoid confusion with $$n$$ above, let's call the values $$k^2$$ and $$2k^2$$ instead. With the first one, it uses \eqref{eq3A} so $$k = m - n$$.
However, since one of $$m$$ and $$n$$ is even and the other is odd, their difference is odd, so only odd $$k$$ will work. For any such $$k$$, set $$n = k + 1$$ and $$m = 2k + 1$$ (note $$m$$ and $$n$$ are coprime, with $$n$$ even) to get $$m^2 - n^2 = (4k^2 + 4k + 1) - (k^2 + 2k + 1) = 3k^2 + 2k \tag{4}\label{eq4A}$$ $$2mn = 2(2k + 1)(k + 1) = 2(2k^2 + 3k + 1) = 4k^2 + 6k + 2 \tag{5}\label{eq5A}$$ This shows $$2mn \gt m^2 - n^2$$, so $$2mn$$ is the longer leg. Thus, the difference would result in \eqref{eq3A}. For the $$2k^2$$ case, choose $$n = k$$ and $$m = 3k + 1$$ (note $$m$$ and $$n$$ are coprime, with one of them even). Therefore, $$m^2 - n^2 = (9k^2 + 6k + 1) - k^2 = 8k^2 + 6k + 1 \tag{6}\label{eq6A}$$ $$2mn = 2(3k + 1)k = 6k^2 + 2k \tag{7}\label{eq7A}$$ This means $$m^2 - n^2 \gt 2mn$$, so $$m^2 - n^2$$ is the longer leg. Thus, the difference would result in \eqref{eq2A}. • +1: (also) to your answer, as presenting a more complete answer to the question than my answer did. Jul 4, 2021 at 21:15 • @user2661923 Thanks for the response & upvote. I was about to press the Post button with just the first part answered, similar to your answer, when I happened to look at the question again to notice it had a second part as well. So I also missed that part initially. Jul 4, 2021 at 21:25 • You also proved my statement is wrong. Not all squares can be the "personality" (I'm calling [c - larger leg] that) of a primitive Pythagorean triple. The double of any square can and is, though. Jul 4, 2021 at 22:14 • Also, the statement applies to both c-a and c-b regardless of which is the "personality" (i.e., the largest). Jul 4, 2021 at 22:17 Counter example is $$(9, 12, 15)$$. Edit In addition to overlooking the 2nd part of the OP's question, as indicated in my 2nd edit (below), I also overlooked that in the first part of the OP's question, he is (also) specifically focusing on irreducible Pythagorean triplets. 
I've been number crunching irreducible Pythagorean triples

However, if the Pythagorean triplet is presumed irreducible, then the problem is completely resolved by this article which indicates that the difference will either have form $$[(m^2 + n^2) - (m^2 - n^2)] = 2n^2$$ or will have form $$[(m^2 + n^2) - (2mn)] = (m - n)^2.$$

Edit After reading John Omielan's answer, and then re-reading the question, I realized that my answer is incomplete. However, there is no point in my trying to complete my answer, since John Omielan's answer covers the exact same ground.

We have $$a^2+b^2=c^2$$, hence $$(c-b)(c+b)=a^2$$. Since the triple is irreducible, the GCD of $$c-b$$ and $$c+b$$ is at most $$2$$: any divisor of both $$c-b$$ and $$c+b$$ would also have to be a divisor of $$(c+b)-(c-b)=2b$$, and $$GCD(c-b, b)=GCD(c+b,b)=GCD(b,c)=1$$. So we can say the following about $$c-b$$ and $$c+b$$:

• The product of these two numbers is a square. That is, the power of each prime factor of their product is even.
• The two numbers do not share any prime factors apart from, possibly, $$2$$.

This means that, with a possible exception of $$2$$, every prime factor's power of both $$c-b$$ and $$c+b$$ is even. If the numbers are odd (their GCD is 1), the same holds for $$2$$ (its power is $$0$$), so each of them is a perfect square. If the numbers are even (their GCD is 2), divide both by $$2$$ first, and then similarly conclude that $$(c-b)/2$$ as well as $$(c+b)/2$$ is a square.

As for the second part of the question, just find any solution to either $$c-b=n^2$$ or $$c-b=2n^2$$ for an arbitrary $$n$$. In particular, the triples $$(n^2+2n, 2n+2, n^2+2n+2)$$ and $$(2n^2+2n, 2n+1, 2n^2+2n+1)$$ respectively satisfy the condition.
Note though that for even $$n$$ there are no irreducible triples with $$c-b=n^2$$: since $$c+b$$ has to be a square of the same parity, both $$c-b$$ and $$c+b$$ are divisible by $$4$$, then $$2b=(c+b)-(c-b)$$ and $$2c=(c+b)+(c-b)$$ are divisible by $$4$$, which means $$b$$, $$c$$, and consequently $$a$$ are all even. If we replace the usual $$(m,n)$$ of Euclid's formula with $$(2n-1+k,k)$$, we get \begin{align*} A=(2n-1)^2+ & 2(2n-1)k \\ B= \qquad\quad\quad & 2(2n-1)k+ 2k^2\\ C=(2n-1)^2+ & 2(2n-1)k+ 2k^2\\ \end{align*} which produces the mostly-primitive table of Pythagorean triples below. Notice that half of all triples have $$A>B$$ and half have $$B>A$$. $$\begin{array}{c|c|c|c|c|c|} n & k=1 & k=2 & k=3 & k=4 & k=5 \\ \hline Set_1 & 3,4,5 & 5,12,13& 7,24,25& 9,40,41& 11,60,61 \\ \hline Set_2 & 15,8,17 & 21,20,29 &27,36,45 &33,56,65 & 39,80,89 \\ \hline Set_3 & 35,12,37 & 45,28,53 &55,48,73 &65,72,97 & 75,100,125 \\ \hline Set_{4} &63,16,65 &77,36,85 &91,60,109 &105,88,137 &119,120,169 \\ \hline Set_{5} &99,20,101 &117,44,125 &135,72,153 &153,104,185 &171,140,221 \\ \hline Set_{6} &143,24,145 &165,52,173 &187,84,205 &209,120,241 &231,160,281 \\ \hline \end{array}$$ From the formula we can see that $$\quad C-A=2k^2\quad$$ and that $$\quad C-B=(2n-1)^2.\quad$$ In the first case the difference is twice the square of a natural number, and in the second it is an odd number squared.
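A quick brute-force check of the answers above (an editorial sketch, not part of the thread): generate primitive triples from Euclid's formula and confirm that the hypotenuse minus the larger leg is always an odd square or twice a square.

```python
from math import gcd, isqrt

def primitive_triples(max_m):
    """Primitive triples (a, b, c) from Euclid's formula with
    m > n, gcd(m, n) = 1 and m, n of opposite parity."""
    for m in range(2, max_m + 1):
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                yield m*m - n*n, 2*m*n, m*m + n*n

def odd_square_or_twice_square(d):
    """True iff d is the square of an odd number or twice a square."""
    r = isqrt(d)
    if r * r == d and r % 2 == 1:
        return True
    r = isqrt(d // 2)
    return 2 * r * r == d

print(all(odd_square_or_twice_square(c - max(a, b))
          for a, b, c in primitive_triples(60)))   # True
```

When the even leg $2mn$ is the larger one the difference is $(m-n)^2$ with $m-n$ odd, and when $m^2-n^2$ is larger it is $2n^2$, so every generated triple passes the check.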
https://www.physicsforums.com/threads/friction-ramp-problem-im-challenging-a-score-received-on-a-test.349216/
# Friction Ramp Problem (I'm challenging a score received on a test) 1. Oct 26, 2009 ### shankman Hello! First time, long time! This is kind of a long post. I did what I could to keep it clear. TYIA! I was marked totally wrong on a test question and I think I may have been correct. I'm trying to get my ducks in a row before I ask the professor to review this with/for me. Everyone in the class seems to have gotten different answers so I don't have anything to compare it to. I believe that in this system, there is no acceleration because the friction is too great. This calculator I found online agrees with my answers: http://hyperphysics.phy-astr.gsu.edu/hbasees/incpl2.html#c1 Please, help me stick it to the man! Or, keep me from making a jerk out of myself. 1. The problem statement, all variables and given/known data The hanging 300g mass is connected to a 500g mass on a 35 degree downwards incline by an ideal string/pulley arrangement. Calculate the acceleration of the masses and the tension in the string when the system is released. The friction coefficient between the 500g mass and the ramp is Mk=.150. Here is a picture of the situation: m1=500g m2=300g 2. Relevant equations F=ma w=mg N=mg(cos@) <----- @=theta Force parallel to ramp = mg(sin@) Friction=(N)(Mk) 3. The attempt at a solution For the 500g block: W = (.5)(9.8) = 4.9N N = (.5)(9.8)(cos35) = 4.01N Force parallel to ramp = (.5)(9.8)(sin35) = 2.81N Friction on ramp = (4.01)(.150) = .601N For the 300g (hanging) block: W = (.3)(9.8) = 2.94N OK, if we were totally frictionless we would get: Fnet = 2.94 - 2.81 = .13N towards the hanging 300g block. This means: .13N = (.8kg)a a = .1625 m/s^2 But, we have friction and friction works against any motion. I believe that in this system, the friction is too great to overcome with these masses. We are in the zone where the blocks will not move. 
Since the system wants to move towards the 300g mass, friction opposes it: Fnet = 2.94N – 2.81N - .601N = -.471N Therefore, it will not accelerate towards the 300g hanging mass because the friction is too great. AND, I just don’t get to add the friction as a force going down the hill and say: Fnet = 2.81N + .601N – 2.94N = .471N This is due to the fact that the friction would then oppose the downhill motion and bring me back to the first situation. Therefore, we have too much friction in this system. Am I correct in my logic and reasoning? Since there is no movement, the Tensions are equal to the weights of the masses. T1 = 2.81N T2 = 2.94N Last edited: Oct 26, 2009

2. Oct 26, 2009

### rock.freak667

Mass 1 down the plane $$m_1a = m_1 g\sin\theta - \mu m_1 g\cos\theta - T$$ hanging mass $$m_2 a = T - m_2 g$$ Solve. I don't think there would be too much friction. But I don't have a calculator at hand to check it.

3. Oct 26, 2009

### shankman

Hello! Thanks for responding. I get that. But, I don't think this is a problem where you can just plug the numbers into the formula. I think it's one of those special cases where you have intermediate values that prevent acceleration. With no friction: a = [(m2)g-(m1)(g)(sin@)] / [m1+m2] with numbers a = [(.3)(9.8) - (.5)(9.8)(sin35)] / [.5 +.3] a = .1618 With friction: a = [(m2)g-(m1)(g)(sin@) - (Mu)(m1)(g)(cos@)] / [m1+m2] With numbers a = [(.3)(9.8) - (.5)(9.8)(sin35) - (.15)(.5)(9.8)(cos 35)] / [.5 +.3] a = -.591 The magnitude of the acceleration increases when you add friction. Plus the acceleration changes directions. I don't see how that is possible. That's why I think this is in the intermediate range where the weights don't overcome friction. Do you see the point I'm trying to make? Is this correct? Or, am I just crazy?

4. Oct 27, 2009

### PhanthomJay

No, you are quite sane. I tended to agree with rockfreak until I cranked out the numbers.
In problems such as these, depending on the values, the mass on the ramp could move up the plane, down the plane, or stand still. You have to work it out, as you did. In this case, there is just enough static friction force available to keep the system still (in equilibrium), where the static friction force on the mass on the ramp is less than (mu_s)N. Your tension calculation is wrong, however; the tension on both sides of the pulley must be the same, whether the masses are moving or still. Don't forget that there is still some static friction acting on the mass on the ramp.

5. Oct 28, 2009

### shankman

UPDATE: I talked to the professor. She said that the only time there is zero movement is when the frictional force is equal to (not greater than) the forces creating movement. If there is any difference, there is movement (acceleration). So, no extra points for me. Honestly, I don't buy this explanation because it entirely negates the idea that friction could hold something in place. I was under the impression that friction always acts against any motion. Not just against the direction of the initial tug. For example, by her logic, if I have my initial situation but with Mk=.95 (say it's covered in velcro), the block would fly down the ramp at 32.23 m/s^2 (I did the math). Obviously, this does not happen. Hmmm... maybe I'll try her one more time with this example.

6. Oct 28, 2009

### willem2

You're entirely correct. There's no need to solve rock.freak667's equations if the friction force is too great for movement, nor is it valid because the maximum of the friction force is $-\mu m g \cos \theta$; the force can be smaller.

7. Oct 28, 2009

### shankman

Exactly! My professor is essentially saying that the force of friction between the block and ramp is exceeding the other force on the block. Therefore, the force of friction is causing the block to move. This is impossible; the frictional force cannot do this. At least I feel vindicated even if my grade didn't change.
I think I'll let this rest now... that is unless at the end of the semester, I'm 1% from the next letter grade. 8. Mar 30, 2010 ### sickle lol i feel for ya shankman... you got a pretty stupid prof lol sound like my high school physics teacher....
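As a postscript, the thread's conclusion is easy to verify numerically. The sketch below is an editorial addition (it assumes, as the thread implicitly does, that the given $\mu_k = 0.150$ also bounds the static friction): compare the net driving force against the maximum available friction before ever dividing by the total mass.

```python
import math

g = 9.8
m1, m2 = 0.500, 0.300            # ramp block, hanging block (kg)
theta = math.radians(35)
mu = 0.150                       # treating the given mu_k as the static limit too

drive = m2 * g - m1 * g * math.sin(theta)   # net force ignoring friction
f_max = mu * m1 * g * math.cos(theta)       # largest friction force available

print(round(drive, 3), round(f_max, 3))
if abs(drive) <= f_max:
    a = 0.0
    print("friction holds the system: a = 0")
else:
    a = (abs(drive) - f_max) / (m1 + m2)
    print(f"a = {a:.3f} m/s^2")
```

The driving imbalance (about 0.13 N) is well below the available friction (about 0.60 N), so the system stays at rest, matching shankman's and willem2's reasoning.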
https://cs.stackexchange.com/questions/75186/can-you-give-an-inductive-definition-to-define-the-length-of-a-list-l
# Can you give an inductive definition to define the length of a list L?

Having trouble understanding what it means to define something inductively in the following context. Can you give an inductive definition to define the length of a list L?

a. The total number of items in L is the length of L
b. Basis: the length of an empty list is 0 Induction: the length of a list is 2*(length of half the list)
c. Basis: the length of an empty list is 0 Induction: the length of a list is (length of head(list))+(length of tail(list))
d. Basis: the length of an empty list is 0 Induction: the length of a list is 1+(length of tail(list))

The answer is (d). Why might this be? And also for this similar question: Can you give an inductive definition to define what it means for an element X to be a member of a list L?

a. X is a member of a list L if and only if X belongs to the list L.
b. X is a member of a list L if either X is the head of the list or X is a member of the tail of L.
c. X is a member of a list L if we can find an element Y in L and X=Y.
d. X is a member of a list L if we can find an element Y in the tail of L

Thank you.

• A multiple choice question beginning with "can you..." should have two possible answers: yes and no. – Kai May 10 '17 at 9:40

In an inductive definition, what you have to do is find a base case and the inductive step. In your case, suppose you have a list $L$ and let head be the first element and tail the rest of the list. For example, if $L=[1,2,3,4,5]$:

• head = $1$
• tail = $[2,3,4,5]$

In order to understand the meaning you could consider writing a recursive function to compute the length of the list.
• Basis: the length of an empty list is 0
• Induction: the length of a list is 1+(length of tail(list))

This corresponds to the recursive function

    function len(list L)
        if L is empty
            return 0
        else
            return 1 + len(tail(L))

Same for the second question (note the extra case that checks the head):

    function member(elem x, list L)
        if L is empty
            return false
        else if x = head(L)
            return true
        else
            return member(x, tail(L))

and this corresponds to saying: X is a member of a list L if either

• X is the head of the list or
• X is a member of the tail of L.
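These pseudocode definitions translate directly into a runnable language; here is a Python version (an illustrative sketch) of definition (d) for length and definition (b) for membership:

```python
def length(lst):
    """Basis: the empty list has length 0.
    Induction: 1 + the length of the tail."""
    if not lst:
        return 0
    return 1 + length(lst[1:])

def member(x, lst):
    """Basis: nothing is a member of the empty list.
    Induction: x is the head, or x is a member of the tail."""
    if not lst:
        return False
    return lst[0] == x or member(x, lst[1:])

print(length([1, 2, 3, 4, 5]))   # 5
print(member(3, [1, 2, 3]))      # True
print(member(7, [1, 2, 3]))      # False
```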
https://math.stackexchange.com/questions/3233586/let-t-be-a-normal-random-variable-that-describes-the-temperature
# Let $T$ be a normal random variable that describes the temperature…

Let $$T$$ be a normal random variable that describes the temperature in Rome on the 2nd of June. It is known that on this date the average temperature is equal to $$µ_T = 20$$ centigrade degrees and that $$P (T ≤ 25) = 0.8212$$. How can I calculate the variance of $$T$$?

From $$P(T \leq 25)=0.8212$$, you can find the $$z$$-score of $$25$$ (reverse-lookup in a $$z$$-score table). The $$z$$-score of $$25$$ is also given by $$z=\frac{25-\mu_T}{\sigma}$$. Set these two expressions for the $$z$$-score equal to each other and solve for $$\sigma$$. Finally, square it to get $$\sigma^2$$.

• Yes. Since you found the z score, you know $0.92 = (25-\mu_T)/\sigma$. You already know $\mu_T$, and $\sigma$ is what you are trying to find. – bob.sacamento May 20 at 21:22

Consider the standardised version of $$T$$, $$\Pr(Z \leq \alpha) = 0.8212$$, where we subtract the mean and divide by the standard deviation $$\sigma$$, as $$\Pr(T \leq 25) = \Pr(\underbrace{\frac{T - \mu_T}{\sigma}}_Z \leq \underbrace{\frac{25 - 20}{\sigma}}_\alpha) = 0.8212$$ Using the z table, you can easily find $$\alpha$$, which gives you $$\sigma$$.
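For readers who prefer to check the table lookup in code, Python's standard library exposes the inverse CDF of the standard normal; this is an illustrative sketch, not part of the original answers:

```python
from statistics import NormalDist

mu_T = 20
z = NormalDist().inv_cdf(0.8212)   # z with P(Z <= z) = 0.8212, about 0.92
sigma = (25 - mu_T) / z            # solve z = (25 - mu_T) / sigma
print(round(z, 2), round(sigma, 2), round(sigma**2, 1))
```

This reproduces the hand calculation: $z \approx 0.92$, $\sigma \approx 5.43$, so the variance is roughly $29.5$.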
https://freevcenotes.com/methods/notes/discrete-probability
# Sample Spaces and Events

The outcome of a random experiment is uncertain, but there exists a set of possible outcomes, $$\varepsilon$$, known as the sample space. The sum of the probabilities of all the outcomes in $$\varepsilon$$ is 1. For example, the sample space for rolling a six-sided dice is: $\varepsilon = \{1,2,3,4,5,6\}$

An event is a subset of the sample space denoted by a capital letter. For example, if the event A is defined as the odd numbers when rolling a six-sided dice, then we have: $A = \{1,3,5\}$

If the event B is impossible, $$\text{Pr}$$(B) = 0. If the event B is certain, $$\text{Pr}$$(B) = 1. So, for any event B, 0 $$\leqslant$$ $$\text{Pr}$$(B) $$\leqslant$$ 1.

# Determining Probabilities

When the sample space is finite, the probability of an event is the sum of the probabilities of the outcomes in that event. For example, if A is defined as the odd numbers when rolling a six-sided dice, then: \begin{aligned} \Pr(A) &= \Pr(\text{Roll 1}) + \Pr(\text{Roll 3}) + \Pr(\text{Roll 5})\\ &= \frac{1}{6} + \frac{1}{6} + \frac{1}{6}\\ &= \frac{3}{6}\\ &= \frac{1}{2}\\ \end{aligned}\\

When dealing with area questions, assume that it is equally likely to hit any region of the defined area. So, the probability of hitting a certain region A is: $\text{Pr}(A) = \frac{\text{Area of A}}{\text{Total area}}$

When an experiment has only two possible outcomes (events), they are said to be complementary. The complement of the event A is denoted by A'. In the example with the six-sided dice above, A' would represent everything in the sample space, except the odd numbers.
So, $A' = \{2,4,6\}$

Because the sum of the probabilities of events A and A' must be 1, we have: $\text{Pr}(A') = 1 - \text{Pr}(A)$

# The Addition Rule and Mutual Exclusivity

The addition rule is generally used to calculate $$\Pr(A \cap B)$$ or $$\Pr(A \cup B)$$ $\Pr(A) + \Pr(B) - \Pr(A \cap B) = \Pr(A \cup B)$

We say that two events are mutually exclusive if: $\Pr(A \cap B) = 0$ That is, the two events will never occur at the same time.

# Probability Tables

A very powerful table which isn't emphasised enough!

|       | $A$              | $A'$              |           |
|-------|------------------|-------------------|-----------|
| $B$   | $\Pr(A \cap B)$  | $\Pr(A' \cap B)$  | $\Pr(B)$  |
| $B'$  | $\Pr(A \cap B')$ | $\Pr(A' \cap B')$ | $\Pr(B')$ |
|       | $\Pr(A)$         | $\Pr(A')$         | $1$       |

For appropriate questions, place the probabilities given in their corresponding box. The sum of each column and row is the last entry. For example: $\Pr(A \cap B) + \Pr(A \cap B') = \Pr(A)$

$\text{Example 9.1: John has lost his class timetable. The probability that}\\\text{ he will have Methods period one is 0.35.}\\ \text{The probability that he has PE on a given day is 0.1 and the probability}\\ \text{ that he will have Methods period one and PE on the same day is 0.05.}\\ \text{Find the probability that John has neither Methods period one nor PE on a given day.}\\ \text{ }\\ \text{Let } M \text{ represent methods period one and } P \text{ represent PE. From the information given we have:}\\$
From the information given we have:}\\$ $M$ $M’$ $P$ $0.05$ $\Pr(M’ \cap P)$ $0.1$ $P’$ $\Pr(M \cap P’)$ $\Pr(M’ \cap P’)$ $\Pr(P’)$ $0.35$ $\Pr(M’)$ $1$ \text{Looking at the second row}\\ \begin{aligned} 0.05 + \Pr(M’ \cap P) &= 0.1\\ \Pr(M’ \cap P) &= 0.05\\ \text{} \\ \text{Looking at the last row}\\ 0.35 + \Pr(M’) &= 1\\ \Pr(M’) &= 0.65\\ \end{aligned} $M$ $M’$ $P$ $0.05$ $0.05$ $0.1$ $P’$ $\Pr(M \cap P’)$ $\Pr(M’ \cap P’)$ $\Pr(P’)$ $0.35$ $0.65$ $1$ \text{Looking at the third column}\\ \begin{aligned} 0.05 + \Pr(M’ \cap P’) &= 0.65\\ \Pr(M’ \cap P’) &= 0.6\\ \end{aligned}\\ \text{So, the probability that John does not have Methods period one and PE on a given day is } 0.6\\ # Conditional Probability The probability that event A happens when we know that event B has already occured: $\Pr(A \mid B) = \frac{\Pr(A \cap B)}{\Pr(B)}$ It is often difficult to recognise when we are being asked a conditional probability question. However, generally speaking, if the question includes “if” or “given that”, you can be almost certain that you are dealing with conditional probability. We have written the same question below twice using the two different phrases. \text{Example 9.2 Find the probability that a six is rolled with a} \\\text{ six sided die given that an even number has been rolled. }\\ \text{Or equivelantly, If an even number has been rolled on a six sided die}\\\text{ find the probability that a six is rolled. }\\ \text{ }\\ \begin{aligned} \Pr(\text{Even}) &= \frac{3}{6} \\ \Pr(\text{Even and Six}) &= \Pr(\text{Six}) \\ &= \frac{1}{6} \\ \Pr(\text{Six if Even}) &= \frac{\Pr(\text{Even and Six})}{\Pr(\text{Even})} \\ &= \frac{\frac{1}{6}}{\frac{3}{6}}\\ &= \frac{1}{3}\\ \end{aligned}\\ # Independence If knowing that event B has happened does not change the probability of event A from happening, then we say that events A and B are independent. 
For example, it raining outside and you going to school is independent, as you will go to school regardless of whether it is raining or not. However, you being in class and having lunch is not independent, as you will (probably) not be able to have lunch during class. Mathematically, two events are independent if: \begin{aligned} \Pr(A \cap B) &= \Pr(A) \cdot \Pr(B)\\ \Pr(A) \neq 0 &\text{ and } \Pr(B) \neq 0 \end{aligned}

\text{Example 9.3: John has lost his class timetable.}\\\text{ The probability that he will have Methods period one is 0.35.}\\ \text{The probability that he has PE on a given day is 0.1 and the probability} \\\text{that he will have Methods period one and PE on the same day is 0.035.}\\ \text{Is John having Methods period one independent of him having PE on the same day?}\\ \text{ } \\ \text{A. Let } M \text{ represent methods period one and } P \text{ represent PE. From the information given we have:}\\ \begin{aligned} \Pr(M) &= 0.35\\ \Pr(P) &= 0.1\\ \Pr(M) \cdot \Pr(P) &= 0.035 \\ &= \Pr(M \cap P) \\ \end{aligned}\\ \text{So, John having Methods period one and having PE on the same day are independent events} \\

# Discrete Random Variables

A random variable is a function that assigns a number to each outcome of an experiment. A discrete random variable can take one of a countable number of possible outcomes. Continuous random variables will be considered in the next section. For example, the number of free throws James can score when taking two is a discrete random variable which may take one of the values 0, 1 or 2. More on this below.

# Discrete Probability Distributions

The probability distribution for a random variable consists of all the values the variable can take along with the associated probabilities.
The general format is:

| $x$          | $x_1$          | $x_2$          | $...$ | $x_n$          |
|--------------|----------------|----------------|-------|----------------|
| $\Pr(X=x)$   | $\Pr(X=x_1)$   | $\Pr(X=x_2)$   | $...$ | $\Pr(X=x_n)$   |

The table allows us to easily find probabilities such as $$\text{Pr}(X>1)$$ and $$\text{Pr}(X<2)$$ by summing the relevant probabilities in the table. Note: the bottom row must sum to 1 and each probability must be at least zero and at most one.

A discrete probability function, also called a probability mass function, describes the distribution of a discrete random variable. An example of a graph for a discrete probability function is given below.

\text{Example 9.4: James scores 80\% of all free throws he takes.}\\ \text{Create a probability distribution table and graph the probability mass function if James has two free throws.}\\ \text{ }\\ \text{Let } X \text{ be the number of free throws James scores}\\ \begin{aligned} \varepsilon &= \{0,1,2\}\\ Pr(X = 2) &= 0.80 \cdot 0.80 \\ &= 0.64\\ Pr(X = 1) &= \binom{2}{1} \cdot 0.8 \cdot 0.2\\ &= 0.32\\ Pr(X = 0) &= 0.2 \cdot 0.2 \\ &=0.04\\ \text{ }\\ \end{aligned}\\ \text{Bear with us on how we calculated } \Pr(X = 1). \\ \text{We will explain how we calculated this in the next chapter - binomial distribution}\\

| $x$        | $0$    | $1$    | $2$    |
|------------|--------|--------|--------|
| $\Pr(X=x)$ | $0.04$ | $0.32$ | $0.64$ |

# Mean, Variance and Standard Deviation

The expected value, or mean, is the average value of a discrete random variable. To calculate it, we sum the products of each value of X and its associated probability. That is, we first find the product of each column of the probability distribution table and then sum them up. Mathematically: $E(X) = \mu = \sum_{x} x \cdot \Pr(x)$

Variance and standard deviation are a measure of spread. Standard deviation is more relevant to us as it is in the same units as those which we are measuring.
The formula provided by VCAA involves a very large number of calculations, so we recommend memorising: $\text{Var}(X) = E(X^2) - [E(X)]^2$

To calculate the first term, simply square all of the x values in the top row of the probability distribution table and then find the product of each column of the probability distribution table before summing them up. To get the standard deviation we simply take the square root of the variance. $\text{sd}(X) = \sigma = \sqrt{\text{Var}(X)}$

\text{Note:}\\ \begin{aligned} E(aX+b) &= a \cdot E(X) + b\\ \text{Var}(aX+b) &= a^2 \cdot \text{Var}(X)\\ \end{aligned}

For many random variables, there is a 95% chance of obtaining an outcome within two standard deviations either side of the mean. That is, $\Pr(\mu - 2\sigma \leqslant X \leqslant \mu + 2\sigma) \approx 0.95$

\text{Example 9.5: James scores 80\% of all free throws he takes. James takes two shots.}\\ \text{Find the expected value, variance and standard deviation for this scenario.}\\ \text{Correct your answers to 2 decimal places where appropriate}\\ \text{ }\\ \text{The probabilities for each outcome were calculated in example 9.4.}\\ \text{Let } X \text{ be the number of free throws James scores.}\\ \text{ }\\ \begin{aligned} E(X) &= 0 \times 0.04 + 1 \times 0.32 + 2 \times 0.64\\ &= 0 + 0.32 + 1.28\\ &= 1.6\\ \text{ }\\ E(X^2) &= 0^2 \times 0.04 + 1^2 \times 0.32 + 2^2 \times 0.64\\ &= 2.88\\ \text{ }\\ \text{Var}(X) &= E(X^2) - [E(X)]^2\\ &= 2.88 - 1.6^2\\ &= 0.32\\ \text{ }\\ \text{sd}(X) &= \sqrt{\text{Var}(X)}\\ \text{sd}(X) &= 0.57\\ \end{aligned}
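The arithmetic of Example 9.5 can be reproduced in a few lines; this Python sketch (our addition) computes the mean, variance and standard deviation straight from the distribution table of Example 9.4:

```python
dist = {0: 0.04, 1: 0.32, 2: 0.64}              # Pr(X = x) from Example 9.4

mean = sum(x * p for x, p in dist.items())       # E(X)
e_x2 = sum(x * x * p for x, p in dist.items())   # E(X^2)
var = e_x2 - mean**2                             # Var(X) = E(X^2) - [E(X)]^2
sd = var ** 0.5

print(round(mean, 2), round(var, 2), round(sd, 2))   # 1.6 0.32 0.57
```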
https://math.stackexchange.com/questions/2926971/combinatorics-h-men-m-women-and-n-chairs-in-a-circular-table
# Combinatorics - h men, m women and n chairs in a circular table

Question: How many ways are there to seat h men and m women at a circular table with n chairs in such a way that no woman sits next to another woman?

The objective of this post: I would like for someone to check if what I've done, including my reasoning, is correct. If it is, I'd like to know if there is an easier solution. If it's not, I'd like to know what my mistake is. Thank you:

I started thinking about the cases: First I thought about what would be the answer if $$n<h+m$$. There would be no answer since at least a man or a woman wouldn't sit. Next I thought about the case where $$n=h+m$$. That is easy and I used the following strategy:

1) In the beginning, each seat is the same, so you sit any woman to distinguish the seats.

2) After the first woman has been seated, each seat is different, so you sit the remaining women: $$(m-1)!$$.

3) Now we just need to separate the women by sitting men between them. We use the stars and bars method to get all the possibilities of how many men are going to be between each woman: $${h-1 \choose m-1}$$.

4) Then we create a line of men and tell them to sit according to the past result. Hence the answer is: $$(m-1)!\cdot {h-1 \choose m-1} \cdot h!$$

Next I thought about the cases where $$n>h+m$$. That case only differs from the previous one because there are more chairs, so an answer, which is a sequence, would differ because of where those extra chairs are positioned... Hence I used a similar strategy:

1) In the beginning, each seat is the same, so you sit any woman to distinguish the seats.

2) After the first woman has been seated, each seat is different, so you sit the remaining women: $$(m-1)!$$.

3) Now, instead of sitting the men, we're going to position chairs between each woman using the stars and bars method. There are $$n-m$$ remaining seats that we want to position between each woman in a circular table, hence: $${n-m-1 \choose m-1}$$.
4) Finally, we let each man choose one of the remaining seats: $$P_{h}^{n-m}$$. 5) Therefore, the result is:$$(m-1)!\cdot{n-m-1\choose m-1}\cdot P_{h}^{n-m}$$. EDIT: The reason for this edit is that I noticed that N. F. Taussig's answer and the one that I gave are actually the same! Let's introduce the same change of variables. Let n be the number of seats; let m be the number of men; let w be the number of women. First case: My answer: \begin{align*} (w-1)!\cdot {m-1 \choose w-1} \cdot m! &= \frac{(w-1)!(m-1)!(m!)}{(w-1)!(m-w)!}\\ &=\frac{(m-1)!(m!)}{(m-w)!} \end{align*} N. F. Taussig's answer: \begin{align*} (m - 1)!\binom{m}{w}w! &= \frac{(m-1)!(m!)(w!)}{w!(m-w)!}\\ &= \frac{(m-1)!(m!)}{(m-w)!} \end{align*} Second Case: My answer: \begin{align*} (w-1)!\cdot{n-w-1\choose w-1}\cdot P_{m}^{n-w} &= \frac{(w-1)!(n-w-1)!(n-w)!}{(w-1)!(n-2w)!(n-w-m)!}\\ &=\frac{(n-w-1)!(n-w)!}{(n-2w)!(n-w-m)!} \end{align*} N. F. Taussig's answer: \begin{align*} \binom{n - w - 1}{m - 1}(m - 1)!\binom{n - w}{w}w! &= \frac{(n-w-1)!(m-1)!(n-w)!w!}{(m-1)!(n-w-m)!w!(n-2w)!}\\ &= \frac{(n-w-1)!(n-w)!}{(n-w-m)!(n-2w)!}\\ \end{align*} Anyway, thank you for answering, N. F. Taussig! Thank you for your time and support! • If there is an empty seat between two women, do you consider the women to be sitting next to each other? – N. F. Taussig Sep 22 '18 at 21:41 • @N.F.Taussig No, a seat separates two women just as a man would do. – Bruno Reis Sep 22 '18 at 21:43 Let's introduce a change of variables. Let $$n$$ be the number of seats; let $$m$$ be the number of men; let $$w$$ be the number of women. As you observed, the problem only has a solution if $$n \geq m + w$$. Case 1: $$n = m + w$$ Since the women must be separated by the men, there must be at least as many men as women, so we require that $$m \geq w$$. Hand each person a chair. Suppose Andrew is one of the $$m$$ men. Seat him first. The other men can be seated around the table in $$(m - 1)!$$ ways as we proceed clockwise around the table from Andrew.
Seating the $$m$$ men creates $$m$$ spaces in which we can place a woman. To separate the women, we must choose $$w$$ of those $$m$$ spaces, which can be done in $$\binom{m}{w}$$ ways. The women can be arranged in those $$w$$ spaces in $$w!$$ ways as we proceed clockwise around the table from Andrew. Hence, there are $$(m - 1)!\binom{m}{w}w!$$ seating arrangements in which no two of the women are adjacent. Case 2: $$n \geq m + w$$ Since the women must be separated by the men or by an empty seat, we require that $$w \leq \left\lfloor \dfrac{n}{2} \right\rfloor$$. Place $$n - w$$ chairs at the table. Place Andrew in one of them (it does not matter which one since we will use Andrew as our reference point). That leaves $$n - w - 1$$ seats where the remaining $$m - 1$$ men can be seated. Choose $$m - 1$$ of these $$n - w - 1$$ seats for the men, which can be done in $$\binom{n - w - 1}{m - 1}$$ ways. The men can be arranged in these seats in $$(m - 1)!$$ ways as we proceed clockwise around the table from Andrew. Hand each woman a chair. We now have $$n - w$$ spaces in which a woman can be placed, one to the left of each of the $$n - w$$ chairs at the table. To separate the women, choose $$w$$ of these spaces, which can be done in $$\binom{n - w}{w}$$ ways. The women can be arranged in these chosen spaces in $$w!$$ ways as we proceed clockwise around the table from Andrew. Hence, there are $$\binom{n - w - 1}{m - 1}(m - 1)!\binom{n - w}{w}w!$$ seating arrangements in which no two of the women are adjacent. As a check, observe that if $$m + w = n$$, then our formula reduces to $$\binom{m + w - w - 1}{m - 1}(m - 1)!\binom{m + w - w}{w}w! = \binom{m - 1}{m - 1}(m - 1)!\binom{m}{w}w! = (m - 1)!\binom{m}{w}w!$$ • Thank you for answering, but please, check the edit :). Ty anyway! – Bruno Reis Sep 22 '18 at 22:58
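The two closed forms (in the changed variables: n seats, m men, w women) can be spot-checked against a direct enumeration. The sketch below is not part of the original thread; it counts seatings up to rotation, with `_` marking an empty chair:

```python
from itertools import permutations
from math import comb, factorial, perm

def brute(n, m, w):
    """Count circular seatings (up to rotation) with no two women adjacent."""
    people = [f"M{i}" for i in range(m)] + [f"W{i}" for i in range(w)] \
             + ["_"] * (n - m - w)
    seen = set()
    for arr in set(permutations(people)):
        # reject seatings where two women occupy adjacent chairs
        if any(arr[i].startswith("W") and arr[(i + 1) % n].startswith("W")
               for i in range(n)):
            continue
        # canonical representative of the rotation class
        seen.add(min(tuple(arr[(i + k) % n] for i in range(n))
                     for k in range(n)))
    return len(seen)

def taussig(n, m, w):
    # C(n-w-1, m-1) * (m-1)! * C(n-w, w) * w!
    return comb(n - w - 1, m - 1) * factorial(m - 1) * comb(n - w, w) * factorial(w)

def asker(n, m, w):
    # (w-1)! * C(n-w-1, w-1) * P(n-w, m), the asker's formula in the edit's variables
    return factorial(w - 1) * comb(n - w - 1, w - 1) * perm(n - w, m)

for n, m, w in [(5, 3, 2), (6, 2, 2), (7, 3, 2), (7, 2, 3)]:
    assert brute(n, m, w) == taussig(n, m, w) == asker(n, m, w)
print(taussig(6, 2, 2))  # 36
```

For (n, m, w) = (6, 2, 2) the direct count, the answer's formula, and the asker's formula all give 36.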
2019-06-18T08:42:06
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2926971/combinatorics-h-men-m-women-and-n-chairs-in-a-circular-table", "openwebmath_score": 0.8213850259780884, "openwebmath_perplexity": 1771.5107348592674, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9854964198826467, "lm_q2_score": 0.8539127585282744, "lm_q1q2_score": 0.8415279664217294 }
https://www.storyofmathematics.com/glossary/parallel/
# Parallel|Definition & Meaning ## Definition Two lines or line segments are said to be parallel to each other if the perpendicular distance between them remains the same throughout their length. Two lines are called parallel to each other if we can prove that the perpendicular distance between them at all points is the same, they do not intersect each other at any point, they are pointing in the same direction, and they never converge or diverge. All of the above conditions are basically describing the same thing in different words. The underlying mathematical condition or constraint remains the same. The following figure shows two line segments that are parallel to each other. Figure 1: Two Parallel Lines It can be seen clearly that both lines have the same direction, all points on both lines have equal perpendicular distance from the adjacent line, they are neither converging nor diverging, and they definitely do not seem to have any point of intersection (at least not in the frame). ## Explanation of Parallel Lines The Greek Posidonius is credited by Proclus with defining parallel lines as equally spaced lines. However, the modern concept of parallelism was formalized by Euclid’s parallel postulate, which focuses on parallel lines. In the perspective of geometry in particular and mathematics in general, parallel lines may be classified as co-planar straight lines that do not cross each other at any point. In other words, any pair of co-planar lines that do not intersect are termed parallel lines. This concept can easily be extended to planes. That is, any pair of planes in the same three-dimensional space that never cross each other are said to be parallel planes. There is a predetermined minimum separation or perpendicular distance between parallel lines that they maintain from minus infinity to plus infinity, and they do not touch each other or converge at any point.
In three-dimensional Euclidean space, a line and a plane are said to be parallel if they do not share a point. On the other hand, two non-co-planar lines that do not intersect are called skew lines. Parallel lines are important because of the unique set of deductions and geometrical laws that they follow. They help us as reference objects in many geometrical problems and help simplify more complex problems. One example of this kind of geometry is Euclidean geometry, and parallelism is a characteristic of affine geometries. Similar parallelism qualities may be seen in lines in other geometries, such as hyperbolic geometry. ### Real-life Examples Parallelism is very common in many real-world applications. The figure given below lists two such common examples. Figure 2: Real-Life Examples of Parallel Lines Here you can see on the left in the figure that there is a ladder. The vertical supports of the ladder are parallel to each other. If they were not parallel, they would not support each other, and the structure would break. The rungs of this ladder represent the perpendicular distance between the lines passing through the supporting legs. These rungs are also parallel to each other. Notice that the distance between the parallel supports remains the same throughout the length of the ladder, which is proven by the fact that the length of the rungs remains the same. The figure shows a transmission line on the right side. It can be noticed that the hanging power lines on the transmission supports are also parallel to each other. Two such lines are highlighted in red and blue color for clarity. The perpendicular distance between these lines is kept constant and is depicted by the cross arm length of the supporting tower that, as we know, remains constant. ## Euclidean Postulates of Parallelism (Properties of Parallel Lines) In this section, we present a more mathematically rich perspective of parallelism with respect to straight lines.
We formally introduce parallelism and the properties of parallel lines in the following paragraphs. These properties can also be used to verify or check whether two lines are parallel or not. (a) For two lines to be parallel, each point on one of the lines must maintain a constant minimum distance from the other line. That is, both lines should maintain equal distance at all points. (b) For two lines to be parallel, there must not exist any point that satisfies both line equations. That is, there shouldn’t exist any point of intersection, and the lines must never converge. (c) If a straight line crosses two parallel lines, the corresponding angles created by this line with both of the parallel lines must be congruent. Congruent means that the angles will be identical to each other. This property is explained in the following figure. (d) For two lines to be parallel, they must have the same slope. Figure 3: Line Crossing two Parallel Lines Now in this figure, the red and blue lines are parallel if and only if the pairs of angles a, a’ and b, b’ are congruent (equal). This means that if you draw a line such that it crosses two lines, as shown in the figure, and you somehow prove that such corresponding angles are equal, then it is a proof that the lines are parallel. The first and third criteria are “more complex” than the second since they require measurement, although any of these related features might be used to locate parallel lines in Euclidean space. As a result, in Euclidean geometry, parallel lines are often represented by the second characteristic. The other features are consequences of Euclid’s Parallel Postulate. The same gradient (slope) may serve as another measurable characteristic of parallel lines. ### Calculation of Distance Between Two Lines There is a certain distance between two parallel lines because parallel lines in a Euclidean plane are equidistant.
Given the equations of two parallel non-vertical lines, the distance between them can be determined by locating a point on each line such that the segment joining them is perpendicular to both lines, and then computing its length. Let us say that two lines are represented in the slope-intercept form as follows: $y = mx + u_1$ $y = mx + u_2$ Notice that the slope is kept the same (i.e., m) since the lines are parallel. The distance between these two lines is given by the following formula: $d = \dfrac{ | u_1-u_2 | }{ \sqrt{ m^2 + 1 } }$ If two lines are represented in the standard form as follows: $a x + b y + c_1 = 0$ $a x + b y + c_2 = 0$ Notice that for parallel lines, a and b must remain the same. The distance between these two lines is given by the following formula: $d = \dfrac{ | c_1-c_2 | }{ \sqrt{ a^2 + b^2 } }$ The following figure summarizes all these formulae: Figure 4: Distance Between Two Parallel Lines ## Numerical Problems Part (a): Find the distance between parallel lines represented by 4x + 3y + 4 = 0 and 4x + 3y + 24 = 0. Part (b): Find the distance between parallel lines represented by y = 10x + 2 and y = 10x +10. ### Solution to Part (a) Given: Line 1: 4x + 3y + 4 = 0 Line 2: 4x + 3y + 24 = 0 Comparing with standard line equation: $a$ = 4, $b$ = 3, $c_1$ = 4, $c_2$ = 24 Using the formula: $d = \dfrac{ | c_1-c_2 | }{ \sqrt{ a^2 + b^2 } }$ Plugging in the values: $d = \dfrac{ | 4-24 | }{ \sqrt{ 4^2 + 3^2 } }$ $d = \dfrac{ | -20 | }{ \sqrt{ 25 } }$ $d = \dfrac{ 20 }{ 5 }$ $d = 4$ ### Solution to Part (b) Given: Line 1: y = 10x + 2 Line 2: y = 10x + 10 Comparing with the slope-intercept form: $m$ = 10, $u_1$ = 2, $u_2$ = 10 Using the formula: $d = \dfrac{ | u_1-u_2 | }{ \sqrt{ m^2 + 1 } }$ $d = \dfrac{ | 2-10 | }{ \sqrt{ 10^2 + 1 } }$ $d = \dfrac{ | -8 | }{ \sqrt{ 101 } }$ $d = \dfrac{ 8 }{ 10.05 }$ $d \approx 0.796$ All images were created with GeoGebra.
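The two distance formulas are easy to sanity-check in code. The following sketch is not part of the original article, and the function names are my own:

```python
import math

def distance_parallel_standard(a, b, c1, c2):
    """Distance between ax + by + c1 = 0 and ax + by + c2 = 0."""
    return abs(c1 - c2) / math.sqrt(a**2 + b**2)

def distance_parallel_slope(m, u1, u2):
    """Distance between y = mx + u1 and y = mx + u2."""
    return abs(u1 - u2) / math.sqrt(m**2 + 1)

print(distance_parallel_standard(4, 3, 4, 24))  # part (a): 4.0
print(distance_parallel_slope(10, 2, 10))       # part (b): ≈ 0.796
```

Both calls reproduce the worked answers above: exactly 4 for part (a) and about 0.796 for part (b).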
2023-03-29T22:02:52
{ "domain": "storyofmathematics.com", "url": "https://www.storyofmathematics.com/glossary/parallel/", "openwebmath_score": 0.7222394943237305, "openwebmath_perplexity": 248.68869775629955, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.985496419030704, "lm_q2_score": 0.8539127566694178, "lm_q1q2_score": 0.8415279638623481 }
https://uppic.me/forum/v2e1oyv.php?id=076685-set-theory-notes
A set is a collection of well-defined and distinct objects. Georg Cantor defined a set as a collection of definite and distinguishable objects selected by means of certain rules or a description. If the order of the elements is changed, or any element of a set is repeated, it does not make any change in the set. Roster form names a set by listing its members, while set-builder form specifies a defining property. Example − $S = \lbrace x \:| \:x \in N,\ 7 \lt x \lt 9 \rbrace$ = $\lbrace 8 \rbrace$. If every element of one set is contained in another, the first is a subset of the second; such a relation between sets is denoted by A ⊆ B. For instance, set Y is a subset of set X if all the elements of set Y are in set X. The universal set is represented as $U$. The empty set, or null set, is represented by $\emptyset$, or { }. Example − $S = \lbrace x \:| \: x \in N$ and $7 \lt x \lt 8 \rbrace = \emptyset$. Infinite sets contain infinitely many elements; for example, {1,2,3, …} is a set with an infinite number of elements, thus it is an infinite set. The symbol for the intersection of two sets is ∩. Example − Let $A = \lbrace 1, 2, 6 \rbrace$ and $B = \lbrace 7, 9, 14 \rbrace$; there is not a single common element, hence these sets are disjoint sets. Example − Let $A = \lbrace 1, 2, 6 \rbrace$ and $B = \lbrace 6, 12, 42 \rbrace$; these sets share the element 6, hence they are overlapping sets. For two sets A and B, n(A∪B) = n(A) + n(B) − n(A∩B). Venn diagrams (and Euler circles) are ways of pictorially describing sets, as shown in Figure 1. The Venn diagram, invented in 1880 by John Venn, is a schematic diagram that shows all possible logical relations between different mathematical sets. This chapter is devoted to understanding set theory, relations, and functions; the material is mostly elementary.
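Python's built-in sets can illustrate the relations above (subset, disjoint vs. overlapping sets, and the n(A∪B) identity); this snippet is an illustration added here, not part of the original notes:

```python
S = set(range(10))                 # universal set U = {0, 1, ..., 9}
X = {1, 2, 3, 4}
Y = {2, 3}
print(Y <= X)                      # Y is a subset of X: True

A, B = {1, 2, 6}, {7, 9, 14}
print(A & B)                       # disjoint sets: intersection is set()

C = {6, 12, 42}
print(A & C)                       # overlapping sets: intersection is {6}

# n(A ∪ C) = n(A) + n(C) - n(A ∩ C)
print(len(A | C) == len(A) + len(C) - len(A & C))  # True
```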
2021-01-25T22:14:44
{ "domain": "uppic.me", "url": "https://uppic.me/forum/v2e1oyv.php?id=076685-set-theory-notes", "openwebmath_score": 0.45551881194114685, "openwebmath_perplexity": 969.2182186916547, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9854964207345893, "lm_q2_score": 0.8539127548105611, "lm_q1q2_score": 0.8415279634854209 }
https://math.stackexchange.com/questions/974357/does-sum-n-1-infty-frac-1nn1-frac1n-converge
# Does $\sum_{n=1}^\infty \frac{(-1)^n}{n^{1+\frac{1}{n}}}$ converge? I want to use the alternating series test here, but I've just been told that it won't work because it's not monotonically decreasing. However, if the alternating harmonic series converges then don't we have for $\sum_{n=1}^\infty \frac{(-1)^n}{n^{1+\frac{1}{n}}}$ that $$\lim_{n \to \infty} \frac{1}{n^{1+\frac{1}{n}}} = 0$$ since $$\lim_{n \to \infty} \frac{1}{n^{1+\frac{1}{n}}} < \lim_{n \to \infty} \frac{1}{n} = 0.$$ Can someone point out where the mistake here is? To show that it is monotonically decreasing one should show that: $$\frac{1}{n^{1+\frac{1}{n}}} > \frac{1}{(n+1)^{1+\frac{1}{n+1}}}.$$ This is equivalent to showing that: $$\frac{n+1}{n} > \frac{n^\frac{1}{n}}{(n+1)^\frac{1}{n+1}},$$ which is the same as $$(1+\frac{1}{n})^n > \frac{n}{n+1}\cdot (n+1)^\frac{1}{n+1}.$$ For sufficiently large values of $n$, this must be the case, as the limit of the LHS is just $e$ and the one of the RHS is $1$. I’m not entirely sure what you’re proposing as an argument, but the following is not a theorem (and it looks like you may think it is): If $0\le a_n \le b_n$ and $\displaystyle\sum_{n=1}^\infty (-1)^n b_n$ converges, then $\displaystyle\sum_{n=1}^\infty (-1)^n a_n$ converges. For a counterexample, let $b_n=\frac{1}{n}$ and let $a_n$ be $0$ for even $n$ and $\frac{1}{n}$ for odd $n$. That said, the function $f(x)=x^{(1+\frac{1}{x})}$ is increasing* for $x>0$, so the terms of your sequence decrease in absolute value and the alternating series test hypotheses are true. *The function $f(x)$ is differentiable for $x>0$, and for positive $x$ its derivative, $\displaystyle x^{1 + \frac{1}{x}} \left(\frac{1 + \frac{1}{x}}{x} - \frac{\log x}{x^2}\right)$, is positive: since $\log x < x$ for all $x > 0$, we have $\frac{\log x}{x^2} < \frac{1}{x}$, so the factor in parentheses is greater than $\frac{1}{x} + \frac{1}{x^2} - \frac{1}{x} = \frac{1}{x^2} > 0$.
I haven't checked whether or not $\frac{1}{n^{1+\frac{1}{n}}}$ is a monotonically decreasing sequence, but I will point out that $\displaystyle\lim_{n\to \infty} \frac{1}{n^{1+\frac{1}{n}}} = 0$ does not imply that $\frac{1}{n^{1+\frac{1}{n}}}$ monotonically decreases. You must confirm that both properties hold. By the way, it would be enough to know that $\frac{1}{n^{1+\frac{1}{n}}}$ monotonically decreases after some point (i.e. for all $n > M$ for some fixed $M$).
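A quick numerical check (not from the thread) supports both points: the terms do decrease monotonically, and the partial sums of the alternating series settle down:

```python
def a(n):
    """Terms a_n = 1 / n**(1 + 1/n) of the alternating series."""
    return 1.0 / n ** (1.0 + 1.0 / n)

# The terms are monotonically decreasing (checked here for n = 1..9999).
assert all(a(n) > a(n + 1) for n in range(1, 10_000))

# Consecutive partial sums differ by a(N+1), which tends to 0, so the
# alternating series test applies and successive sums bracket the limit.
partial, s = [], 0.0
for n in range(1, 20_001):
    s += (-1) ** n * a(n)
    partial.append(s)
print(abs(partial[-1] - partial[-2]) < 1e-4)  # gap = a(20000): True
```

The shrinking gap between consecutive partial sums is exactly the error bound the alternating series test provides.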
2019-10-21T23:44:51
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/974357/does-sum-n-1-infty-frac-1nn1-frac1n-converge", "openwebmath_score": 0.961049497127533, "openwebmath_perplexity": 66.57553064672354, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9854964224384745, "lm_q2_score": 0.8539127529517043, "lm_q1q2_score": 0.8415279631084935 }
https://math.stackexchange.com/questions/1217096/why-do-we-need-two-linearly-independent-solutions-for-2nd-order-linear-ode
# Why do we need two linearly independent solutions for 2nd order linear ODE Suppose we have a second-order homogeneous linear ODE with two initial conditions. $y''+ p(x)y'+q(x)y=0$ $y(x_0)=K_0$ and $y'(x_0)=K_1$ Why do we need two linearly independent solutions to satisfy the IVP? If we have only one solution, what would happen? • If $\cos x$ is a solution, then $k\cos x$ is automatically also a solution, therefore it's not providing any new information on the equation. – Avrham Aton Apr 2 '15 at 10:17 • true. I edited my question. thanks. – 104078 Apr 2 '15 at 10:19 A second-order linear differential equation needs two linearly independent solutions so that it has a solution for any initial condition, say, $y(0) = a, y'(0) = b$ for arbitrary $a, b$. From a mechanical point of view, the position and the velocity can be prescribed independently. Having two linearly independent solutions gives us the general solution, that is, the general form of all the possible solutions of the equation, whereas only one gives you only part of the possible solutions. Consider, for example, the simple equation $y''=0$: it has two obvious solutions, $y_1=C$ and $y_2=x$; therefore the general solution is $y=ax+c$. Having only one solution does not give this general form. The set of all solutions of such a differential equation forms a vector space under the canonical operations. The dimension of that vector space is $2$ and hence two linearly independent solutions will form a basis for it. As you have written the initial value problem (IVP), there are 2 parameters characterizing the wanted solution, $K_0$ and $K_1$. Variation of these 2 parameters gives a 2-dimensional manifold of IVPs and their solutions. Linear ODEs now have the property that their solutions form a linear or at least affine space, the first for homogeneous, the second for general inhomogeneous problems. As such, they can be described by giving a basis of the (underlying) vector space, and each such basis has 2 elements.
For instance those for the initial conditions $(K_0,K_1)=(0,1)$ and $(K_0,K_1)=(1,0)$. The set of all solutions of $$Lx=x^{(n)}+p_{n-1}(t)x^{(n-1)}+\cdots+p_0(t)x^{(0)}=0,$$ where $p_0,\ldots,p_{n-1}\in C(I)$, is an $n-$dimensional space $X$. If $\tau\in I$, and $\varphi_j$, $\,j=1,\ldots,n$, is the solution of the initial value problem $$Lx=0, \quad x^{(i-1)}(\tau)=\delta_{ij}, \,\,i=1,\ldots,n,$$ then $B=\{\varphi_1,\ldots,\varphi_n\}$ is a basis of $X$. In fact, if $\psi$ is the solution of the initial value problem $$Lx=0, \quad x^{(i-1)}(\tau)=\xi_i, \,\,i=1,\ldots,n,$$ then $\psi=\xi_1\varphi_1+\cdots+\xi_n\varphi_n$.
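As a numerical sketch of this two-parameter picture (not from the thread): for $y''+y=0$ the basis solutions with initial data $(1,0)$ and $(0,1)$ are $\cos x$ and $\sin x$, and the IVP solution with data $(K_0,K_1)$ is their combination. A hand-rolled RK4 integrator (my own helper, chosen so the example needs no external libraries) confirms this:

```python
import math

def solve(K0, K1, x_end=2.0, steps=2000):
    """Integrate y'' = -y with y(0) = K0, y'(0) = K1 by classical RK4."""
    h = x_end / steps
    y, v = float(K0), float(K1)

    def f(y, v):                      # derivative of the state (y, y')
        return v, -y

    for _ in range(steps):
        k1 = f(y, v)
        k2 = f(y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

K0, K1 = 0.3, -1.2
exact = K0 * math.cos(2.0) + K1 * math.sin(2.0)
print(abs(solve(K0, K1) - exact) < 1e-8)  # True: superposition of the basis
```

Changing $(K_0, K_1)$ changes the coefficients of the same two basis solutions, which is exactly why a basis of two linearly independent solutions suffices for every IVP.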
2019-05-23T01:41:19
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1217096/why-do-we-need-two-linearly-independent-solutions-for-2nd-order-linear-ode", "openwebmath_score": 0.9110515117645264, "openwebmath_perplexity": 137.66779098981624, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9854964224384745, "lm_q2_score": 0.8539127529517043, "lm_q1q2_score": 0.8415279631084935 }
https://math.stackexchange.com/questions/2297649/is-this-a-valid-proof-of-lagranges-theorem-finite-case
# Is this a valid proof of Lagrange's theorem (finite case). Let $$G$$ be finite and $$H$$ be a subgroup. We will show that the left cosets of $$H$$ partition $$G$$ and each coset has the same size. 1) Each element $$g \in G$$ belongs to the coset $$gH$$ since $$g1=g$$ and $$1\in H$$. So every element lies in at least one coset. 2) We now show that each $$g \in G$$ lies in exactly one coset. Suppose for a contradiction that $$g$$ lies in more than one coset; then $$g \in aH$$ and $$g \in bH$$ where $$aH,bH$$ are distinct left cosets. Then $$g=ah_1$$ and $$g=bh_2$$ so $$ah_1=bh_2$$ for some $$h_1,h_2 \in H$$. Now $$aH \subseteq bH$$ since if $$ah \in aH$$ then $$ah=bh_2h_1^{-1}h \in bH$$. Likewise $$bH \subseteq aH$$ so $$aH=bH$$, a contradiction. So each $$g \in G$$ lies in exactly one coset. Hence the cosets form a partition of $$G$$. 3) For each $$g \in G$$ the coset $$gH$$ has the same order as $$H$$. To see this, establish a function $$\phi:gH \rightarrow H$$ by $$\phi(gh)=h$$. This is clearly a bijection. So $$|G|=\text{Number of cosets} \times \text{Size of each coset}=[G:H]|H|$$ and so $$|H| \mid |G|$$. Is this proof valid? • What do you mean "finite case"? Lagrange's theorem is only applicable to finite groups, since "divides the order" only makes sense when "order" is a number. – Adam Hughes May 26 '17 at 14:11 • There's just a small issue with your argument that $aH\subseteq bH$. You wrote $ah = bh_2h_{1}^{-1}$, where I think it should be $ah = bh_2h_{1}^{-1}h$. – James May 26 '17 at 14:22 • By finite case I meant there is some kind of extension that the index is infinite for infinite groups. – Ben B May 26 '17 at 15:36 • @James Yes you are correct that was just a mistake when writing it out. Is everything else sound though? Thanks! – Ben B May 26 '17 at 15:37 • Everything sounds good to me. Though it may be easier to justify the last point by considering $\phi : H\to gH$ defined by $\phi(h)= gh$. It would then be easier to justify that :1.
It is well-defined, 2. It is bijective – Maxime Ramzi May 26 '17 at 17:08 Your proof is fine, except that I recommend you consider \begin{align} \varphi: H&\to gH \\ h&\mapsto gh, \end{align} then justify that $$\varphi$$ is a well-defined bijection; it's much easier than your $$\phi$$.
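The three steps of the proof can be checked by brute force on a small group. The sketch below (not from the thread) uses $G = S_3$ and the order-2 subgroup generated by a transposition:

```python
from itertools import permutations

def compose(p, q):
    """(p ∘ q)(i) = p(q(i)); permutations are stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))      # S_3, so |G| = 6
H = [(0, 1, 2), (1, 0, 2)]            # subgroup {id, (0 1)} of order 2

# left cosets gH, collected as a set so duplicates collapse
cosets = {frozenset(compose(g, h) for h in H) for g in G}

assert all(len(c) == len(H) for c in cosets)       # each coset has size |H|
assert frozenset().union(*cosets) == frozenset(G)  # the cosets cover G
assert sum(len(c) for c in cosets) == len(G)       # ... and are disjoint
print(len(G), "=", len(cosets), "x", len(H))       # 6 = 3 x 2
```

The final line is exactly the counting identity $|G| = [G:H]\,|H|$ from step 3 of the proof.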
2020-06-04T02:28:05
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2297649/is-this-a-valid-proof-of-lagranges-theorem-finite-case", "openwebmath_score": 0.9732756018638611, "openwebmath_perplexity": 207.40305552866243, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.985496419030704, "lm_q2_score": 0.8539127548105611, "lm_q1q2_score": 0.8415279620304514 }
https://math.stackexchange.com/questions/2781065/proving-that-a-relation-r-is-an-equivalence-relation
# Proving that a relation R is an equivalence relation While I fully understand what it means to be an equivalence relation, I have difficulty establishing a proof that $R$ is an equivalence relation without just listing all pairs that $R$ creates and testing them. However, this method is greatly time consuming and is not possible during exams, as we usually have only 2 minutes (the exam is 120 min long and is out of 120 marks, and the question below is worth 2 marks only) to show that R is an equivalence relation. For the following relation, can someone show me a fast method for proving that $R$ is an equivalence relation? Let $\mathcal{P}(S)$ be the power set of $S =\{0,1,2,...,9\}$ and define $$R = \{ (A,B) \in \mathcal{P}(S) \times \mathcal{P}(S) : A=S\backslash B \text{ or } A=B\}.$$ I know we can use $A=B$ from the relation definition to assert that it is reflexive, but what about symmetry and transitivity? If I prove that $xRy$ is the same as $yRx$ for one example, that doesn't prove that all $A$s and $B$s have symmetric relations, as there might be a contradiction somewhere; or is proving one example symmetric enough to assert that all $A$s and $B$s have a symmetric relation? • Suppose that $(A,B)\in\mathcal{R}$ and suppose nothing further about $(A,B)$ for now. We wish to prove that this implies that $(B,A)\in\mathcal{R}$. So, since $(A,B)\in\mathcal{R}$ this implies that either $A=S\setminus B$ or that $A=B$. From here we break into two cases. In the case that $A=B$, since $=$ is known to be an equivalence relation this implies that $B=A$ and so $(B,A)\in\mathcal{R}$ as desired. In the case that $A\neq B$, this implies instead that $A=S\setminus B$. Now... do you see why this should imply that $B=S\setminus A$? Make sure you can fully explain why. – JMoravitz May 14 '18 at 17:07 • Well, think about what the relationship means. In this case either the sets are equal or the first is the complement of the second.
That's reflexive as all sets are equal to themselves. More to the point, it's symmetric because if one set is the complement of another, then the other is the complement of the first. Transitivity might be case heavy, but if $B=C$ or $A=B$ and $ARB;BRC$ then either $ARB=C$ or $A=BRC$ so $ARC$. And if $A\ne B$ and $B\ne C$ then $A = S-B$ so $B=S-A=S-C$ so $A=C$. So transitive. – fleablood May 14 '18 at 18:28 You want to try to prove as much as you can for arbitrary elements $(A,B)$, working from the definitions. For example, let's say you want to prove symmetry. Symmetry means the following: if you assume $(A,B) \in R$, you want to prove $(B,A) \in R$. So assume $(A,B) \in R$. That means, according to the definition of $R$, either $A=B$ or $A=S\setminus B$. So we break down into two cases: • Case 1: $A=B$. Then $B=A$, so by definition $(B,A) \in R$. (This basically boils down to reflexivity, which you have already proven.) • Case 2: $A = S \setminus B$. Then, by the properties of set subtraction, $B = S \setminus A$ (do you see why?). Thus $(B,A) \in R$, again by definition of $R$. In either case $(B,A) \in R$, so we have proven symmetry. Transitivity can be done similarly (though you might need to break up into more cases and subcases); I'll leave it for you to tackle. • +1 for beating me by 41 seconds :) – gt6989b May 14 '18 at 17:10 • For transitivity, assuming a,b∈P(S) and a=/=b, then a=S\b. And assuming b,c∈P(S) and b=/=c, then b=S\c. Since b=S\c and a=S\b, then a=S\(S\c), which simplifies into a=c. Thus for aRb and bRc, aRc is true. – Mohamad Moustafa May 14 '18 at 17:20 • @MohamadMoustafa Looks good to me! – Y. Forman May 14 '18 at 17:23 Let's try to prove symmetry. You note correctly that $R$ is symmetric if $aRb \Leftrightarrow bRa$. In your situation, $aRb$ means either $a=b$ or $a=S\backslash b$. Let $a,b \in P(S)$ be such that $aRb$. Then either $a=b$ or $a = S - b$. In the first case, $bRa$ is true.
In the second case, $b = S-a$ so $bRa$ is true as well, and so $R$ is symmetric. Can you examine transitivity by yourself? • For transitivity, assuming a,b∈P(S) and a=/=b, then a=S\b. And assuming b,c∈P(S) and b=/=c, then b=S\c. Since b=S\c and a=S\b, then a=S\(S\c), which simplifies into a=c. Thus for aRb and bRc, aRc is true. – Mohamad Moustafa May 14 '18 at 17:21 • @MohamadMoustafa looks good – gt6989b May 14 '18 at 18:11 $A$ and $B$ are related iff they are equal or complements. Reflexivity: For every subset $A$ of $S$ we have $A=A$. Symmetry: If $A$ is related to $B$, then either they are equal or complements, so $B$ is also related to $A$. Transitivity: If $A$ is related to $B$ and $B$ is related to $C$, then either $B=A$ or $B$ is the complement of $A$, and either $C=B$ or $C$ is the complement of $B$. In either case we have $A=C$ or $C$ is the complement of $A$. First think about what the relationship means, in plain English. In this case $a R b$ means either $a=b$ or $a = b^{c}$. Then think about the consequences in terms of reflexivity, symmetry or transitivity. Equality is obviously (almost by definition) an equivalence relation. And complements are symmetric although not reflexive or transitive... although the symmetry of complements makes them a bit of a toggle. ($a = b^c;b=c^c \implies c=b^c = a$.) And in combination we get: Ergo: Reflexive: $a=a$ for all $a$ so $a R a$ for all $a$. Symmetric: $a R b$ means either $a =b$ and $b=a$; or $a = b^c$ and $b= a^c$. So $aRb \implies bRa$. Transitive: $aRb$ and $bRc$ means either $a=b$ and so $a = b R c$; or $b=c$ and so $a R b=c$; or $a = b^c$ and $b = c^c$ so $c = b^c = a$. So $aRb$ and $bRc\implies aRc$.
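All three properties can also be verified by brute force; the sketch below (not from the thread) uses a smaller universe $S=\{0,1,2\}$ so that $\mathcal{P}(S)$ has only 8 elements to check:

```python
from itertools import combinations

S = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(len(S) + 1)
           for c in combinations(S, r)]          # all of P(S)

def related(a, b):
    """(a, b) is in R  iff  a = S \\ b  or  a = b."""
    return a == S - b or a == b

assert all(related(a, a) for a in subsets)                        # reflexive
assert all(related(b, a) for a in subsets for b in subsets
           if related(a, b))                                      # symmetric
assert all(related(a, c)
           for a in subsets for b in subsets for c in subsets
           if related(a, b) and related(b, c))                    # transitive
print("R is an equivalence relation on P(S)")
```

Of course, this exhaustive check is exactly the slow method the asker wants to avoid on an exam; its role here is only to confirm the case analysis above.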
2019-06-18T09:12:26
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2781065/proving-that-a-relation-r-is-an-equivalence-relation", "openwebmath_score": 0.9254528284072876, "openwebmath_perplexity": 192.38943241078954, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9854964181787613, "lm_q2_score": 0.8539127492339909, "lm_q1q2_score": 0.8415279558072768 }
http://kfcr.jf-huenstetten.de/exact-differential-equation-integrating-factor.html
An integrating factor is a function by which a differential equation is multiplied in order to make it integrable. It is commonly used to solve ordinary differential equations, and it also appears in multivariable calculus, where multiplying through by an integrating factor turns an inexact differential into an exact differential (which can then be integrated to give a scalar field).

The general form of a first-order ODE is M(x, y) dx + N(x, y) dy = 0. The equation is called exact if the left-hand side is the total differential dF = (∂F/∂x) dx + (∂F/∂y) dy of some function F(x, y). The practical test for exactness is ∂M/∂y = ∂N/∂x; if the two partial derivatives are not equal, the equation is not exact. For example, with M(x, y) = x² − 3xy and N(x, y) = x we compute ∂M/∂y = −3x and ∂N/∂x = 1, so the equation (x² − 3xy) dx + x dy = 0 is not exact.

When an equation is not exact, it may still become exact after multiplying both sides by a common factor μ(x, y); such a factor is called an integrating factor. Integrating factors are not unique: if μ(x, y) is an integrating factor, then so is a · μ(x, y) for any nonzero constant a. For the same reason, the constant of integration that arises while computing an integrating factor is irrelevant, since we only need one integrating factor, not infinitely many.

To solve an exact equation, integrate M with respect to x, integrate N with respect to y, and then "merge" the two resulting expressions to construct the desired potential function F; the solution is then given implicitly by F(x, y) = C. An exact equation is the differential-equation counterpart of a conservative vector field, and its implicit solution is the potential function.
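The exactness test can be checked numerically. Below is a minimal sketch in plain Python (the helpers `d_dx`, `d_dy`, `is_exact` are my own illustrative names, not from any library): central finite differences approximate ∂M/∂y and ∂N/∂x for the non-exact pair M = x² − 3xy, N = x and for an exact pair such as M = 2xy, N = x² (potential F = x²y).

```python
# Numerical check of the exactness test dM/dy == dN/dx
# using central finite differences (helper names are illustrative).

def d_dy(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def d_dx(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def is_exact(M, N, points, tol=1e-4):
    """True if dM/dy ~ dN/dx at every sample point."""
    return all(abs(d_dy(M, x, y) - d_dx(N, x, y)) < tol for x, y in points)

pts = [(0.5, 1.0), (1.3, -0.7), (2.0, 2.0)]

# (x^2 - 3xy) dx + x dy = 0:  M_y = -3x, N_x = 1  -> not exact
print(is_exact(lambda x, y: x**2 - 3*x*y, lambda x, y: x, pts))   # False

# 2xy dx + x^2 dy = 0:  M_y = 2x = N_x  -> exact (F = x^2 y)
print(is_exact(lambda x, y: 2*x*y, lambda x, y: x**2, pts))       # True
```

The sampling at a few points is of course not a proof, only a quick sanity check of the symbolic computation.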
If an equation is "almost" exact, there may be an integrating factor μ(x, y) that we can multiply through by to turn it into an exact equation. In general μ must satisfy the partial differential equation ∂(μM)/∂y = ∂(μN)/∂x, which can be just as hard to solve as the original problem, so in practice finding an integrating factor can be quite difficult. Nevertheless, the concept gives us a useful tool, since integrating factors for certain particular equations can be found by ad hoc methods, or under the assumption that μ depends on x alone, on y alone, or on some fixed combination of the variables.

The most important special case is the linear first-order equation y′ + P(x)y = Q(x), where P and Q are constants or functions of x only. Here an integrating factor always exists: I = e^∫P dx. For example, for P(x) = 3/x we first find ∫(3/x) dx = 3 ln x = ln x³, hence I = e^(ln x³) = x³.

The same device handles a second-order linear equation whose y-term (the dependent-variable term) is missing: the substitution v = y′ converts it into a first-order linear equation, which is then solved with the integrating factor method.
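Continuing the non-exact example (x² − 3xy) dx + x dy = 0: the quotient (M_y − N_x)/N = (−3x − 1)/x = −3 − 1/x happens to depend on x alone, so μ(x) = exp(∫(−3 − 1/x) dx) = e^(−3x)/x should be an integrating factor. A hedged numerical check in plain Python (finite-difference helpers are my own names):

```python
import math

def d_dy(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def d_dx(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

# (x^2 - 3xy) dx + x dy = 0 is not exact, but
# (M_y - N_x)/N = (-3x - 1)/x depends on x alone, so
# mu(x) = exp( integral of (-3 - 1/x) dx ) = exp(-3x)/x.
mu = lambda x: math.exp(-3 * x) / x

muM = lambda x, y: mu(x) * (x**2 - 3 * x * y)   # = (x - 3y) e^(-3x)
muN = lambda x, y: mu(x) * x                    # = e^(-3x)

for x, y in [(0.5, 1.0), (1.2, -0.4), (2.0, 3.0)]:
    gap = abs(d_dy(muM, x, y) - d_dx(muN, x, y))
    print(f"({x}, {y}): |d(muM)/dy - d(muN)/dx| = {gap:.2e}")
```

Symbolically, (μM)_y = −3e^(−3x) = (μN)_x, so the multiplied equation is exact, and the printed gaps are only finite-difference noise.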
The idea extends beyond first order. One can also look for integrating factors for non-exact second-order differential equations, although they exist only in special cases. The notion appears in thermodynamics as well: the heat transferred to a gas that changes its temperature and volume by a small amount is an inexact differential (integration must account for the path taken), and an integrating factor converts it into an exact differential. For a differential in more than two variables, exactness imposes one condition per pair of variables — six conditions for a function of four variables — and these conditions arise from the independence of the order of differentiation in the second derivatives.

Integrating factors also give an alternate route to generalized Bernoulli equations: the standard substitution u = y^(1−n) reduces a Bernoulli equation y′ + p(x)y = q(x)yⁿ to the linear equation du/dx + (1 − n)p(x)u = (1 − n)q(x), which is then solved with the integrating factor e^∫(1−n)p dx. See C. C. Tisdell, "Alternate solution to generalized Bernoulli equations via an integrating factor: an exact differential equation approach", International Journal of Mathematical Education in Science and Technology, 48:6 (2017), 913–918.

Two further exercises of the same flavor: find a sufficient condition for M(x, y) dx + N(x, y) dy = 0 to have an integrating factor that is a function of x + y (such a factor exists when (∂M/∂y − ∂N/∂x)/(N − M) depends on x + y alone), and determine conditions on a and b so that μ(x, y) = x^a y^b is an integrating factor.
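To make the Bernoulli route concrete, here is a small self-checked example (my own choice of equation, not taken from the text above): for y′ + (1/x)y = x y², the substitution u = 1/y gives the linear equation u′ − (1/x)u = −x, whose integrating factor is 1/x; then (u/x)′ = −1, so u = Cx − x² and y = 1/(Cx − x²).

```python
# Bernoulli example y' + (1/x) y = x y^2: substituting u = 1/y gives
# u' - (1/x) u = -x, integrating factor 1/x, hence (u/x)' = -1,
# u = Cx - x^2, and finally y = 1/(Cx - x^2).

C = 3.0
y = lambda x: 1.0 / (C * x - x**2)

def dydx(x, h=1e-6):
    # Central finite difference approximation of y'(x).
    return (y(x + h) - y(x - h)) / (2 * h)

for x in [0.5, 1.0, 2.0, 2.5]:
    residual = dydx(x) + y(x) / x - x * y(x)**2   # should vanish
    assert abs(residual) < 1e-6
print("y = 1/(Cx - x^2) solves y' + y/x = x y^2")
```

The residual check confirms the substitution/integrating-factor derivation without having to redo the algebra by hand.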
Some integrating factors can be found by inspection. The equation y dx − x dy = 0 is not exact, but we already know how to solve it by writing dy/dx = y/x; equivalently, any of the factors 1/x², 1/y², or 1/(xy) makes it exact, which again shows that the integrating factor of a given equation need not be unique. Similarly, for x²y³ + x(1 + y²)y′ = 0 the factor μ(x, y) = 1/(xy³) separates the variables, giving the exact equation x dx + ((1 + y²)/y³) dy = 0, which can be integrated directly.

More systematically, there are two standard rules. If (∂M/∂y − ∂N/∂x)/N is a function of x alone, then μ(x) = e^∫[(∂M/∂y − ∂N/∂x)/N] dx is an integrating factor. If instead (∂N/∂x − ∂M/∂y)/M is a function of y alone, then μ(y) = e^∫[(∂N/∂x − ∂M/∂y)/M] dy is an integrating factor. If neither quotient is a function of a single variable, look for another method of solving the equation.
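The second inspection example can be finished with the integrate-and-merge recipe: integrating x dx in x and ((1 + y²)/y³) dy in y and merging gives the candidate potential F(x, y) = x²/2 − 1/(2y²) + ln y (taking y > 0). A quick numerical check in plain Python (helper names are my own) that F_x = μM and F_y = μN:

```python
import math

def d_dx(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def d_dy(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# x^2 y^3 dx + x(1 + y^2) dy = 0, multiplied by mu = 1/(x y^3):
muM = lambda x, y: x                       # = x^2 y^3 / (x y^3)
muN = lambda x, y: (1 + y**2) / y**3       # = x(1 + y^2) / (x y^3)

# Candidate potential from integrating muM in x and muN in y, then merging:
F = lambda x, y: x**2 / 2 - 1 / (2 * y**2) + math.log(y)

for x, y in [(0.7, 0.9), (1.5, 2.0), (2.2, 0.5)]:
    assert abs(d_dx(F, x, y) - muM(x, y)) < 1e-5
    assert abs(d_dy(F, x, y) - muN(x, y)) < 1e-5
print("F_x = muM and F_y = muN at all sample points")
```

So the implicit general solution is x²/2 − 1/(2y²) + ln y = C.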
For the linear first-order equation, written in standard form as

dy/dx + P(x)y = Q(x),

with P and Q continuous, the integrating factor method proceeds in four steps:

1. Write the equation in standard form and identify P(x).
2. Determine the integrating factor μ(x) = e^∫P(x) dx.
3. Multiply everything in the equation by μ(x) and verify that the left side becomes the derivative of a product, d/dx [μ(x)y] = μ(x)Q(x); the factor μ is defined precisely so that this happens.
4. Integrate both sides and solve for y.

For example, the equation y′ + y = t is already in standard form with P(t) = 1, so μ(t) = e^∫dt = e^t and the equation becomes (e^t y)′ = t e^t, which can be integrated directly.
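Applying the four steps to a concrete equation (my own example, chosen to match the integrating factor x³ computed earlier): for y′ + (3/x)y = x, step 2 gives μ = x³, step 3 gives (x³y)′ = x⁴, and step 4 gives x³y = x⁵/5 + C, i.e. y = x²/5 + C/x³. With y(1) = 1 we get C = 4/5. A numerical residual check:

```python
# y' + (3/x) y = x  with integrating factor x^3:
# (x^3 y)' = x^4  =>  x^3 y = x^5/5 + C  =>  y = x^2/5 + C/x^3.
# The initial condition y(1) = 1 forces C = 4/5.

C = 4 / 5
y = lambda x: x**2 / 5 + C / x**3

def dydx(x, h=1e-6):
    # Central finite difference approximation of y'(x).
    return (y(x + h) - y(x - h)) / (2 * h)

assert abs(y(1.0) - 1.0) < 1e-12                 # initial condition
for x in [1.0, 1.7, 2.5, 3.0]:
    residual = dydx(x) + (3 / x) * y(x) - x      # should vanish
    assert abs(residual) < 1e-6
print("y = x^2/5 + (4/5)/x^3 solves y' + (3/x)y = x with y(1) = 1")
```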
As a worked example of the missing-y reduction, consider t y″ + 4y′ = t². Substituting v = y′ and dividing by t puts the equation in standard form, v′ + (4/t)v = t. The integrating factor is e^∫(4/t) dt = e^(4 ln t) = t⁴, so the equation becomes (t⁴v)′ = t⁵, hence t⁴v = t⁶/6 + C and y′ = v = t²/6 + C t⁻⁴, which can be integrated once more to recover y.

To summarize: because it is not always obvious whether a given equation M(x, y) dx + N(x, y) dy = 0 is exact, one first applies the practical test ∂M/∂y = ∂N/∂x. By Clairaut's theorem on the equality of mixed partial derivatives, this relation is (on a simply connected region — a connected open set that cannot be decomposed into two non-empty disjoint open subsets) the necessary and sufficient condition for exactness. If the test fails, one looks for an integrating factor, by inspection, by one of the single-variable rules, or by a substitution that changes variables so that the exact form is easier to see. Once the equation is exact, its general solution is found by the same method used to find a potential function for a conservative vector field.
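The reduced first-order solution in the worked example above can also be checked numerically. This sketch verifies that v = t²/6 + C/t⁴ satisfies v′ + (4/t)v = t for an arbitrarily chosen constant (C = 2 here):

```python
# The reduced equation v' + (4/t) v = t (with v = y') has integrating
# factor t^4: (t^4 v)' = t^5, so v = t^2/6 + C/t^4. Numerical check:

C = 2.0  # any constant works; chosen arbitrarily for the check

v = lambda t: t**2 / 6 + C / t**4

def dvdt(t, h=1e-6):
    # Central finite difference approximation of v'(t).
    return (v(t + h) - v(t - h)) / (2 * h)

for t in [0.8, 1.5, 2.4, 4.0]:
    residual = dvdt(t) + (4 / t) * v(t) - t      # should vanish
    assert abs(residual) < 1e-5
print("v = t^2/6 + C/t^4 satisfies v' + (4/t) v = t")
```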
2020-04-04T05:58:54
{ "domain": "jf-huenstetten.de", "url": "http://kfcr.jf-huenstetten.de/exact-differential-equation-integrating-factor.html", "openwebmath_score": 0.9115209579467773, "openwebmath_perplexity": 444.8221804955363, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9947798736874025, "lm_q2_score": 0.8459424314825853, "lm_q1q2_score": 0.8415265051370604 }
https://math.stackexchange.com/questions/2285272/trying-to-prove-that-mathcal-p-bbb-r-cong-bbb-r-infty
# Trying to prove that $\mathcal P(\Bbb R)'\cong\Bbb R^\infty$

Prove that $\mathcal P(\Bbb R)'$ and $\Bbb R^\infty$ are isomorphic vector spaces. Here $\mathcal P(\Bbb R)'$ is the dual space of the vector space of polynomial functions with real coefficients $\mathcal P(\Bbb R)$, and $\Bbb R^\infty$ is the set of all real-valued sequences. This is exercise 35 on page 132 of *Linear Algebra Done Right*, 3rd edition. The problem here is that the book at this point only teaches, in almost all cases, the theory of linear algebra of finite-dimensional vector spaces, but $\mathcal P(\Bbb R)$ is not finite-dimensional. My work, at the moment, is below.

Define $B_k(p):=\frac{p^{(k)}(0)}{k!}=[x^k]p(x)$ for $p\in\mathcal P(\Bbb R)$, where $p^{(k)}$ is the $k$-th derivative of $p$. Then the set defined by $$B:=\{B_k:k\in\Bbb N_{\ge 0}\}\tag{1}$$ is linearly independent in $\mathcal P(\Bbb R)'$; however, it is not a basis, because functionals defined as $$p\mapsto \sum_{k=0}^\infty c_kB_k(p),\quad c_k\in\Bbb R\tag{2}$$ also belong to $\mathcal P(\Bbb R)'$ but are not a linear combination of the $B_k$, since the above sum is not finite. It is easy to see that the functionals of the form in $(2)$ define a vector subspace $S$ of $\mathcal P(\Bbb R)'$, and the map defined by $$h: S\to \Bbb R^\infty,\quad\sum_{k=0}^\infty c_kB_k\mapsto (c_0,c_1,\ldots,c_k,\ldots)$$ is linear and bijective. So if we show that $S=\mathcal P(\Bbb R)'$ we are done.

Because every polynomial is a linear combination of monomials, we can study all the elements of $\mathcal P(\Bbb R)'$ just by studying the form of all possible linear functionals on monomials of the kind $x^k$. For the finite-dimensional vector spaces $\mathcal P_m(\Bbb R)$, defined as the vector space of polynomial functions of degree at most $m$, we know that the set $$H_m:=\{B_k:k\in\{0,\ldots,m\}\}$$ is a basis of $\mathcal P_m(\Bbb R)'$.
But I'm stuck here: my main problem is that I don't know whether I can prove (and how, if it is possible) that $S=\mathcal P(\Bbb R)'$ from the finite case, that is, that all the functionals of $\mathcal P_m(\Bbb R)'$ have the form $$p\mapsto\sum_{k=0}^m c_kB_k(p),\quad c_k\in\Bbb R,\ p\in\mathcal P_m(\Bbb R).$$ Probably the way I'm trying to prove the statement of the exercise is a dead end, but the theory in the book doesn't involve complicated proofs or theorems, so it cannot be that complicated. Some hints will be welcome.

I think trying to work with a basis in $\mathcal P(\mathbb R)'$ is going to be a bad idea: bases of many infinite-dimensional vector spaces, $\mathcal P(\mathbb R)'$ included, are impossible to write down. Thankfully, a basis of $\mathcal P(\mathbb R)$ can be written down: for each $k\in\mathbb N$ (including zero), let $e_k(x):=x^k$; then $\{e_k\}$ is a basis of $\mathcal P(\mathbb R)$. In particular, any linear functional is determined uniquely by its action on $\{e_k\}$. This turns out to be a better method.

For any sequence $a=(a_0,a_1,a_2,\ldots)$, let $\phi_a$ be the unique linear functional on $\mathcal P(\mathbb R)$ such that $\phi_a(e_k)=a_k$ for every $k\in\mathbb N$. I claim $\Phi:a\mapsto\phi_a$ is a linear isomorphism. That $\Phi$ is linear is easy to prove. If $\phi_a=0$, then $\phi_a(e_k)=0$ for every $k$, i.e. $a_k=0$ for all $k$, so $a=0$. This shows $\Phi$ is injective. Now let $\psi\in\mathcal P(\mathbb R)'$ be any linear functional. Define $a_k:=\psi(e_k)$. By definition $\psi=\phi_a$. This shows $\Phi$ is surjective, completing the proof.

Note: the notation $\mathbb R^\infty$ is not standard, and in fact some authors use it to refer to the space of sequences for which only finitely many entries are non-zero. (This space is actually isomorphic to $\mathcal P(\mathbb R)$ - can you prove it?) Writing $\mathbb R^\mathbb N$ is more standard, and is a special case of the general notation $X^Y:=\{\text{functions }f:Y\to X\}$.
• Yes, I generally use the notation $\Bbb R^{\Bbb N}$, but I'm following the notation of the book. I will need some time to digest the answer completely. May 17 '17 at 17:21
• I think I get it. Indeed, we can define each linear functional $\phi_a$ by a series of the form of $(2)$ in my question, right? May 17 '17 at 17:42
• Well... as written, $B_k$ maps $\mathcal P(\mathbb R)$ to itself. If you instead take $B_k(p)=\frac{p^{(k)}(0)}{k!}$, then $B_k(p)$ is simply the coefficient of $x^k$ in $p$, so something like that should work. May 17 '17 at 17:56
• I had a typo in the original text; yes, $B_k(p):=[x^k]p(x)$. I have written my own answer completing the path of my question. May 17 '17 at 18:01

$\mathcal{P}(\mathbb{R})$ has basis $\{1,x,x^2,\dots,x^n,\dots\}$. Since any linear functional in $\mathcal{P}(\mathbb{R})'$ is determined completely by its action on the basis, we may construct an injective linear map from $\mathcal{P}(\mathbb{R})'$ to $\mathbb{R}^\infty$ via the assignment $$\ell \mapsto (\ell(x^n))_{n=0}^\infty$$ Surjectivity is also quick to see.

• What does $\ell(x^n)$ mean? The concept of action is not shown in this book (at this moment). May 17 '17 at 17:02
• The phrase "its action on" may be replaced with "what it does to". By $\ell(x^n)$ I mean the real number one gets when applying a linear functional $\ell$ to $x^n$. May 17 '17 at 17:06
• I see, but how do you know that the map is surjective? It is hard to imagine that for each real-valued sequence there exists a linear functional $\ell$ that defines this sequence via the map $(\ell(x^n))_{n\in\Bbb N}$. May 17 '17 at 17:11
• Let $(a_n)_{n\in \mathbb{N}}$ be any real-valued sequence. Now define $\ell$ on $\mathcal{P}(\mathbb{R})$ by $\ell(x^n) = a_n$ for all $n$. By defining $\ell$ on the basis we have actually defined it everywhere: any polynomial in our space may be written $p = \sum_{j=1}^n \lambda_j x^j$ for some real scalars $\lambda_j$.
We may force linearity of $\ell$ by setting $\ell(p) = \sum_{j=1}^n \lambda_j a_j$. May 17 '17 at 17:14
• @Masacroso The map he claims is a bijection is the one that associates to each linear functional $\ell$ the sequence $(\ell(x^{n}))_{n=0}^{\infty}$. May 17 '17 at 17:14

Just for the record, I will add my own solution to complete the path of my question (based on the previous answers). Any possible functional $f_{c,k}$ on $x^k$ can be written as $$f_{c,k}(x^k)=c\tag{3}$$ for any $c\in\Bbb R$. But observe that if we define $$f_{c,k}(x^k):=c B_k(x^k)$$ then the $f_{c,k}$ are linear, and because $\{x^k:k\in\Bbb N_{\ge 0}\}$ is a basis of $\mathcal P(\Bbb R)$, any functional of $\mathcal P(\Bbb R)'$ has the form in $(2)$; hence $S=\mathcal P(\Bbb R)'$.$\Box$
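A small computational model of the correspondence in the answers may help make it concrete. Polynomials are finite coefficient lists and a functional is built from a sequence; everything here is a finite-truncation illustration, not part of the original proof:

```python
# A functional phi_a built from a sequence a acts on a polynomial
# p = [c_0, c_1, ...] (coefficient list) by phi_a(p) = sum_k a_k c_k.
# The sum is finite because p has finitely many nonzero coefficients,
# even though the sequence a need not.
def phi(a, p):
    """Apply the functional determined by the sequence a to polynomial p."""
    return sum(ak * ck for ak, ck in zip(a, p))

a = [1, -2, 0, 5, 7]                 # first few terms of a sequence in R^infty
e = lambda k, n=5: [0] * k + [1] + [0] * (n - k - 1)   # e_k(x) = x^k

# phi_a is determined by its values on the basis: phi_a(e_k) = a_k.
assert [phi(a, e(k)) for k in range(5)] == a

# Phi: a -> phi_a is linear: phi_{a+b}(p) = phi_a(p) + phi_b(p).
b = [3, 1, 4, 1, 5]
p = [2, 0, -1, 6, 0]
assert phi([x + y for x, y in zip(a, b)], p) == phi(a, p) + phi(b, p)

# Injectivity (on this truncation): the zero sequence kills every e_k.
assert all(phi([0] * 5, e(k)) == 0 for k in range(5))
```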
2021-09-28T14:53:19
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2285272/trying-to-prove-that-mathcal-p-bbb-r-cong-bbb-r-infty", "openwebmath_score": 0.9392385482788086, "openwebmath_perplexity": 111.81694957194874, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. Yes\n2. Yes", "lm_q1_score": 0.987758725428898, "lm_q2_score": 0.851952809486198, "lm_q1q2_score": 0.8415238212236557 }
http://mathhelpforum.com/algebra/195807-financial-maths-future-value-calculation.html
# Math Help - Financial Maths - Future Value Calculation

1. ## Financial Maths - Future Value Calculation

The problem: Zanele plans to save R50 000 towards buying a new car in three years' time. She makes three equal deposits at the beginning of each year into a savings account, starting immediately. The interest paid on the money in the savings account is 11% p.a. compounded quarterly. Calculate how much money she will need to pay on each occasion.

My attempt: $50000 = \frac{x[(1+\frac{0.11}{4})^3 - 1]}{\frac{0.11}{4}}$

Which gets me x = R16216.62. According to the textbook, the answer should be R13362.60. What am I doing wrong?

2. ## Re: Financial Maths - Future Value Calculation

The formula you are using assumes a deposit at the end of each year; but with your problem, the deposit is made at the beginning of each year. Also, the "i" used in the formula must be the annual "i", to coincide with the deposit frequency. This means i = (1 + .11/4)^4 - 1 =~ .11462, or ~11.462%; makes sense that an annual rate paid more frequently than annually ends up higher, right?

The formula for Future Value of deposits made at beginning of period is:
F = D * {[(1 + i)^(n + 1) - 1] / i - 1}
So, to calculate D:
D = F / {[(1 + i)^(n + 1) - 1] / i - 1} : see NOTE below
D = deposit required (?)
F = future value (50000)
n = number of periods (3)
i = interest per period [(1 + .11/4)^4 - 1] = .11462
D = 50000 / [(1.11462^4 - 1) / .11462 - 1] = 13,362.60

And your "account" will look like:
Code:
DATE       DEPOSIT     INTEREST     BALANCE
Jan.01/1   13,362.60        .00     13,362.60
Jan.01/2   13,362.60   1,531.64     28,256.84
Jan.01/3   13,362.60   3,238.84     44,858.28
Dec.31/3               5,141.72     50,000.00

NOTE: Can be simplified to:
D = F * i / [(1 + i) * ((1 + i)^n - 1)]
Usually a good idea to make (1 + i) a variable, like let r = 1 + i; then the equation is easier to handle:
D = F * i / [r * (r^n - 1)] where r = 1 + i

3.
## Re: Financial Maths - Future Value Calculation

To avoid searching for the right factor:
50000 = R(1.0275)^12 + R(1.0275)^8 + R(1.0275)^4
which gives R = 13362.60

4. ## Re: Financial Maths - Future Value Calculation

Originally Posted by bjhopper
To avoid searching for the right factor: 50000 = R(1.0275)^12 + R(1.0275)^8 + R(1.0275)^4, which gives R = 13362.60

True BJ... but lotsa typing if for 25 years! But it's a good way to show a student how the formula is arrived at:
F = Future value (50000)
D = Deposit per period (?)
n = number of periods (3)
r = 1 + rate per period (1.0275^4)
Code:
-F     = -Dr - Dr^2 - Dr^3 - ....... - Dr^(n-1) - Dr^n
 Fr    =        Dr^2 + Dr^3 + ....... + Dr^(n-1) + Dr^n + Dr^(n+1)
------------------------------------------------------------------
Fr - F = -Dr +   0  +   0  + ....... +    0     +   0   + Dr^(n+1)

F(r - 1) = Dr^(n + 1) - Dr
F(r - 1) = D[r^(n + 1) - r]
D = F(r - 1) / [r^(n + 1) - r]
D = F(r - 1) / [r(r^n - 1)]
SO:
D = 50000(1.0275^4 - 1) / [1.0275^4((1.0275^4)^3 - 1)] = 13362.60
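Both the closed form and the year-by-year account in the replies can be checked with a few lines (a sketch; variable names are mine):

```python
# Check of the simplified formula D = F(r - 1) / [r (r^n - 1)] with
# r = 1.0275^4 (the effective annual growth factor for 11% p.a.
# compounded quarterly), plus a simulation of the account.
F, n = 50000.0, 3
r = (1 + 0.11 / 4) ** 4                  # ~1.11462, i.e. ~11.462% effective
D = F * (r - 1) / (r * (r ** n - 1))
assert abs(D - 13362.60) < 0.01          # matches the textbook answer

# Three deposits at the beginning of each year, each growing at rate r:
balance = 0.0
for _ in range(n):
    balance = (balance + D) * r
assert abs(balance - F) < 1e-4           # the account reaches R50 000
```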
2014-07-25T16:37:52
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/algebra/195807-financial-maths-future-value-calculation.html", "openwebmath_score": 0.3982069790363312, "openwebmath_perplexity": 4510.916382957243, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9877587272306606, "lm_q2_score": 0.8519528057272543, "lm_q1q2_score": 0.8415238190457429 }
http://math.stackexchange.com/questions/67988/an-inequality-involving-bell-numbers-b-n2-leq-b-n-1b-n1
# An Inequality Involving Bell Numbers: $B_n^2 \leq B_{n-1}B_{n+1}$ The following inequality came up while trying to resolve a conjecture about a certain class of partitions (the context is not particularly enlightening): $$B_n^2 \leq B_{n-1}B_{n+1}$$ for $n \geq 1$, where $B_n$ denotes the $n$th Bell number (i.e. the number of partitions of an $n$-element set). I ran this inequality through Maple for values of $n$ up to 500 or so and did not find a counterexample. There is no nice closed form for $B_n$, so I was hoping to prove this inequality combinatorially rather than analytically (particularly since the given inequality is just the simplest version of a more general inequality I hope to establish). Let $P_n$ be the collection of (not number of) all partitions of an $n$-element set. My approach was to find an injection from $P_n \times P_n$ into $P_{n-1} \times P_{n+1}$. Suppose we are to map $(C_1, D_1)$ to $(C_2, D_2)$ and suppose, for convenience, our ground set is the integers from $1$ to $n$. A natural seeming choice was to choose $C_2$ to be the partition $C_1$ with the element $n$ removed. Since removing $n$ will map many partitions in $P_n$ to the same partition in $P_{n-1}$, we would need somehow to choose $D_2$ in such a way as to retain information about where $n$ was in $C_1$. We have the new element $n+1$ to work with, so perhaps it can be used to "tag" the partitions in some unique way. I've stressed a combinatorial approach in this post, but I would greatly appreciate any techniques that might be of use in establishing (or refuting) this inequality. - A sequence $c_n$ satisfying $c_n^2 \le c_{n-1} c_{n+1}$ for all $n$ is called discrete log-convex, for what it's worth. –  Sasha Sep 27 '11 at 19:32 This answer is courtesy of Bouroubi (paraphrased): Theorem. Define $B(x)=e^{-1}\sum_{k=0}^\infty k^x k!^{-1}$. Dobinski's formula states $B(n)=B_n$ is the $n$th Bell number. Now we let $\frac{1}{p}+\frac{1}{q}=1$. 
Then $$B(x+y)\le B(px)^{1/p}B(qy)^{1/q}.$$ Proof. Let $Z$ be the discrete random variable with density function (under counting measure) $$P(Z=k)=\frac{1}{e}\frac{1}{k!}.$$ Observe $\mathbb{E}(Z^x)=B(x)$. Hölder's inequality gives $\mathbb{E}(Z^{x+y})\le\mathbb{E}(Z^{px})^{1/p}\mathbb{E}(Z^{qy})^{1/q}$, which proves the theorem.

Corollary. The sequence $B_n$ is logarithmically convex, or equivalently $$B_n^2\le B_{n-1}B_{n+1}.$$ Proof. Set $x=\frac{n-1}{2}$, $y=\frac{n+1}{2}$, and $p=q=2$ in the original theorem.

Not a combinatorial proof, but straightforward given a couple of powerful premises at least. I'm curious, what's the general formula you're trying to establish? -

(+1) Very nice proof, and thanks for the reference. – Sasha Sep 27 '11 at 19:44

"Probability mass function", or "probability density function with respect to counting measure", or just "probability distribution", would be a term less likely to be misunderstood than "probability distribution function", because of the convention of using that latter term to refer to the cumulative distribution function. – Michael Hardy Sep 27 '11 at 20:18

@Michael: Right, right. I was going to change that term from the reference but spaced off. – anon Sep 27 '11 at 20:24

I have a suspicion that finding an injection of the sort proposed in the question is possible, and I wonder whether, in the case where $x$ and $y$ and $px$ and $qy$ are integers, the same thing could be done in order to prove the inequality $B(x+y)\le B(px)^{1/p}B(qy)^{1/q}$. – Michael Hardy Sep 27 '11 at 20:24

@Michael: I'm still looking for an injection proof, but I'm unsure if the idea is powerful enough to prove that generalized inequality. – anon Sep 27 '11 at 20:43

Here's a combinatorial argument. Let $S_n$ denote the total number of sets over all partitions of $\{1, 2, \ldots, n\}$, so that $A_n = \frac{S_n}{B_n}$ is the average number of sets in a partition of $\{1, 2, \ldots, n\}$. First, $A_n$ is increasing.
Each partition of $\{1, 2, \ldots, n\}$ consisting of $k$ sets maps to $k$ partitions consisting of $k$ sets (if $n+1$ is placed in an already-existing set) and one partition consisting of $k+1$ sets (if $n+1$ is placed in a set by itself) out of the partitions of $\{1, 2, \ldots, n+1\}$. Thus partitions with more sets reproduce more partitions of their size as well as one larger partition, raising the average number of sets as you move from $n$ elements to $n+1$ elements. Next, the inequality to be proved is equivalent to the fact that $A_n$ is increasing. Separate the partitions counted by $B_{n+1}$ into (1) those that have a set consisting of the single element $n+1$ and (2) those that don't. It should be clear that there are $B_n$ of the former. Also, there are $S_n$ of the latter because each partition in group (2) is formed by adding $n+1$ to a set in a partition of $\{1, 2, \ldots, n\}$. Thus $B_{n+1} = B_n + S_n$. The inequality to be shown can then be reformulated as $$\frac{B_{n+1}}{B_n} \geq \frac{B_n}{B_{n-1}} \Longleftrightarrow 1 + \frac{S_n}{B_n} \geq 1 + \frac{S_{n-1}}{B_{n-1}} \Longleftrightarrow A_n \geq A_{n-1},$$ and we know the last inequality holds because we've already shown that $A_n$ is increasing. - Some more references, which will give you some additional proofs if you're interested in tracking them down. (Added: The Bender and Canfield paper mentioned below gives this bound as well.) "The log-convexity of the Bell numbers was first obtained by Engel ["On the average rank of an element in a filter of the partition lattice," Journal of Combinatorial Theory Series A 65 (1994) 67-78] . Then Bender and Canfield ["Log-concavity and related properties of the cycle index polynomials," Journal of Combinatorial Theory Series A 74 (1996) 57-70] gave a proof by means of the exponential generating function of the Bell numbers. Another interesting proof is to use Dobinski formula [as in anon's answer]. 
We can also obtain the log-convexity of the Bell numbers by Proposition 2.3 and the well-known recurrence $$B_{n+1} = \sum_{k=0}^n \binom{n}{k} B_k."$$ Liu and Wang's proposition 2.3 (due to Davenport and Pólya) says If $\{x_n\}$ is log-convex, and $z_n = \sum_{k=0}^n \binom{n}{k} x_k$, then $\{z_n\}$ is log-convex as well. While at first this may seem circular when applied to the Bell numbers, it's not. Proposition 2.3 says that if $x_{k-1}x_{k+1} \geq x_k^2$ for $1 \leq k \leq n-1$ then $z_{k-1}z_{k+1} \geq z_k^2$ for $1 \leq k \leq n-1$. With the Bell number recurrence, then, we have $B_{k-1}B_{k+1} \geq B_k^2$ for $1 \leq k \leq n-1$ implying $B_{k}B_{k+2} \geq B_{k+1}^2$ for $1 \leq k \leq n-1$. Since we can easily check that $B_0 B_2 \geq B_1^2$, this gives us an inductive proof of the log-convexity of the Bell numbers. • (Added 2) Canfield, in "Engel's inequality for Bell numbers" [Journal of Combinatorial Theory Series A 72 (1995) 184-187], discusses this inequality as well and gives the same proof as in my other answer. -
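For what it's worth, the recurrence, the log-convexity, the identity $B_{n+1}=B_n+S_n$ from the combinatorial answer, and a truncation of Dobinski's formula can all be cross-checked by brute force for small $n$ (the enumeration code below is an illustration, not taken from any of the cited papers):

```python
from math import comb, exp, factorial

# Bell numbers from the recurrence B_{n+1} = sum_k C(n,k) B_k.
B = [1]
for n in range(20):
    B.append(sum(comb(n, k) * B[k] for k in range(n + 1)))
assert B[:9] == [1, 1, 2, 5, 15, 52, 203, 877, 4140]
assert all(B[n] ** 2 <= B[n - 1] * B[n + 1] for n in range(1, 20))  # log-convex

def partitions(elems):
    """Yield every set partition of the list `elems` as a list of blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for p in partitions(rest):
        for i in range(len(p)):                     # first joins an existing block
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p                         # or forms a block by itself

S = {}
for n in range(1, 9):
    parts = list(partitions(list(range(1, n + 1))))
    assert len(parts) == B[n]                       # Bell number B_n
    S[n] = sum(len(p) for p in parts)               # total number of blocks
assert all(B[n + 1] == B[n] + S[n] for n in range(1, 9))
A = [S[n] / B[n] for n in range(1, 9)]
assert all(A[i] < A[i + 1] for i in range(7))       # average block count grows

def dobinski(n, terms=120):                         # B(n) = e^{-1} sum k^n / k!
    return exp(-1) * sum(k ** n / factorial(k) for k in range(terms))
assert all(abs(dobinski(n) - B[n]) / B[n] < 1e-9 for n in range(1, 12))
```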
2014-12-18T21:57:34
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/67988/an-inequality-involving-bell-numbers-b-n2-leq-b-n-1b-n1", "openwebmath_score": 0.9466351866722107, "openwebmath_perplexity": 233.87458381823848, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9877587236271349, "lm_q2_score": 0.8519528076067262, "lm_q1q2_score": 0.8415238178321739 }
https://math.stackexchange.com/questions/552780/is-tran-geq-nn-deta-for-a-symmetric-positive-definite-matrix-a-in-m
# Is $(tr(A))^n\geq n^n \det(A)$ for a symmetric positive definite matrix $A\in M_{n\times n} (\mathbb{R})$

If $A\in M_{n\times n} (\mathbb{R})$ is a positive definite symmetric matrix, the question is to check whether: $$(tr(A))^n\geq n^n \det(A)$$

What I have tried: As $A\in M_{n\times n} (\mathbb{R})$ is a positive definite symmetric matrix, all its eigenvalues are positive. Let $a_i>0$ be the eigenvalues of $A$; then I have $tr(A)=a_1+a_2+\dots +a_n$ and $\det(A)=a_1a_2\dots a_n$. For the given inequality to be true, I should have $(tr(A))^n\geq n^n \det(A)$, i.e., $\big( \frac{tr(A)}{n}\big)^n \geq \det(A)$, i.e., $\big( \frac{a_1+a_2+\dots+a_n}{n}\big)^n \geq a_1a_2\dots a_n$. I guess this should be true as a more general form of the A.M-G.M inequality, which for two terms says $\big(\frac{a+b}{2}\big)^{2}\geq ab$ where $a,b >0$. So, I believe $(tr(A))^n\geq n^n \det(A)$ should be true. Please let me know if I am correct, or try providing some hints if I am wrong.

EDIT: As everyone says that I am correct now, I would like to "prove" the result which I have used, namely the generalization of the A.M-G.M inequality. I tried but could not see this result in detail. So, I would be thankful if someone can help me in this case.

• Good observation. And yes, you are correct. – Shuchang Nov 5 '13 at 11:33
• I see no reason why your argument could be wrong. – Han de Bruijn Nov 5 '13 at 11:39
• As everyone says that I am correct now, I would like to "prove" the result which I have used, namely the generalization of the A.M-G.M inequality. Please help me to see that. – user87543 Nov 5 '13 at 11:49
• There are some hints and links at math.stackexchange.com/questions/483842/… – Gerry Myerson Nov 5 '13 at 11:52
• @GerryMyerson: Thank you, Sir. – user87543 Nov 5 '13 at 11:54

This is really a Calculus problem! Indeed, let us look for the maximum of $h(x_1,\dots,x_n)=x_1^2\cdots x_n^2$ on the sphere $x_1^2+\cdots+x_n^2=1$ (a compact set, hence the maximum exists).
First note that if some $x_i=0$, then $h(x)=0$ which is obviously the minimum. Hence we look for a conditioned critical point with no $x_i=0$. For this we compute the gradient of $h$, namely $$\nabla h=(\dots,2x_iu_i,\dots),\quad u_i=\prod_{j\ne i}x_j^2,$$ and to be a conditioned critical point (Lagrange) it must be orthogonal to the sphere, that is, parallel to $x$. This implies $u_1=\cdots=u_n$, and since no $x_i=0$ we conclude $x_1=\pm x_i$ for all $i$. Since $x$ is in the sphere, $x_1^2+\cdots+x_1^2=1$ and $x_1^2=1/n$. At this point we get the maximum of $h$ on the sphere: $$h(x)=x_1^{2n}=1/n^n.$$ And now we can deduce the bound. Let $a_1,\dots,a_n$ be positive real numbers and write $a_i=\alpha_i^2$. The point $z=(\alpha_1,\dots,\alpha_n)/\sqrt{\alpha_1^2+\cdots+\alpha_n^2}$ is in the sphere, hence $$\frac{1}{n^n}\ge h(z)=\frac{\alpha_1^2\cdots\alpha_n^2}{(\alpha_1^2+\cdots+\alpha_n^2)^n}=\frac{a_1\cdots a_n}{(a_1+\cdots+a_n)^n},$$ and we are done. For convenience, we use the notation $A\succ 0$ to indicate that a symmetric matrix $A$ is positive definite. We can see the inequality $(tr(A))^n\geq n^n \det(A),\;\forall A\succ 0$ as $$\frac{1}{n}\mathrm{trace}(A)\geq \sqrt[n\,]{\det(A)}, \quad \forall A\succ 0.$$ Note that, if $A=\mathrm{diag}(\lambda_1,\ldots,\lambda_i,\ldots,\lambda_n)$, that is, $$A= \begin{pmatrix} \lambda_{1} & \cdots & 0 & \cdots & 0 \\ \vdots & \ddots &\vdots & &\vdots \\ 0 &\cdots & \lambda_i & \cdots & 0 \\ \vdots & &\vdots &\ddots &\vdots \\ 0 &\cdots &0 &\cdots &\lambda_n \end{pmatrix}$$ we have $$\frac{1}{n}\mathrm{trace}(A)\geq \sqrt[n\,]{\det(A)} \Longleftrightarrow \frac{\lambda_1+\ldots+\lambda_i+\ldots+\lambda_n}{n}\geq \sqrt[n\,]{\lambda_1\cdot\ldots\cdot\lambda_i\cdot\ldots\cdot\lambda_n}$$ So we can see the inequality in question as a generalization of the inequality between the arithmetic mean and geometric mean. See a proof using forward–backward induction here. 
For every $n\times n$ real symmetric matrix $A$, the eigenvalues are real and the eigenvectors can be chosen such that they are orthogonal to each other. Thus a real symmetric matrix $A$ can be decomposed as $A=Q\Lambda {Q}^{T}$ where $\Lambda$ is a diagonal matrix whose entries are the eigenvalues of $A$, and $Q$ is an orthogonal matrix. For an orthogonal matrix we have $Q^{-1}=Q^{T}$ and $QQ^T=I$. With these observations the required inequality is the result of '$\det$', '$\mathrm{trace}$' properties and algebraic manipulations: \begin{align} \frac{\mathrm{trace}(A)}{n} =& \frac{\mathrm{trace}(Q\Lambda Q^T)}{n} \\ =& \frac{\mathrm{trace}(Q^TQ\Lambda)}{n} \\ =& \frac{\mathrm{trace}(\Lambda)}{n} \\ =& \frac{\lambda_1+\ldots+\lambda_i+\ldots+\lambda_n}{n} \\ \\ \geq& \sqrt[n\,]{\lambda_1\cdot\ldots\cdot\lambda_i\cdot\ldots\cdot\lambda_n} \\ =& \sqrt[n]{\det(\Lambda)} \\ =& \sqrt[n]{\det(Q^TQ\Lambda)} \\ =& \sqrt[n]{\det(Q\Lambda Q^T)} \\ =& \sqrt[n]{\det(A)} \end{align}
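A quick numerical spot check of $(\mathrm{trace}(A)/n)^n \ge \det(A)$ on random positive definite matrices, a sketch in pure Python restricted to $3\times 3$ so the determinant can be written out by hand:

```python
import random

# Random symmetric positive definite matrices of the form A = M^T M + I.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(A):
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

random.seed(0)
for _ in range(1000):
    M = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    Mt = [list(row) for row in zip(*M)]
    A = matmul(Mt, M)                       # M^T M is positive semidefinite
    for i in range(3):
        A[i][i] += 1.0                      # adding I makes it positive definite
    tr = A[0][0] + A[1][1] + A[2][2]
    assert (tr / 3) ** 3 >= det3(A) - 1e-12
```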
2019-10-15T18:36:34
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/552780/is-tran-geq-nn-deta-for-a-symmetric-positive-definite-matrix-a-in-m", "openwebmath_score": 0.9995983242988586, "openwebmath_perplexity": 182.2206098411846, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9877587239874877, "lm_q2_score": 0.8519528038477825, "lm_q1q2_score": 0.841523814426248 }
https://math.stackexchange.com/questions/2432238/prove-that-for-any-positive-integer-n-the-following-sum-is-a-perfect-square
# Prove that for any positive integer n, the following sum is a perfect square

Prove that for any positive integer $n$, the following sum is a perfect square: $1+8+16+...+8n$.

I have tried to solve the problem by trying to find the equation that would give me the desired values, but since the values differ by so much with each increment of $n$, I think the expression would be exponential. However, all the equations I have tried stop working after some number $n$. What is the equation, prove it using induction, and how should I approach future problems like this asking me to find a term with an expression?

• When do we get the number $1$ from $8n$? – Dr. Sonnhard Graubner Sep 16 '17 at 20:44
• It should be $0$, I believe? – K Split X Sep 16 '17 at 20:44
• Perhaps the $1$ is not part of the sequence. For example, perhaps the OP means the sum $$1+\sum_{k=1}^n 8k$$. – Franklin Pezzuti Dyer Sep 16 '17 at 20:45
• We define $0$ as a perfect square too, no? – K Split X Sep 16 '17 at 20:45
• @Nilknarf that makes a lot of sense actually – George Coote Sep 16 '17 at 20:45

You are saying that the value of the sum $$1+\sum_{k=1}^n 8k$$ is always a perfect square. However, since $$\sum_{k=1}^n k=\frac{n(n+1)}{2}$$ we have $$1+\sum_{k=1}^n 8k=1+8\sum_{k=1}^n k$$ $$1+\sum_{k=1}^n 8k=1+\frac{8n(n+1)}{2}$$ $$1+\sum_{k=1}^n 8k=1+4n(n+1)$$ $$1+\sum_{k=1}^n 8k=1+4n^2+4n$$ $$1+\sum_{k=1}^n 8k=(2n+1)^2$$ and so it always has the value $(2n+1)^2$, which is obviously a perfect square.

Hint: $$\sum_{i=1}^n i = \frac {n(n+1)} 2$$ $$1 + \sum_{i=1}^n 8i$$

You will get $$1+\sum_{k=1}^n8k=1+4n(n+1)$$; can you prove that this is a perfect square?
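The identity $1+\sum_{k=1}^n 8k=(2n+1)^2$ derived above is easy to confirm by direct computation:

```python
# Direct check that 1 + 8 + 16 + ... + 8n equals (2n + 1)^2.
for n in range(1, 1001):
    total = 1 + sum(8 * k for k in range(1, n + 1))
    assert total == (2 * n + 1) ** 2
```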
2019-10-15T11:35:51
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2432238/prove-that-for-any-positive-integer-n-the-following-sum-is-a-perfect-square", "openwebmath_score": 0.8405057191848755, "openwebmath_perplexity": 177.7706692585558, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.987758723627135, "lm_q2_score": 0.8519528019683105, "lm_q1q2_score": 0.8415238122627797 }
https://www.physicsforums.com/threads/alternating-series-testing-for-convergence.866913/
# I Alternating Series, Testing for Convergence

1. Apr 13, 2016

### Staff: Mentor

The criterion for testing for convergence with the alternating series test, according to my book, applies to $$\sum(-1)^{n-1}b_n$$ with $b_n>0$, $b_{n+1} \leq b_n$ for all $n$, and $\lim_{n\to\infty}b_n = 0$.

My question is about the criteria. I'm running into several homework problems where $b_n$ is not always greater than $b_{n+1}$, such as the following: $\sum(-1)^n \sin(6\pi/n)$. This sequence is also not always greater than zero ($n=4$ and $n=5$ make it negative), nor is the sign factor $(-1)^{n-1}$ like the criteria say, but the series converges anyway. From $n=6$ to $n=12$, it appears that $b_n < b_{n+1}$. But my criteria say $b_{n+1} \leq b_n$ for all $n$. Am I missing something? What's with these apparent inconsistencies?

2. Apr 13, 2016

### axmls

That's odd. My textbook also says for all $n$, however I checked Paul's notes online and he specifically points out that it only needs to be eventually decreasing, and your sequence does eventually strictly decrease. I would go with Paul's notes. After all, suppose that the sequence $a_n$ is not decreasing for $1, 2, ... N$ but that it is decreasing for all $n > N$ (and further suppose that $\lim_{n \to \infty} a_n = 0$). Then certainly we could simply write the sum as $$\sum _{n=1} ^\infty (-1)^n a_n = \sum_{n=1} ^N (-1)^n a_n + \sum_{n = N+1} ^\infty (-1)^n a_n$$ Then certainly the first term is finite, and the second term converges by the alternating series test. Of course, you'd have to show that your sequence is in fact strictly decreasing after some $N$, but intuitively that's certainly the case for $\sin(x)$ as $x \to 0$. In this case, I'd say your function is strictly decreasing for $n \geq 12$. I'd love to hear someone else's opinion, though. It's quite possible that the textbook intends to say this: get the series in a form such that it is always decreasing, even if you have to split it up into some finite sum and an infinite sum.

3.
Apr 13, 2016 ### pwsnafu If you can show $\sum_{n=13}^\infty (-1)^n b_n$ converges, then trivially $\sum_{n=1}^\infty (-1)^n b_n$ also converges because all you are doing is adding a finite number of terms. Also $(-1)^{n} = - (-1)^{n-1}$ so it's the same thing. 4. Apr 13, 2016 ### Staff: Mentor That's what I figured. It wouldn't make sense otherwise. It's possible. I'll read it over again and see if I missed something. Okay. That was my train of thought, but I didn't know if there was some crucial difference I was missing. Thanks. 5. Apr 13, 2016 ### jbunniii The behavior of the first $N$ terms (where $N$ is any fixed finite number) has no effect on whether the series converges or diverges. (Of course, those terms do affect the value to which the series converges, if it converges.) Observe that $\sin$ is nonnegative and monotonically increasing on $[0,\pi/2]$. For $n \geq 12$, we have $6\pi/n \in [0,\pi/2]$. This means that we are in the domain where $\sin$ is monotonically increasing, so $6\pi/(n+1) < 6\pi/n$ implies $0 < \sin(6\pi/(n+1)) < \sin(6\pi/n)$ for $n \geq 12$. Also, $$\lim_{n \to \infty}\sin(6\pi/n) = \sin\left(\lim_{n \to \infty} 6\pi/n\right) = \sin(0) = 0$$ where we can bring the limit inside $\sin$ because $\sin$ is continuous. This means that for $n \geq 12$, the series is alternating and so the alternating series test applies. Putting it slightly more formally: \begin{aligned} \sum_{n=1}^{\infty} (-1)^n \sin(6\pi/n) &= \sum_{n=1}^{11}(-1)^n \sin(6\pi/n) + \sum_{n=12}^{\infty}(-1)^n\sin(6\pi/n) \\ &= \sum_{n=1}^{11}(-1)^n \sin(6\pi/n) - \sum_{m=1}^{\infty}(-1)^{m}\sin(6\pi/(m+11)) \\ \end{aligned} In the last expression, the first sum is taken over finitely many terms, so of course it converges. The second sum is an alternating series which converges as discussed above. Last edited: Apr 13, 2016
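The eventual monotonicity of $b_n=\sin(6\pi/n)$ and the alternating-series error bound $|S-S_N|\le b_{N+1}$ can be checked numerically (a sketch; the cutoff at $n=500$ is arbitrary):

```python
from math import pi, sin

# b_n = sin(6*pi/n): for n >= 12 the argument 6*pi/n lies in (0, pi/2],
# where sin is increasing, so b_n is positive and strictly decreasing.
b = [sin(6 * pi / n) for n in range(12, 500)]
assert all(x > 0 for x in b)
assert all(b[i] > b[i + 1] for i in range(len(b) - 1))

# Once the terms decrease, every later partial sum stays within b_{N+1}
# of S_N -- the standard alternating-series remainder bound.
partials, s = [], 0.0
for n in range(12, 500):
    s += (-1) ** n * sin(6 * pi / n)
    partials.append(s)
N = 100
assert all(abs(p - partials[N]) <= b[N + 1] + 1e-12 for p in partials[N + 1:])
```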
2017-10-22T03:21:20
{ "domain": "physicsforums.com", "url": "https://www.physicsforums.com/threads/alternating-series-testing-for-convergence.866913/", "openwebmath_score": 0.9870030879974365, "openwebmath_perplexity": 448.8092717682309, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9518632288833652, "lm_q2_score": 0.8840392802184581, "lm_q1q2_score": 0.8414844837284676 }
https://projecteuler.chat/viewtopic.php?f=17&t=7255&view=print
Page 1 of 1 ### Calculating modular inverses of p mod $2^p$ Posted: Thu Mar 11, 2021 8:51 am Let p an odd prime. We want to calculate $I(p)=p^{-1}\mod 2^p$. If we use EXGCD, it takes $O(\log_22^p)=O(p)$ time, which is not efficient enough. Some small values are 3, 13, 55, 931, 3781, 61681, 248347, 4011943, 259179061...... It seems that $I(p)$ is slightly smaller than $2^{p-1}$, but I didn't find more explicit patterns, nor did I find the sequence on OEIS. So is there a closed form of $I(p)$, or at least can we calculate that more efficiently? Any idea would be appreciated! ### Re: Calculating modular inverses of p mod $2^p$ Posted: Thu Mar 11, 2021 2:04 pm After some experimentation it turns out that if $p=2k+1$ then the inverse of $p$ mod $2^p$ is $\frac{k\cdot2^p + 1}{p}$. Clearly if that fraction is actually an integer, then it works as the inverse of $p$ mod $2^p$. To prove that it is an integer for prime $p$, you can use Fermat's little theorem ($2^p\equiv2 \bmod p$): We have: $$k\cdot2^p + 1 = k\cdot2 + 1 = p \equiv 0 \mod p$$ so for prime $p=2k+1$ the number $k\cdot2^p + 1$ is divisible by $p$, and that division produces the inverse you want. ### Re: Calculating modular inverses of p mod $2^p$ Posted: Wed Mar 17, 2021 10:14 am jaap wrote: Thu Mar 11, 2021 2:04 pm After some experimentation it turns out that if $p=2k+1$ then the inverse of $p$ mod $2^p$ is $\frac{k\cdot2^p + 1}{p}$. Which is $2^{p-1} - \frac{2^{p-1} - 1}{p}$, explaining the observation that it's slightly smaller than $2^{p-1}$. A variant approach is to use the form of Fermat's little theorem that says that $2^{p-1} \equiv 1 \pmod p$, so that $\frac{2^{p-1} - 1}{p}$ is an integer. We have a $-1$ rather than a $+1$ in the numerator, so square the numerator to get $q = \frac{(2^{p-1} - 1)^2}{p} = \frac{2^{2p-2} - 2^p + 1}{p} = \frac{2^p (2^{p-2} - 1) + 1}{p}$. Since $p > 2$, $q$ is a multiplicative inverse of $p$ modulo $2^p$. 
The big disadvantage with respect to jaap's approach is that $q$ isn't inherently reduced modulo $2^p$.
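A quick check of both closed forms (my sketch, not from the thread; the three-argument `pow(p, -1, m)` modular inverse needs Python 3.8+):

```python
# Sketch (not from the thread): verify both closed forms against Python's
# built-in modular inverse pow(p, -1, m).

def inverse_jaap(p):
    """jaap's formula: for odd prime p = 2k + 1, p^(-1) mod 2^p = (k*2^p + 1)/p."""
    k = (p - 1) // 2
    return (k * 2**p + 1) // p

def inverse_squared(p):
    """The variant q = (2^(p-1) - 1)^2 / p, not inherently reduced mod 2^p."""
    return (2**(p - 1) - 1)**2 // p

for p in (3, 5, 7, 11, 13, 17, 19):
    reference = pow(p, -1, 2**p)       # EXGCD-style reference answer
    assert inverse_jaap(p) == reference
    assert inverse_squared(p) % 2**p == reference

print(inverse_jaap(3), inverse_jaap(5), inverse_jaap(7))  # 3 13 55
```

This reproduces the small values 3, 13, 55, ... quoted in the question.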
https://www.jiskha.com/questions/1510110/abcd-is-a-quadrilateral-with-angle-abc-a-right-angle-the-point-d-lies-on-the
ABCD is a quadrilateral with angle ABC a right angle. The point D lies on the perpendicular bisector of AB. The coordinates of A and B are (7, 2) and (2, 5) respectively. The equation of line AD is y = 4x − 26. Find the area of quadrilateral ABCD.

1. Equation of the perpendicular bisector of line segment AB: y = 5x/3 − 4
Coordinates of point D = (66/7, 82/7)
Gradient of BC = 5/3
Coordinates of point C = (8, 15)
These are the right answers, but how do I find the area of the quadrilateral? Apparently the answer is 629/14 square units.

2. There are several ways to do this:

1. If you join AC, you have a right-angled triangle ABC. Find the lengths of AB and BC and use area = (1/2) base × height for its area. Find angle D by using the slopes of AD and CD, find the lengths of CD and AD, then area of triangle ACD = (1/2)(CD)(AD) sin D.

2. You could use Heron's formula to find the area of triangle ACD, then add on the area of the right-angled triangle.

3. The simplest and quickest way to find the area of any convex polygon is this: list the coordinates of the quadrilateral in order, starting with any point, in a column, and repeat the first point you started with.
Area = (1/2)(sum of the diagonal down-products − sum of the diagonal up-products)

For yours:
7, 2
66/7, 82/7
8, 15
2, 5
7, 2

Area = (1/2)[(7(82/7) + 15(66/7) + 40 + 4) − (2(66/7) + 8(82/7) + 30 + 35)]
     = (1/2)[1872/7 − 1243/7]
     = (1/2)(629/7)
     = 629/14

The last method works for any polygon. Listing the points in a clockwise rotation will result in the negative of the above, so just take the absolute value if you ignore the rotation. The important thing is to list the points in sequence, and to repeat the first one you started with.

3. For the last method, I meant to say: "list the coordinates of the quadrilateral counter-clockwise starting with any point in a column."

4. I am not sure how you got 40 + 4 and 30 + 35. Btw, thanks for your help.

5. Never mind, I got it. It's just like matching.
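The shoelace computation in the answer can be checked with a short script using exact fractions (my sketch, not part of the original thread):

```python
# Sketch (not from the thread): the shoelace formula applied to the
# quadrilateral traversed in order A(7,2), D(66/7, 82/7), C(8,15), B(2,5).
from fractions import Fraction

def shoelace_area(points):
    """Area of a simple polygon from its vertices, listed in order."""
    n = len(points)
    total = 0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]   # wrap-around repeats the first point
        total += x1 * y2 - x2 * y1     # down-product minus up-product
    return abs(total) / 2              # abs() ignores the orientation

F = Fraction
quad = [(F(7), F(2)), (F(66, 7), F(82, 7)), (F(8), F(15)), (F(2), F(5))]
area = shoelace_area(quad)
print(area)  # 629/14
```

Exact rational arithmetic reproduces 629/14 square units with no rounding.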
https://tex.stackexchange.com/questions/434771/lining-up-nonconsecutive-multi-line-equations/434775
# Lining up nonconsecutive multi-line equations

I have the following code:

```latex
\documentclass[fleqn, 12pt]{article}
\usepackage{amsmath,amsfonts,amssymb}
\usepackage{graphicx}
\graphicspath{ {./images/} }
\setlength{\parskip}{\baselineskip}%
\setlength{\parindent}{0pt}%
\begin{document}
\raggedright
From the model,
\begin{align}
&E( Y_i - \bar{ Y } ) = E[ ( \beta_0 + \beta_1 X_i ) - ( \beta_0 - \beta_1 \bar{ X } ) ] \nonumber \\
&( \text{Since the first normal equation gives} \ \ Y_i = \beta_0 + \beta_1 X_i \ \ \text{and} \ \ \bar{Y} = \beta_0 + \beta_1 \bar{X}) \nonumber \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ = \beta_1( X_i - \bar{X} ) \nonumber
\end{align}
\end{document}
```

Notice that the first and third lines of the `align` environment are separated by the comment line in the middle. Despite this, I want the third line to line up at the `=` sign with the first line. This is why I put in the run of `\ ` hard spaces. Is it possible to somehow do this using `&`, whilst maintaining the line in the middle as it is? Or, perhaps, some other way to automatically have them align?

EDIT: The reason I had to use `&` to put the middle line on its own line is that, if I put it on the same line as the first equation, it would run past the page margin.

Thanks.

• Why not simply put "since the first normal equation gives $Y_i = \beta_0 + \beta_1 X_i$ and $\bar{Y} = \beta_0 + \beta_1 \bar{X}$." after the display? There's little added value (if any) from putting the comment in the middle. – egreg Jun 3 '18 at 14:07
• @egreg You mean after the entire multi-line equation? It's mostly for pedagogical reasons. It clearly associates the comment with the line of the equation to which it refers. – The Pointer Jun 3 '18 at 14:11
• And it hinders legibility and understandability. – egreg Jun 3 '18 at 14:15

Like this?

```latex
\documentclass[fleqn, 12pt]{article}
\usepackage{amsmath,amsfonts,amssymb}
\usepackage{graphicx}
\graphicspath{ {./images/} }
\setlength{\parskip}{\baselineskip}%
\setlength{\parindent}{0pt}%
\begin{document}
\raggedright
From the model,
\begin{align}
E( Y_i - \bar{ Y } ) &= E[ ( \beta_0 + \beta_1 X_i ) - ( \beta_0 - \beta_1 \bar{ X } ) ] \nonumber \\
\rlap{(Since the first normal equation gives $Y_i = \beta_0 + \beta_1 X_i$ and
  $\bar{Y} = \beta_0 + \beta_1 \bar{X}$)}
  \phantom{E( Y_i - \bar{ Y } )} \nonumber \\
& = \beta_1( X_i - \bar{X} ) \nonumber
\end{align}
\end{document}
```

• Yes, that's it! Thank you for the assistance. – The Pointer Jun 3 '18 at 14:04
• @Mico Oh, my apologies. You're right, I should have waited. I just had an answer in mind and Stefan happened to exemplify it precisely. Again, my apologies. I will avoid this in the future. – The Pointer Jun 3 '18 at 14:16

I would put the explanatory text in a \parbox immediately below the material to the right of the first = symbol. That way, it's immediately clear that the explanatory text pertains to the material on the preceding line.

```latex
\documentclass[fleqn, 12pt]{article}
\usepackage{amsmath,amsfonts,amssymb}
\DeclareMathOperator{\E}{E} % <-- new (expectation operator)
\setlength{\parskip}{\baselineskip}
\setlength{\parindent}{0pt}
\begin{document}
\raggedright
From the model,
\begin{align*}
\E( Y_i - \bar{Y} ) &= \E[(\beta_0+\beta_1 X_i) - (\beta_0-\beta_1\bar{X})] \\
&\qquad\parbox[t]{0.5\textwidth}{(since the first normal equation gives
  $Y_i = \beta_0 + \beta_1 X_i$ and $\bar{Y} = \beta_0 + \beta_1\bar{X}$)}\\
&= \beta_1(X_i-\bar{X}) \,.
\end{align*}
\end{document}
```

• Ahh, that looks even better than what I was trying to do! I think I will try this.
Thank you for the answer. – The Pointer Jun 3 '18 at 14:12

I would simply set the comment after the display (using "because" instead of "since"). It's more legible and in line with standard mathematical practice; to the contrary, a comment in the middle raises doubts about what it refers to.

Anyway, this is how you can do it; if the comment doesn't fit on one line it will wrap, as shown in the third example.

```latex
\documentclass[fleqn, 12pt]{article}
\usepackage{amsmath,amsfonts,amssymb}
\usepackage{graphicx}
\usepackage{parskip} % \parskip=\baselineskip is HUGE
\makeatletter
\newcommand{\devioustrick}[1]{%
  \ifmeasuring@\else
    \kern-\ifcase\expandafter1\maxcolumn@widths\fi
    \parbox{\dimexpr\linewidth-\mathindent\relax}{#1}%
  \fi
}
\makeatother
\begin{document}
From the model,
\begin{align*}
E( Y_i - \bar{ Y } ) &= E[ ( \beta_0 + \beta_1 X_i ) - ( \beta_0 - \beta_1 \bar{ X } ) ] \\
&= \beta_1( X_i - \bar{X} )
\end{align*}
because the first normal equation gives $Y_i = \beta_0 + \beta_1 X_i$ and
$\bar{Y} = \beta_0 + \beta_1 \bar{X}$.

From the model,
\begin{align*}
E( Y_i - \bar{ Y } ) ={}& E[ ( \beta_0 + \beta_1 X_i ) - ( \beta_0 - \beta_1 \bar{ X } ) ] \\
&\devioustrick{
  (since the first normal equation gives $Y_i = \beta_0 + \beta_1 X_i$ and
  $\bar{Y} = \beta_0 + \beta_1 \bar{X}$)
}\\
={}& \beta_1( X_i - \bar{X} )
\end{align*}

From the model,
\begin{align*}
E( Y_i - \bar{ Y } ) ={}& E[ ( \beta_0 + \beta_1 X_i ) - ( \beta_0 - \beta_1 \bar{ X } ) ] \\
&\devioustrick{
  (since the first normal equation gives $Y_i = \beta_0 + \beta_1 X_i$ and
  $\bar{Y} = \beta_0 + \beta_1 \bar{X}$ and since the definitive answer on life,
  the universe and everything is~$42$)
}\\
={}& \beta_1( X_i - \bar{X} )
\end{align*}
\end{document}
```

Use \intertext to place text between aligned equations. There is rarely any need to manually align material using 'hard' spaces. The amsmath package provides environments that meet all of the most common requirements.

```latex
\documentclass[fleqn, 12pt]{article}
\usepackage{amsmath}
\begin{document}
From the model,
\begin{align*}
E( Y_i - \bar{ Y } ) &= E[ ( \beta_0 + \beta_1 X_i ) - ( \beta_0 - \beta_1 \bar{ X } ) ] \\
\intertext{Since the first normal equation gives $Y_i = \beta_0 + \beta_1 X_i$
  and $\bar{Y} = \beta_0 + \beta_1 \bar{X}$}
& = \beta_1( X_i - \bar{X} )
\end{align*}
\end{document}
```

• Thanks for the answer. The problem with this is that it removes the indentation of the middle line, which I'd prefer to keep. – The Pointer Jun 3 '18 at 13:59
• @ThePointer You're removing all indentations; why should that line be indented? – egreg Jun 3 '18 at 14:01
• @egreg I want it to be viewed as a comment specifically with regard to that part of the equation, rather than something more major, if that makes sense. – The Pointer Jun 3 '18 at 14:03

Another solution is to just use \tag. This seems to be appropriate semantically, seeing as the text pertains to the previous equation-line.

```latex
From the model,
\begin{align*}
E( Y_i - \bar{ Y } ) &= E[ ( \beta_0 + \beta_1 X_i ) - ( \beta_0 - \beta_1 \bar{ X } ) ]
  \tag{since the first normal equation gives $Y_i = \beta_0 + \beta_1 X_i$
  and $\bar{Y} = \beta_0 + \beta_1 \bar{X}$}\\
& = \beta_1( X_i - \bar{X} )
\end{align*}
```
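One further option, not raised in the thread: the mathtools package (which loads amsmath) provides \shortintertext, which works like \intertext but adds less vertical space around the comment. A minimal sketch, assuming mathtools is available; note that it shares \intertext's behaviour of resetting the indentation:

```latex
% Sketch (assumes the mathtools package; not from the original answers)
\documentclass[fleqn, 12pt]{article}
\usepackage{mathtools} % loads amsmath and provides \shortintertext
\begin{document}
From the model,
\begin{align*}
E( Y_i - \bar{ Y } ) &= E[ ( \beta_0 + \beta_1 X_i ) - ( \beta_0 - \beta_1 \bar{ X } ) ] \\
\shortintertext{(since the first normal equation gives
  $Y_i = \beta_0 + \beta_1 X_i$ and $\bar{Y} = \beta_0 + \beta_1 \bar{X}$)}
& = \beta_1( X_i - \bar{X} )
\end{align*}
\end{document}
```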
https://proofwiki.org/wiki/Definition:Convergent_Sequence_in_Metric_Space
# Definition:Convergent Sequence/Metric Space

## Definition

Let $M = \left({A, d}\right)$ be a metric space or a pseudometric space.

Let $\sequence {x_k}$ be a sequence in $A$.

### Definition 1

$\sequence {x_k}$ converges to the limit $l \in A$ if and only if:

$\forall \epsilon \in \R_{>0}: \exists N \in \R_{>0}: \forall n \in \N: n > N \implies \map d {x_n, l} < \epsilon$

### Definition 2

$\sequence {x_k}$ converges to the limit $l \in A$ if and only if:

$\forall \epsilon > 0: \exists N \in \R_{>0}: \forall n \in \N: n > N \implies x_n \in \map {B_\epsilon} l$

where $\map {B_\epsilon} l$ is the open $\epsilon$-ball of $l$.

### Definition 3

$\sequence {x_k}$ converges to the limit $l \in A$ if and only if:

$\displaystyle \lim_{n \mathop \to \infty} \map d {x_n, l} = 0$

### Definition 4

$\sequence {x_k}$ converges to the limit $l \in A$ if and only if:

for every $\epsilon \in \R_{>0}$, the open $\epsilon$-ball about $l$ contains all but finitely many of the $x_n$.

We can write:

$x_n \to l$ as $n \to \infty$

This is voiced:

As $n$ tends to infinity, $x_n$ tends to (the limit) $l$.

If $M$ is a metric space, some use the notation

$\displaystyle \lim_{n \to \infty} x_n = l$

This is voiced:

The limit as $n$ tends to infinity of $x_n$ is $l$.

Note, however, that one must take care to use this alternative notation only in contexts in which the sequence is known to have a limit.

It follows from Sequence Converges to Point Relative to Metric iff it Converges Relative to Induced Topology that this definition is equivalent to that for convergence in a topological space.

### Comment

The sequence $x_1, x_2, x_3, \ldots, x_n, \ldots$ can be thought of as a set of approximations to $l$, in which the higher the $n$, the better the approximation. The distance $\map d {x_n, l}$ between $x_n$ and $l$ can then be thought of as the error arising from approximating $l$ by $x_n$.

Note the way the definition is constructed.
Given any value of $\epsilon$, however small, we can always find a value of $N$ such that ... If you pick a smaller value of $\epsilon$, then (in general) you would have to pick a larger value of $N$ - but the implication is that, if the sequence is convergent, you will always be able to do this. Note also that $N$ depends on $\epsilon$. That is, for each value of $\epsilon$ we (probably) need to use a different value of $N$. ### Note on Domain of $N$ Some sources insist that $N \in \N$ but this is not strictly necessary and can make proofs more cumbersome.
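A small numerical illustration of the $\epsilon$–$N$ mechanics described above (my sketch, not part of ProofWiki), using the real sequence $x_n = 1/n$ with limit $l = 0$ and the usual metric $\map d {x, y} = \size {x - y}$:

```python
# Sketch: for x_n = 1/n converging to 0, the witness N = 1/epsilon works,
# and a smaller epsilon forces a larger N, exactly as the comment describes.

def N_for(epsilon):
    """A value of N such that n > N implies |1/n - 0| < epsilon."""
    return 1.0 / epsilon

def definition_holds(epsilon, n_max=100000):
    """Check d(x_n, l) < epsilon for all sampled n > N."""
    N = N_for(epsilon)
    return all(abs(1.0 / n) < epsilon for n in range(1, n_max) if n > N)

assert definition_holds(0.5)
assert definition_holds(0.125)
assert N_for(0.125) > N_for(0.5)   # smaller epsilon, larger N
```

Note that $N$ need not be a natural number here, matching the remark on the domain of $N$.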
https://ocw.mit.edu/courses/mathematics/18-065-matrix-methods-in-data-analysis-signal-processing-and-machine-learning-spring-2018/video-lectures/lecture-10-survey-of-difficulties-with-ax-b/
# Lecture 10: Survey of Difficulties with Ax = b

## Description

The subject of this lecture is the matrix equation $$Ax = b$$. Solving for $$x$$ presents a number of challenges that must be addressed when doing computations with large matrices.

## Summary

Large condition number $$\Vert A \Vert \, \Vert A^{-1} \Vert$$: $$A$$ is ill-conditioned and small errors are amplified.
Underdetermined case $$m < n$$: typical of deep learning.
Penalty method regularizes a singular problem.

Related chapter in textbook: Introduction to Chapter II

Instructor: Prof. Gilbert Strang

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: Let's go. So if you want to know the subject of today's class, it's A x = b. I got started writing down different possibilities for A x = b, and I got carried away. It just appears all over the place for different sizes, different ranks, different situations, nearly singular, not nearly singular. And the question is, what do you do in each case?

So can I outline my little two pages of notes here, and then pick on one or two of these topics to develop today, and a little more on Friday about Gram-Schmidt? So I won't do much, if any, of Gram-Schmidt today, but I will do the others.

So the problem is A x = b. That problem has come from somewhere. We have to produce some kind of an answer, x. So I'm going from good to bad or easy to difficult in this list. Well, except for number 0, which is an answer in all cases, using the pseudo inverse that I introduced last time. So that deals with 0 eigenvalues and zero singular values by saying their inverse is also 0, which is kind of wild. So we'll come back to the meaning of the pseudo inverse.
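As a numerical aside (a sketch, not part of the lecture): the condition number $\Vert A \Vert \, \Vert A^{-1} \Vert = \sigma_1/\sigma_n$ from the summary measures how much small errors in $b$ can be amplified in the solution of $Ax = b$:

```python
# Sketch (not from the lecture): with a nearly singular 2x2 matrix,
# a perturbation of size 1e-4 in b moves the solution x by order 1.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])          # columns nearly dependent
cond = np.linalg.cond(A)               # sigma_1 / sigma_n, roughly 4e4

b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)              # answer is (1, 1)

b_noisy = b + np.array([0.0, 1e-4])    # tiny change in the data
x_noisy = np.linalg.solve(A, b_noisy)  # solution jumps to about (0, 2)

print(cond)
print(x, x_noisy)
```

The relative error in $x$ is roughly the condition number times the relative error in $b$, which is why a condition number in the thousands or beyond is a warning sign.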
But now, I want to get real, here, about different situations. So number 1 is the good, normal case, when a person has a square matrix of reasonable size, reasonable condition, a condition number-- oh, the condition number, I should call it sigma 1 over sigma n. It's the ratio of the largest to the smallest singular value. And let's say that's within reason, not more than 1,000 or something. Then normal, ordinary elimination is going to work, and Matlab-- the command that would produce the answer is just backslash. So this is the normal case.

Now, the cases that follow have problems of some kind, and I guess I'm hoping that this is a sort of useful dictionary of what to do for you and me both. So we have this case here, where we have too many equations. So that's a pretty normal case, and we'll think mostly of solving by least squares, which leads us to the normal equation. So this is standard, happens all the time in statistics. And I'm thinking in the reasonable case, that would be x hat, the least squares solution. A transpose A-- this matrix would be invertible and of reasonable size. So backslash would still solve that problem. Backslash doesn't require a square matrix to give you an answer. So that's the good case, where the matrix is not too big, so it's not unreasonable to form A transpose A.

Now, here's the other extreme. What's exciting for us is this is the underdetermined case. I don't have enough equations, so I have to put something more in to get a specific answer. And what makes it exciting for us is that that's typical of deep learning. There are so many weights in a deep neural network that the weights would be the unknowns. Of course, it wouldn't necessarily be linear. It wouldn't be linear, but still the idea's the same: we have many solutions, and we have to pick one. Or we have to pick an algorithm, and then it will find one. So we could pick the minimum norm solution, the shortest solution. That would be an L2 answer. Or we could go to L1.
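The minimum norm (L2) choice just mentioned can be sketched numerically (not from the lecture): in the underdetermined case $m < n$ there are many solutions of $Ax = b$, and the pseudoinverse is what picks out the shortest one.

```python
# Sketch (not from the lecture): pinv(A) @ b gives the minimum-L2-norm
# solution of an underdetermined system Ax = b.
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])        # 2 equations, 3 unknowns
b = np.array([1.0, 1.0])

x_min = np.linalg.pinv(A) @ b          # shortest solution: (1/3, 2/3, 1/3)

# Any other solution differs by a null-space vector and is longer:
null_vec = np.array([1.0, -1.0, 1.0])  # A @ null_vec = 0
x_other = x_min + 0.7 * null_vec
assert np.allclose(A @ x_min, b) and np.allclose(A @ x_other, b)
assert np.linalg.norm(x_min) < np.linalg.norm(x_other)
```

The null-space component is orthogonal to the row space, so adding any of it can only lengthen the solution; that is why the pseudoinverse answer is the shortest.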
And the big question that, I think, might be settled in 2018 is, does deep learning and the iteration from stochastic gradient descent that we'll see pretty soon-- does it go to the minimum L1? Does it pick out an L1 solution? That's really an exciting math question. For a long time, it was standard to say that these deep learning AI codes are fantastic, but what are they doing? We don't know all the interior, but we-- when I say we, I don't mean I. Other people are getting there, and I'm going to tell you as much as I can about it when we get there. So those are pretty standard cases. m = n, m greater than n, m less than n, but not crazy. Now, the second board will have more difficult problems. Usually, because they're nearly singular in some way, the columns are nearly dependent. So that would be the columns in bad condition. You just picked a terrible basis, or nature did, or somehow you got a matrix A whose columns are virtually dependent-- almost linearly dependent. The inverse matrix is really big, but it exists. Then that's when you go in, and you fix the columns. You orthogonalize columns. Instead of accepting the columns A1, A2, up to An of the given matrix, you go in, and you find orthonormal vectors in that column space and orthonormal basis Q1 to Qn. And the two are connected by Gram-Schmidt. And the famous matrix statement of Gram-Schmidt is here are the columns of A. Here are the columns of Q, and there's a triangular matrix that connects the two. So that is the central topic of Gram-Schmidt in that idea of orthogonalizing. It just appears everywhere. It appears all over course 6 in many, many situations with different names. So that, I'm sort of saving a little bit until next time, and let me tell you why. Because just the organization of Gram-Schmidt is interesting. So Gram-Schmidt, you could do the normal way. So that's what I teach in 18.06. Just take every column as it comes. Subtract off projections onto their previous stuff. 
Get it orthogonal to the previous guys. Normalize it to be a unit vector. Then you've got that column. Go on. So I say that again, and then I'll say it again two days from now. So Gram-Schmidt, the idea is you take the columns-- you say the second orthogonal vector, Q2, will be some combination of columns 1 and 2, orthogonal to the first. Lots to do. And there's another order, which is really the better order to do Gram-Schmidt, and it allows you to do column pivoting. So this is my topic for next time, to see Gram-Schmidt more carefully. Column pivoting means the columns might not come in a good order, so you allow yourself to reorder them. We know that you have to do that for elimination. In elimination, it would be rows. So elimination, we would have the matrix A, and we take the first row as the first pivot row, and then the second row, and then the third row. But if the pivot is too small, then reorder the rows. So it's row ordering that comes up in elimination. And Matlab just systematically says, OK, that's the pivot that's coming up. The third pivot comes up out of the third row. But Matlab says look down that whole third column for a better pivot, a bigger pivot. Switch to a row exchange. So there are lots of permutations then. You end up with something there that permutes the rows, and then that gets factored into LU. So I'm saying something about elimination that's just sort of a side comment that you would never do elimination without considering the possibility of row exchanges. And then this is Gram-Schmidt orthogonalization. So this is the LU world. Here is the QR world, and here, it happens to be columns that you're permuting. So that's coming. This is section 2.2, now. But there's more. 2.2 has quite a bit in it, including number 0, the pseudo inverse, and including some of these things. Actually, this will be also in 2.2. And maybe this is what I'm saying more about today. So I'll put a little star for today, here. What do you do? 
So this is a case where the matrix is nearly singular. You're in danger. It's inverse is going to be big-- unreasonably big. And I wrote inverse problems there, because inverse problem is a type of problem with an application that you often need to solve or that engineering and science have to solve. So I'll just say a little more about that, but that's a typical application in which you're near singular. Your matrix isn't good enough to invert. Well, of course, you could always say, well, I'll just use the pseudo inverse, but numerically, that's like cheating. You've got to get in there and do something about it. So inverse problems would be examples. Actually, as I write that, I think that would be a topic that I should add to the list of potential topics for a three week project. Look up a book on inverse problems. So what do I mean by an inverse problem? I'll just finish this thought. What's an inverse problem? Typically, you know about a system, say a network, RLC network, and you give it a voltage or current. You give it an input, and you find the output. You find out what current flows, what the voltages are. But inverse problems are-- suppose you know the response to different voltages. What was the network? You see the problem? Let me say it again. Discover what the network is from its outputs. So that turns out to typically be a problem that gives nearly singular matrices. That's a difficult problem. A lot of nearby networks would give virtually the same output. So you have a matrix that's nearly singular. It's got singular values very close to 0. What do you do then? Well, the world of inverse problems thinks of adding a penalty term, some kind of a penalty term. When I minimize this thing just by itself, in the usual way, A transpose, it has a giant inverse. The matrix A is badly conditioned. It takes vectors almost to 0. So that A transpose has got a giant inverse, and you're at risk of losing everything to round off. So this is the solution. 
You could call it a cheap solution, but everybody uses it. So I won't put that word on videotape. But that sort of resolves the problem, but then the question-- it shifts the problem, anyway, to what number-- what should be the penalty? How much should you penalize it? You see, by adding that, you're going to make it invertible. And if you make this bigger, and bigger, and bigger, it's more and more well-conditioned. It resolves the trouble, here. And like today, I'm going to do more with that. So with that, I'll stop there and pick it up after saying something about 6 and 7.

I hope this is helpful. It was helpful to me, certainly, to see all these possibilities and to write down what the symptom is. It's like a linear equation doctor. Like you look for the symptoms, and then you propose something at CVS that works or doesn't work. But you do something about it.

So when the problem is too big-- up to now, the problems have not been giant out of core. But now, when it's too big-- maybe it's still in core but really big-- then this is in 2.1. So that's to come back to. The word I could have written in here, if I was just going to write one word, would be iteration. Iterative methods, meaning you take a step like-- the conjugate gradient method is the hero of iterative methods. And then that name I erased is Krylov, and there are other names associated with iterative methods. So that's the section that we passed over just to get rolling, but we'll come back to. So then that one, you never get the exact answer, but you get closer and closer. If the iterative method is successful, like conjugate gradients, you get pretty close, pretty fast. And then you say, OK, I'll take it.

And then finally, way too big, like nowhere. You're not in core. You just have a giant, giant problem, which, of course, is happening these days. And then-- you can't even look at the matrix A, much less A transpose A. A transpose A would be unthinkable.
You couldn't do it in a year. So randomized linear algebra has popped up, and the idea there, which we'll see, is to use probability to sample the matrix and work with your samples. So if the matrix is way too big, but not too crazy, so to speak, then you could sample the columns and the rows, and get an answer from the sample. See, if I sample the columns of a matrix, I'm getting-- so what does sampling mean? Let me just complete this, say, add a little to this thought. Sample a matrix. So I have a giant matrix A. It might be sparse, of course-- I didn't distinguish the sparse case, that would be another thing. So if I just take random x's, more than one, but not the full n dimensions, those will give me random guys in the column space. And if the matrix is reasonable, it won't take too many to have a pretty reasonable idea of what that column space is like, and with it, the right-hand side. So this world of randomized linear algebra has grown because it had to. And of course, any statement can never say for sure you're going to get the right answer, but using the inequalities of probability, you can often say that the chance of being way off is less than 1 in 2 to the 20th or something. So the answer is, in reality, you get a good answer. That is the end of this chapter, 2.4. So this is all chapter 2, really. The iterative method's in 2.1. Most of this is in 2.2. Big is 2.3, and then really big is randomized in 2.4.

So now, where are we? You were going to let me know or not if this is useful to see. But you sort of see what are real life problems. And of course, we're highly, especially interested in getting to the deep learning examples, which are underdetermined. Then when you're underdetermined, you've got many solutions, and the question is, which one is a good one? And in deep learning, I just can't resist saying another word. So there are many solutions. What to do?
Well, you pick some algorithm, like steepest descent, which is going to find a solution. So you hope it's a good one. And what does a good one mean verses a not good one? They're all solutions. A good one means that when you apply it to the test data that you haven't yet seen, it gives good results on the test data. The solution has learned something from the training data, and it works on the test data. So that's the big question in deep learning. How does it happen that you, by doing gradient descent or whatever algorithm-- how does that algorithm bias the solution? It's called implicit bias. How does that algorithm bias a solution toward a solution that generalizes, that works on test data? And you can think of algorithms which would approach a solution that did not work on test data. So that's what you want to stay away from. You want the ones that work. So there's very deep math questions there, which are kind of new. They didn't arise until they did. And we'll try to save some of what's being understood. Can I focus now on, for probably the rest of today, this case, when the matrix is nearly singular? So you could apply elimination, but it would give a poor result. So one solution is the SVD. I haven't even mentioned the SVD, here, as an algorithm, but of course, it is. The SVD gives you an answer. Boy, where should that have gone? Well, the space over here, the SVD. So that produces-- you have A = U sigma V transposed, and then A inverse is V sigma inverse U transposed. So we're in the case, here. We're talking about number 5. Nearly singular, where sigma has some very small, singular values. Then sigma inverse has some very big singular values. So you're really in wild territory here with very big inverses. So that would be one way to do it. But this is a way to regularize the problem. So let's just pay attention to that. So suppose I minimize the sum of A x minus b squared and delta squared times the size of x squared. And I'm going to use the L2 norm. 
It's going to be a least squares with penalty, so of course, it's the L2 norm here, too. Suppose I solve that for a delta. For some, I have to choose a positive delta. And when I choose a positive delta, then I have a solvable problem. Even if this goes to 0, or A does crazy things, this is going to keep me away from singular. In fact, what equation does that lead to? So that's a least squares problem with an extra penalty term. So it would come, I suppose. Let's see, if I write the equations A delta I, x equals b 0, maybe that is the least squares equation-- the usual, normal equation-- for this augmented system. Because what's the error here? This is the new big A-- A star, let's say. X equals-- this is the new b. So if I apply least squares to that, what do I do? I minimize the sum of squares. So least squares would minimize A x minus b squared. That would be from the first components. And delta squared x squared from the last component, which is exactly what we said we were doing. So in a way, this is the equation that the penalty method is solving. And one question, naturally, is, what should delta be? Well, that question's beyond us, today. It's a balance of what you can believe, and how much noise is in the system, and everything. That choice of delta-- what we could ask is a math question. What happens as delta goes to 0? So suppose I solve this problem. Let's see, I could write it differently. What would be the equation, here? This part would give us the A transpose, and then this part would give us just the identity, x equals A transpose b, I think. Wouldn't that be? So really, I've written here-- what that is is A star transpose A star. This is least squares on this gives that equation. So all of those are equivalent. All of those would be equivalent statements of what the penalized problem is that you're solving. And then the question is, as delta goes to 0, what happens? Of course, something. When delta goes to 0, you're falling off the cliff. 
Something quite different is suddenly going to happen, there. Maybe we could even understand this question with a 1 by 1 matrix. I think this section starts with a 1 by 1. Suppose A is just a number. Maybe I'll just put that on this board, here. Suppose A is just a number. So what am I going to call that number? Just 1 by 1. Let me call it sigma, because it's certainly the leading singular value. So what's my equation that I'm solving? A transpose A plus delta squared I would be sigma squared plus delta squared, 1 by 1, x-- should I give some subscript here? I should, really, to do it right. This is the solution for a given delta. So that solution will exist. Fine. This matrix is certainly invertible. That's positive semidefinite, at least. That's positive semidefinite, and then what about delta squared I? It is positive definite, of course. It's just the identity with a factor. So this is a positive definite matrix. I certainly have a solution. And let me keep going on this 1 by 1 case. This would be A transpose. A is just a sigma. I think it's just sigma b. So A is 1 by 1, and there are two cases, here-- Sigma bigger than 0, or sigma equals 0. And in either case, I just want to know what's the limit. So the answer x-- let me just take the right hand side. Well, that's fine. Am I computing OK? Using the penalized thing on a 1 by 1 problem, which you could say is a little bit small-- so solving this equation or equivalently minimizing this, so here, I'm finding the minimum of-- A was sigma-- sigma x minus b squared plus delta squared x squared. You see it's just 1 by 1? Just a number. And I'm hoping that calculus will agree with linear algebra here, that if I find the minimum of this-- so let me write it out. Sigma squared x squared and delta squared x squared, and then minus 2 sigma xb, and then plus b squared. And now, I'm going to find the minimum, which means I'd set the derivative to 0. So I get 2 sigma squared and 2 delta squared. 
I get a two here, and this gives me the x derivative as 2 sigma b. So I get a 2 there, and I'm OK. I just cancel both 2s, and that's the equation. So I can solve that equation. X is sigma over sigma squared plus delta squared, times b. So it's really that quantity. I want to let delta go to 0. So again, what am I doing here? I'm taking a 1 by 1 example just to see what happens in the limit as delta goes to 0. What happens? So I just have to look at that. What is the limit of that thing in a circle, as delta goes to 0? So I'm finding out for a 1 by 1 problem what a penalized least squares problem, ridge regression, all over the place-- what happens? So what happens to that number as delta goes to 0? 1 over sigma. So now, let delta go to 0. So that approaches 1 over sigma, because delta disappears. Sigma over sigma squared, 1 over sigma. So it approaches the inverse, but what's the other possibility, here? The other possibility is that sigma is 0. I didn't say whether this matrix, this 1 by 1 matrix, was invertible or not. If sigma is not 0, then I go to 1 over sigma. If sigma is really small, it will take a while. Delta will have to get small, small, small, even compared to sigma, until finally, that term goes away, and I just have 1 over sigma. But what if sigma is 0? Sorry to get excited about 0. Who would get excited about 0? So this is the case when this is 1 over sigma, if sigma is positive. And what does it approach if sigma is 0? 0! Because this is 0, the whole problem was like disappeared, here. The sigma was 0. Here is a sigma. So anyway, if sigma is 0, then I'm getting 0 all the time. But I have a decent problem, because the delta squared is there. I have a decent problem until the last minute. My problem falls apart. Delta goes to 0, and I have a 0 equals 0 problem. I'm lost. But the point is the penalty kept me positive. It kept me with this delta squared term until the last critical moment. It kept me positive even if that was 0. 
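(The 1 by 1 formula x = sigma b / (sigma^2 + delta^2) can be tabulated numerically to watch the two limits: 1/sigma when sigma is positive, and 0 when sigma is 0. This is an editor's sketch, not from the lecture; the values sigma = 2 and b = 1 are arbitrary.)

```python
def x_delta(sigma, delta, b=1.0):
    # Penalized 1-by-1 solution: minimize (sigma*x - b)^2 + delta^2 * x^2,
    # whose stationarity condition is (sigma^2 + delta^2) x = sigma * b.
    return sigma * b / (sigma**2 + delta**2)

for delta in [1e-1, 1e-3, 1e-6]:
    print(delta, x_delta(2.0, delta), x_delta(0.0, delta))
# For sigma = 2 the values approach 1/sigma = 0.5; for sigma = 0 they stay 0.
```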
If that is 0, and this is 0, I still have something here. I still have a problem to solve. And what's the limit then? So 1 over sigma if sigma is positive. And what's the answer if sigma is not positive? It's 0. Just tell me. I'm getting 0. I get 0 all the way, and I get 0 in the limit. And now, let me just ask, what have I got here? What is this sudden bifurcation? Do I recognize this? The inverse in the limit as delta goes to 0 is either 1 over sigma, if that makes sense, or it's 0, which is not like 1 over sigma. 1 over sigma-- as sigma goes to 0, this thing is getting bigger and bigger. But at sigma equals 0, it's 0. You see, that's a really strange kind of a limit. Now, it would be over there. What have I found here, in this limit? Say it again, because that was exactly right. The pseudo inverse. So this system-- choose delta greater than 0, then delta going to 0. The solution goes to the pseudo inverse. That's the key fact. When delta is really, really small, then this behaves in a pretty crazy way. If delta is really, really small, then sigma is bigger, or it's 0. If it's bigger, you go this way. If it's 0, you go that way. So that's the message, and this is penalized least squares. As the penalty gets smaller and smaller, it approaches the correct answer, the always correct answer, with that sudden split between 0 and not 0 that we associate with the pseudo inverse. Of course, in a practical case, you're trying to find the resistances and inductances in a circuit by trying the circuit, and looking at the output b, and figuring out what input. So the unknown x is the unknown system parameters. Not the voltage and current, but the resistance, and inductance, and capacitance. I've only proved that in the 1 by 1 case. You may say that's not much of a proof. In the 1 by 1 case, we can see it happen in front of our eyes. So really, a step I haven't taken here is to complete that to any matrix A. So that's the statement then. That's the statement. 
So that's the statement. For any matrix A, this matrix, A transpose A plus delta squared I, inverse, times A transpose-- that's the solution matrix to our problem. That's what I wrote down up there. I take the inverse and pop it over there. That approaches A plus, the pseudo inverse. And that's what we just checked for 1 by 1. For 1 by 1, this was sigma over sigma squared plus delta squared. And it went either to 1 over sigma or to 0. It split in the limit. It shows that limits can be delicate. The limit-- as delta goes to 0, this thing is suddenly discontinuous. It's this number that is growing, and then suddenly, at 0, it falls back to 0. Anyway, that would be the statement. Actually, statisticians discovered the pseudo inverse independently of the linear algebra history of it, because statisticians did exactly that. To regularize the problem, they introduced a penalty and worked with this matrix. So statisticians were the first to think of that as a natural thing to do in a practical case-- add a penalty. So this is adding a penalty, but remember that we stayed with L2 norms, staying with L2, least squares. We could ask, what happens? Suppose the penalty is the L1 norm. I'm not up to do this today. Suppose I minimize that. Maybe I'll do L2, but I'll do the penalty guy in the L1 norm. I'm certainly not an expert on that. Or you could even think just that power. So that would have a name. A statistician invented this. It's called the Lasso in the L1 norm, and it's a big deal. Statisticians like the L1 norm, because it gives sparse solutions. It gives more genuine solutions without a whole lot of little components in the answer. So this was an important step. Let me just say again where we are in that big list. The two important ones that I haven't done yet are these iterative methods in 2.1. So that's like conventional linear algebra, just how to deal with a big matrix, maybe with some special structure. That's what numerical linear algebra is all about. 
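(Editor's sketch, not from the lecture: the statement can be checked numerically on a singular matrix by comparing the ridge solution matrix against NumPy's Moore-Penrose pseudoinverse. The 2 by 2 rank-one matrix and the delta values below are arbitrary; the deltas are kept moderate so the linear solves stay well conditioned.)

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rank 1: the second row is twice the first

diffs = []
for delta in [1e-1, 1e-2, 1e-3]:
    # Ridge solution matrix (A^T A + delta^2 I)^(-1) A^T
    ridge_inv = np.linalg.solve(A.T @ A + delta**2 * np.eye(2), A.T)
    diffs.append(np.max(np.abs(ridge_inv - np.linalg.pinv(A))))

print(diffs)   # shrinking toward 0 as delta shrinks
```

(The gap to the pseudoinverse shrinks roughly like delta squared, matching the 1 by 1 analysis.)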
And then Gram-Schmidt with or without pivoting, which is a workhorse of numerical computing, and I think I better save that for next time. So this is the one I picked for this time. And we saw what happened in L2. Well, we saw it for 1 by 1. Would you want to extend to prove this for any A, going beyond 1 by 1? How would you prove such a thing for any A? I guess I'm not going to do it. It's too painful, but how would you do it? You would use the SVD. If you want to prove something about matrices, about any matrix, the SVD is the best thing you could have-- the best tool you could have. I can write this in terms of the SVD. I just plug in A equals whatever the SVD tells me to put in there. U sigma V transposed. Plug it in there, simplify it using the fact that these are orthogonal. If I have any good luck, it'll get an identity somewhere from there and an identity somewhere from there. And it will all simplify. It will all diagonalize. That's what the SVD really does: it turns my messy problem into a problem about the diagonal matrix, sigma in the middle. So I might as well put sigma in the middle. Yeah, why not? Before we give up on it-- a special case of that, but really, the genuine case would be when A is sigma. Sigma transpose sigma plus delta squared I inverse times sigma transpose approaches the pseudo inverse, sigma plus. And the point is the matrix sigma here is diagonal. Oh, I'm practically there, actually. Why am I close to being able to read this off? Well, everything is diagonal here. Diagonal, diagonal, diagonal. And what's happening on those diagonal entries? So you had to take my word that when I plugged in the SVD, the U and the V got separated out to the far left and the far right. And it was that that stayed in the middle. So it's really this is the heart of it. And say, well, that's a diagonal matrix. So I'm just looking at what happens on each diagonal entry, and which problem is that? 
The question of what's happening on a typical diagonal entry of this thing is what question? The 1 by 1 case! The 1 by 1, because each entry in the diagonal is not even noticing the others. So that's the logic, and it would be in the notes. Prove it first for 1 by 1, then secondly for a diagonal matrix like this, and finally for any A, using the SVD, with U and V transposed getting out of the way and bringing us back to here. So that's the theory, but really, I guess I'm thinking by far the most important message in today's lecture is in this list of different types of problems that appear and different ways to work with them. And we haven't done Gram-Schmidt, and we haven't done iteration. So this chapter is a survey of-- well, more than a survey of what numerical linear algebra is about. And I haven't done random, yet. Sorry, that's coming, too. So three pieces are still to come, but let's take the last two minutes off and call it a day.
http://mathhelpforum.com/geometry/67346-hard-math-problem.html
# Math Help - Hard math problem 1. ## Hard math problem My friend gave me a good question today so I want to see if you smart geniuses can answer it Consider a fixed line AB=4. Also consider a line CD, such that AB=CD=4, and that AB is the perpendicular bisector of CD AND CD is the perpendicular bisector of AB. Now, consider the set of all points P on CD (of which there are infinitely many). For every unique point P, there is a unique point Q such that QBPA is cyclic (all four points lie on a circle). The set of all possible points Q creates a familiar figure. What is that figure and what is the area of that figure? Prove this. I said it was a circle with area 4pi, but he said it was wrong? What other familiar shape can it possibly be? If I draw it, it seems like a circle (I just plotted a lot of points) 2. Hello, For every unique point P, there is a unique point Q such that QBPA is cyclic Is there some information missing here ? Because for a point P, there are infinitely many points Q such that Q,B,P,A lie on a circle Or am I missing something ? 3. Sorry, same user, I forgot my user name. Anyway, you are right, I forgot one more condition (forgot what he told me): QP is bisected by AB. Thus there are two unique points Q for every P. Anyway, infinitely many points P will create a unique figure. What is the figure and what is that area? 4. Originally Posted by mluo Consider a fixed line segment AB=4. Also consider a line segment CD, such that AB=CD=4, and that AB is the perpendicular bisector of CD AND CD is the perpendicular bisector of AB. Now, consider the set of all points P on CD (of which there are infinitely many). For every unique point P, there are two unique points Q such that QBPA is cyclic (all four points lie on a circle) and that QP is bisected by AB. The set of all possible points Q creates a familiar figure. What is that figure and what is the area of that figure? Prove this. I said it was a circle with area 4pi, but he said it was wrong? 
What other familiar shape can it possibly be? If I draw it, it seems like a circle (I just plotted a lot of points) 1. Please don't double post. It's against the rules and it wastes everybody's time. 2. See attachment 5. It is an ellipse, with foci at A and B and eccentricity 1/√2. So its area is 4√2π. Now prove it. 6. Isn't the proof just Angle Bisector or am I wrong? This is because the chords AP and BP are equal. Do you have a good solution? 7. Originally Posted by person8901 Isn't the proof just Angle Bisector or am I wrong? This is because the chords AP and BP are equal. Do you have a good solution? I have a solution, though I wouldn't call it a "good" solution. (I would prefer to have an argument using synthetic geometry to show that AQ+QB = const.) Take a coordinate system in which A and B are the points (±a,0), and C and D are (0,±a). (We are told that a=2, but I prefer to work with letters rather than numbers.) If a circle contains A, B and P, then its centre must be on the y-axis, say at the point (0,t). Then the radius of the circle is $\sqrt{t^2+a^2}$, and its equation is $x^2 + (y-t)^2 = t^2+a^2$. Thus P is the point $(0,t-\sqrt{t^2+a^2})$. The condition that PQ is bisected by AB tells you that the y-coordinate of Q is the negative of the y-coordinate of P. So if Q=(x,y) then $y = \sqrt{t^2+a^2} - t$. Putting those coordinates into the equation of the circle, and simplifying, we get the equation $x^2 = 4t(\sqrt{t^2+a^2}-t) = 4ty$. Thus $t = \frac{x^2}{4y}$. Substitute that value of t into the equation $y = \sqrt{t^2+a^2} - t$, and you get $y = \sqrt{\frac{x^4}{16y^2} + a^2} - \frac{x^2}{4y}$. Write this as $4y^2+x^2 = \sqrt{x^4 + 16a^2y^2}$, square both sides, simplify, and the result comes out as $\frac{x^2}{2a^2} + \frac{y^2}{a^2} = 1$, the standard equation of an ellipse with semi-axes √2a and a.
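As a numerical sanity check of the coordinate computation above (my addition, not part of the thread): for several circle centres $(0,t)$, construct $P$ and $Q$ and evaluate the left side of the ellipse equation at $Q$, which should come out as $1$ every time. The snippet assumes $a=2$ as in the problem.

```python
import math

a = 2.0  # half of AB, so A = (-2, 0) and B = (2, 0)

def ellipse_value(t):
    # Circle through A and B with centre (0, t) has radius r = sqrt(t^2 + a^2).
    # P = (0, t - r); Q has y_Q = -y_P because AB bisects PQ.
    r = math.sqrt(t * t + a * a)
    y_q = r - t
    x_q_sq = r * r - (y_q - t) ** 2          # x^2 of Q from the circle equation
    # Left side of x^2/(2 a^2) + y^2/a^2 = 1 evaluated at Q
    return x_q_sq / (2 * a * a) + (y_q ** 2) / (a * a)

for t in [0.25, 1.0, 3.0]:
    print(t, ellipse_value(t))   # each value is 1 up to rounding
```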
https://www.homeworklib.com/qaa/1387771/11-what-are-the-possible-combination-outcomes
# Question

11. What are the possible combination outcomes when you toss a fair coin three times? (6.25 points) H = Head, T = Tail

a. {HHH, TTT}
b. {HHH, TTT, HTH, THT}
c. {HHH, TTT, HTH, THT, HHT, TTH, THH}
d. {HHH, TTT, HTH, THT, HHT, TTH, THH, HTT}
e. None of these

12. What is the probability of you getting three heads straight for tossing a fair coin three times? (6.25 points)

a. 1/2
b. 1/4
c. 1/8
d. 1/16

13. What is the probability of you getting no heads at all for tossing a fair coin three times? (6.25 points)

a. 1/2
b. 1/4
c. 1/8
d. 1/16

11) What are the possible combination outcomes when you toss a fair coin three times?

The correct answer is Option (d): $[HHH,TTT,HTH,THT,HHT,TTH,THH,HTT]$

There are 8 possible outcomes when we toss a fair coin three times.

12) What is the probability of you getting three heads straight for tossing a fair coin three times?

The correct answer is Option (c): $1/8$

The sample space for tossing a fair coin three times is, $S=[HHH,TTT,HTH,THT,HHT,TTH,THH,HTT]$

Number of possible outcomes in sample space $n(S)=8$

Let A be the event of getting three heads. $A=[HHH]$

Number of outcomes with three heads $n(A)=1$

We know that, Probability = Number of favourable outcomes / Total number of outcomes

The probability of getting three heads straight for tossing a fair coin three times is, $P[A]=n(A)/n(S)$ $=1/8$

13) What is the probability of you getting no heads at all for tossing a fair coin three times?

The correct answer is Option (c): $1/8$

The sample space for tossing a fair coin three times is, $S=[HHH,TTT,HTH,THT,HHT,TTH,THH,HTT]$

Number of possible outcomes in sample space $n(S)=8$

Let B be the event of getting no heads at all. 
$B=[TTT]$ Number of outcomes with no heads $n(B)=1$ We know that, Probability = Number of favourable outcomes / Total number of outcomes The probability of getting no heads at all for tossing a fair coin three times is, $P[B]=n(B)/n(S)$ $=1/8$ 
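As a quick check (my addition, not part of the original answer), the whole sample space can be enumerated by brute force, and both probabilities read off directly:

```python
from itertools import product

# All 2^3 = 8 equally likely sequences of three tosses
outcomes = [''.join(p) for p in product('HT', repeat=3)]
print(len(outcomes))   # 8

p_three_heads = sum(o == 'HHH' for o in outcomes) / len(outcomes)
p_no_heads = sum('H' not in o for o in outcomes) / len(outcomes)
print(p_three_heads, p_no_heads)   # 0.125 0.125, i.e. 1/8 each
```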
https://stats.stackexchange.com/questions/305548/expected-remaining-minutes-given-fixed-number-of-events
# Expected remaining minutes given fixed number of events An interview question that was relayed to me by a colleague: A machine beeps exactly 5 times within a time span of 10 minutes. However, the distribution of these 5 beeps across the entire time span is fully randomized - any distribution of the 5 beeps across the 10 minutes is equally likely. You heard the 3rd beep, then after exactly 1 minute you heard the 4th beep. What is the expected time remaining until the 5th beep? • Half of the remaining time? – Eran Sep 29 '17 at 11:31 • I guess that "the distribution of these 5 beeps across the entire time span is fully randomized - any distribution of the 5 beeps across the 10 minutes is equally likely" means the beep times are uniformly distributed. But it's not clear whether the time elapsed so far is available for estimating the time till the 5th beep. – Kodiologist Sep 29 '17 at 16:22 Create a clock with a hand that covers 10 minutes in one full revolution. Mark time 0 at the top. The assumptions amount to marking those five beeps independently and uniformly at random around the clock, creating six marks. This picture should make it obvious that the distributions of the six gaps between the marks are identical. It is immediate that their (unconditional) expectations are the same and must sum to the full time, so each expected gap length is $10/6$ minutes. It makes no difference whether you heard the third beep, or the second, or whatever: the question is tantamount to asking the expected time to the next beep around the clock, given that the preceding gap was one-tenth of the time. This leaves five gaps uniformly spanning nine-tenths of the remaining time. Again, each must have the same expectation, now equal to one-fifth of nine-tenths of the time. Thus, the answer is $1.8$ minutes. These plots summarize a simulation of 10,000 independent iterations of this beeping experiment. 
The histogram at left confirms the beeps were uniformly distributed (within random variation). The five scatterplots compare each gap with its successor (in terms of the total time, so one unit is ten minutes). (Gap 0 is the time of the first beep; Gap 5 is the time from the last beep until the end of the full time period.) Each plot is decorated, in red, with the Loess smooth of the data. (It fits a general curve rather than forcing a straight line.) Their straightness strongly suggests the expectation of the next gap is a linear function of the preceding gap, descending from $1/5=0.2$ of the total time to $0$. The similarities among these scatterplots bear out the claim that the joint gap distributions are all the same, regardless of the location of the gap. The conditional expectation where the preceding gap was $1/10$, shown with the vertical blue lines, must be $(1/10)\times 0 + (1-1/10)\times 0.2 = 0.18$, or $1.8$ minutes. (A formal rigorous explanation is that the order statistics form a Markov chain in which the order statistics preceding a particular one--a given "mark"--are conditionally independent of those following that mark. See Order Statistics and Related Models by Lopez Blazquez, Balakrishnan, Cramer, and Kamps at http://statmath.wu.ac.at/courses/balakrishnan/OrderStatsandRecords.pdf.) This R code generated the illustration. Simple modifications at the beginning make it usable for studying other numbers of beeps in more or less detail.

n <- 1e4
n.sample <- 6 # One larger than the number of beeps
#
# Generate data.
# This is a fast way to generate uniform[0,1] order statistics,
# avoiding an explicit sorting operation.
#
x <- matrix(rexp(n*n.sample), nrow=n.sample)  # IID exponential
x <- apply(x, 2, function(y) cumsum(y))
x <- t(x) / x[n.sample, ]                     # Order stats of IID uniform
#
# Display the beep distribution. 
#
par(mfrow=c(1,n.sample))
hist(x[, -n.sample], freq=FALSE, xlab="Time", main="Histogram of all Beeps")

X <- as.data.frame(cbind(x[, 1], x[, -1] - x[, -n.sample]))  # The gaps
names(X) <- paste0("Gap.", 1:n.sample - 1)
#
# Display the gap-to-gap scatterplots.
#
for (i in 1:(n.sample-1)) {
  Y <- X[, names(X)[0:1 + i]]
  plot(Y[, 1], Y[, 2], asp=1, xlim=0:1, ylim=0:1, pch=19, cex=0.5, col="#00000004",
       xlab=names(Y)[1], ylab=names(Y)[2], main="Gap Comparison")
  abline(v=0.1, col="Blue")
  f <- as.formula(paste(rev(names(Y)), collapse="~"))
  if (n <= 1e4) {
    fit <- loess(f, data=Y)
  } else {
    # Loess will be too slow
    fit <- lm(f, data=Y)
  }
  X.hat <- data.frame(V=seq(0, 1, length.out=101))
  names(X.hat) <- names(Y)[1]
  y.hat <- predict(fit, newdata=X.hat)
  lines(X.hat[, 1], y.hat, col="Red", lwd=2)
}
par(mfrow=c(1,1))
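The $1.8$-minute estimate can also be reproduced without any plots. The following Monte Carlo sketch (my addition, not the answer's R code) conditions directly on the 3rd-to-4th gap being close to one minute and averages the following gap; the sample size and conditioning window are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
beeps = np.sort(rng.random((n, 5)), axis=1)   # 5 uniform beep times on [0, 1]

gap34 = beeps[:, 3] - beeps[:, 2]   # gap between the 3rd and 4th beeps
gap45 = beeps[:, 4] - beeps[:, 3]   # gap between the 4th and 5th beeps

# Condition on the observed gap: 1 minute out of 10, i.e. 0.1 of the span
sel = np.abs(gap34 - 0.1) < 0.01
estimate = 10 * gap45[sel].mean()   # convert back to minutes
print(estimate)                     # close to 1.8
```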
https://math.stackexchange.com/questions/2938478/partial-sum-of-divergent-series?noredirect=1
# Partial sum of divergent series I am trying to find the nth partial sum of this series: $$S(n) = 2(n+1)^2$$ I found the answer on WolframAlpha: $$\sum_{n=0}^m (1+2n)^2 =\frac{1}{3}(m+1)(2m+1)(2m+3)$$ How can I calculate that sum, without any software? • It's a sum of squares. You may learn from math.stackexchange.com/q/188602/290189 – GNUSupporter 8964民主女神 地下教會 Oct 1 '18 at 20:41 • $2(n+1)^2\ne(1+2n)^2$. Which one are you trying to calculate? – Andrei Oct 1 '18 at 20:43 • Hint: \begin{eqnarray*} \sum_{n=0}^{m} n^2=\frac{m(m+1)(2m+1)}{6} \end{eqnarray*} – Donald Splutterwit Oct 1 '18 at 20:44 • @Andrei Sorry, that is a mistake. I am trying to calculate $(1+2n)^2$ – GKEdv Oct 1 '18 at 20:48 $$S(n)=(1+2n)^2=1+4n+4n^2$$ You can now use the following $$\sum_{n=0}^m1=m+1\\\sum_{n=0}^mn=\frac{m(m+1)}{2}\\\sum_{n=0}^mn^2=\frac{m(m+1)(2m+1)}{6}$$ Alternatively, compute the first 4-5 elements. The sum of a polynomial of order $$p$$ will be a polynomial of order $$p+1$$ in the number of terms. Find the coefficients, then prove by induction • Thanks, the first approach is very easy to understand. Can you elaborate on the alternative approach or point me to more information, example on that? – GKEdv Oct 1 '18 at 21:20 • If you look at en.wikipedia.org/wiki/Faulhaber%27s_formula#Summae_Potestatum you can get that the sum of $n^p$ is a polynomial of order $p+1$. Adding then the rest of the smaller powers will change the coefficients, but not the order of the polynomial. – Andrei Oct 1 '18 at 21:44 • The first sum should be $m\color{red}{+1}$ ... it starts at zero! – Donald Splutterwit Oct 2 '18 at 1:39 • You are right. It does not matter for the other ones. Thanks. I'll fix it – Andrei Oct 2 '18 at 3:14 $$\sum_\limits{i=0}^n 2(i + 1)^2 = 2\sum_\limits{i=1}^{n+1} i^2$$ Which gets to the meat of the question, what is $$\sum_\limits{i=1}^n i^2$$? There are a few ways to do this. I think that this one is intuitive. 
In the first triangle, the sum of the $$i^{th}$$ row equals $$i^2$$. The next two triangles are identical to the first but rotated 120 degrees in each direction. Adding corresponding entries we get a triangle with $$2n+1$$ in every entry. What is the $$n^{th}$$ triangular number? $$3\sum\limits_{i=1}^n i^2 = (2n+1)\frac {n(n+1)}{2}\\ \sum\limits_{i=1}^n i^2 = \frac {n(n+1)(2n+1)}{6}$$ To find: $$\sum\limits_{i=1}^{n+1} i^2$$, sub $$n+1$$ in for $$n$$ in the formula above. $$\sum\limits_{i=0}^n 2(i + 1)^2 = \frac {(n+1)(n+2)(2n+3)}{3}$$ Another approach is to assume that $$S(n)$$ can be expressed as a degree $$3$$ polynomial. This should seem plausible. $$S(n) = a_0 + a_1 n + a_2 n^2 + a_3n^3\\ S(n+1) = S(n) + 2(n+2)^2\\ S(n+1) - S(n) = 2(n+2)^2\\ S(n+1) = a_0 + a_1 (n+1) + a_2 (n+1)^2 + a_3(n+1)^3\\ = a_0 + a_1 n + a_1 + a_2 n^2 + 2a_2 n + a_2 + a_3 n^3 + 3a_3 n^2 + 3a_3 n + a_3\\ S(n+1) - S(n) = (a_1 + a_2 + a_3) + (2a_2 + 3a_3) n + 3a_3 n^2 = 2(n+2)^2 = 2n^2 + 8n + 8$$ giving a system of equations: $$a_1 + a_2 + a_3 = 8\\ 2a_2 + 3a_3 = 8\\ 3a_3 = 2\\ a_0 = S(0) = 2$$
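As a sanity check (my addition, not part of the thread), both closed forms appearing in the answers can be verified against direct summation with a short Python loop:

```python
def lhs(m):            # direct sum of the odd squares (1 + 2n)^2, n = 0..m
    return sum((1 + 2*n) ** 2 for n in range(m + 1))

def rhs(m):            # closed form (m+1)(2m+1)(2m+3)/3 from WolframAlpha
    return (m + 1) * (2*m + 1) * (2*m + 3) // 3

def lhs2(n):           # direct sum of 2(i+1)^2, i = 0..n
    return sum(2 * (i + 1) ** 2 for i in range(n + 1))

def rhs2(n):           # closed form (n+1)(n+2)(2n+3)/3
    return (n + 1) * (n + 2) * (2*n + 3) // 3

print(all(lhs(m) == rhs(m) and lhs2(m) == rhs2(m) for m in range(50)))   # True
```

(Integer division is exact here: one of the three factors in each product is always divisible by 3.)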
https://math.stackexchange.com/questions/1561861/show-that-lim-limits-x-rightarrow-0fx-1
# Show that $\lim\limits_{x\rightarrow 0}f(x)=1$ Suppose a function $f:(-a,a)-\{0\}\rightarrow(0,\infty)$ satisfies $\lim\limits_{x\rightarrow 0}\left(f(x)+\frac{1}{f(x)}\right)=2$. Show that $$\lim\limits_{x\rightarrow 0}f(x)=1$$ Let $\epsilon>0$, then there exists a $\delta>0$ such that $$\left(f(x)+\frac{1}{f(x)}\right)-2<\epsilon\;\;\;\text{whenever}\;\;\;0<|x|<\delta$$ Then \begin{align} &\left(f(x)+\frac{1}{f(x)}\right)-2\\ =&\left(f(x)-1\right) +\left(\frac{1}{f(x)}-1\right)<\epsilon\tag{1} \end{align} Squaring $(1)$ both sides gives $$\left(\left(f(x)-1\right) +\left(\frac{1}{f(x)}-1\right)\right)^2<\epsilon^2\tag{2}$$ Since $(f(x)-1)^2\leq\left(\left(f(x)-1\right) +\left(\frac{1}{f(x)}-1\right)\right)^2$, by using $(2)$, $(f(x)-1)^2<\epsilon^2\Rightarrow f(x)-1<\epsilon$; therefore, as $\epsilon$ is arbitrary, $\lim\limits_{x\rightarrow 0}f(x)=1$ Can someone give me a hint to do this question without using epsilon-delta definition? Thanks Since $f$ is positive and $f + (1/f)$ tends to $2$ as $x \to 0$, it follows that $f$ is bounded and away from $0$ as $x \to 0$. Therefore it follows that $\sqrt{f(x)}$ is bounded and away from $0$ as $x \to 0$. Now we can see that $$f(x) + \frac{1}{f(x)} = \left(\sqrt{f(x)} - \frac{1}{\sqrt{f(x)}}\right)^{2} + 2$$ and therefore $$\frac{(f(x) - 1)^{2}}{f(x)} \to 0$$ so that $(f(x) - 1)^{2} \to 0$ and therefore $|f(x) - 1| \to 0$ and then $f(x) \to 1$. Define $h(u) = \frac{u + \sqrt{u^2 - 4}}{2}$ (this is the inverse function of $x \mapsto x + \frac{1}{x}$). Note that $h(2) = 1$ and that $h$ is continuous at $u = 2$. 
Thus, $$1 = \lim_{u \to 2} h(u) = \lim_{x \to 0} h \left( f(x) + \frac{1}{f(x)} \right) = \lim_{x \to 0} \frac{f(x) + \frac{1}{f(x)} + \sqrt{\left( f(x) + \frac{1}{f(x)} \right)^2 - 4}}{2} = \lim_{x \to 0} \frac{f(x) + \frac{1}{f(x)} + \sqrt{ \left( f(x) - \frac{1}{f(x)} \right)^2}}{2} = 1 + \frac{1}{2}\lim_{x \to 0} \left| f(x) - \frac{1}{f(x)} \right|$$ which implies that $$\lim_{x \to 0} \left( f(x) - \frac{1}{f(x)} \right) = 0.$$ Combining both results together we have $$2 = 2 + 0 = \lim_{x \to 0} \left( f(x) + \frac{1}{f(x)} \right) + \lim_{x \to 0} \left( f(x) - \frac{1}{f(x)} \right) = \lim_{x \to 0} 2f(x)$$ which implies that $\lim_{x \to 0} f(x) = 1$. • Yes, isn't it what's written? – levap Dec 8 '15 at 13:20 • Thank you! I somehow completely failed to see it. – levap Dec 8 '15 at 13:49 $$a + \frac 1 a = \left( a - 2 + \frac 1 a \right) +2 = \left( \sqrt a - \frac 1 {\sqrt a} \right)^2 + 2 = \text{square} + 2.$$ So this is $\ge 2$ unless the square is $0$. $a+ \dfrac 1 a$ cannot get close to $2$ unless $a$ gets close to $1$. Even if you allow complex numbers (so that the inequality above doesn't hold), we have $$a + \frac 1 a = 2 \Longrightarrow a^2 + 1 = 2a$$ and that is a quadratic equation whose only solution is $a=1$ (a double root). • $a = 2$ is not a root of the equation. you probably meant $a = 1$ – Netivolu Feb 7 '18 at 10:06 Generalization: Suppose $f:(-a,a) \setminus \{0\} \to (0,\infty)$ and $g: (0,\infty)\to \mathbb R.$ Let $y_0> 0.$ Assume $g$ is strictly decreasing on $(0,y_0],$ and strictly increasing on $[y_0,\infty).$ If $\lim_{x\to 0} g(f(x)) = g(y_0),$ then $\lim_{x\to 0} f(x) = y_0.$ This applies to the problem at hand by letting $g(y) = y + 1/y$ with $y_0 = 1.$ Sketch of proof: Suppose $\limsup_{x\to 0} f(x) > y_0.$ Then there exists $z>y_0$ and a sequence $x_n \to 0$ such that $f(x_n)> z$ for all $n.$ This implies $g(f(x_n))> g(z)$ for all $n.$ Because $g(z) > g(y_0),$ this is a contradiction.
Thus $\limsup_{x\to 0} f(x) \le y_0.$ Same idea if $\liminf_{x\to 0} f(x) < y_0.$ Thus $$y_0 \le \liminf_{x\to 0} f(x)\le \limsup_{x\to 0} f(x)\le y_0,$$ giving the result.
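The identity driving the first answer, $f + 1/f - 2 = (f-1)^2/f$ for $f > 0$, can be sanity-checked numerically. A throwaway sketch (the helper name `gap` is my own):

```python
import math

# Identity used above: for f > 0,
#   f + 1/f - 2 = (sqrt(f) - 1/sqrt(f))^2 = (f - 1)^2 / f,
# so f + 1/f -> 2 forces (f - 1)^2 -> 0, i.e. f -> 1.
def gap(f):
    return f + 1.0 / f - 2.0

for f in [0.5, 0.9, 0.99, 1.0, 1.01, 1.1, 2.0, 10.0]:
    assert math.isclose(gap(f), (f - 1.0) ** 2 / f, abs_tol=1e-12)

# A concrete example with f(x) + 1/f(x) -> 2 as x -> 0: f(x) = 1 + x.
for x in [0.1, 0.01, 1e-4, 1e-8]:
    f = 1.0 + x
    # gap(f) = x^2 / (1 + x) -> 0, while |f - 1| = x -> 0 as well
    assert math.isclose(gap(f), x * x / f, abs_tol=1e-12)
```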
https://math.stackexchange.com/questions/2186632/spanning-set-definition-and-theorem
# Spanning set definition and theorem I need a bit of clarification in regards to the spanning set. I am confused between the definition and the theorem. Definition of Spanning Set of a Vector Space: Let $S = \{v_1, v_2,...v_n\}$ be a subset of a vector space $V$. The set is called a spanning set of $V$ if every vector in $V$ can be written as a linear combination of vectors in $S$. In such cases it is said that $S$ spans $V$. Definition of the span of a set: If $S = \{v_1, v_2,...v_n\}$ is a set of vectors in a vector space $V$, then the span of $S$ is the set of all linear combinations of the vectors in $S$, $span(S) = \{k_1v_1 + k_2v_2+...+k_nv_n | k_1, k_2,...k_n \in \mathbb{R}\}$. The span of $S$ is denoted by $span(S)$ or $span\{v_1, v_2,...v_k\}$. If $span(S) = V$ it is said that $V$ is spanned by $\{v_1, v_2,...v_n\}$, or that $S$ spans $V$. What I understand from the definitions: $S$ is a subset of the vector space $V$, and if I can represent all of the vectors that are in the vector space by using just the subset, the smaller part of $V$, then it can be said that $S$ spans $V$, or can reach every vector in $V$. A linear combination has the following form $a = k_1v_1 + k_2v_2 + k_3v_3 +...+k_nv_n$ where the $k_i$ are scalars, the $v_i$ are the vectors in the subset $S$ of $V$, and $a$ is a particular vector in $V$ that can be created by a linear combination of vectors in $S$. This can be done for an infinite number of vectors, or all the vectors that are in the vector space $V$. We can create the set of all linear combinations of the vectors that can be reached by $S$ in $V$. For instance, the linear combination $a$ can be in this set, and just like it, many others are part of this set. We say that $S$ spans $V$ if every vector in $V$ can be reached by the vectors in $S$. Furthermore, $span(S)$ is the set that contains the linear combinations. Theorem 4.7 Span(S) is a subspace of V: If $S = \{v_1, v_2,...v_n\}$ is a set of vectors in a vector space $V$,
then $span(S)$ is a subspace of $V$. Moreover, $span(S)$ is the smallest subspace of $V$ that contains $S$, in the sense that every other subspace of $V$ that contains $S$ must contain $span(S)$. Question: Theorem 4.7 is where I am confused. The reason I posted my understanding of the above definitions is that, if I am missing something, perhaps someone will point it out to me so I can bridge the gap. Regardless, where I am confused is that the theorem states that $span(S)$ is the smallest part of $V$, but how can it be the smallest if we are saying that $span(S) = V$ in the definition of the span of a set? Should this not mean that $span(S)$ is $V$ because of the equality? I can see that the subset $S$ could be the smallest part, because we are only taking the elements that can span $V$, and that would make sense; but $span(S)$ is supposed to be a set of linear combinations and therefore contains everything that is in $V$. What am I missing here? P.S. Sorry for the long post, I have just been grappling with this for a while so I wanted to clarify. Also, I am self-studying so forums like these are my teachers. • Who says $\text{span}(S)=V$ in theorem $4.7$? – Mathematician 42 Mar 14 '17 at 18:13 • Elementary Linear Algebra, Sixth Edition, by Larson, Edwards, and Falvo, chapter 4, section 4.4, Spanning Sets and Linear Independence, page 211. It is not the book I am using, but because I was confused I decided to look it up, and this is what I came across; it clarified most of the things but still left me confused. Also, it is not in the theorem, but if you look at the definition of the span of a set, that is where it is stated. In the span theorem they are saying it is the smallest part of $V$. If $span(S) = V$ then how is it the smallest? Because it should cover all of $V$. – Iamlearningmath Mar 14 '17 at 18:17 • No, you're not reading it correctly. The definition of the span of a set does not say that $\text{span}(S)=V$.
It says that if $\text{span}(S)=V$ then we say that $V$ is spanned by $S$, but we don't ask this in theorem $4.7$. – Mathematician 42 Mar 14 '17 at 18:22 The definition does not assume $\textrm{span}(S) = V.$ If this happens to be the case, $S$ is called a spanning set, but Theorem 4.7 does not make this assumption. In the theorem, $S$ is just any subset of $V.$ Consider for example $S = \{0\},$ in which case $\textrm{span}(S)$ is also just $\{0\}.$ Or consider $\{(1,0)\} \subset \mathbb{R}^2,$ whose span is the $x$-axis inside of the plane.
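The definitions can also be tested concretely: a vector $v$ lies in $\mathrm{span}(S)$ exactly when appending $v$ to $S$ does not increase the rank. A small exact-arithmetic sketch for vectors with rational coordinates (the function names are my own):

```python
from fractions import Fraction

def rank(mat):
    """Row rank over the rationals, by Gaussian elimination."""
    rows = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][c] / rows[r][c]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def in_span(S, v):
    """v is in span(S) iff adding v does not raise the rank."""
    return rank(S + [v]) == rank(S)

# span({(1,0)}) is the x-axis inside the plane:
assert in_span([[1, 0]], [3, 0])
assert not in_span([[1, 0]], [0, 1])
# span({0}) is just {0}:
assert in_span([[0, 0]], [0, 0])
assert not in_span([[0, 0]], [1, 2])
```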
https://math.stackexchange.com/questions/2566195/find-a-solution-for-the-equation-a4n-an-over-4
# Find a solution for the equation $a(4n) = {a(n)\over 4}$ [closed] I was given the equation $a(4n) = {a(n)\over 4}$ where $a(1) = 1$. I know that ${1\over \sqrt n }$ solves this equation, but I don't know how I would find this solution by hand if I didn't know about it. Any hints on this matter are greatly appreciated. EDIT: oh snap I made a typo. It is $a(4n) = {a(n)\over 2}$. I apologize for my mistake. I'm not sure why my question was put on hold. To clarify, I was asking how one would systematically solve the recursive equation given above, as I was only able to see the solution, but not how one would find it. Anyway, my question has been answered by eranreches (do I need to do something else apart from selecting the best answer to mark the question as answered? -- if I have to, I'm sorry, it is my first time posting here...) ## closed as unclear what you're asking by Did, Claude Leibovici, Shaun, José Carlos Santos, Shailesh Dec 15 '17 at 0:11 Please clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it’s hard to tell exactly what you're asking. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question. • Welcome to MSE. Please read this text about how to ask a good question. – José Carlos Santos Dec 14 '17 at 10:50 • What is the equation? – MathematicianByMistake Dec 14 '17 at 10:52 • $\frac 1{\sqrt n}$ does not solve your recurrence. With that definition, $a_1=1$ but $a_4=\frac 12$. – lulu Dec 14 '17 at 10:55 • @lulu Isn't $a_4 = 1/4$? – Yanko Dec 14 '17 at 10:58 • Note that $a(n) = c/n$ is a solution. I found this by assuming that $a(n) = n^p$, and then solving $4^p = 1/4$. I'm not sure if this is the only solution, though.
– JavaMan Dec 14 '17 at 11:57 If you want a more systematic way, define $$b_{n}\equiv a_{4^{n}}$$ Then $$b_{0}=a_{1}=1$$ $$b_{n+1}=a_{4^{n+1}}=a_{4\cdot4^{n}}=\frac{a_{4^{n}}}{2}=\frac{b_{n}}{2}$$ with a solution $b_{n}=\frac{1}{2^{n}}$. Now reverse to get $$a_{n}=b_{\log_{4}n}=\frac{1}{2^{\log_{4}n}}=\frac{1}{4^{\frac{1}{2}\log_{4}n}}=\frac{1}{n^{\frac{1}{2}}}=\frac{1}{\sqrt{n}}$$ as wanted. • This is exactly what I was looking for. Thank you for your answer – loungelizard Dec 14 '17 at 14:43 Let $n=4^k$, so that $4n=4^{k+1}$, and let $b(k):=a(4^k)=a(n)$. Now $$b(k+1)=\frac{b(k)}2,$$ which is an ordinary recurrence, with the particular solution (fulfilling $b(0)=1$) $$b(k)=\frac 1{2^k}=\frac 1{\sqrt n}=a(n).$$
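A quick numerical check of the answers above (a throwaway sketch): $a(n)=1/\sqrt n$ satisfies $a(1)=1$ and the corrected recurrence $a(4n)=a(n)/2$, and the substitution $b_k=a(4^k)$ really does halve at each step.

```python
import math

def a(n):
    return 1.0 / math.sqrt(n)

assert a(1) == 1.0
# corrected recurrence from the edit: a(4n) = a(n) / 2
for n in [1, 2, 3, 10, 1000]:
    assert math.isclose(a(4 * n), a(n) / 2)

# substitution b_k = a(4^k) turns it into b_{k+1} = b_k / 2, so b_k = 1 / 2^k
for k in range(8):
    assert math.isclose(a(4 ** k), 1 / 2 ** k)
```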
http://math.stackexchange.com/questions/108566/writing-a-function-f-when-x-and-fx-are-known/108579
# Writing a function $f$ when $x$ and $f(x)$ are known I'm trying to write a function. For each possible input, I know what I want for output. The domain of possible inputs is small: $$\begin{vmatrix} x &f(x)\\ 0 & 2\\ 1 & 0\\ 2 & 0\\ 3 &0\\ 4 &0\\ 5 &0\\ 6 &1\\ \end{vmatrix}$$ My thought is to start with a function $g(x)$ that transforms $x$ to a value from which I can subtract a function $h(g(x))$ and receive my desired $f(x)$: $$\begin{vmatrix} x &g(x)& h(g(x)) &f(x)\\ 0 & 7& 5 & 2\\ 6 & 6 & 5 & 1\\ 5 & 5 & 5& 0\\ 4 & 4 & 4 & 0\\ 3 & 3 & 3 & 0\\ 2 & 2 & 2 & 0\\ 1 & 1 & 1& 0 \end{vmatrix}$$ But I'm not sure where to go from there, or even whether I'm heading in the right direction. How should I approach creating this function? Is there a methodical way to go about it, or is it trial and error (and knowledge)? Feel free to suggest modifications to how I'm stating my problem, too. Thanks. - First you'd need to explain how you're using the term "function". Under the standard usage of that term, your first table already defines a function and there would be nothing left to do. Perhaps you're looking for an expression in terms of basic arithmetic operations and/or well-known elementary functions? If so, you should specify which ingredients you would like to allow in this expression. –  joriki Feb 12 '12 at 18:35 2 - 2*sign(x) + floor(x/6) comes to my mind. –  mzuba Feb 12 '12 at 18:38 What you're looking for, in general, is called curve fitting – see en.wikipedia.org/wiki/Curve_fitting. Essentially, the question is this: given a set of points, what is the "best" curve to draw through those points? Naturally, the answer depends on what you mean by "best". –  Tanner Swett Feb 12 '12 at 19:01 @joriki: Thanks for clarifying the terminology. I'll avoid editing my post since people have already commented and answered, but you are exactly right: I'm looking for an expression in terms of basic arithmetic operations and/or well-known elementary functions. 
–  PunctuallyChallenged Feb 12 '12 at 19:09 @mzuba: Perfect, thanks! I'm looking forward to answers that describe the process of coming up with something like that. –  PunctuallyChallenged Feb 12 '12 at 19:12 Obviously function that we are looking for is not linear, then if we do not want to split it to cases and we want to use only basic operations, at least one possibility is to find coefficients of the following polynomial $$f(x)=a_1x^6+a_2x^5+\dots+a_6x+a_7$$ By solving system of your 7 given equations: $$\dots\dots\dots \\f(1)=a_1+a_2+\dots+a_6+a_7=0\\ \dots\dots\dots$$ Then if I am correct, the result is: $$f(x)=\frac{x^6}{240}-\frac{19x^5}{240}+\frac{29x^4}{48}-\frac{113x^3}{48}+\frac{587x^2}{120}-\frac{76x}{15}+2$$ - Between this and Tanner's reference to Curve Fitting, I have a good sense of how to approach this now. While I much prefer mzuba's simple expression, I suspect that it is more of an ad-hoc solution than this general approach. –  PunctuallyChallenged Feb 12 '12 at 19:33 You can generalise the problem: suppose you know the value of $f(x)$ for a particular finite set of values of $x$. (Here, you know the value of $f(x)$ when $x=0,1,2,3,4,5,6$.) Then you can find a possible polynomial function $f$ which takes the given values using the following method. Suppose you know the value of $f(x)$ when $x=x_1, x_2, \dots, x_n$, and that $f(x_j) = a_j$ for $1 \le j \le n$. Let $$P_i(x) = \lambda (x-x_1)(x-x_2) \cdots (x-x_{i-1})(x-x_{i+1}) \cdots (x-x_n)$$ Where $\lambda$ is some constant. That is, it's $\lambda$ times the product of all the $(x-x_k)$ terms with $x-x_i$ left out. Then $P_i(x_k) = 0$ whenever $k \ne i$. We'd like $P_i(x_i) = 1$: then if we let $$f(x) = a_1 P_1(x) + a_2 P_2(x) + \cdots + a_n P_n(x)$$ then setting $x=x_j$ sends all the $P_i(x)$ terms to zero except $P_j(x)$, leaving you with $f(x_j) = a_jP_j(x_j) = a_j$, which is exactly what we wanted. 
Well we can set $\lambda$ to be equal to $1$ divided by what we get by setting $x=x_i$ in the product: this is never zero, so we can definitely divide by it. So we get $$P_i(x) = \dfrac{(x-x_1)(x-x_2) \dots (x-x_{i-1})(x-x_{i+1}) \dots (x-x_n)}{(x_i-x_1)(x_i-x_2) \dots (x_i-x_{i-1})(x_i-x_{i+1}) \dots (x_i-x_n)}$$ Then $P_i(x_k) = 0$ if $k \ne i$ and $1$ if $k=i$, which is just dandy. More concisely, if $f$ is to satisfy $f(x_j)=a_j$ for $1 \le j \le n$ then $$f(x) = \sum_{j=1}^n \left[ a_j \prod_{\substack{i=1 \\ i \ne j}}^n \frac{x-x_i}{x_j-x_i} \right]$$ This method is called Lagrange interpolation. So in this case, your $x_1, x_2, \dots, x_7$ are the numbers $0, 1, \dots, 6$ and your $a_1, a_2, \dots, a_7$ are, respectively, $2,0,0,0,0,0,1$. Substituting these into the above formula, we get: \begin{align} f(x) &= 2 \times \dfrac{(x-1)(x-2) \dots (x-6)}{(0-1) (0-2) \dots (0-6)} + 0 \times (\text{stuff}) + 1 \times \dfrac{x(x-1) \dots (x-5)}{6(6-1)(6-2) \dots (6-5)}\\ &= 2 \dfrac{(x-1) (x-2) \dots (x-6)}{720} + \dfrac{x(x-1) \dots (x-5)}{720} \\ &= \dfrac{(x-1)(x-2)(x-3)(x-4)(x-5)}{720} \left[ 2(x-6) + x \right] \\ &= \boxed{\dfrac{(x-1)(x-2)(x-3)(x-4)^2(x-5)}{240}} \end{align} You can check easily that this polynomial satisfies the values in your table. In fact, in this particular case, all the above machinery wasn't necessary. It's plain that $f(x)=0$ when $x=1,2,3,4,5$, and so $x-j$ must divide $f(x)$ for $j=1,2,3,4,5$, and so $$f(x) = (x-1)(x-2)(x-3)(x-4)(x-5)g(x)$$ for some polynomial $g(x)$. Since we're only worried about $x=0,6$ beyond this, i.e. $2$ values of $x$, it suggests we have $2$ free parameters in $g(x)$ and hence $g(x)=ax+b$ is linear. That is, we have $$f(x) = (x-1)(x-2)(x-3)(x-4)(x-5)(ax+b)$$ Substituting $x=0$ and $x=6$, respectively, gives \begin{align}2 &= -5! \cdot b \\ 1 &= 5! \cdot (6a+b) \end{align} and solving simultaneously gives $b=-\dfrac{2}{120}$ and $a = \dfrac{3}{720}$, which (after simplification) yields the desired result.
- This is often called Lagrange interpolation. – JavaMan Feb 12 '12 at 22:14 @JavaMan: Indeed, I can't believe I forgot to mention the name of the method! I'll edit my post. – Clive Newstead Feb 12 '12 at 22:50 Thank you for the detailed explanation. Just enough information for me to follow it. – PunctuallyChallenged Feb 12 '12 at 23:15 An alternative to Lagrange interpolation is Newton’s polynomial interpolation algorithm. – mzuba Mar 27 '12 at 11:56 Use $f(x)=(x-1)(x-2)(x-3)(x-4)(x-5)(\frac{x}{240}-\frac{1}{60})$. The first five factors make the function $0$ at $1,2,3,4,5$. The last factor takes care of the rest. - As long as we're giving silly answers, here's another one: $f = \chi_{\lbrace0\rbrace} + \chi_{\lbrace0,6\rbrace} = 2\chi_{\lbrace0\rbrace} + \chi_{\lbrace6\rbrace}$, where $\chi_A$ denotes the indicator function of the set $A$. – kahen Feb 12 '12 at 19:18 There is absolutely nothing wrong with using the table of values as the definition of the function $f:\{0,1,2,3,4,5,6\}\to\{0,1,2\}$ (or of a function $\{0,1,2,3,4,5,6\}\to\mathbb Z$, or anything similar, if you want that; the usual notion of function in mathematics requires you say it maps $X\to Y$ where you must specify both $X$ and $Y$, but there is no necessity that all values of $Y$ are actually attained by $f$). There is no need for a function to be given by an expression; that is merely a convenient method that is often used to avoid specifying the values of $f$ individually. As you can see from the other answers given, one can invent many different expressions that, when restricted to arguments in $\{0,1,2,3,4,5,6\}$, give the same value as those in your table, but they are just different ways to describe the same function that you defined. And they are no more insightful than the table in your question.
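The Lagrange construction described above is mechanical enough to check by machine. A small sketch using exact rational arithmetic (the helper names are my own): it rebuilds the interpolant from the table and confirms it agrees with the boxed closed form, not just at the seven nodes but everywhere, as uniqueness of the degree-at-most-6 interpolant guarantees.

```python
from fractions import Fraction

def lagrange(points):
    """Evaluate the Lagrange interpolating polynomial through
    `points` = [(x_j, a_j), ...] exactly, at a rational x."""
    def p(x):
        x = Fraction(x)
        total = Fraction(0)
        for j, (xj, aj) in enumerate(points):
            term = Fraction(aj)
            for i, (xi, _) in enumerate(points):
                if i != j:
                    term *= (x - xi) / (xj - xi)
            total += term
        return total
    return p

pts = [(0, 2), (1, 0), (2, 0), (3, 0), (4, 0), (5, 0), (6, 1)]
f = lagrange(pts)
assert all(f(x) == a for x, a in pts)  # reproduces the table

def closed(x):  # the boxed answer
    x = Fraction(x)
    return (x - 1) * (x - 2) * (x - 3) * (x - 4) ** 2 * (x - 5) / 240

# two degree-<=6 polynomials agreeing at 7 points are identical,
# so they must agree at every other x too:
for x in [Fraction(1, 2), Fraction(13, 4), 7, -3]:
    assert f(x) == closed(x)
```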
I might as well: there is in fact a nice way to rewrite the Lagrange interpolating polynomial, called barycentric Lagrange interpolation, that allows you to quickly write down an expression for the interpolating polynomial, after which you only need to do a few more simplifications to get the actual polynomial you want. For the case of equally-spaced points like in the OP ($x_j=x_0+jh,\;j=0,1,\dots,n$) with corresponding $f(x_j)$, the barycentric form of the Lagrange interpolating polynomial looks remarkably simple: $$\frac{\sum\limits_{j=0}^n \frac{(-1)^j}{x-x_j}\binom{n}{j}f(x_j)}{\sum\limits_{j=0}^n \frac{(-1)^j}{x-x_j}\binom{n}{j}}$$ In fact, for numerical evaluation purposes, one could use this form directly, with some care needed when $x$ is equal to an interpolation point. (If $x$ is nearly, but not equal to, an interpolation point, the method still performs with good accuracy; see the linked article for a deeper discussion.) For OP's case, we obtain, thankfully due to most of the ordinates being zero, the expression $$\frac{\frac2{x}+\frac1{x-6}}{\frac1{x}-\frac6{x-1}+\frac{15}{x-2}-\frac{20}{x-3}+\frac{15}{x-4}-\frac6{x-5}+\frac1{x-6}}$$ or, after a bit more algebraic massaging, $$\frac{x(x-1)(x-2)(x-3)(x-4)(x-5)(x-6)}{720}\left(\frac1{x-6}+\frac2{x}\right)=\frac{(x-1)(x-2)(x-3)(x-4)^2(x-5)}{240}$$ For the case of non-equispaced points, a bit more work needs to be done; see the Berrut/Trefethen paper I linked to above for more details. - If you have a set of discrete data points like this, and you want to infer a continuous function which satisfies all of these data points, you can basically plot the points and then draw straight lines between the closest points to obtain your continuous function. In other words, you can use linear interpolation.
If you know two points ($x_1$, f($x_1$)), ($x_2$, f($x_2$)), then you can find the equation for a straight line between them, "y=mx+b", by first finding the slope m of such a line: m=(f($x_2$)-f($x_1$))/($x_2$-$x_1$). Then you can figure out b by substituting $x_1$ for x, and f($x_1$) for y. Applying that for all appropriate pairs of points here gives us: $$f(x)=\begin{cases}-2x+2 & \text{if } x\in[0,1]\\ 0 & \text{if } x\in(1,5)\\ x-5 & \text{if } x\in[5,6]\end{cases}$$ Such a function is, of course, not differentiable everywhere. Other interpolation methods do exist, and as Marc van Leeuwen, as well as others, hint at, many other interpolation methods could be developed if desired. -
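The piecewise-linear recipe above is short enough to sketch in code (a rough illustration; the names are my own, and distinct x-values are assumed):

```python
def piecewise_linear(points):
    """Connect consecutive points with straight segments.
    Assumes the points have pairwise distinct x-values."""
    pts = sorted(points)
    def f(x):
        if not pts[0][0] <= x <= pts[-1][0]:
            raise ValueError("x outside the interpolation range")
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            if x1 <= x <= x2:
                m = (y2 - y1) / (x2 - x1)   # slope of this segment
                return y1 + m * (x - x1)    # y = y1 + m (x - x1)
    return f

f = piecewise_linear([(0, 2), (1, 0), (2, 0), (3, 0), (4, 0), (5, 0), (6, 1)])
assert f(0) == 2 and f(6) == 1            # endpoints from the table
assert f(0.5) == 1.0                      # halfway down the first segment
assert f(3.7) == 0.0                      # flat middle part
assert f(5.5) == 0.5                      # halfway up the last segment
```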
https://math.stackexchange.com/questions/250429/knights-and-knaves/3584707
# Knights and Knaves A very special island is inhabited only by knights and knaves. Knights always tell the truth, and knaves always lie. You meet four inhabitants: Bozo, Marge, Bart and Zed. • Bozo says," Bart and Zed are both knights". • Marge tells you that both Bart is a knight and Zed is a knave • Bart tells you," Neither Marge nor Zed are knaves". • Zed says that neither Bozo nor Marge are knaves. Can you determine who is a knight and who is a knave? I am having extreme difficulty with this can anyone help me? I assume is starts like this. So $Bo\equiv(Ba\land Ze)$ $Ma≡(Ba\land \lnot Ze)$ $Ba\equiv(Ma\lor Ze)$ $Ze≡(Bo\lor Ma)$ Where $Bo$= Bozo is a knight $Ma$= Marge is a knight $Ba$= Bart is a knight $Ze$= Zed is a knight • possible duplicate of Knights and knaves: Who are B and C? (task 26 from "What Is the Name of This Book?") – DonAntonio Dec 4 '12 at 3:12 • @Don: Could you please describe the isomorphism you see between this problem and the duplicate you suggested? Superficially they look quite different. – joriki Dec 4 '12 at 3:20 • @amWhy: The same question to you; your suggested duplicate also looks quite different superficially. – joriki Dec 4 '12 at 3:22 • To all, I mistakenly voted to close, prematurely. The "possible duplicate" that was generated as a comment was not accurate. I have deleted that comment. Apologies. @joriki - You needn't be so argumentative, joriki. – amWhy Dec 4 '12 at 3:24 • @amWhy: I'm sorry if I came across as argumentative; that wasn't my intention; I was only asking for an explanation and expressing a different view. Please explain why my comment struck you as argumentative to help me avoid that in the future. – joriki Dec 4 '12 at 3:33 First, your symbolic translations of Bart’s and Zed’s statements are incorrect. Bart actually said $$\text{Ma}\land\text{Ze}\;,$$ and Zed said $$\text{Bo}\land\text{Ma}\;.$$ A quick way to solve it is to suppose that Bart is a knight. 
Then he’s telling the truth, so Marge and Zed are also knights. But that’s impossible, because Marge said that Zed is a knave: if she’s a knight, she’s telling the truth, and Zed isn’t a knight. Thus, Bart cannot be a knight and must therefore be a knave. Can you finish it from there? Bozo says, "Bart and Zed are both knights." Marge tells you that both Bart is a knight and Zed is a knave. Bart tells you, "Neither Marge nor Zed are knaves." Zed says that neither Bozo nor Marge are knaves. Marge and Bart contradict each other. If Marge is telling the truth, Zed lies and Bart tells the truth. Bart, however, says that both Marge and Zed tell the truth, a contradiction; and if Bart were a knight, Marge would be a knight too. Thus, Marge and Bart are knaves. Therefore, Zed is a knave (Zed claims that Marge is a knight, which is false). Bozo says Bart and Zed are knights, which has been established false. Thus all are knaves. The way I typically solve these problems is to look for contradictions. If Bozo is a knight, that means that Bart and Zed are knights, and if Zed is a knight, then Bozo and Marge are knights. If Marge is a knight, then Zed is a knave, which contradicts what we have, so Bozo is a knave. If Marge is a knight, then Bart is a knight and Zed is a knave. This means (from Bart) that Zed is not a knave, so Marge is a knave. If Bart is a knight then that means Marge is a knight, which we already established to be false, so Bart is a knave. If Zed is a knight, then that means Bozo is a knight, which we already established to be false, so Zed is a knave. You have chosen a good formalization here: with the corrections from Brian M.
Scott's answer, you are given that \begin{align} (0) \;\;\; & Bo \equiv Ba \land Ze \\ (1) \;\;\; & Ma \equiv Ba \land \lnot Ze \\ (2) \;\;\; & Ba \equiv Ma \land Ze \\ (3) \;\;\; & Ze \equiv Bo \land Ma \\ \end{align} Now, looking at the shape of these formulae, we note that $\;Bo\;$'s $(0)$ and $\;Ma\;$'s $(1)$ have a similar structure where we may expect to get a contradiction, and these $\;Bo, Ma\;$ are used symmetrically in $(3)$. Therefore we calculate \begin{align} & Ze \\ \equiv & \;\;\;\;\;\text{"by (3)"} \\ & Bo \land Ma \\ \equiv & \;\;\;\;\;\text{"by (0); by (1)"} \\ & Ba \land Ze \;\land\; Ba \land \lnot Ze \\ \equiv & \;\;\;\;\;\text{"logic: contradiction; simplify"} \\ & \text{false} \\ \end{align} Using this in $(0)$ and $(2)$ immediately leads to $\;Bo \equiv \text{false}\;$ and $\;Ba \equiv \text{false}\;$, respectively. And plugging that last conclusion into $(1)$ gives us $\;Ma \equiv \text{false}\;$. Therefore all are knaves. Let M denote that Marge is a knight and ~M denote that Marge is a knave (not a knight). Similarly for the others. Here are the base statements, where a comma denotes "and": Bo => Ba, Z; Ba => M, Z; M => Ba, ~Z; Z => Bo, M. Notice that if one of them is a knave, her statement is false; for example ~M => ~Ba OR Z (that is, at least one of these is true). M => Ba, ~Z; and in turn ~Z => ~Ba, ~Bo (since Ba and Bo each claim Z, but Marge claims ~Z). But this contradicts Ba. Thus ~M. Also, ~Z (Z claims M, which is false). This makes them all knaves, since Bo and Ba both claim Z.
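All of the arguments in this thread can be confirmed by brute force over the $2^4$ truth assignments (a throwaway sketch; `True` means knight, and each speaker's statement must be true exactly when the speaker is a knight):

```python
from itertools import product

solutions = []
for Bo, Ma, Ba, Ze in product([False, True], repeat=4):
    if (Bo == (Ba and Ze)              # Bozo: Bart and Zed are both knights
            and Ma == (Ba and not Ze)  # Marge: Bart is a knight, Zed is a knave
            and Ba == (Ma and Ze)      # Bart: neither Marge nor Zed is a knave
            and Ze == (Bo and Ma)):    # Zed: neither Bozo nor Marge is a knave
        solutions.append((Bo, Ma, Ba, Ze))

# the unique consistent assignment makes all four knaves
assert solutions == [(False, False, False, False)]
```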
https://www.physicsforums.com/threads/roll-5-dice-simultaneously.847026/
# Roll 5 dice simultaneously 1. Dec 7, 2015 ### spacetimedude 1. The problem statement, all variables and given/known data Calculate a) P(exactly three dice have the same number) b) Calculate the conditional probability P(three of the dice show six|two of the dice show 5) 2. Relevant equations 3. The attempt at a solution a) Say we have 1 as the number we get for three dice. There is 1/6 chance of getting a 1 and 5/6 of getting other numbers. The total sample space is 6^5 for five rolls. Hence for P(getting three 1's)=[(1/6)^3*(5/6)^2]/[6^5] But that was just for getting three 1's and there are a total of 6 numbers that can be used. So P(exactly three dice have the same number)=6*[(1/6)^3*(5/6)^2]/[6^5]. This answer seems to be right, but I would like to know how you can solve this using the binomial coefficient. My attempt: 3 in 5 rolls will result in a number, which can be expressed as 5C3. Each number has 1/6 chance of being that number. The others have the probability of 5/6. The sample space is still 6^5. Putting this all together, [6*(5C3)*(1/6)^3*(5/6)^2]/[6^5], which yields a wrong answer. b) The conditional probability P(three of the dice show six|two of the dice show 5) can be written as P(three of the dice show six|two of the dice show 5)=P(three show 6, two show 5)/P(two show 5). Why are the events not independent here? Each die is a different entity so can't we just say P(three show 6, two show 5)=P(three show 6)*P(two show 5)? Anyway, P(two show 5)=[(1/6)^2*(5/6)^3]/[6^5] I am not sure how to find the probability of the intersection of three show 6 and two show 5. I tried using the property that the numerator of conditional probability can be written as P(three show six)P(two show 5|three show 6) but that got me nowhere. Any help will be appreciated! 2. Dec 7, 2015 ### Staff: Mentor This is the probability for "the first 3 dice show the same number, the last two do not have this number" (or any other set of 3 specific dice).
It does not matter which three dice show the same number; you have to take this into account. Also, you include the factors of 1/6 twice. Use 1/6 and 5/6, or use 1 and 5 and divide by $6^5$, but not both together. This is also a problem in all other formulas. Apart from the wrong factors of 6 (see above), this approach is right. Same as above: the dice don't have a given order. A die that shows 5 cannot show 6 and vice versa. This is not right. Neither for "exactly two" nor for "at least two". 3. Dec 7, 2015 ### spacetimedude Okay, I am having a bit of trouble understanding the choose function. It would be great if you can help me clear up a problem: P(two dice show 5) So there are 5C2 possible ways to choose the two dice. And those dice have one choice (that they are showing 5), so for that part, it will be (5C2)*1. Then the three other dice can be any of the five other numbers. Here, the right method is 5*5*5, but why can't we use the choose function, 5C3, here? There are 5C3 ways of choosing the rest of the dice, and there are 5 choices for them, so (5C3)*5. Why is (5C3)*5 ≠ 5*5*5? Using 5*5*5, P(two dice show 5)= [(5C2)*5^3]/[6^5] 4. Dec 7, 2015 ### PeroK To try to clear up this confusion, think of 3 dice, rather than 5. Suppose you want precisely two dice to show 5. There are $3C2 = 3$ ways to choose the two dice that are the same. These are: 55X 5X5 X55 Now, as we have chosen the two places with the 5's, there is no independent choice for the remaining place. It's already decided. It's the same when you have 5 dice. There are $5C2$ choices for where the two 5's are. But, once you have chosen these, the remaining three places are chosen also: 55XXX 5X5XX etc. 5. Dec 7, 2015 ### Staff: Mentor "exactly two", yes. Note that you can also choose the 3 non-5 dice instead of the 2 dice that are 5, as (5 choose 3) = (5 choose 2). It is still just one choice that fixes both groups. 6. Dec 7, 2015 ### spacetimedude That makes it much more clear! Thanks so much!
Okay :) thank you! 7. Dec 7, 2015 ### spacetimedude I'm still a bit puzzled about part b of the original question. I do not know how to find the probability of the intersection (the numerator) in the conditional probability. Any hints to how I should approach this? 8. Dec 7, 2015 ### Staff: Mentor P(three show 6, two show 5)? This is easier than all the other probabilities you calculated so far. 9. Dec 7, 2015 ### spacetimedude Yes, that one. I can do them individually but not sure how you can combine them. 10. Dec 7, 2015 ### Staff: Mentor What do you mean by individually? How many options are there to have "6 6 6 5 5" as result? 11. Dec 7, 2015 ### spacetimedude Doh! So it's just 5C2 or 5C3?? So the probability is (5C3)/(6^5)? 12. Dec 8, 2015 Right.
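The answers worked out in this thread can be verified by enumerating all $6^5 = 7776$ equally likely outcomes (a quick sketch; the helper names are my own):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=5))
N = len(outcomes)
assert N == 6 ** 5

def count(pred):
    return sum(1 for o in outcomes if pred(o))

# (a) exactly three dice show the same number: 6 * C(5,3) * 5^2 outcomes
three_same = count(lambda o: any(o.count(v) == 3 for v in range(1, 7)))
assert three_same == 6 * 10 * 25

# (b) P(three sixes | exactly two fives)
two_fives = count(lambda o: o.count(5) == 2)
both = count(lambda o: o.count(5) == 2 and o.count(6) == 3)
assert two_fives == 10 * 5 ** 3        # C(5,2) * 5^3, as derived above
assert both == 10                      # C(5,3) placements of "6 6 6 5 5"
assert Fraction(both, two_fives) == Fraction(1, 125)
```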
https://math.stackexchange.com/questions/1120750/for-which-values-of-x-y-is-x-y-cap-mathbbq-closed
# For which values of $x,y$ is $[x,y]\cap \mathbb{Q}$ closed? For which values of $x,y$ is $[x,y]\cap \mathbb{Q}$ closed in the metric space $(\mathbb{Q},d)$, where $d(x,y) = |x-y|$? My attempt: I suspected it's closed for all real numbers: let $x,y \in \mathbb{Q}$; then if $[x,y]\cap \mathbb{Q}$ is closed it means the complement is open, i.e. $(-\infty,x) \cup (y,\infty)$ is open. The set $(-\infty,x) \cup (y,\infty)$ is obviously open, as between any two rational numbers there is another rational number, so I can find a ball with radius $r>0$ in the set. (Is this logic correct?) If $x,y$ are irrational, however, I don't know how to proceed - will the complement of the set still be $(-\infty,x) \cup (y,\infty)$? And if so, how do I argue it is open (or that it isn't)? • The complement of the set under consideration is not what you say it is. – Paul Jan 26 '15 at 19:14 • I think a better way to approach this problem is to ask whether or not the set in question contains all its limit points. – Tim Raczkowski Jan 26 '15 at 19:18 In all cases the complement will be $$\Bbb Q\cap\Big((\leftarrow,x)\cup(y,\to)\Big)$$ (note that you really should include the intersection with $\Bbb Q$, unless you’ve previously established a convention that your interval notation is to be understood to mean intervals in $\Bbb Q$ rather than in $\Bbb R$), and your argument for this being open works equally well for all $x$ and $y$, not just the rational ones. All you need is the fact that between any two real numbers there is a rational. • Could you explain why in all cases the complement would be in that form? why for instance could I not have $(\leftarrow, x] \cup [y, \rightarrow)$ – hellen_92 Jan 26 '15 at 19:21 • I say this because, if $x,y$ were irrational, then wouldn't the set $\mathbb{Q} \cap [x,y]$ in fact just be $\mathbb{Q} \cap (x,y)$ and then finding the complement of this would be what I wrote?
– hellen_92 Jan 26 '15 at 19:25 • @hellen_92: In fact if $x$ and $y$ are irrational, then $$\Bbb Q\cap\Big((\leftarrow,x)\cup(y,\to)\Big)=\Bbb Q\cap\Big((\leftarrow,x]\cup[y,\to)\Big)\;,$$ so it makes no difference which you use. The point is that $(\leftarrow,x)\cup(y,\to)$ is the complement of $[x,y]$ in $\Bbb R$, so its intersection with $\Bbb Q$ is the complement in $\Bbb Q$ of $\Bbb Q\cap[x,y]$. – Brian M. Scott Jan 26 '15 at 19:25 • ah yes of course - is my explanation of why that set is open correct? Also - if say we have the set $[x,y] \cap \mathbb{Q}$ if $x,y$ were irrational would this set be open because of the same reasons above? – hellen_92 Jan 26 '15 at 19:26 • @hellen_92: Yes, if $x$ and $y$ are irrational, $(x,y)$ and $[x,y]$ have the same intersection with $\Bbb Q$, and that intersection is both open and closed in $\Bbb Q$ (or for short, clopen). – Brian M. Scott Jan 26 '15 at 19:26 Recall the following theorem. Theorem. Let $X$ be a subspace of a topological space $Y$ and let $E\subset X$. Then $E$ is closed in $X$ if and only if there exists a set $W$ closed in $Y$ such that $E=X\cap W$. The proof of this theorem is not difficult; writing it down yourself is good practice for thinking about subspaces. If you accept this theorem, your problem becomes easier. In your problem, we have \begin{align*} Y &= \Bbb R & X &=\Bbb Q & E &= [x,y]\cap\Bbb Q \end{align*} So your question translates to: Does there exist a set $W$ closed in $\Bbb R$ such that $E=W\cap\Bbb Q$? The answer is quite obvious when phrased this way. Do you see how to find $W$? • Be aware that this definition is not universal. In some treatments relatively open sets are defined as intersections with the subspace of an open set in the ambient space, and then relatively closed sets are defined as relative complements of relatively open sets. In that approach your definition becomes a theorem. – Brian M. Scott Jan 26 '15 at 19:18 • @BrianM.Scott Good point. Edited to reflect this.
– Brian Fitzpatrick Jan 26 '15 at 19:22
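Brian's argument rests on a single fact: between any two real numbers there is a rational. A minimal computational sketch of that fact (the function name is mine, and the irrational endpoints are represented by floating-point approximations, so this is an illustration rather than a proof):

```python
import math
from fractions import Fraction

def rational_between(a, b):
    """Return a Fraction q with a < q < b, assuming a < b."""
    n = math.ceil(1.0 / (b - a)) + 1   # denominator with 1/n < b - a
    m = math.floor(a * n) + 1          # smallest integer with m/n > a
    return Fraction(m, n)

# A rational strictly between the irrational endpoints sqrt(2) and sqrt(3):
q = rational_between(math.sqrt(2), math.sqrt(3))
assert math.sqrt(2) < q < math.sqrt(3)
```

Since $m/n \le a + 1/n < a + (b-a) = b$, the returned fraction always lands strictly inside the interval; any such $q$ is the kind of rational point the openness argument needs.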
https://math.stackexchange.com/questions/432244/two-questions-on-topology-and-continous-functions/432255
# Two questions on topology and continuous functions I have two questions: 1.) I have been thinking for a while about the fact that, in general, the union of closed sets will not be closed, but I could not find a counterexample; does anybody have one available? 2.) The other one is that I thought one could possibly say that a function $f$ is continuous iff we have $f(\overline{M})=\overline{f(M)}$ (in the second part this should mean the closure of $f(M)$). Is this true? • 1) take $\bigcup_{n\in\mathbb{N}}[1/n,1]$ – user127.0.0.1 Jun 29 '13 at 9:57 • The second statement is missing a quantifier (for all subsets $M$ of $X$?). And are you defining $f$ to be closed as implying also that it is continuous? – Henno Brandsma Jun 29 '13 at 10:10 • yes for all subsets and no f should not be closed – user66906 Jun 29 '13 at 10:14 (1) Take the closed sets $$\left\{\;C_n:=\left[0\,,\,1-\frac1n\right]\;\right\}_{n\in\Bbb N}\implies \bigcup_{n\in\Bbb N}C_n=[0,1)$$ Or to be even lazier, consider that generally singletons are closed (for varying definitions of "generally"). A set is always the union of its points, so you can write $(0,1)$ as $\bigcup_{x \in (0,1)} \{ x \}$. (2) is not true, because it would imply that $f$ is a closed map, i.e., one that maps closed sets to closed sets. However, there are continuous maps which are not closed. For example, consider the projection $\pi: \mathbb{R}^2 \rightarrow \mathbb{R}$ given by $(x,y) \mapsto x$. Observe that the set $$C = \{(x,1/x) : x\in \mathbb{R} - \{0\}\}$$ is closed in $\mathbb{R}^2$. However, $\pi(C) = \mathbb{R} - \{0\}$ is not closed in $\mathbb{R}$. Yet $\pi$ is definitely continuous. The correct equivalence for continuity given by the closure operator is $f(\overline{M}) \subseteq \overline{f(M)}$ for every subset $M$ of $X$. Edit: Here's my argument for why $C$ is closed. Suppose $(a,b) \in \mathbb{R}^2$ lies inside $\overline{C}$. We will show that $(a,b)$ must be in $C$.
Since $\mathbb{R}^2$ is first countable, there exists a sequence $y_n$ in $C$ converging to $(a,b)$. By the definition of $C$, we can write $y_n = (x_n,1/x_n)$. Therefore $x_n$ converges to $a$ and $1/x_n$ converges to $b$. Now note that if $x_n$ were to converge to $0$, $1/x_n$ would diverge. Hence we conclude that $a \neq 0$. Thus $1/x_n$ converges to $1/a$. Since we are in a Hausdorff space, the sequence $1/x_n$ cannot converge to two different points. Thus $b = 1/a$ and hence $(a,b) = (a,1/a) \in C$. • and it sounds so nice: "close points map to close points" – citedcorpse Jun 29 '13 at 10:33 • $C:=\{(x, 1/x): x\in \mathbb{R} -{0}\}$ is closed? I do not see it. It is unbounded right? – user51196 Jun 29 '13 at 10:56 • Sorry, I guess I have misunderstood Compact set (closed and bounded), with a Closed set. However, I do not see why your set C is closed. – user51196 Jun 29 '13 at 11:06 • @noether it's the graph of $1/x$, which is continuous where defined, so it's a closed set (intuitively it's clear that the complement is open) – citedcorpse Jun 29 '13 at 11:15 • @noether: I included an argument for why $C$ is closed in my edit. Let me know if something's wrong with it. – Cihan Jun 29 '13 at 21:15 A function $f:X \to Y$ between topological spaces sends closed sets to closed sets (what I call $f$ is closed) iff $$\forall A \subset X: \overline{f[A]} \subset f[\overline{A}]$$ If $f$ is closed and $A \subset X$, then $\overline{A}$ is closed, so $f[\overline{A}]$ is also closed. As $A \subset \overline{A}$, $f[A] \subset f[\overline{A}]$ and so also $\overline{f[A]} \subset \overline{f[\overline{A}]} = f[\overline{A}]$ as the latter set is closed. On the other hand, if $f$ satisfies the closure property, and $C \subset X$ is closed, then $$f[C] \subset \overline{f[C]} \subset f[\overline{C}] = f[C]$$ as $C$ is closed. It follows that $f[C]$ equals its closure, hence is closed. So $f$ is a closed map. 
Sort of dually: $f$ is continuous iff $$\forall A \subset X : f[\overline{A}] \subset \overline{f[A]}$$ So the other inclusion then holds. If $f$ is continuous, and $A \subset X$, $$A \subset f^{-1}[f[A]] \subset f^{-1}[\overline{f[A]}]$$ and the latter set is closed by continuity of $f$ (inverse images of closed sets are closed). So as it contains $A$, it also contains $\overline{A}$, which is the smallest closed set containing $A$. So $$\overline{A} \subset f^{-1}[\overline{f[A]}]$$ which implies $f[\overline{A}] \subset \overline{f[A]}$. On the other hand, if $f$ satisfies the second closure property, let $C \subset Y$ be closed. Then taking $A = f^{-1}[C]$ then $$f[\overline{f^{-1}[C]}] \subset \overline{f[f^{-1}[C]]} \subset \overline{C} = C$$ This implies (by definition of $f^{-1}$) that $\overline{f^{-1}[C]} \subset f^{-1}[C]$ which means that $f^{-1}[C]$ is closed. So $f$ is continuous, as inverse images of closed sets are closed. So the equality $\overline{f[A]} = f[\overline{A}]$ for all subsets $A$ of $X$ is exactly saying that $f$ is both closed and continuous.
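A quick numerical illustration of Cihan's counterexample (a sketch, not a proof): the points $(1/n, n)$ lie on the closed set $C$, their projections $1/n$ accumulate at $0$, yet $0 \notin \pi(C)$. The preimages escape to infinity in $\mathbb{R}^2$, which is exactly how a closed set can have a non-closed projection.

```python
import numpy as np

ns = np.arange(1, 10, dtype=float)
on_C = np.stack([1.0 / ns, ns], axis=1)   # points (1/n, n), all on C
proj = on_C[:, 0]                          # pi sends them to 1/n

# The projections approach 0, a point outside pi(C) = R \ {0} ...
assert np.all(np.diff(np.abs(proj)) < 0) and np.all(proj != 0)

# ... while the witnessing points on C run off to infinity, so no
# subsequence of them can converge to a would-be preimage of 0.
norms = np.linalg.norm(on_C, axis=1)
assert np.all(np.diff(norms) > 0)
```

This matches the sequence argument in the answer: if $x_n \to 0$ along $C$, then $1/x_n$ diverges, so the points cannot stay in a bounded region of the plane.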
https://math.stackexchange.com/questions/1329553/prove-a-matrix-of-binomial-coefficients-over-mathbbf-p-satisfies-a3-i
# Prove a matrix of binomial coefficients over $\mathbb{F}_p$ satisfies $A^3 = I$. (This problem is problem $1.16$ in Stanley's Enumerative Combinatorics Vol. 1). Let $p$ be a prime, and let $A$ be the matrix $A = \left[\binom{j+k}{k} \right]_{j,k = 0}^{p-1}$, taken over the field $\mathbb{F}_p$. Show that $A^3 = I$, the identity matrix. (Note that $A$ vanishes below the main antidiagonal, i.e. $A_{jk} = 0$ if $j + k \geq p$). Moreover, how many eigenvalues of $A$ are equal to $1$? In the case of $p = 5$, for instance, we have $$A = \left(\begin{array}{ccccc} 1 & 1 & 1 & 1 & 1\\ 1 & 2 & 3 & 4 & 0\\ 1 & 3 & 1 & 0 & 0\\ 1 & 4 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0\\ \end{array} \right).$$ Somewhat surprisingly, we have $$A^2 = \left(\begin{array}{ccccc} 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 4 & 1\\ 0 & 0 & 1 & 3 & 1\\ 0 & 4 & 3 & 2 & 1\\ 1 & 1 & 1 & 1 & 1\\ \end{array} \right),$$ that is the matrix flips horizontally and vertically; this can also be seen as reflecting across the main antidiagonal. In all other cases I've tested, this held as well; the strategy I came up with for proving $A^3 = I$ is to show that this flipping occurs, and that the flipping happens again when squaring $A^2$. This would show $A^4 = A$. Since $A$ is clearly full rank, this would imply $A^3 = I$. Numerically, this flipping is captured in the statement $$(A^2)_{i,j} = \sum\limits_{k = 0}^{p-1}\binom{i + k}{k} \binom{k + j}{j} \equiv \binom{2p - i - j - 2}{p - i - 1} \pmod p.$$ I haven't been able to make any sense of this equivalence, combinatorially or otherwise; I've also made no progress on the eigenvalue part of the question, but admittedly I haven't spent as much time thinking about it. Any help---even if it has nothing to do with the strategy that I've tried---is greatly appreciated. EDIT: I just figured out how to prove $A^3 = I$, and am now working on the eigenvalue problem. Here's the proof for $A^3 = I$. Lemma: $A_{p - i - 1,j} \equiv (-1)^{j}\binom{i}{j} \pmod{p}$. 
Proof of Lemma: We note that $A_{p - i - 1,j} = \binom{p - i + j - 1}{j}$ is a polynomial in $p$ with integer coefficients. Thus, when viewed $\pmod{p}$, we have that it is equivalent to its constant term. The constant term is $$\frac{(-i + j - 1)(-i + j - 2)\cdots(-i)}{j!} = (-1)^j\frac{i(i-1)\cdots(i - (j - 1))}{j!} = (-1)^{j}\binom{i}{j}$$ as claimed. We now have \begin{align*} A^2_{p-i-1,p-j-1} &= \sum\limits_k A_{p-i-1,k}A_{k,p-j-1} \\ &= \sum\limits_k A_{p-i-1,k}A_{p-j-1,k} &\text{ due to symmetry of } A \\ &= \sum\limits_k (-1)^{2k}\binom{i}{k}\binom{j}{k} &\text{ by the Lemma} \\ &= \sum\limits_k \binom{i}{k}\binom{j}{j - k} \\ &= \binom{i + j}{j} &\text{ by Vandermonde's Identity} \\ &= A_{i,j}. \end{align*} Similarly, we have \begin{align*} A^4_{i,j} &= \sum\limits_k A^2_{i,k} A^2_{k,j} \\ &= \sum\limits_k A_{p - i - 1,p - k - 1} A_{p - k - 1, p - j -1}\\ &= \sum\limits_k A_{p - i - 1,p - k - 1} A_{p - j - 1, p - k -1} &\text{ by symmetry of }A \\ &= \sum\limits_k (-1)^{p - k - 1}\binom{i}{p - k - 1}(-1)^{p - k -1}\binom{j}{p - k -1} &\text{ by the Lemma} \\ &= \sum\limits_k \binom{i}{p - k - 1}\binom{j}{p - k - 1} \\ &= \sum\limits_k \binom{i}{k} \binom{j}{k} &\text{ by redefining }k \\ &= \binom{i + j}{j} &\text{ by Vandermonde}\\ &= A_{i,j}. \end{align*} Thus $A^4 = A$. Since $A$ is full rank, this implies that $A^3 = I$, as desired. • A can be factored as $BB^T$ for a triangular matrix B consisting of elements from pascals triangle using LU decomposition. This should not be hard to show. This might help but I have no idea. – Per Alexandersson Jun 18 '15 at 1:11 • also note that the final coeficients (after multiplications) should be divisable by $p$ or be $0 \mod p$ (except few cases, which should be the diagonals, why? because they would be multiplied by similar coeficients) – Nikos M. Jun 24 '15 at 5:06 • "is a polynomial in $p$ with integer coefficients": rather, with coefficients whose denominators are coprime to $p$. 
– darij grinberg Aug 16 '15 at 19:15 Notice that ${k+j\choose j} \equiv (-1)^j{p-k-1 \choose j} \pmod p$ for $0\le k+j \le p-1$ and ${k+j\choose j} \equiv 0 \pmod p$ for $k+j\ge p$, $j\le p-1$ (since $j!$ is coprime to $p$). Therefore, \begin{align} \sum_{j = 0}^{p-1} {k+j\choose j} z^j = \sum_{j = 0}^{p-k-1} (-1)^j{p-k-1 \choose j} z^j = (1-z)^{p-k-1} \tag{1} \end{align} in $\mathbb F_p(z)$. Then for $x\in \mathbb{F}_p$, $x\neq 1$, using that $(1-x)^{p-1} = 1$ in $\mathbb F_p$, $$\sum_{j=0}^{p-1} {k+j\choose j} x^j = \frac{1}{(1-x)^{k}},\ k = 0,\dots,p-1.\tag{2}$$ Consider now the vectors $e(x) = (1,x,x^2,\dots,x^{p-1})^{\top}$, $x\in \mathbb{F}_p$. For $x\notin \{0,1\}$, from (2) we have $$e(x) \overset{A}{\longrightarrow} e((1-x)^{-1}) \overset{A}{\longrightarrow} e(-(1-x) x^{-1})\overset{A}{\longrightarrow} e(x).\tag{3}$$ Further, $$e(0) \overset{A}{\longrightarrow} e(1) \overset{A}{\longrightarrow} (0,\dots,0,1)^\top \overset{A}{\longrightarrow} e(0),$$ where the middle arrow follows from plugging $z=1$ into (1). The vectors $e(x)$, $x\in \mathbb F_p$, are linearly independent. Therefore, $A^3 = I$. Concerning the eigenvalue question, there are two cases depending on whether the equation $x = (1-x)^{-1}$, equivalently, $(2x-1)^2 = -3$, is solvable in $\mathbb F_p$. (A pair of) solutions exist iff $p\equiv 1\pmod 6$. 1. There is no solution, so $p=6k-1$ or $p=2$. Then all elements except $0,1$ are divided into groups of three: $x,(1-x)^{-1}, -(1-x)x^{-1}$. Hence, noting that $(0,\dots,0,1)^\top = -e(0)-\dots - e(p-1)$, a vector has eigenvalue $1$ iff its decomposition in the basis $\{e(0), \dots,e(p-1)\}$ has the same coefficients before $e(x), e((1-x)^{-1}), e(-(1-x)x^{-1})$ and zero coefficients before $e(0)$ and $e(1)$. Consequently, the space of vectors with eigenvalue $1$ has dimension $(p-2)/3 = 2k-1$ (or $0$ for $p=2$), and this is the answer. 2. There are solutions $\omega$, $\omega^{-1}$, then $p=6k+1$.
As above, elements except $0,1,\omega,\omega^{-1}$ are divided into threes; a vector has eigenvalue $1$ iff its decomposition in the basis $\{e(0), \dots,e(p-1)\}$ has the same coefficients before $e(x), e((1-x)^{-1}), e(-(1-x)x^{-1})$, zero coefficients before $e(0)$ and $e(1)$, and arbitrary coefficients before $e(\omega)$ and $e(\omega^{-1})$. Consequently, the space of vectors with eigenvalue $1$ has dimension $(p-4)/3 + 2 = 2k+1$, and this is the answer. • "Differentiate with respect to $x$" I'd rather not. – darij grinberg Aug 16 '15 at 16:41 • @darij grinberg, it's a typo: I differentiate w.r.t. $z$, the identity $\sum_{j=0}^{p-1} {k+j\choose j} z^j = \frac{1-z^p}{(1-z)^{k+1}}$ in $\mathbb F_p(z)$, not the identity (1) in $\mathbb F_p$. – zhoraster Aug 16 '15 at 19:02 • Something's still off, as witnessed by the term $1+z+\cdots+x^{p-1}$. Where exactly do you switch from $z$ to $x$ ? – darij grinberg Aug 16 '15 at 19:13 • @darij grinberg, ok now? – zhoraster Aug 17 '15 at 5:37 • Looks a lot better. There was a confusion between $j$ and $k$, where $j$ was first a constant from $0$ to $p-1$ and later became an index running over all $\mathbb N$; I have fixed it (the $j$ and $k$ switched roles). This is an interesting proof, turning the matrix $A$ into an "almost-permutation matrix" by conjugating by a Vandermonde matrix. It still lacks a few details (I had to spend a while figuring out why $e\left(1\right) \overset{A}{\to} \left(0,0,\ldots,0,1\right)^T$; this comes from the "parallel summation" ... – darij grinberg Aug 17 '15 at 12:35
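Both results are easy to sanity-check by machine. A sketch in plain Python follows (helper names are mine; exact arithmetic mod $p$, and `pow(x, -1, p)` needs Python 3.8+). It verifies $A^3 = I$ for several primes and computes the geometric multiplicity of the eigenvalue $1$ as $p$ minus the rank of $A - I$ over $\mathbb{F}_p$:

```python
from math import comb

def binom_matrix(p):
    return [[comb(j + k, k) % p for k in range(p)] for j in range(p)]

def mat_mul(X, Y, p):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

def rank_mod(M, p):
    """Rank of M over F_p by Gauss-Jordan elimination."""
    M = [[x % p for x in row] for row in M]
    n, rank = len(M), 0
    for col in range(n):
        piv = next((r for r in range(rank, n) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], -1, p)          # modular inverse
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(n):
            if r != rank and M[r][col]:
                f = M[r][col]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

dims = {}
for p in (2, 3, 5, 7, 11, 13):
    A = binom_matrix(p)
    I = [[int(i == j) for j in range(p)] for i in range(p)]
    assert mat_mul(mat_mul(A, A, p), A, p) == I    # A^3 = I over F_p
    A_minus_I = [[(A[i][j] - I[i][j]) % p for j in range(p)] for i in range(p)]
    dims[p] = p - rank_mod(A_minus_I, p)           # multiplicity of eigenvalue 1
print(dims)
```

For $p = 5$ and $p = 7$ this reports multiplicities $1$ and $3$, consistent with $(p-2)/3$ and $(p-4)/3 + 2$ respectively.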
https://math.stackexchange.com/questions/670405/does-the-extended-euclidean-algorithm-always-return-the-smallest-coefficients-of
# Does the Extended Euclidean Algorithm always return the smallest coefficients of Bézout's identity? Bezout's identity says that there are integers $x$ and $y$ such that $ax + by = \gcd(a, b)$ and the Extended Euclidean Algorithm finds a particular solution. For instance, $333\cdot(-83) + 1728\cdot16 = \gcd(333, 1728) = 9$. Will the Extended Euclidean Algorithm always return the smallest $x$ and $y$ that satisfy the identity? By small, I mean that $|x|, |y|$ are minimized. Because $333\cdot(-83 + 1728t) + 1728\cdot(16 - 333t) = \gcd(333, 1728)$ is also a solution for all $t \in \mathbf{Z}$. • You should write $333\cdot(-83 + 192t) + 1728\cdot(16 - 37t) = \gcd(333, 1728)$ Feb 10, 2014 at 2:55 • There are many variations on "the" extended Euclidean algorithm. Which version do you refer to? Feb 10, 2014 at 3:01 • Will Jagy - I am not sure what you mean by "You put the smaller number first; try gcd(16,83) by your exact method. You should get the second best pair." ----- Bill Dubuque - I am referring to the one found here: ----- en.wikibooks.org/wiki/Algorithm_Implementation/Mathematics/… Feb 10, 2014 at 3:22 • – lhf Apr 23, 2020 at 1:05 We can assume that the GCD is $1$, because the Euclidean algorithm for $a/g,\,b/g$ is just the algorithm for $a,b$ "scaled down", that is, the quotients remain the same and the other numbers are divided by $g$. We'll also assume that $a>b>1$. As has been pointed out in comments, there are various implementations of the Euclidean algorithm, but suppose you do it this way, taking $n$ steps: \eqalign{ a&=q_1b+r_1\cr b&=q_2r_1+r_2\cr &\ \vdots\cr r_{n-2}&=q_nr_{n-1}+1\ .\cr} Then you find the Bezout identity by reversing the procedure: \eqalign{1 &=r_{n-2}-q_nr_{n-1}\cr &=r_{n-2}-q_n(r_{n-3}-q_{n-1}r_{n-2})\cr &=-q_nr_{n-3}+(q_nq_{n-1}+1)r_{n-2}\cr &=\cdots\cr &=xa+yb\ .\cr} Then we have $|x|\le b/2$ and $|y|\le a/2$. This can be proved by induction on $n$.
If $n=1$ we have just one line $a=qb+1$, so the Bezout identity is $a-qb=1$: the coefficients are $x=1$, $y=-q$ and we have $$|x|\le b/2\ ,\quad |y|=q\le qb/2<a/2\ .$$ Now suppose that a procedure of $n-1$ steps gives $$bX+r_1Y=1$$ where by induction we may assume $$|X|\le r_1/2\ ,\quad |Y|\le b/2\ .$$ Then the final step is $$1=bX+(a-qb)Y=aY-(qY-X)b=ax+by$$ where $$x=Y\ ,\quad y=-(qY-X)\ .$$ Therefore $$|x|=|Y|\le b/2\quad\hbox{and}\quad |y|\le q|Y|+|X|\le qb/2+r_1/2=a/2\ ,$$ and this completes the proof by induction. Since the general solution for $x$ is $x=x_0+bt$, any value of $x$ between $-b/2$ and $b/2$ must be numerically the smallest possible; and similarly for $y$. • Thank you David for your very detailed response. I guess what I am asking is that, are the x and y that are found by the Extended Euclidean Algorithm the smallest x and y (in the absolute value sense) that satisfy ax + by = 1? Feb 10, 2014 at 4:03 • Have just added an extra sentence at the end of my answer which I hope clears this up. Feb 10, 2014 at 4:31 • @David When you reverse the procedure how do you jump from line 2 to line 3. Why've you shortened $-q_{n-2}r_{n-3}$ to just $-q_{n-2}$? Apr 2, 2016 at 13:00 • @Dhruv typo, fixed. Apr 3, 2016 at 23:30 This question, as well as an extension to the GCD of more than two numbers, is analyzed in Section 3 of this paper by Majewski and Havas: http://staff.itee.uq.edu.au/havas/1994mh.pdf They show that the same bound holds for the case of more than two numbers: one can always find coefficients that are at most half the largest number in absolute value. They also show how to find those coefficients very efficiently. • The extension to greatest common divisors of more than two numbers is doubtless an interesting Question, but it's not clear if you are actually answering the Question posed here, whether Extended Euclidean algorithm can promise the optimal size coefficients. 
The Question is a bit vague (about how size should be measured, and what specific variants of Extended Euclidean algorithm may be specified), but the older Answer offers some details to address that. May 24, 2014 at 4:21
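A sketch of one standard iterative form of the extended Euclidean algorithm, used here to reproduce the coefficients from the question and to spot-check David's bounds (stated for the coprime case, so the inputs are first scaled by the gcd):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r   # remainders
        old_x, x = x, old_x - q * x   # coefficients of a
        old_y, y = y, old_y - q * y   # coefficients of b
    return old_r, old_x, old_y

g, x, y = ext_gcd(333, 1728)
assert g == 9 and 333 * x + 1728 * y == 9
assert (x, y) == (-83, 16)            # the coefficients from the question

# David's bounds for the coprime pair (333/9, 1728/9) = (37, 192):
# |coefficient of a| <= 192/2 and |coefficient of b| <= 37/2.
assert 2 * abs(x) <= 1728 // 9 and 2 * abs(y) <= 333 // 9
```

Since the general solution shifts $x$ by multiples of $b/g = 192$, a coefficient within $\pm 96$ is automatically the numerically smallest one, which is the point of the answer's final remark.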
https://math.stackexchange.com/questions/2459842/when-i-solve-for-eigenvectors-using-determinant-is-it-possible-to-have-an-entry
# When I solve for eigenvectors using determinant, is it possible to have an entry in the eigenvector 0 and the other anything? $$\left[\begin{array}{ccc} 4 & 0 \\ 2 & 2 \end{array}\right]$$ I am trying to find the two eigenvalues associated with this matrix. So I find the eigenvalues $\lambda$ that make $\det(A-\lambda I)=0$. $$(4-\lambda)(2-\lambda)=0.$$ With $\lambda=4$, I got $\left(\begin{array}{ccc} 0 & 0 \\ 2 & -2 \end{array}\right)\times \left(\begin{array}{ccc} u_1\\ u_2\end{array}\right)=\left(\begin{array}{ccc} 0\\ 0\end{array}\right)$. So eigenvector $u$ is $[1,1]$. But the problem is with $\lambda=2$, I got $\left(\begin{array}{ccc} 2 & 0 \\ 2 & 0 \end{array}\right)\times \left(\begin{array}{ccc} v_1\\ v_2\end{array}\right)=\left(\begin{array}{ccc} 0\\ 0\end{array}\right)$. So $v_1$ has to be $0$, while $v_2$ can be anything. What is eigenvector $v$? • You already said the answer: set the first entry to 0 and the second one to anything. – Zach Boyd Oct 6 '17 at 2:57 • I have edited the matrix next to the eigenvector associated with $\lambda=4$, hope that is what you meant to put. – Ahmed S. Attaalla Oct 6 '17 at 2:57 You've found the eigenvectors: for $\lambda =2$ it is $\begin{pmatrix}0 \\ v_2 \end{pmatrix}$ for any $v_2 \neq 0$. Eigenvectors are never unique, as any nonzero scalar multiple of an eigenvector is still an eigenvector. If all you need is one eigenvector, you can take $v_2$ to be anything nonzero, so $\begin{pmatrix}0 \\ 1 \end{pmatrix}$ works. Why is any nonzero scalar multiple of an eigenvector also an eigenvector? Suppose $v$ is an eigenvector of an operator $A$ corresponding to an eigenvalue $\lambda$, then by definition, $v\neq 0$ and $v$ satisfies the eigenvalue equation$$Av=\lambda v.$$ Now let $a\neq 0$ be a constant, then $$A(av)=a(Av)=a(\lambda v)=\lambda(av)$$ so $av$ also satisfies the eigenvalue equation and $av\neq 0$, hence $av$ is also an eigenvector of $A$ for any $a \neq 0$.
All eigenvectors have the property that any nonzero scalar multiple is also an eigenvector with the same eigenvalue. So your first answer for $\lambda = 4$ could be rephrased "$(a,a),$ where $a$ is any nonzero number." Similarly your second answer "$(0,v_2)$ where $v_2$ is any nonzero number" makes perfect sense.
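A quick numerical check of both points, using the eigenpairs worked out above and the scaling property from the answers:

```python
import numpy as np

A = np.array([[4.0, 0.0],
              [2.0, 2.0]])

evals, evecs = np.linalg.eig(A)
assert np.allclose(np.sort(evals), [2.0, 4.0])

# The eigenvectors found by hand: [1, 1] for lambda = 4 and [0, 1] for lambda = 2.
assert np.allclose(A @ np.array([1.0, 1.0]), 4 * np.array([1.0, 1.0]))
assert np.allclose(A @ np.array([0.0, 1.0]), 2 * np.array([0.0, 1.0]))

# Any nonzero scalar multiple of an eigenvector is still an eigenvector:
for lam, v in zip(evals, evecs.T):
    for a in (1.0, -2.5, 100.0):
        assert np.allclose(A @ (a * v), lam * (a * v))
```

Note that `np.linalg.eig` returns unit-length eigenvectors, one particular choice out of the infinitely many valid scalings.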
https://scipython.com/book2/chapter-10-general-scientific-programming/examples/numerical-stability-of-an-integral-solved-by-recursion/
# Numerical stability of an integral solved by recursion The integral $$I_n = \int_0^1 x^n e^x\,\mathrm{d}x \quad n=0,1,2,\cdots$$ suggests a recursion relation obtained by integration by parts: $$I_n = \left[ x^n e^x \right]_0^1 - n\int_0^1x^{n-1}e^x\,\mathrm{d}x = e - nI_{n-1}$$ terminating with $I_0 = e-1$. However, this algorithm, applied "forwards" for increasing $n$, is numerically unstable since small errors (such as floating point rounding errors) are magnified at each step: if the error in $I_n$ is $\epsilon_n$, so that the estimated value is $I_n' = I_n + \epsilon_n$, then $$\epsilon_n = I_n' - I_n = (e-nI_{n-1}') - (e - nI_{n-1}) = n(I_{n-1} - I_{n-1}') = -n\epsilon_{n-1},$$ and hence $|\epsilon_n| = n!\,|\epsilon_{0}|$. Even if the error $\epsilon_0$ is small, the error $\epsilon_n$ is larger by a factor of $n!$, which can be huge. The numerically stable solution, in this case, is to apply the recursion backwards for decreasing $n$: $$I_{n-1} = \frac{1}{n}(e - I_n) \quad \Rightarrow \epsilon_{n-1} = -\frac{\epsilon_n}{n}.$$ That is, errors in $I_n$ are reduced on each step of the recursion. One can even start the algorithm at $I'_N = 0$ and, provided enough steps are taken between $N$ and the desired $n$, it will converge on the correct $I_n$.
import numpy as np
import matplotlib.pyplot as plt

N = 35

# Forward recursion: I_n = e - n*I_(n-1), starting from I_0 = e - 1.
# Unstable: the initial rounding error in I_0 is amplified by n! .
I_fwd = [np.e - 1]
for n in range(1, N+1):
    I_fwd.append(np.e - n * I_fwd[n-1])

# Backward recursion: I_(n-1) = (e - I_n) / n, started from the crude
# guess I_N = 0. Stable: the error is divided by n at each step, so the
# values converge on the true integrals as n decreases.
I_bwd = [0] * (N+1)
for n in range(N-1, -1, -1):
    I_bwd[n] = (np.e - I_bwd[n+1]) / (n+1)

n = range(N+1)
plt.plot(n, I_fwd, label='Forward algorithm')
plt.plot(n, I_bwd, label='Backward algorithm')
plt.ylim(-0.5, 2)
plt.xlabel('$n$')
plt.ylabel('$I(n)$')
plt.legend()
plt.show()

The figure below shows the forwards algorithm becoming extremely unstable for $n>16$ and fluctuating between very large positive and negative values; conversely, the backwards algorithm is well-behaved.
https://economics.stackexchange.com/questions/48003/negative-definite-vs-semi-definite-hessian-sufficient-vs-necessary-conditions
# Negative Definite vs Semi-definite Hessian - Sufficient vs Necessary conditions? When a Hessian matrix is negative definite at a critical point, that critical point is a local maximum (a sufficient condition). As per the calculus wiki: Link, when the Hessian is negative semi-definite, we can only conclude that it is not a local minimum. This seems to suggest that negative semi-definiteness is a necessary condition, not a sufficient one. Can anyone provide an example of a function of several variables where we have a negative semi-definite Hessian but not a local maximum? As per my thinking, if we evaluate the Hessian to be negative semi-definite at the critical point, it must also be a local maximum, but clearly the calculus wiki disagrees. The simplest example is $-x^3$ in the single variable case, or $-x_1^3-x_2^3$ in the case of two variables. The Hessian matrix is negative semi-definite at $(0,0)$, but there is no maximum at this point. • @Kinno: Yes, the mere finding of a negative semi-definite Hessian does not imply that there is no maximum at this point. Consider $-x_1^4-x_2^4$ for instance, whose Hessian is nsd at $(0,0)$. Hmm, I am not sure that there is a general method allowing you to conclude in all cases. I would recommend to go back to the definition of a maximum and try to study whether $f(x_1,x_2) \leq f(x_1^*,x_2^*)$ for any $(x_1,x_2)$. In our example $-x_1^4-x_2^4 \leq 0$ and so $(0,0)$ corresponds to a global maximum of $f$, even though the Hessian is not negative definite at this point. Oct 18 at 12:02
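A small numerical sketch of the accepted counterexample $f(x_1,x_2) = -x_1^3 - x_2^3$: its Hessian at the critical point $(0,0)$ is the zero matrix, which is negative semi-definite, yet $f$ takes values above $f(0,0) = 0$ arbitrarily close to the origin, so $(0,0)$ is not a local maximum.

```python
import numpy as np

def f(x1, x2):
    return -x1**3 - x2**3

# The Hessian of f is diag(-6*x1, -6*x2); at (0, 0) it is the zero matrix.
H0 = np.zeros((2, 2))
assert np.all(np.linalg.eigvalsh(H0) <= 0)   # all eigenvalues <= 0: nsd

# But (0, 0) is not a local maximum: along the diagonal x1 = x2 = t,
# f = -2*t**3 changes sign at t = 0.
for eps in (1e-2, 1e-4, 1e-6):
    assert f(-eps, -eps) > f(0.0, 0.0) > f(eps, eps)
```

Replacing the cubes with fourth powers reproduces the comment's point: $-x_1^4 - x_2^4$ has the same zero (nsd) Hessian at the origin, yet there the origin *is* a global maximum, which is why negative semi-definiteness alone cannot decide either way.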
https://math.codidact.com/posts/286398
# Matrices with rotational symmetry

I've seen a claim without proof that the characteristic polynomials of matrices with rotational symmetry (i.e. $n \times n$ matrices $A$ with $A_{i,j} = A_{n+1-i,n+1-j}$) always factor into the product of the characteristic polynomials of smaller matrices which can be derived from blocks of the original matrix. Is there an elementary proof, and can the result be generalised?

Notation: $A^{\leftarrow}$ denotes $A$ with the columns reversed; $A^{\uparrow}$ denotes $A$ with the rows reversed; $A^{\leftarrow \uparrow} = A^{\uparrow \leftarrow}$ is denoted $A^{\circ}$ and is the rotation of $A$ by $180^{\circ}$. Consider first a $(2n+1)\times(2n+1)$ block matrix $\begin{pmatrix} A & v & B^{\circ} \\ w & c & w^{\leftarrow} \\ B & v^{\uparrow} & A^{\circ} \end{pmatrix}$ where $A, B$ are $n \times n$, $v$ is $n \times 1$, $w$ is $1 \times n$, and $c$ is $1 \times 1$.
We have \begin{eqnarray*}\det \begin{pmatrix} A & v & B^{\circ} \\ w & c & w^{\leftarrow} \\ B & v^{\uparrow} & A^\circ \end{pmatrix} &=& (-1)^{n(n+1)/2} \det \begin{pmatrix} A & v & B^{\circ} \\ B^{\uparrow} & v & A^{\leftarrow} \\ w & c & w^{\leftarrow} \end{pmatrix} \tag{permuting rows} \\ &=& \det \begin{pmatrix} A & B^{\uparrow} & v \\ B^{\uparrow} & A & v \\ w & w & c \end{pmatrix} \tag{permuting cols} \\ &=& \det \begin{pmatrix} A - B^{\uparrow} & B^{\uparrow}-A & 0 \\ B^{\uparrow} & A & v \\ w & w & c \end{pmatrix} \tag{subtracting rows} \\ &=& \det \begin{pmatrix} A - B^{\uparrow} & 0 & 0 \\ B^{\uparrow} & A + B^{\uparrow} & v \\ w & 2w & c \end{pmatrix} \tag{adding cols} \\ &=& \det (A - B^{\uparrow}) \det \begin{pmatrix} A + B^{\uparrow} & v \\ 2w & c \end{pmatrix} \tag{by Leibniz formula} \end{eqnarray*} Note that the characteristic polynomial is a special case, since $\chi_M(x) = \det (M - xI)$. The $(2n)\times(2n)$ block matrix $\begin{pmatrix} A & B^{\circ} \\ B & A^{\circ} \end{pmatrix}$ has an almost identical derivation yielding $$\det \begin{pmatrix} A & B^{\circ} \\ B & A^{\circ} \end{pmatrix} = \det (A - B^{\uparrow}) \det (A + B^{\uparrow})$$
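Both factorisations are easy to spot-check numerically. A sketch with random integer blocks follows (`rng`, `rot180`, `B_up` and the other names are introduced here, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

def rot180(M):
    return M[::-1, ::-1]          # M with rows and columns reversed

n = 3
A = rng.integers(-4, 5, (n, n)).astype(float)
B = rng.integers(-4, 5, (n, n)).astype(float)
v = rng.integers(-4, 5, (n, 1)).astype(float)
w = rng.integers(-4, 5, (1, n)).astype(float)
c = np.array([[2.0]])
B_up = B[::-1, :]                 # B with rows reversed

# Odd case: a (2n+1) x (2n+1) matrix with 180-degree rotational symmetry.
M = np.block([[A, v,        rot180(B)],
              [w, c,        w[:, ::-1]],
              [B, v[::-1],  rot180(A)]])
lhs = np.linalg.det(M)
rhs = np.linalg.det(A - B_up) * np.linalg.det(np.block([[A + B_up, v],
                                                        [2 * w,    c]]))
assert np.isclose(lhs, rhs)

# Even case: det [[A, B_rot], [B, A_rot]] = det(A - B_up) * det(A + B_up).
M2 = np.block([[A, rot180(B)], [B, rot180(A)]])
assert np.isclose(np.linalg.det(M2),
                  np.linalg.det(A - B_up) * np.linalg.det(A + B_up))
```

The $n=1$ odd case can even be done by hand: with $A=2$, $B=1$, $v=3$, $w=1$, $c=5$ both sides come out to $9$.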
https://math.stackexchange.com/questions/2127606/finding-particular-solutions-to-pdes-transport-equation
# Finding particular solutions to PDEs (transport equation) Suppose I have the following PDEs: (for $u(t,x)$) (1) $u_{t}-u_{x}=-x$ (2) $u_{t}+2u_{x}=1$ (3) $u_{t}+u_{x}+u=e^{x+3t}$ (4) $2u_{t}+u_{x}=sin(x-t)$ For all equations, it is easy to find the homogenous part of the general solution because they are all transport/transport with decay equations. However, I am having trouble finding particular solutions for (1) and (2). For equations (3) and (4), the method of undetermined coefficients used in ODEs seems to work to find particular solutions. However, for equations (1) and (2), it doesn't. Am I supposed to find the particular solutions by intuition in those cases? Taking $u_{p}(t,x)=x-t$ for equation (2) works, but according to wolframalpha, it is not the particular solution ($u_{p}=t$ is). Are there many kinds of particular solutions? When solving an equation, should I find all of them or does one suffice? • In (1) try making a substitution $u = v + f(x)$ for some suitable function $f(x)$ so that $-f'(x) = -x$. Same trick works in (2): try $v = u + g(x)$ or $v = u + h(t)$ and pick $g$ or $h$ such that you can cancel the right hand side. – Winther Feb 3 '17 at 16:40 • @Winther, thank for your answer. For (1), I get $u(t,x)=A(x+t)+\frac{x^2}{2}+c$. Is this correct? (wolframlpha provides me with the following, different solution: $u(t,x)=A(x+t)-\frac{t^2}{2}-tx$) – Omrane Feb 3 '17 at 16:52 • No Wolfram-Alpha gives you $B(x+t) - \frac{t^2}{2} - tx$ (it does not have to be the same function as your $A$). Is there a way of picking $B(z)$ relative to $A(z)$ such that we have $A(z) = B(z) + g(z)$? Since $A$ represents a general function this would mean that it's the same solution. This is the same issue as your question of $x-t$ relative to $t$. There are always infinitely many possible particular solutions to these kind of equations and it does not matter which one you pick. – Winther Feb 3 '17 at 16:53 • @Winther, I see. 
So the important thing is to include at least one function, different from the general function A, in our general solution? I thought general solution meant to include ALL possible solutions. – Omrane Feb 3 '17 at 17:02 • Yes, and remember that a general function is a general function, so it could really be anything until you impose initial conditions, which usually determine the solution uniquely. As an exercise to convince yourself that this makes sense, you could try to impose an initial condition like $u(x,0) = x^2$ using both your solution and WA's solution to determine $A$ and $B$ and see that this leads to the same solution in both cases. – Winther Feb 3 '17 at 17:07 I've tried the following variant of the method of characteristics, and it works well with some types of quasi-linear PDEs. For $(1)\;u_{t}-u_{x}=-x$ we write this system of equalities: $$\frac{\mathrm dt}{1}=\frac{-\mathrm dx}{1}=\frac{-\mathrm du}{x}$$ With the first and second ratio: $$\frac{\mathrm dt}{1}=\frac{-\mathrm dx}{1}\implies t=-x+c_1\;;t+x=c_1$$ With the second and the third: $$\frac{-\mathrm dx}{1}=\frac{-\mathrm du}{x};\;x\mathrm dx=\mathrm du;\;\frac{x^2}{2}+c_2=u$$ Now, the intersection of the two surfaces obtained gives the characteristics, so we need a relation to spread these curves to form a new surface, the solution of the PDE; so we set $c_2=A(c_1)$ with $A$ any differentiable single-variable function.
The general solution is: $$u(t,x)=A(x+t)+\frac{x^2}{2}$$ Choosing $A(x+t)=B(x+t)-(1/2)(x+t)^2$, $u(t,x)=B(x+t)+x^2/2-x^2/2-xt-t^2/2=B(x+t)-xt-t^2/2$ For $(3)\;u_{t}+u_{x}+u=\exp(x+3t)$ we get $$\frac{\mathrm dt}{1}=\frac{\mathrm dx}{1}=\frac{\mathrm du}{e^{x+3t}-u}$$ First and second: $$\frac{\mathrm dt}{1}=\frac{\mathrm dx}{1}\implies t=x+c_1\;;t-x=c_1$$ With this, $\exp(x+3t)=\exp(4x+3c_1)$ So, $$\frac{\mathrm dx}{1}=\frac{\mathrm du}{e^{4x+3c_1}-u};\;(e^{4x+3c_1}-u)\mathrm dx-\mathrm du=0$$ We can solve it by finding an integrating factor: $M(x)=\exp(x)$ $$e^x(e^{4x+3c_1}-u)\mathrm dx-e^x\mathrm du=(e^{5x+3c_1}-e^xu)\mathrm dx-e^x\mathrm du=0\;\text{is now exact}\;\implies$$ $$\implies \mathrm d\left(\frac{1}{5}e^{5x+3c_1}-ue^x\right)=0$$ $$u=c_2e^{-x}+\frac{1}{5}e^{4x+3c_1}=c_2e^{-x}+\frac{1}{5}e^{x+3t}$$ And, as in the previous case, with $c_2=f(c_1)$, the general solution is: $$u(t,x)=f(t-x)e^{-x}+\frac{1}{5}e^{x+3t}$$ For (4), $2u_{t}+u_{x}=\sin(x-t)$, the system is: $$\frac{\mathrm dt}{2}=\frac{\mathrm dx}{1}=\frac{\mathrm du}{\sin(x-t)}$$ First and second: $$\frac{\mathrm dt}{2}=\frac{\mathrm dx}{1}\implies t=2x+c_1\;;t-2x=c_1$$ Second and third: $$\frac{\mathrm dx}{1}=\frac{\mathrm du}{\sin(x-t)}\;\text{with}\;\sin(x-t)=\sin(-x-c_1)$$ $$\frac{\mathrm dx}{1}=\frac{\mathrm du}{\sin(-x-c_1)}\;;\sin(-x-c_1)\mathrm dx=\mathrm du$$ $$\cos(-x-c_1)+c_2=u\;;u=\cos(x-t)+c_2$$ And with $c_2=g(c_1)$ as before: $$u(t,x)=\cos(x-t)+g(t-2x)$$
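The general solutions above are easy to sanity-check symbolically. The following SymPy sketch (an editorial addition, not part of the original answer) substitutes each claimed general solution back into its PDE, with an arbitrary function standing in for $A$, $f$, and $g$, and confirms that the residual vanishes:

```python
import sympy as sp

t, x = sp.symbols('t x')
A, f, g = sp.Function('A'), sp.Function('f'), sp.Function('g')

# (1) u_t - u_x = -x, claimed general solution u = A(x + t) + x^2/2
u1 = A(x + t) + x**2 / 2
res1 = sp.simplify(sp.diff(u1, t) - sp.diff(u1, x) - (-x))

# (3) u_t + u_x + u = exp(x + 3t), claimed u = f(t - x) e^{-x} + e^{x+3t}/5
u3 = f(t - x) * sp.exp(-x) + sp.exp(x + 3*t) / 5
res3 = sp.simplify(sp.diff(u3, t) + sp.diff(u3, x) + u3 - sp.exp(x + 3*t))

# (4) 2 u_t + u_x = sin(x - t), claimed u = cos(x - t) + g(t - 2x)
u4 = sp.cos(x - t) + g(t - 2*x)
res4 = sp.simplify(2*sp.diff(u4, t) + sp.diff(u4, x) - sp.sin(x - t))

print(res1, res3, res4)  # all three residuals are 0
```

The derivative terms in the arbitrary functions cancel identically, so the check does not depend on what $A$, $f$, or $g$ actually are.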
2020-01-28T07:45:54
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2127606/finding-particular-solutions-to-pdes-transport-equation", "openwebmath_score": 0.8228423595428467, "openwebmath_perplexity": 159.5405364856828, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9830850877244272, "lm_q2_score": 0.8558511451289037, "lm_q1q2_score": 0.8413744980880997 }
https://math.stackexchange.com/questions/2120974/maximum-likelihood-estimation-of-two-unknown-parameters/2121072
# Maximum Likelihood Estimation of two unknown parameters Here is a question: We have a machine A that functions with probability $\theta_1 \theta_2$, and a machine B that functions with probability $\theta_1 \theta_2^2$. A random sample of $n$ A machines and an independent random sample of $n$ B machines are selected. Of these, $n_1$ and $n_2$ function respectively. Find the MLEs of $\theta_1$ and $\theta_2$. I have problems understanding the question when it says $n_1$ and $n_2$. Does that mean that, of each type of machine sample, $n_1$ and $n_2$ out of $n$ are working? And is there any difference we need to draw between sample A and B since sample B is independent? Thanks a lot! • Yes I can confirm that. Thanks! – Febday Jan 30 '17 at 15:11 • Cool! You should take a look at MathJax - If you are planning on staying around, as this makes your posts more likely to be answered by the population on this site :). – Chinny84 Jan 30 '17 at 15:16 Your likelihood function looks like this $$(\theta_1\theta_2)^{n_1}(1-\theta_1\theta_2)^{n-n_1}(\theta_1\theta_2^2)^{n_2}(1-\theta_1\theta_2^2)^{n-n_2}$$ Maximizing the log-likelihood we can find that $$\theta_1\theta_2^2= \frac{n_2} n$$ $$\theta_1\theta_2= \frac{n_1} n$$ From here I think we can infer that $\theta_2=n_2/n_1$ and $\theta_1=n_1^2/(nn_2)$ If someone could check my solution I would appreciate that. • This looks right but I'd have said more than this. – Michael Hardy Jan 30 '17 at 16:59 • Do you mean it is unclear how to derive that $\theta_1\theta_2^2= \frac{n_2} n$ and $\theta_1\theta_2= \frac{n_1} n$? – Markoff Chainz Jan 30 '17 at 17:03 • Why doesn't the likelihood function involve $n$ chooses $n_1$ and $n$ chooses $n_2$? – Febday Jan 30 '17 at 17:12 • Because we dont need the probability of any $n_1$ machines functioning from $n$. We have exact sample of $n_1$ and $n_2$ units – Markoff Chainz Jan 30 '17 at 17:16 • @Febday : Because likelihoods differing only by a constant positive factor are equivalent.
And "constant" in this case means not depending on $(\theta_1,\theta_2).\qquad$ – Michael Hardy Jan 30 '17 at 17:21 You have the likelihood function $$L(\theta_1,\theta_2) = \text{constant} \times (\theta_1\theta_2)^{n_1} (1-\theta_1\theta_2)^{n-n_1} (\theta_1\theta_2^2)^{n_2}(1-\theta_1\theta_2^2)^{n-n_2} \tag 1$$ where "constant" means not depending on $\theta_1$ or $\theta_2$. Start by letting $\alpha=\theta_1 \theta_2$ and $\beta = \theta_1 \theta_2^2.$ That transforms $(1)$ to $$\alpha^{n_1} (1-\alpha)^{n-n_1} \beta^{n_2} (1-\beta)^{n-n_2}.$$ The logarithm of this expression is $$\ell = n_1 \log \alpha + (n-n_1)\log(1-\alpha) + n_2\log\beta + (n-n_2)\log(1- \beta).$$ So we have $$\frac{\partial\ell}{\partial\alpha} = \frac{n_1} \alpha - \frac{n-n_1}{1-\alpha}.$$ This is $0$ precisely when $\alpha = \dfrac{n_1}n$. By symmetry we similarly see that $\partial\ell/\partial\beta=0$ when $\beta = \dfrac{n_2} n.$ Given \begin{align} \theta_1\theta_2 & = \alpha = \frac{n_1} n \tag 2 \\[10pt] \text{and } \theta_1\theta_2^2 & = \beta = \frac{n_2} n \tag 3 \end{align} we can divide the left side of $(3)$ by the left side of $(2)$ to get $\theta_2,$ and doing the same with the right sides we get $\theta_2=n_2/n_1.$ We can divide the square of the left side of $(2)$ by the left side of $(3)$ to get $\theta_1$, and then doing the same with the right sides we have $\theta_1 = n_1^2/(n_2 n).$ Here we have used what you often see called the "invariance" of maximum-likelihood estimates, but it's really equivariance rather than invariance. Note also that the mere fact of the derivative being $0$ does not prove that there is a global maximum. In this case the function of either $\alpha$ or $\beta$ is $0$ at the two extreme points $0$ and $1$, and is positive between those two extremes, and is continuous. That proves there is a global maximum somewhere strictly between $0$ and $1$.
The function is also everywhere differentiable, so the derivative must be $0$ at a non-endpoint maximum. And then we find that there is only one point where the derivative is $0$ and we can conclude that's where the global maximum is.
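As a numerical sanity check on the closed-form answer, one can grid-search the log-likelihood directly. This small script is an editorial illustration; the counts $n=100$, $n_1=60$, $n_2=40$ are made-up values, not from the question. It confirms that no grid point beats $\hat\theta_2=n_2/n_1$, $\hat\theta_1=n_1^2/(nn_2)$:

```python
import math

def log_lik(t1, t2, n, n1, n2):
    # P(A works) = t1*t2, P(B works) = t1*t2^2
    a, b = t1 * t2, t1 * t2 ** 2
    return (n1 * math.log(a) + (n - n1) * math.log(1 - a)
            + n2 * math.log(b) + (n - n2) * math.log(1 - b))

n, n1, n2 = 100, 60, 40          # hypothetical observed counts
t2_hat = n2 / n1                 # closed-form MLE: 2/3
t1_hat = n1 ** 2 / (n * n2)      # closed-form MLE: 0.9

# Best (theta1, theta2) pair on a 199 x 199 grid over (0, 1) x (0, 1)
best = max(
    (log_lik(i / 200, j / 200, n, n1, n2), i / 200, j / 200)
    for i in range(1, 200)
    for j in range(1, 200)
)
```

The grid maximizer lands next to the closed-form estimate, and the closed-form estimate itself attains at least as large a log-likelihood as any grid point.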
2019-12-06T01:21:54
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2120974/maximum-likelihood-estimation-of-two-unknown-parameters/2121072", "openwebmath_score": 0.903330385684967, "openwebmath_perplexity": 234.58624607005808, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9830850837598123, "lm_q2_score": 0.8558511414521923, "lm_q1q2_score": 0.8413744910804595 }
https://math.stackexchange.com/questions/2801297/the-independent-random-variables-x-and-y-are-uniformly-distributed-on-the-in/2801369
# The independent random variables $X$ and $Y$ are uniformly distributed on the intervals [-1,1] for $X$ and $Y$… The independent random variables $X$ and $Y$ are uniformly distributed on the intervals $[-1,1]$ for $X$ and $[0,2]$ for $Y$. Evaluate the probability that $X$ is greater than $Y$, $P(X>Y)$. My solution: $$P(X>Y) = \frac{\mbox{ area of triangle }}{\mbox{ area of dotted square }}=\frac{\int_{0}^{1}xdx}{4}=\frac{\left [ \frac{x^{2}}{2} \right ]\Big|_0^1}{4}=\frac{\frac{1}{2}}{4}=\frac{1}{8}.$$ I just wanted to clarify that this is correct. Thanks for reading and replying!! • Makes sense. The only time $x \gt y$ is in that triangle. Clearly by geometry it will be $\frac 18$, and as you have done, it is correct. – Tony Hellmuth May 30 '18 at 1:18 From a more probabilistic sense: $$P(X>Y) = P(Y<X) =\int_{0}^{1}F_Y(x)f_X(x)dx=\int_{0}^{1} \frac x2 \frac 12dx={\left [ \frac{x^{2}}{8} \right ]\Big|_0^1}=\frac 18$$ This follows from the law of total probability: $$P(Y<X) =\int P(Y<X|X=x)f_X(x)dx$$ Why is the integral from $0$ to $1$? $F_Y(x)=0$ when $x<0$, and $f_X(x)=0$ when $x>1$. • Cheers for the edit Graham, I always get slightly confused with r.v.s and constants. – Tony Hellmuth May 30 '18 at 1:34 Your solution is correct. Of course, you do not actually need to integrate anything, since the joint density is uniform. Just use the geometry. $$\dfrac{\text{area of triangle}}{\text{area of rectangle}}= \dfrac{\tfrac 12}{4}=\dfrac 18$$ Still, there is nothing wrong with practicing integration, and it is useful to remember to do so for cases where the distribution is non-uniform. We can focus on the interval $(0,1)$.
With the law of total probability it is $P(X>Y)=$ $P(X>Y\mid 0<X,Y<1)\cdot P(0<X,Y<1 )+\underbrace{P(X>Y\mid X\le 0\text{ or }Y\ge 1)}_{=0}\cdot P(X\le 0\text{ or }Y\ge 1)$ (if $X\le 0$ then $X>Y$ would force $Y<0$, and if $Y\ge 1$ it would force $X>1$; both events have probability $0$). Due to the independence of $X$ and $Y$ we have $P(X>Y)=P(X>Y\mid 0<X,Y<1)\cdot P(0<X<1)\cdot P( 0<Y<1 )$, where $P(0<X<1)=\frac{1-0}{1-(-1)}=\frac12$, $P(0<Y<1)=\frac{1-0}{2-0}=\frac12$ Since, conditionally on this event, $X$ and $Y$ are both uniformly and identically distributed on $(0,1)$, we get $P(X>Y\mid 0<X,Y<1)=\frac12$ Therefore $P(X>Y)=\frac12\cdot \frac12\cdot \frac12=\frac18$
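A quick Monte Carlo check (an editorial addition) agrees with the exact answer $1/8$:

```python
import random

random.seed(42)                        # fixed seed for reproducibility
trials = 200_000
# X ~ Uniform(-1, 1), Y ~ Uniform(0, 2), independent
hits = sum(random.uniform(-1, 1) > random.uniform(0, 2)
           for _ in range(trials))
estimate = hits / trials               # should be close to 1/8 = 0.125
```

With 200,000 trials the standard error is under $0.001$, so the estimate reliably lands near $0.125$.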
2019-12-06T01:07:37
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2801297/the-independent-random-variables-x-and-y-are-uniformly-distributed-on-the-in/2801369", "openwebmath_score": 0.9274419546127319, "openwebmath_perplexity": 180.60631159920337, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759654852756, "lm_q2_score": 0.8577681104440172, "lm_q1q2_score": 0.8413641234942559 }
http://math.stackexchange.com/questions/338432/how-to-prove-the-relation-between-the-floor-function-and-the-number-of-divisors
# how to prove the relation between the floor function and the number of divisors I am trying to get an intuitive meaning (a proof) of why the following is true. $$\sum_{k = 1}^n \sum_{d|k} 1 = \sum_{d = 1}^n \left[\frac{n}{d} \right]$$ I know that $f(x) = [x] = \sum_{n \leq x} 1$ but I can't see it in the case of a fraction. Edit: can someone please tell me what I am doing wrong. I believe it's something to do with summations. I know the divisor function $\tau(k) = \sum_{d|k} 1$ so consider $$\sum_{n \leq x} \tau(n) = \sum_{n \leq x}\sum_{d|n} 1 = \sum_{n \leq x} \left[ \frac{x}{n} \right]$$ so this must mean $$\sum_{d|n} 1 = \left[ \frac{x}{n} \right]$$ which is clearly false. So what am I missing? - Regarding your "Edit". It's not true that if the sum of F is equal to the sum of G, then F = G. –  user58512 Mar 23 '13 at 1:51 I see. Thank you. –  Tyler Hilton Mar 23 '13 at 1:53 Consider first $$\sum_{k=1}^n \sum_{d|k} 1$$ Here is a table k | divisors of k -------------------- n | ..| ... 9 | 1 3 9 8 | 1 2 4 8 7 | 1 7 6 | 1 2 3 6 5 | 1 5 4 | 1 2 4 3 | 1 3 2 | 1 2 1 | 1 The sum is just going up the rows of this table counting the number of divisors: how many things appear in that row. Now think about counting the same thing the other way, so you're thinking about the sum over columns: How many times does each divisor divide the numbers below n. 2 occurs as a divisor of half the numbers below n, 3 occurs as a divisor for 1/3 of them and so on. This gives $$\sum_{d=1}^n \left[\frac{n}{d}\right]$$ - Is 1 not a divisor? –  Tyler Hilton Mar 23 '13 at 1:58 It is, it should be in the table too. I've added it. –  user58512 Mar 23 '13 at 2:05 For each positive integer $d\le n$ there are $\left\lfloor\dfrac{n}d\right\rfloor$ multiples of $d$ in the set $\{1,2,\dots,n\}$. Use this fact to reverse the order of summation: \begin{align*} \sum_{k=1}^n\sum_{d\mid k}1&=\sum_{d=1}^n\sum_{k\text{ is a multiple of }d}1\\\\ &=\sum_{d=1}^n\left\lfloor\frac{n}d\right\rfloor\;.
\end{align*}
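Both sides of the identity are also easy to confirm by brute force. This short script (an editorial addition) counts divisors directly and compares with the floor-function sum:

```python
def tau(k):
    """Number of divisors of k (brute force)."""
    return sum(1 for d in range(1, k + 1) if k % d == 0)

def divisor_sum(n):
    # Left-hand side: sum_{k <= n} tau(k)
    return sum(tau(k) for k in range(1, n + 1))

def floor_sum(n):
    # Right-hand side: sum_{d <= n} floor(n / d)
    return sum(n // d for d in range(1, n + 1))
```

For instance, both sides equal $27$ at $n=10$, matching the row counts in the table above.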
2014-04-17T02:28:55
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/338432/how-to-prove-the-relation-between-the-floor-function-and-the-number-of-divisors", "openwebmath_score": 0.8311999440193176, "openwebmath_perplexity": 314.1025840596626, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9808759649262344, "lm_q2_score": 0.8577681104440172, "lm_q1q2_score": 0.8413641230147282 }
http://math.stackexchange.com/questions/385574/summation-of-logs/385599
# Summation of logs Are there any useful identities for quickly calculating the sum of consecutive logs? For example $\sum_{k=1}^{N} \log(k)$ or something to this effect. I should add that I am writing code to do this (as opposed to doing this on a calculator) so N can be very large. - $\log a+\log b=\log (a\cdot b)$ – lab bhattacharjee May 8 '13 at 14:57 For large $N$, we have $N!\approx N^Ne^{-N}\sqrt {2\pi N}$ (Stirling formula) and hence $$\sum_{k=1}^N\ln k\approx\left( N+\frac12\right)\ln N-N+\frac12\ln(2\pi).$$ - The Euler-Maclaurin Sum Formula can also be used to get an asymptotic expansion. It gives $$\sum_{k=1}^n\log(k)=\overbrace{\vphantom{\frac12}C}^{\frac12\log(2\pi)}+\overbrace{\vphantom{\frac12}n\log(n)-n}^{\int f(n)\,\mathrm{d}n}+\overbrace{\frac12\log(n)}^{\frac12f(n)}+\overbrace{\frac1{12n}}^{\frac1{12}f'(n)}-\overbrace{\frac1{360n^3}}^{\frac1{720}f'''(n)}+\dots$$ The constant $\frac12\log(2\pi)$ is derived as in the proof of Stirling's Formula. - Hint: Use the fact that $\log(a)+\log(b)=\log(ab)$ Your expression simply becomes $\sum_1^N \log k=\log(N!)$, and now you can use Stirling's approximation to approximate $N!$ - I am trying to do this with a computer, so for large $N$, calculating N! may not be possible – user74255 May 8 '13 at 14:58 @Inceptio, it's better to show the means than end – lab bhattacharjee May 8 '13 at 14:59 @labbhattacharjee: Considered it. Edited.:) – Inceptio May 8 '13 at 15:03 Will the approximation of N! interfere with the log calculation? I am not sure what O(ln(n)) is – user74255 May 8 '13 at 15:04 @user74255: No. You can calculate the approximate value of it, and then take $\log$ of it. – Inceptio May 8 '13 at 15:07 In the particular case of $$\sum_{k=1}^N\log k=\log N!=\log\Gamma(N+1)$$ this is just the loggamma function, which is implemented in many software systems. This is fast and accurate.
For example, it is lngamma in GP, lgamma in C (math.h), LogGamma in Mathematica, lnGAMMA in Maple, LogGamma in Magma, gammaln in MatLab, lnGamma in Mathcad, log_gamma in Sage, math.lgamma in Python, and gammaln in Perl (Math::SpecFun::Gamma). -
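In Python, for instance, one can compare the naive $O(N)$ sum of logs against `math.lgamma` and against the truncated Stirling series quoted above (an editorial check, not from the thread):

```python
import math

n = 10_000
naive = sum(math.log(k) for k in range(1, n + 1))   # O(n) summation
fast = math.lgamma(n + 1)                           # log Gamma(n+1) = log n!
# Leading terms of the asymptotic expansion: (n + 1/2) ln n - n + (1/2) ln(2 pi)
stirling = (n + 0.5) * math.log(n) - n + 0.5 * math.log(2 * math.pi)
```

At $n=10{,}000$ the three values agree closely; the Stirling truncation error is about $1/(12n)\approx 8\times10^{-6}$, while `lgamma` runs in constant time.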
2016-02-08T10:33:09
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/385574/summation-of-logs/385599", "openwebmath_score": 0.913544774055481, "openwebmath_perplexity": 512.0193222854198, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759654852756, "lm_q2_score": 0.857768108626046, "lm_q1q2_score": 0.8413641217110516 }
https://math.stackexchange.com/questions/2888319/what-is-the-abelianization-of-langle-x-y-z-mid-x2-y2z2-rangle/2888542
# What is the abelianization of $\langle x,y,z\mid x^2=y^2z^2\rangle?$ Let $G=\langle x,y,z\mid x^2=y^2z^2\rangle$. What is the abelianization of this group? (Also, is there a general method to calculate such abelianizations?) Update: I know how to get a presentation of the abelianization by adding relations like $xy=yx$ and so on. However is it possible to express it as a direct sum of cyclic groups as per the fundamental theorem for finitely generated abelian groups? Thanks. • The given group is a quotient of the free group $F_3$ on three generators; consider it how it behaves under the abelianization map $F_3 \to \mathbb{Z}^3$. (Note, for example, that $G^{ab}$ is the maximal abelian quotient of $G$.) – anomaly Aug 20 '18 at 2:27 • @anomaly Thanks. I can get a presentation of the abelianization by adding relations like $xy=yx$ and so on. However is it possible to express it as a direct sum of cyclic groups as per the fundamental theorem for finitely generated abelian groups? – yoyostein Aug 20 '18 at 2:44 • keywords: Tietze transformations – janmarqz Aug 20 '18 at 3:00 • Do you know a proof for the fundamental theorem of finitely generated abelian groups? The one I know is very much of the flavor, "Give me a presentation for the abelian group, and I will decompose it into primitive factors" - maybe your proof does that too? – Milo Brandt Aug 20 '18 at 3:12 • @janmarqz I would say that the keywords here are not Tietze transformations, but Smith Normal Form. 
– Derek Holt Aug 20 '18 at 7:08 You can rewrite your relator such that it has $0$ exponent sum in two of the generators, as the map $x\mapsto xyz, y\mapsto y, z\mapsto z$ is a Nielsen transformation: \begin{align*} \langle x, y, z\mid x^{2}=y^2z^2\rangle &\cong\langle x, y, z\mid x^{2}z^{-2}y^{-2}\rangle\\ &\cong\langle x, y, z\mid (xyz)^{2}z^{-2}y^{-2}\rangle \end{align*} Under the abelianisation map we then get the group: \begin{align*} \langle x, y, z\mid (xyz)^{2}z^{-2}y^{-2}\rangle^{ab}&=\langle x, y, z\mid x^2\rangle^{ab}\\ &\cong \mathbb{Z}^2\times(\mathbb{Z}/2\mathbb{Z}) \end{align*} This is a specific case of a more general phenomenon, where one can adapt the Euclidean algorithm to rewrite using automorphisms a word $W\in F(a, b, \ldots)$ such that it has zero exponent sum in all but one of the generators. For example, writing $\sigma_x$ for the exponent sum of the relator word in the letter $x$: \begin{align*} &\langle a, b\mid a^6b^8\rangle&&\sigma_a=6, \sigma_b=8\\ &\cong\langle a, b\mid (ab^{-1})^6b^8\rangle&&\text{by applying}~a\mapsto ab^{-1}, b\mapsto b\\ &=\langle a, b\mid (ab^{-1})^5ab^7\rangle&&\sigma_a=6, \sigma_b=2\\ &\cong\langle a, b\mid (a(ba^{-3})^{-1})^5a(ba^{-3})^7\rangle&&\text{by applying}~a\mapsto a, b\mapsto ba^{-3}\\ &\cong\langle a, b\mid (a^4b^{-1})^5a(ba^{-3})^7\rangle&&\sigma_a=0, \sigma_b=2 \end{align*} You can think of this as a "non-commutative Smith normal form", but it is more useful in this context than the Smith normal form as it gives you more information than just the abelianisation. For example, it is used in the HNN-extension version of the Magnus hierarchy ($a$ is the stable letter, and the associated subgroups are free by the Freiheitssatz; see J. McCool and P. Schupp, On one relator groups and HNN extensions, Journal of the Australian Mathematical Society, Volume 16, Issue 2, September 1973, pp. 249-256 doi).
– C Monsour Aug 20 '18 at 13:35 • @CMonsour Probably, but this is what I did when I solved the problem :-) [Also, as I said in the post, the general idea of reducing exponent sums to $0$ has applications beyond abelianisations.] – user1729 Aug 20 '18 at 13:39 I would think of it as starting with $\Bbb{Z}^3=\langle x,y,z\rangle$ and then quotienting out the cyclic subgroup $\langle x^{-2}y^2z^2\rangle$. You can see easily that $x^{-1}yz$, $y$, and $z$ are an alternate set of generators for $G$, so you get $\Bbb{Z}^2\times \Bbb{Z}/2\Bbb{Z}$ generated by $y, z,$ and $x^{-1}yz$ when you take the quotient. There is no need to know anything about non-abelian groups, since you can abelianize first and then mod out by the relation. (Abelianization is just modding out by some relations, and it doesn't matter which relations you mod out first--you get to the same place in the end.) If we have $$\langle x,y,z\ |\ x^{-2}y^2z^2=1\rangle^{\rm ab}=\langle x,y,z\ |\ x^{-2}y^2z^2=[x,y]=[x,z]=[y,z]=1\rangle$$ with one of the Tietze moves $$\langle x,y,z,t\ |\ t=x^{-1}yz, t^2=[x,y]=[x,z]=[y,z]=1\rangle$$ With another, now we arrange $$\langle x,y,z,t\ |\ x=yzt^{-1}, t^2=[x,y]=[x,z]=[y,z]=1\rangle$$ and finally get $$\langle y,z,t\ |\ t^2=[y,z]=1\rangle$$ which clearly is $$\Bbb Z+\Bbb Z+\Bbb Z_2$$.
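As Derek Holt's comment notes, the computational keyword for expressing the abelianization as a sum of cyclic groups is the Smith normal form of the relation matrix. A small SymPy sketch (an editorial addition; it assumes SymPy's `smith_normal_form` helper is available) for the abelianized relation $2x-2y-2z=0$ in $\mathbb{Z}^3$:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Abelianizing <x, y, z | x^2 = y^2 z^2> gives Z^3 modulo the subgroup
# generated by the single row vector 2x - 2y - 2z.
R = Matrix([[2, -2, -2]])
snf = smith_normal_form(R, domain=ZZ)
# One invariant factor 2 and two free columns:
# the quotient is (Z/2Z) x Z x Z, matching the answers above.
```

Each nonzero invariant factor $d$ contributes a $\mathbb{Z}/d\mathbb{Z}$ summand, and each generator with no invariant factor contributes a $\mathbb{Z}$ summand.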
2019-10-17T07:44:11
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2888319/what-is-the-abelianization-of-langle-x-y-z-mid-x2-y2z2-rangle/2888542", "openwebmath_score": 0.8758471608161926, "openwebmath_perplexity": 385.91434471406995, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759621310288, "lm_q2_score": 0.8577681068080749, "lm_q1q2_score": 0.8413641170506816 }
http://math.stackexchange.com/questions/185515/is-this-3d-curve-a-circle/185535
# Is this 3D curve a circle? The following is a curve in $3$ dimensions: $$\begin{eqnarray} x & = & \cos(\theta) \\ y & = & \cos(\theta - \pi/3) \\ z & = & \cos(\theta - 2\pi/3) \end{eqnarray}$$ Is the curve a circle? If it is, what about this curve in $4$ dimensions? $$\begin{eqnarray} x & = & \cos(\theta) \\ y & = & \cos(\theta - \pi/4) \\ z & = & \cos(\theta - 2\pi/4) \\ w & = & \cos(\theta - 3\pi/4) \end{eqnarray}$$ I don't know if there is something like a circle in $4$-D. If there is, is this curve the $4$-D version of a circle? - First one: i.imgur.com/tQvqg.png –  user2468 Aug 22 '12 at 17:41 Probably it is easiest to first show that $x^2+y^2+z^2$ is constant. Then you have show that the points lie on a two-dimensional subspace, which is less obvious. This strategy should work for the 4D case too. –  Rahul Aug 22 '12 at 17:42 @JenniferDylan, how did you do this figure? –  Sigur Aug 22 '12 at 17:47 @Sigur I used k3dsurf.sourceforge.net –  user2468 Aug 22 '12 at 17:49 @zjk, if you want to take a circle and embed it in a higher-dimensional space without distorting it, you would naturally want it to still lie in a flat two-dimensional portion of the space, in other words, a two-dimensional subspace. Linear algebra would be a useful thing to learn in this context. –  Rahul Aug 22 '12 at 18:56 Here is a simple treatment of the $n$-dimensional case, where the point on the curve is $\vec x = (x_0, \ldots, x_{n-1})$ with $x_i = \cos(\theta-\pi i/n)$. 1. The curve lies on a sphere. We have $x_i^2 = \cos^2(\theta-\pi i/n) = \frac12+\frac12\cos(2\theta-2\pi i/n)$. So $$\|\vec x\|^2 = \sum x_i^2 = \frac n2 + \frac12 \sum \cos(2\theta-2\pi i/n).$$ The latter term is zero because it is the sum of $n$ equally spaced sinusoids (it is equivalently the $x$-component of the sum of $n$ unit vectors equally spaced along the unit circle, or the real part of the sum of the $n$th roots of $e^{2n\theta\sqrt{-1}}$; in either case, the entire thing is zero by symmetry). 
So $\|\vec x\|^2$ is a constant, $\frac n2$, independent of $\theta$. 2. The curve lies on a two-dimensional subspace. We have $x_i = \cos(\theta-\pi i/n) = a_i\cos\theta + b_i\sin\theta$ for some fixed $a_i$ and $b_i$ independent of $\theta$. Then $\vec x = \vec a\cos\theta + \vec b\sin\theta$. So $\vec x$ lies on the two-dimensional subspace spanned by $\vec a$ and $\vec b$. Thus, $\vec x$ lies on the intersection of a sphere in $n$ dimensions and a two-dimensional subspace, i.e. a sphere in two dimensions, also known as a circle. - If you use basic trigonometric identities, you can show the first set of three equations is a piece of this plane: $y-z=x$. - Also, using trig $x^2 + y^2 + z^2 = 3/2.$ So for all $\theta,$ the distance between $(x, y, z)$ and the origin $= \sqrt{3/2}.$ Putting it together sphere $\bigcap$ plane $=$ circle. –  user2468 Aug 22 '12 at 17:41 @JenniferDylan Good way to put it :) –  rschwieb Aug 22 '12 at 17:49 Using trigonometric identities, we get $$\cos(\theta-\frac\pi3) = \cos\theta\cos\frac\pi3+\sin\theta\sin\frac\pi3=\frac12\cos\theta+\frac12\sqrt3\sin\theta,$$ $$\cos(\theta-\frac{2\pi}3) = \cos\theta\cos\frac{2\pi}3+\sin\theta\sin\frac{2\pi}3=-\frac12\cos\theta+\frac12\sqrt3\sin\theta.$$ Thus, if we write $\gamma=(x,y,z)$ we get $\gamma = v_1\cos\theta + v_2\sin\theta$ with $v_1=(1,\frac12,-\frac12)$ and $v_2=(0,\frac12\sqrt3,\frac12\sqrt3)$. Thus, we have at least an ellipse. Moreover, it is easy to check that $v_1\cdot v_2=0$. Furthermore, $v_1^2 = 1 + \frac14+\frac14=\frac32$ and $v_2^2=\frac34+\frac34=\frac32$; thus the vectors are orthogonal and of equal length, and therefore it is indeed a circle. In the general case, we have $(v_1)_k = \cos \frac{(k-1)\pi}d$ and $(v_2)_k=\sin \frac{(k-1)\pi}d$, where $d$ is the dimension of the vector space ($d=4$ in your second case).
Thus, $$v_1\cdot v_2 = \sum_{k=1}^d\cos\frac{(k-1)\pi}d\sin\frac{(k-1)\pi}d = \frac12\sum_{k=1}^d\sin\frac{2(k-1)\pi}{d} = 0$$ and $$v_1^2-v_2^2 = \sum_{k=1}^d\left(\cos^2\frac{(k-1)\pi}d-\sin^2\frac{(k-1)\pi}d\right) = \sum_{k=1}^d\cos\frac{2(k-1)\pi}{d} = 0,$$ The construction thus gives a circle in any dimension. - For at least the three-dimensional case, here's a more mechanical method (i.e. less enlightening than Rahul's nice answer) to verify if your curve is a circle: We try to evaluate the curvature $\kappa$ and torsion $\tau$ of the given curve. By the fundamental theorem of space curves, a space curve is uniquely determined (up to rigid motions) by $\kappa(s)$ and $\tau(s)$; if, in addition, $\tau(s)=0$ (i.e. the curve is flat) and $\kappa(s)$ is a constant greater than zero, then we know that the space curve is indeed a circle. Using formula 26 here for the curvature, we have \begin{align*} \kappa&=\frac{\|\mathbf r^\prime\times \mathbf r^{\prime\prime}\|}{\|\mathbf r^\prime\|^3}\\ &=\frac{\left\|\langle-\sin\,\theta,\cos\left(\theta+\frac{\pi}{6}\right),\cos\left(\theta-\frac{\pi}{6}\right)\rangle\times\langle-\cos\,\theta,-\sin\left(\theta+\frac{\pi}{6}\right),\sin\left(\frac{\pi}{6}-\theta\right)\rangle\right\|}{\left\|\langle-\sin\,\theta,\cos\left(\theta+\frac{\pi}{6}\right),\cos\left(\theta-\frac{\pi}{6}\right)\rangle\right\|^3}\\ &=\sqrt\frac23 \end{align*} Using formula 3 here for the torsion, we have \begin{align*} \tau&=\frac{\mathbf r^\prime\times \mathbf r^{\prime\prime}\cdot\mathbf r^{\prime\prime\prime}}{\kappa^2}\\ &=\frac1{2/3}\begin{vmatrix}-\sin\,\theta&\cos\left(\theta+\frac{\pi}{6}\right)&\cos\left(\theta-\frac{\pi}{6}\right)\\-\cos\,\theta&-\sin\left(\theta+\frac{\pi}{6}\right)&\sin\left(\frac{\pi}{6}-\theta\right)\\\sin\,\theta&-\cos\left(\theta+\frac{\pi}{6}\right)&-\cos\left(\theta-\frac{\pi}{6}\right)\end{vmatrix}\\ &=0 \end{align*} (Note that the determinant is easily seen to be zero, since the third row is a multiple of the
first.) Since $\tau=0$, the curve is flat; in addition, since $\kappa=\sqrt{2/3}$, we find that our space curve is a circle with radius $1/\kappa=\sqrt{3/2}$. - $\cos 3(\theta)=\cos3\theta$ $\cos3(\theta-\frac{\pi}{3})=\cos(3\theta-\pi)=-\cos3\theta$ As $-y=-\cos(\theta-\frac{\pi}{3})=\cos(\theta+\frac{2\pi}{3})$, $\cos3(\theta+\frac{2\pi}{3})=\cos(2\pi+3\theta)=\cos3\theta$ $\cos3(\theta-\frac{2\pi}{3})=\cos(3\theta-2\pi)=\cos3\theta$ Now, $\cos3\theta=4\cos^3\theta-3\cos\theta$ If $\cos3\theta=a$ and $\cos\theta=t$, then $x,-y,z$ are the roots of $4t^3-3t-a=0$ $=>x+(-y)+z=0$ $=>x(-y)+(-y)z+zx=-\frac{3}{4}$ $=>x^2+y^2+z^2=(x+(-y)+z)^2-2(x(-y)+(-y)z+zx)=0+2\cdot\frac{3}{4}$ $=>x^2+y^2+z^2=\frac{3}{2}$ Observe that $(x,y,z)$ satisfies a general plane equation $Ax+By+Cz+D=0$ where $A,B,C,D$ are constants, not all zero. It also satisfies the equation of the general circle in 3-D, $(x-a)^2+(y-b)^2+(z-c)^2=d^2$, with $a=b=c=0, d^2=\frac{3}{2}$ In case of $x,y,z,w$, $z=\cos(\theta-\frac{2\pi}{4})=\sin\theta$, $\sqrt2 y=\cos\theta+\sin\theta$, $\sqrt2 w=-\cos\theta+\sin\theta$ So, $x^2+z^2=1$ and $w^2+y^2=1$ $x^2+y^2+z^2+w^2=2$ Now $\sqrt2 (y+w)=2\sin\theta=2z=>\sqrt 2z=y+w$ Similarly, $y-w=\sqrt 2x$ Observe that $(x,y,z,w)$ satisfies two general plane equations $Ax+By+Cz+Dw=E$ where $A,B,C,D,E$ are constants, not all zero. It also satisfies the equation of the general circle in 4-D, $(x-a)^2+(y-b)^2+(z-c)^2+(w-d)^2=e^2$, with $a=b=c=d=0, e^2=2$ Again, we know $\cos nx=$ the real part of $(\cos x+i\sin x)^n=(\cos x)^n-{}^nC_2(\cos x)^{n-2}(\sin x)^2+{}^nC_4(\cos x)^{n-4}(\sin x)^4-...$ Observe there is no term containing $(\cos x)^{n-1}$ As $\cos n(2x-\frac{2r_i\pi}{n})=\cos(2nx-2r_i\pi)=\cos 2nx=C$ (say), the values $\cos (2x-\frac{2r_i\pi}{n})=R_i$ (say), where all $r_i$s are distinct integers with $0 ≤r_i< n$, are the roots of the equation $2^{n-1}y^n+C_1y^{n-2}+...-C=0$ So, $\sum R_i=0$ as the coefficient of $y^{n-1}$ is 0.
If $x_i=\cos (x-\frac{r_i\pi}{n})=>R_i=2(x_i)^2-1$ So, $\sum (2(x_i)^2-1)=0 =>\sum (x_i)^2=\frac{n}{2}$ This is another way of generalization("The curve lies on a sphere") already achieved by Rahul Narain. - You need to put some more words and say that you're trying to prove $x^2 + y^2 + z^2 = 3/2$, and that $x, y, z$ satisfy a plane equation (similar wording for the 4D case). –  user2468 Aug 22 '12 at 18:19 It is a circle. If you look at the distance from the origin $$r = \sqrt{ \cos^2(\theta) + \cos^2(\theta+\frac{\pi}{3}) + \cos^2(\theta+\frac{2\pi}{3}) }$$ which simplifies to $r = \frac{\sqrt{6}}{2}$. With the 4-dimensional case the distance simplifies to $r=\sqrt{2}$. I wonder how to prove the general case of $$r^2 = \sum_{i=1}^N \left[ \cos^2\left( \theta + \frac{i-1}{N} \pi \right) \right] = \frac{N}{2}$$ - Constant distance from origin implies a sphere not a circle. See this and this. –  user2468 Aug 22 '12 at 18:11 @JenniferDylan: Yes this is correct, and if the curve was an offset circle then the distance would not be constant either. I guess I needed to show that the curve was planar also. –  ja72 Aug 22 '12 at 19:36
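Numerically, both defining properties — constant radius $\sqrt{n/2}$ and planarity — are easy to confirm with NumPy (an editorial check, not from the thread):

```python
import numpy as np

n = 5                                            # any dimension n >= 2
theta = np.linspace(0.0, 2.0 * np.pi, 60)
# Row j is the curve point (cos(theta_j - pi*i/n)) for i = 0..n-1
pts = np.cos(theta[:, None] - np.pi * np.arange(n)[None, :] / n)

radii = np.linalg.norm(pts, axis=1)              # constant, equal to sqrt(n/2)
rank = np.linalg.matrix_rank(pts, tol=1e-9)      # 2: the points span a plane
```

A rank-2 point matrix together with a constant distance from the origin is exactly the "sphere intersected with a two-dimensional subspace" argument above.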
2014-12-22T20:59:03
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/185515/is-this-3d-curve-a-circle/185535", "openwebmath_score": 0.9572219252586365, "openwebmath_perplexity": 291.2724173812747, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759604539051, "lm_q2_score": 0.8577681068080749, "lm_q1q2_score": 0.8413641156120983 }
https://math.stackexchange.com/questions/2513736/given-that-a-is-an-odd-multiple-of-1183-find-the-greatest-common-divisor-of
# Given that $a$ is an odd multiple of $1183$, find the greatest common divisor of $2a^2+29a+65$ and $a+13$. Given that $a$ is an odd multiple of $1183$, find the greatest common divisor of $2a^2+29a+65$ and $a+13$. I know there exists some slick technique to simplify this problem. Any hints are greatly appreciated. We can use Euclid's algorithm for a few steps: $$\gcd(2a^2 + 29a + 65, a + 13) = \gcd(2a^2 + 29a + 65 - 2a(a+13), a+13)\\ = \gcd(3a + 65, a+13) = \gcd(3a+65 - 3(a + 13), a + 13)\\ = \gcd(26, a+13)$$ which is necessarily $1, 2, 13$ or $26$, just by looking at the first term. By factoring $1183$, and considering that $a$ is an odd multiple, you should be able to conclude. • I'm not sure I'm able to see what $a$ is from this. Nov 11 '17 at 20:41 • I know that $1183=7\cdot 13^2$ Nov 11 '17 at 20:41 • And an odd multiple of that, plus $13$, is an even multiple of $13$. Nov 11 '17 at 21:30 • What does that imply? That $a=1183$ since $\gcd(26, 1183+13)=26$ Nov 11 '17 at 21:36 • I see. The answer is $26$ since $\gcd(26, a+13)=26$ when $a=1183k$ where $k=1,3,5,7,9,...$. Nov 11 '17 at 22:03 HINT let $d=(2a^2+29a+65,a+13)$. We notice that $-13$ is a root of $2a^2+29a+39$. Another root is $-\frac32$. So we have $2a^2+29a+65=(a+13)(2a+3)+26$ Thus $d=((a+13)(2a+3)+26,a+13)\Rightarrow$ $d|(a+13)(2a+3)+26, d|a+13\Rightarrow d|(a+13)(2a+3)\Rightarrow d|26$
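The conclusion is easy to confirm by brute force (an editorial check):

```python
from math import gcd

# gcd(2a^2 + 29a + 65, a + 13) for the first several odd multiples a = 1183*k
results = {gcd(2 * a * a + 29 * a + 65, a + 13)
           for a in (1183 * k for k in range(1, 40, 2))}
# Every odd multiple of 1183 gives the same value, 26
```

Indeed $a+13 = 13(91k+1)$, and $91k+1$ is even whenever $k$ is odd, so $26 \mid a+13$ in every tested case.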
2021-10-23T15:32:40
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2513736/given-that-a-is-an-odd-multiple-of-1183-find-the-greatest-common-divisor-of", "openwebmath_score": 0.8226304650306702, "openwebmath_perplexity": 109.30670454801536, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759626900699, "lm_q2_score": 0.8577681031721325, "lm_q1q2_score": 0.8413641139638007 }
http://www.b254.com/factoring/2gsquare2plus4gminus15.html
### Factoring 2g^2+4g-15 Solution The variable we want to solve for is g. We will solve for g using the quadratic formula ${g}=\frac{-b±\sqrt{{b}^{2}-4ac}}{2a}$, the graphical method, and completion of squares. Here a=2, b=4, and c=-15. Applying these values to the quadratic formula we have ${g}=\frac{-4±\sqrt{{4}^{2}-4\cdot 2\cdot \left(-15\right)}}{2\cdot 2}$ This gives ${g}=\frac{-4±\sqrt{16+120}}{4}$ ${g}=\frac{-4±\sqrt{136}}{4}$ ${g}=\frac{-4±11.6619037897}{4}$ ${g}_{1}=\frac{-4+11.6619037897}{4}=\frac{7.66190378969}{4}$ ${g}_{2}=\frac{-4-11.6619037897}{4}=\frac{-15.6619037897}{4}$ The g values are g1 = 1.91547594742 and g2 = -3.91547594742 ### Factoring Quadratic equation 2g^2+4g-15 using Completion of Squares 2g^2+4g-15 = 0 Step 1: Divide all terms by the coefficient of g^2, which is 2. ${g}^{2}+\frac{4}{2}g-\frac{15}{2}=0$ Step 2: Keep all terms containing g on one side. Move the constant to the right. ${g}^{2}+\frac{4}{2}g=\frac{15}{2}$ Step 3: Take half of the g-term coefficient and square it. Add this value to both sides. ${g}^{2}+\frac{4}{2}g+{\left(\frac{4}{4}\right)}^{2}=\frac{15}{2}+{\left(\frac{4}{4}\right)}^{2}$ Step 4: Simplify the right-hand side of the expression. ${g}^{2}+\frac{4}{2}g+{\left(\frac{4}{4}\right)}^{2}=\frac{136}{16}$ Step 5: Write the perfect square on the left. ${\left(g+\frac{4}{4}\right)}^{2}=\frac{136}{16}$ Step 6: Take the square root on both sides of the equation. $g+\frac{4}{4}=±\sqrt{\frac{136}{16}}$ Step 7: Solve for root g1. ${g}_{1}=-\frac{4}{4}+\frac{11.6619037897}{4}=\frac{7.66190378969}{4}=1.91547594742$ Step 8: Solve for root g2. ${g}_{2}=-\frac{4}{4}-\frac{11.6619037897}{4}=\frac{-15.6619037897}{4}=-3.91547594742$
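As a cross-check of the roots computed above (a short Python sketch, not part of the original page):

```python
import math

# Reproduce the roots of 2g^2 + 4g - 15 = 0 via the quadratic formula.
a, b, c = 2, 4, -15
disc = b * b - 4 * a * c                  # 16 - 4*2*(-15) = 136
g1 = (-b + math.sqrt(disc)) / (2 * a)
g2 = (-b - math.sqrt(disc)) / (2 * a)
print(disc, g1, g2)

for g in (g1, g2):                        # substitute back: should vanish
    assert abs(2 * g * g + 4 * g - 15) < 1e-9
```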
2019-07-16T13:07:59
{ "domain": "b254.com", "url": "http://www.b254.com/factoring/2gsquare2plus4gminus15.html", "openwebmath_score": 0.2706487774848938, "openwebmath_perplexity": 2795.8348856123985, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759654852756, "lm_q2_score": 0.8577680995361899, "lm_q1q2_score": 0.8413641127950302 }
http://math.stackexchange.com/questions/165118/how-does-partial-fraction-decomposition-avoid-division-by-zero
# How does partial fraction decomposition avoid division by zero? This may be an incredibly stupid question, but why does partial fraction decomposition avoid division by zero? Let me give an example: $$\frac{3x+2}{x(x+1)}=\frac{A}{x}+\frac{B}{x+1}$$ Multiplying both sides by $x(x+1)$ we have: $$3x+2=A(x+1)+Bx$$ when $x \neq -1$ and $x \neq 0$. What is traditionally done here is $x$ is set to $-1$ and $0$ to reveal: $$-3+2=-B \implies 1=B$$ and $$2=A$$ so we find that $$\frac{3x+2}{x(x+1)}=\frac{2}{x}+\frac{1}{x+1}$$ Why can $x$ be set equal to the roots of the denominator (in this case, $0$ and $-1$) without creating a division by zero problem? - If two polynomials in $x$ are equal for infinitely many $x$, then they are equal for all $x$. –  GEdgar Jul 1 '12 at 1:17 I would say "traditionally" we equate (coefficients of) like powers of x (in the numerator). After all we need something that works for higher degrees in general. –  hardmath Jul 1 '12 at 1:18 Good question! This is my crude interpretation (see Bill's answer for a shot of rigor) What is actually being equated is the numerator, not the denominator. So in your example, you have that $$\frac{{3x + 2}}{{x\left( {x + 1} \right)}} = \frac{A}{x} + \frac{B}{{x + 1}}$$ if $$\frac{{3x + 2}}{{x\left( {x + 1} \right)}} = \frac{{A\left( {x + 1} \right) + Bx}}{{x\left( {x + 1} \right)}}$$ if $${3x + 2 = A\left( {x + 1} \right) + Bx}$$ $$3x + 2 = \left( {A + B} \right)x + A$$ which implies $${A + B}=3$$ $$A=2$$ which in turn gives what you have. When we equate numerators we "forget" about the denominators. We're focused on the polynomial equality $$3x + 2 = \left( {A + B} \right)x + A$$ only. Though it might be unsettling to substitute the roots of the denominators, we're not operating on the denominators themselves, so we're safe. - Hint $\$ If $\rm\:f(x),\,g(x)\,$ and $\rm\:h(x)\!\ne\! 0\:$ are polynomial functions over $\rm\:\mathbb R\:$ (or any infinite field) then $$\rm\begin{eqnarray}\dfrac{f(x)}{h(x)} = \dfrac{g(x)}{h(x)} &\Rightarrow&\rm\ f(x) = g(x)\ \ for\ all\,\ x\in\mathbb R\, \ such\ that\ h(x)\ne 0\\ &\Rightarrow&\rm\ f(x) = g(x)\ \ for\ all\ \,x\in \mathbb R \end{eqnarray}$$ since $\rm\:p(x) = f(x)\!-\!g(x) = 0\:$ has infinitely many roots, viz. all $\rm\:x\in \mathbb R\:$ except the finitely many roots of $\rm\:h(x),\,$ so $\rm\:p\:$ is the zero polynomial, since a nonzero polynomial over a field has only finitely many roots (no more than its degree). Hence $\rm\: 0 = p = f -g\:\Rightarrow\: f = g.$ Thus to solve for the variables that occur in $\rm\:g\:$ it is valid to evaluate $\rm\:f(x) = g(x)\:$ at any $\rm\:x\in \mathbb R,\:$ since it holds true for all $\rm\:x\in \mathbb R\:$ (which includes all real roots of $\rm h).$ - Paying careful attention to the logic of the first step, we are saying that (for a given $A$ and $B$), the equation $$\frac{3x+2}{x(x+1)}=\frac{A}{x}+\frac{B}{x+1}$$ holds for all $x \neq 0,-1$ if and only if the equation $$3x+2=A(x+1)+Bx$$ holds for all $x \neq 0,-1$. Now, if we can find an $A$ and a $B$ so that $3y+2=A(y+1)+By$ holds for all values of $y$, then clearly $3x+2=A(x+1)+Bx$ holds for all $x \neq 0,-1$. So if substituting $y=0$ and $y=-1$ allows us to find $A$ and $B$, then we get a good answer. Incidentally, a stronger statement is true: the equation $$3x+2=A(x+1)+Bx$$ holds for all $x \neq 0,-1$ if and only if the equation $$3y+2=A(y+1)+By$$ holds for all $y$. So this guarantees that we don't lose any solutions to the former problem when we solve it by instead considering the latter problem. Aside: if one pays attention to what they mean, one doesn't really need to introduce a new dummy variable $y$. However, I hoped it might add a bit more clarity if the variable $x$ is always restricted to be $\neq 0,-1$. It may be useful to note that you use a similar sort of reasoning for limits. e.g.
to find the value of $$\lim_{x \to 0} \frac{x^2}{x}$$ you observe that $x^2/x = x$ for all $x \neq 0$ so that $$\lim_{x \to 0} \frac{x^2}{x} = \lim_{x \to 0} x$$ and then you apply the fact that $x$ is continuous at $0$ to obtain $$\lim_{x \to 0} x = 0$$ - Let $f(x)=p_1(x)/q_1(x)$, $g(x)=p_2(x)/q_2(x)$ be two rational functions. If $f(x)=g(x)$ for all $x$ s.t. $q_1(x)\neq 0$ and $q_2(x)\neq 0$, then $p_1(x)q_2(x)=p_2(x)q_1(x)$ everywhere. This implies that you can get rid of the denominators (for instance by multiplying both sides by the least common denominator) and enforce the equality only between the numerators.
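The two levels of the argument can be checked numerically (a Python sketch, not part of the original thread): the fraction identity holds away from the excluded points, while the numerator identity holds for all $x$, including the roots of the denominator.

```python
# The fraction identity (3x+2)/(x(x+1)) = 2/x + 1/(x+1) away from x = 0, -1,
# and the numerator identity 3x + 2 = A(x+1) + Bx with A = 2, B = 1 for ALL x.
def lhs(x):
    return (3 * x + 2) / (x * (x + 1))

def rhs(x):
    return 2 / x + 1 / (x + 1)

for x in (-5.0, -0.5, 0.25, 3.0, 17.0):   # avoid the excluded points
    assert abs(lhs(x) - rhs(x)) < 1e-12

for x in (-1.0, 0.0, 2.0):                # includes the "forbidden" roots
    assert 3 * x + 2 == 2 * (x + 1) + 1 * x
print("both identities check out")
```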
2014-07-22T18:12:12
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/165118/how-does-partial-fraction-decomposition-avoid-division-by-zero", "openwebmath_score": 0.9398812055587769, "openwebmath_perplexity": 149.29333552611186, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759618515083, "lm_q2_score": 0.8577680995361899, "lm_q1q2_score": 0.8413641096781006 }
http://math.stackexchange.com/questions/787892/open-ball-of-radius-r-0-is-empty
# Open ball of radius $r = 0$ is empty? Is $B(a;0) = \{x : d(a, x) < 0\} = \varnothing$? And if so, is it always the case? The reason I ask is because I want to know if the open interval $(a,a) = \varnothing$ when $a \in \mathbb{R}$. Thank you. Kind regards, Marius - Not every question involving sets has to do with set theory. – Asaf Karagila May 9 '14 at 13:57 ## 3 Answers From the definition of a metric, $d(x,y)\geq0$ for all $x,y$ and $d(x,y)=0\iff x=y$, hence the set of $x$ such that $d(a,x)<0$ is empty, as no such $x$ exists. - The fact about $d(x,y) = 0$ iff, etc is unnecessary. We just need $d(x,y) \ge 0$ for this to hold. This is more general, as pseudometrics need not satisfy the second property. – Henno Brandsma May 9 '14 at 13:56 Sure, I just wrote it because they are normally bundled together into the single axiom positive definiteness. – Dan Rust May 9 '14 at 13:57 I am used to them being separate. So we can consider pseudometrics, quasimetrics (no symmetry) etc. These are often useful! – Henno Brandsma May 9 '14 at 13:59 The fact that an open interval $(a,a)$ is empty has nothing to do with metrics: If there were an $x \in (a,a)$, by definition, $a < x$ and $x < a$, and then it follows that $a < a$ by transitivity of orders. And by the axioms for (strict) orders, this is false for all $a$. That $B(a,r) = \emptyset$ for $r \le 0$ follows from $x \in B(a,r) \Rightarrow 0 \le d(a,x) < r \le 0$, which would imply $0 < 0$, likewise impossible. - That is the usual definition, yes. (Although it is not common to talk about an open ball of radius $0$.) You might think to define $B(a,0)=\{a\}$ but in general this is not an open set - we want open balls to be open! -
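A tiny illustration of the point (a sketch, not from the thread): with any nonnegative distance function, the set $\{x : d(a,x) < r\}$ is empty whenever $r \le 0$.

```python
# Open ball about a within a finite set of points, using |x - y| on R.
def open_ball(points, a, r, d=lambda x, y: abs(x - y)):
    return {x for x in points if d(a, x) < r}

points = {-2.0, -1.0, 0.0, 0.5, 1.0}
assert open_ball(points, 0.0, 0.0) == set()    # B(a; 0) is empty
assert open_ball(points, 0.0, -1.0) == set()   # negative radius: empty too
assert open_ball(points, 0.0, 1.1) == {-1.0, 0.0, 0.5, 1.0}
print("radius <= 0 gives the empty set")
```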
2016-05-02T03:50:38
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/787892/open-ball-of-radius-r-0-is-empty", "openwebmath_score": 0.9406955242156982, "openwebmath_perplexity": 396.8606248092315, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9808759590563021, "lm_q2_score": 0.8577681013541613, "lm_q1q2_score": 0.8413641090636662 }
http://people.cs.umass.edu/~sheldon/teaching/cs335/lec/knn-decision-tree-demo.html
# Lecture 7: KNN and Decision Trees¶ These demos illustrate various aspects of KNN and decision tree classification. They also show how to use built-in implementations of machine learning methods from scikit-learn. This is very useful! ## Imports¶ Run this cell. In [52]: import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg from mpl_toolkits.mplot3d import Axes3D from matplotlib.colors import ListedColormap from sklearn import neighbors, datasets from sklearn.neighbors import DistanceMetric from sklearn.neighbors import NearestNeighbors from sklearn import tree from sklearn.externals.six import StringIO #import pydot from subprocess import call import warnings warnings.filterwarnings('ignore') %matplotlib inline ## Demo 1: Distance Functions¶ Run this code to set up plotting In [53]: #Create a dense grid of points on the square [-1,1] x [-1,1] xx, yy = np.meshgrid(np.arange(-1, 1,0.01), np.arange(-1,1,0.01)) xl = xx.flatten() yl = yy.flatten() xy = np.vstack((xl,yl)).T def plot_distance_contours(dist): dl = dist.pairwise(xy,[[0,0]]) dg = dl.reshape(xx.shape) plt.figure(1, figsize=(6, 5)); CS=plt.contour(xx,yy,dg); plt.clabel(CS, inline=1, fontsize=10) plt.ylim((-1,1)); plt.xlim((-1,1)); plt.colorbar(); plt.plot([0],[0],'ow'); ## Visualizing different distance functions¶ There are a number of standard distance functions to choose from. Any one of them might be best for a particular application. We will look at three of them. ### Euclidean distance¶ The plot below shows the contours of $d(\mathbf{x}, 0)$, i.e., the distance of points in the plane from the origin, for the Euclidean distance function. (This is also the special case of Minkowski distance when $p=2$.)
In [54]: dist = DistanceMetric.get_metric('euclidean') plot_distance_contours(dist) ### Manhattan distance¶ Here we see the distance of all points in the plane from the origin under the "Manhattan distance" ($p=1$) In [55]: dist = DistanceMetric.get_metric('manhattan') plot_distance_contours(dist) ### Chebyshev distance¶ We get another special case of the Minkowski distance called Chebyshev distance when $p \rightarrow \infty$. It looks like this. In [56]: dist = DistanceMetric.get_metric('chebyshev') plot_distance_contours(dist) ## Demo 2: KNN Classification in Action¶ ### Setup¶ Run the code below to load and plot the seeds data set. In [57]: #Load Data x = data[:,[0,2]] y = data[:,-1] x=x-np.mean(x,axis=0); x=x/np.std(x,axis=0); #x=x[[10,200],:] #y=y[[10,200]] #Plot data set labels=['sr','og','^b'] for i in [1,2,3]: plt.plot(x[y==i,0],x[y==i,1],labels[i-1]); plt.xlabel('area'); plt.ylabel('compactness'); print(y.shape) (210,) ### Choose distance function and $k$¶ Select which cells below to run to choose from our three distance functions, and then select a value of $k$. Use Euclidean Distance ($\ell_2$ norm): $||x-y||_2 = \sqrt{\sum_{d=1}^D (x_d-y_d)^2}$ In [43]: metric='euclidean' Use Manhattan Distance ($\ell_1$ norm): $||x-y||_1 = \sum_{d=1}^D |x_d-y_d|$ In [44]: metric='manhattan' Use Chebyshev Distance ($\ell_{\infty}$ norm): $||x-y||_{\infty} = \max_d |x_d-y_d|$ In [45]: metric='chebyshev' Select a number of Neighbors In [46]: K=1 ### See the result¶ Run the code below to fit a classifier and visualize the result. The colored areas in the final plot show the predictions for different regions of the plane. Experiment with different choices of distance metric and values of $k$ to see how the predictions change.
In [47]: #Fit the specified classifier clf = neighbors.KNeighborsClassifier(K, weights='uniform',metric=metric) clf.fit(x, y) #Prepare grid for plotting decision surface gx1, gx2 = np.meshgrid(np.arange(min(x[:,0]), max(x[:,0]),(max(x[:,0])-min(x[:,0]))/200.0 ), np.arange(min(x[:,1]), max(x[:,1]),(max(x[:,1])-min(x[:,1]))/200.0)) gx1l = gx1.flatten() gx2l = gx2.flatten() gx = np.vstack((gx1l,gx2l)).T #Compute a prediction for every point in the grid gyhat = clf.predict(gx) gyhat = gyhat.reshape(gx1.shape) #Plot the results cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF']) for i in [1,2,3]: plt.plot(x[y==i,0],x[y==i,1],labels[i-1]); plt.xlabel('area'); plt.ylabel('compactness'); plt.pcolormesh(gx1,gx2,gyhat,cmap=cmap_light) plt.colorbar(); plt.axis('tight'); plt.title("Metric: %s, K:%d"%(metric,K)); ### Convergence of KNN in the Infinite Data Limit¶ One of the nice properties of KNN is that it is guaranteed (under a suitable formalization of the problem) to converge to the "right answer" as the number of training examples goes to infinity. Run the code below to set up a problem. The colored regions of the plane illustrate the "ground truth" --- i.e., the correct label for each point in the plane.
In [48]: #Generate a Random Ground Truth Decision Boundary #Get a random set of labeled points xtrue,ytrue = datasets.make_classification(n_samples=50, n_features=2, n_informative=2, n_redundant=0, n_repeated=0, n_classes=2, n_clusters_per_class=2, class_sep=0); xtrue = xtrue - np.min(xtrue,axis=0); xtrue = xtrue / np.max(xtrue,axis=0); #Fit a KNN classifier to the points clf_true = neighbors.KNeighborsClassifier(1, weights='uniform',metric='euclidean') clf_true.fit(xtrue, ytrue) #Prepare grid for plotting decision surface gx1, gx2 = np.meshgrid(np.arange(0, 1,0.01 ),np.arange(0,1,0.01)) gx1l = gx1.flatten() gx2l = gx2.flatten() gx = np.vstack((gx1l,gx2l)).T #Compute the true classifier boundary gytrue = clf_true.predict(gx) gytrue = gytrue.reshape(gx1.shape) #Plot the true decision surface plt.figure(1,figsize=(5,5)) cmap_light = ListedColormap(['#FFAAAA','#AAAAFF']) plt.pcolormesh(gx1,gx2,gytrue,cmap=cmap_light) plt.title('Ground Truth'); ### Observe that KNN can recover the ground truth¶ • Run the code below to fit KNN models and plot the resulting predictions for training sets of increasing size. • Observe that as the training set gets bigger, the KNN predictions converge to the ground truth. • But note: KNN predictions with smaller datasets are very noisy. Other models can do better with small datasets. 
In [49]: #Show convergence to ground truth N=[10,100,1000,10000] plt.figure(1,figsize=(3*(1+len(N)),3)) cmap_light = ListedColormap(['#FFAAAA','#AAAAFF']) labels=['.r','.b'] #Plot ground truth plt.subplot(1,1+len(N),1) plt.pcolormesh(gx1,gx2,gytrue,cmap=cmap_light) plt.axis('tight'); plt.title('Ground Truth'); for n in range(len(N)): #Sample and label a data set xsamp = np.random.rand(N[n],2) ysamp = clf_true.predict(xsamp) clf_est = neighbors.KNeighborsClassifier(1, weights='uniform',metric='euclidean') clf_est.fit(xsamp, ysamp) #Compute the estimated classifier boundary gyhat = clf_est.predict(gx) gyhat = gyhat.reshape(gx1.shape) err = np.sum(np.sum(np.abs(gyhat-gytrue)))/float(np.prod(gyhat.shape)) #Plot estimate plt.subplot(1,1+len(N),n+2) for i in [0,1]: plt.plot(xsamp[ysamp==i,0],xsamp[ysamp==i,1],labels[i],markersize=1); plt.pcolormesh(gx1,gx2,gyhat,cmap=cmap_light) plt.axis('tight'); plt.title("Estimate N=%d, Err=%.2f"%(N[n],err)) ## Demo 3: Decision Trees¶ Now we will fit decision tree models to the same data set. Select parameters below for the splitting heuristic (either 'gini' or 'entropy') and stopping criteria (max_depth, min_samples_split). In [50]: criterion='gini'; #criterion='entropy'; max_depth=10; min_samples_split=1; ### Fit and visualize the model¶ Now run this code to learn decision trees of increasing depth and visualize the predictions. This gives you a sense of how the recursive splitting proceeds.
In [51]: #Load Data x = data[:,[0,2]] y = data[:,-1] #Prepare grid for plotting decision surface gx1, gx2 = np.meshgrid(np.arange(min(x[:,0]), max(x[:,0]),(max(x[:,0])-min(x[:,0]))/200.0 ), np.arange(min(x[:,1]), max(x[:,1]),(max(x[:,1])-min(x[:,1]))/200.0)) gx1l = gx1.flatten() gx2l = gx2.flatten() gx = np.vstack((gx1l,gx2l)).T for depth in np.hstack(([1,2,3],np.arange(4,max_depth,2))): #Fit the Decision Tree classifer and make predictions clf = tree.DecisionTreeClassifier(criterion=criterion, max_depth=depth,min_samples_split=min_samples_split ) clf.fit(x, y) #Compute a prediction for every point in the grid gyhat = clf.predict(gx) gyhat = gyhat.reshape(gx1.shape) #Plot the results plt.figure(depth) labels=['sr','og','^b'] cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF']) for i in [1,2,3]: plt.plot(x[y==i,0],x[y==i,1],labels[i-1]); plt.xlabel('area'); plt.ylabel('compactness'); plt.pcolormesh(gx1,gx2,gyhat,cmap=cmap_light); plt.colorbar(); plt.clim(0.01,3) plt.axis('tight'); plt.title("Criterion: %s, Max Depth:%d"%(criterion,depth)); plt.show() #Plot the tree #requires pydot package #dot_data = StringIO(); #tree.export_graphviz(clf,out_file=dot_data, # feature_names=['area','compactness']); #graph = pydot.graph_from_dot_data(dot_data.getvalue()); #graph.write_pdf('seed%d.pdf'%(depth)) ;
2017-12-15T04:35:40
{ "domain": "umass.edu", "url": "http://people.cs.umass.edu/~sheldon/teaching/cs335/lec/knn-decision-tree-demo.html", "openwebmath_score": 0.3988993763923645, "openwebmath_perplexity": 6380.290869866497, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9787126500692716, "lm_q2_score": 0.8596637577007393, "lm_q1q2_score": 0.8413637944677987 }
https://math.stackexchange.com/questions/3263633/integrating-int2-0-xex2dx
# Integrating $\int^2_0 xe^{x^2}dx$ Well what I was thinking was to integrate the indefinite integral first. $$u=x^2$$, $$x=\sqrt u$$ $$du=2xdx = 2\sqrt {u} dx$$ $$dx= \frac{1}{2\sqrt{u}}du$$ $$\int xe^{x^2} dx = \int \sqrt{u}\,e^u\,\frac{1}{2\sqrt{u}} du =\frac{1}{2}\int e^u du = \frac{1}{2}e^u =\frac{1}{2}e^{x^2} +C$$ Now I can evaluate $$\frac{1}{2}e^{x^2}\Big|_0^2= \frac{1}{2} e^{4} -\frac{1}{2} e^0 =\frac{1}{2}e^4-1$$ so my answer should be $$\frac{1}{2}e^4-1$$ Is this correct? It's been a while since I've done stuff like this. • Yes, it is correct. Jun 15 '19 at 19:56 • If you're not sure whether your antiderivative is correct, differentiate it. If you get $x e^{x^2}$, it's correct. Jun 15 '19 at 19:58 • You're missing a pair of parentheses in the evaluation. Jun 15 '19 at 20:06 Also, one might set $$g(x) = e^{x^2}; \tag 1$$ then $$g'(x) = 2xe^{x^2}; \tag 2$$ then $$\displaystyle \int_0^2 xe^{x^2} \; dx = \dfrac{1}{2} \int_0^2 g'(x) \; dx = \dfrac{1}{2}(g(2) - g(0))$$ $$= \dfrac{1}{2}(e^{2^2} - e^0) = \dfrac{1}{2}(e^4 - 1) = \dfrac{1}{2}e^4 - \dfrac{1}{2}. \tag{3}$$ If one wants to use indefinite integrals, we write $$\displaystyle \int xe^{x^2} \; dx = \dfrac{1}{2} \int g'(x) \; dx = \dfrac{1}{2}g(x) + C = \dfrac{1}{2}e^{x^2} + C, \tag 4$$ and then proceed to take $$\dfrac{1}{2}(g(2) - g(0)) = \dfrac{1}{2}e^4 - \dfrac{1}{2}; \tag 5$$ the constant of integration $$C$$ of course has been cancelled out of this expression. You mean $$\frac12 e^4-\frac12$$.
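A numeric cross-check of the corrected value (a Python sketch, not part of the thread): the midpoint rule on a fine grid should reproduce $(e^4-1)/2$, not $e^4/2 - 1$.

```python
import math

# Midpoint-rule approximation of the definite integral of x*e^{x^2} on [0, 2].
n = 100_000
h = 2.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h              # midpoint of the i-th subinterval
    total += x * math.exp(x * x) * h

exact = (math.exp(4) - 1) / 2
print(total, exact)                # both come out near 26.8
assert abs(total - exact) < 1e-4
```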
2021-09-25T07:13:37
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3263633/integrating-int2-0-xex2dx", "openwebmath_score": 0.877921462059021, "openwebmath_perplexity": 365.8322709010761, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9787126525529015, "lm_q2_score": 0.8596637541053281, "lm_q1q2_score": 0.841363793084011 }
https://math.stackexchange.com/questions/4455916/why-arent-int-0-pi-int-11erdr-d-theta-and-int-02-pi-int-01erdr
# Why aren't $\int_0^\pi\int_{-1}^1e^rdr\,d\theta$ and $\int_0^{2\pi}\int_0^1e^rdr\,d\theta$ equal? Doesn't this violate the Change of Variables thm? Why aren't these two integrals equal? $$\int_0^\pi \int_{-1}^{1} e^r \,dr\,d\theta \qquad\neq\qquad\int_0^{2\pi} \int_{0}^{1} e^r \,dr\,d\theta$$ Let me explain why I'm asking. This is the change of variables theorem for double integrals: Now, suppose that we have the unit disc $$D \subset R^2$$ and the transformation $$T$$ given by $$x=r\cos\theta$$ and $$y=r\sin\theta$$. Then the rectangle in the $$r\theta$$-plane $$-1 \leq r \leq 1, 0 \leq \theta < \pi$$ maps injectively to the unit disc under $$T.$$ So in theory, it seems like we should be able to integrate in polar coordinates using this region $$-1 \leq r \leq 1, 0 \leq \theta < \pi$$, in addition to the "usual" region $$0 \leq r \leq 1, 0 \leq \theta < 2\pi$$. Then why aren't the above two integrals equal, and more importantly, why does this not violate the change of variables theorem? • If you want to use the change of variable theorem, you need to propose a change of variables $(x,y)\mapsto (u(x,y),v(x,y))$. May 23 at 2:55 • How can $r$ be negative? – lcv May 23 at 3:17 Let's start from the "usual" region version and work backwards to get an integral in terms of $$x$$ and $$y.$$ One simple way to do this with the polar transformation we're used to is to simply factor out an $$r$$ for our Jacobian: \begin{aligned} \int_0^{2\pi} \int_0^1 e^r dr d\theta &= \iint_R \frac{e^r}{r} r dr d\theta \\ &= \iint_D \frac{e^{\sqrt{x^2 + y^2}}}{\sqrt{x^2 + y^2}} dy dx \end{aligned} noting that we can say $$r = \sqrt{x^2+y^2}$$ here because $$r$$ is always positive in our region. 
However, if we want to solve this same integral using the other parametrization of the unit disc then we have to note that because $$r$$ is negative over some parts of the region, we have to use $$\sqrt{x^2 + y^2} = |r|$$ so we end up with a slightly different integrand: \begin{aligned} \iint_D \frac{e^{\sqrt{x^2 + y^2}}}{\sqrt{x^2 + y^2}} dy dx &= \iint_{R_2} \frac{e^{|r|}}{|r|} |r| dr d\theta\\ &= \int_0^{\pi} \int_{-1}^1 e^{|r|} dr d\theta \end{aligned} noting that the absolute value around the determinant of the Jacobian can also not be dropped in this case. Now, noting that the integrand is even in $$r$$, we will see that the result of this integral will match the result of the first. \begin{aligned} \int_0^{\pi} \int_{-1}^1 e^{|r|} dr d\theta &= \left(\int_0^\pi d\theta\right)\left(\int_{-1}^1 e^{|r|} dr\right)\\ &= \pi \left(2 \int_0^1 e^{|r| }dr\right)\\ & = \left(\int_0^{2\pi} d\theta\right)\left(\int_0^1 e^r dr\right)\\ & = \int_0^{2\pi}\int_0^1 e^r dr d\theta \end{aligned} So ultimately, the reason that the two proposed integrals don't match is simply that they don't correspond to each other. Why aren't these two integrals equal? $$\int_0^\pi \int_{-1}^{1} e^r \,\mathrm dr\,\mathrm d\theta \ne \int_0^{2\pi} \int_{0}^{1} e^r \,\mathrm dr\,\mathrm d\theta\tag1$$ The common integrand $$e^r$$ does not vary the same way over the two different integration domains (let's call them $$S_1$$ and $$S_2,$$ respectively), which merely have the same measure and geometric representation. Consequently, the two integrals are not guaranteed to be equal. 
Indeed, taking their difference: \begin{align}&\int_0^{2\pi} \int_{0}^{1} e^r \,\mathrm dr\,\mathrm d\theta -\int_0^\pi \int_{-1}^{1} e^r \,\mathrm dr\,\mathrm d\theta\\ ={}& \left(\int_0^\pi \int_{0}^{1} e^r \,\mathrm dr\,\mathrm d\theta+\int_\pi^{2\pi} \int_{0}^{1} e^r \,\mathrm dr\,\mathrm d\theta\right) -\left(\int_0^\pi \int_{-1}^{0} e^r \,\mathrm dr\,\mathrm d\theta+\int_0^\pi \int_{0}^{1} e^r \,\mathrm dr\,\mathrm d\theta\right)\\ ={}&\int_\pi^{2\pi} \int_{0}^{1} e^r \,\mathrm dr\,\mathrm d\theta-\int_0^\pi \int_{-1}^{0} e^r \,\mathrm dr\,\mathrm d\theta\\ ={}& \pi\left(\int_{0}^{1} e^r \,\mathrm dr -\int_{-1}^{0} e^r \,\mathrm dr\right)\\ ={}&\pi\,(1.72-0.63)\\ \ne{}&0.\\\end{align} So in theory, it seems like we should be able to integrate in polar coordinates using this region $$-1 \leq r \leq 1, 0 \leq \theta < \pi$$ Only $$S_2,$$ but not $$S_1,$$ is in polar coordinates. We can consider the entire inequation $$(1)$$ to be residing in a “coordinate system” that is simply not isomorphic to $$\mathbb R^2$$ (and thus maps the given geometric region to multiple integration domains). As such, notice that to fill in this blank $$\int_0^{2\pi} \int_{0}^{1} e^r \,\mathrm dr\,\mathrm d\theta = \int_0^\pi \int_{-1}^{1} \fbox{\phantom{filler}}\,\mathrm dr\,\mathrm d\theta,$$ just replace each instance of $$r$$ with $$|r|.$$ why does this not violate the change of variables theorem?
This theorem isn't necessary here, but can be invoked via \begin{align}x&\color{red}=r\left|\cos\theta\right|,\\y&\color{red}=r\sin\theta,\\&f\Big(g(r,\theta),h(r,\theta)\Big)\det \left| \frac{\partial(x,y)}{\partial(r,\theta)} \right|\color{red}=e^{|r|}\ne e^r;\end{align} then $$\int_0^{2\pi} \int_{0}^{1} e^r \,\mathrm dr\,\mathrm d\theta \\=\int_0^{\frac\pi2} \int_{0}^{1} e^r \,\mathrm dr\,\mathrm d\theta +\int_{\frac\pi2}^{\frac{3\pi}2} \int_{0}^{1} e^r \,\mathrm dr\,\mathrm d\theta +\int_{\frac{3\pi}2}^{2\pi} \int_{0}^{1} e^r \,\mathrm dr\,\mathrm d\theta \\\color{red}=\int_0^{\frac\pi2} \int_{0}^{1} e^{|r|} \,\mathrm dr\,\mathrm d\theta +\left(\int_{\frac\pi2}^\pi \int_{0}^{1} e^{|r|} \,\mathrm dr\,\mathrm d\theta +\int_{0}^{\frac{\pi}2} \int_{-1}^{0} e^{|r|} \,\mathrm dr\,\mathrm d\theta\right) +\int_{\frac{\pi}2}^{\pi} \int_{-1}^{0} e^{|r|} \,\mathrm dr\,\mathrm d\theta \\=\int_0^\pi \int_{-1}^{1} e^{|r|} \,\mathrm dr\,\mathrm d\theta.$$
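The reconciliation both answers arrive at can be checked numerically (a sketch, not from the thread): with the corrected integrand $e^{|r|}$, the two parametrizations agree, while the naive $e^r$ version really is smaller.

```python
import math

# Midpoint-rule double integrals over a theta x r rectangle.
def midpoint_double(f, t0, t1, r0, r1, n=400):
    ht, hr = (t1 - t0) / n, (r1 - r0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * ht
        for j in range(n):
            r = r0 + (j + 0.5) * hr
            total += f(t, r) * ht * hr
    return total

usual = midpoint_double(lambda t, r: math.exp(r), 0, 2 * math.pi, 0, 1)
fixed = midpoint_double(lambda t, r: math.exp(abs(r)), 0, math.pi, -1, 1)
naive = midpoint_double(lambda t, r: math.exp(r), 0, math.pi, -1, 1)

print(usual, fixed, naive)   # usual and fixed agree; naive is smaller
assert abs(usual - fixed) < 1e-3
assert naive < usual - 1     # the two proposed integrals really differ
```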
2022-06-26T12:15:38
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/4455916/why-arent-int-0-pi-int-11erdr-d-theta-and-int-02-pi-int-01erdr", "openwebmath_score": 0.9912928342819214, "openwebmath_perplexity": 334.91910333217936, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9787126519319941, "lm_q2_score": 0.8596637541053281, "lm_q1q2_score": 0.8413637925502394 }
http://mathoverflow.net/questions/94083/capped-binomial-random-variables/94132
# Capped binomial random variables Consider a random variable $X = \sum_{i=1}^{m} X_i$, where each $X_i$ is an indicator random variable that is $1$ with probability $k/m$ and $0$ otherwise. We are interested in the quantity $S_X(m) = E[\min(X,k)]$. The motivation is that we have a bin of capacity $k$. At each step, a ball is thrown into the bin with probability $k/m$. At the end of $m$ steps, we ask how many balls are in the bin. This quantity, in expectation, is precisely $S_X(m)$. Now suppose we throw "fractional balls", i.e., instead of having $\{0,1\}$ random variables $X_i$s, we have random variables $Y_i$ that have support $[0,1]$. We retain the same expectation, i.e., $E[Y_i] = k/m = E[X_i]$ and the $Y_i$'s are iid. Let $S_Y(m) = E[\min(Y,k)]$, where $Y = \sum_{i=1}^{m} Y_i.$ The question I am interested in is whether $S_Y(m) \geq S_X(m)$? I have an intuition for why this must be true: the variance of $X_i$ is at least as much as that of $Y_i$ --- this is because $E[X_i^2] = E[X_i]$, where as $E[Y_i^2] \le E[Y_i]$. Thus, one would expect a random variable with smaller variance (namely $Y = \sum_i Y_i$) to be more concentrated around the mean than a random variable with larger variance (namely $X = \sum_i X_i$), thus implying the result. Roughly speaking, one would expect the "wastage" of balls due to overflowing the capacity $k$ of the bin occurs lesser when we have fractional balls than integer balls. However this is not a definitive proof. Is there a simple proof for this? - The answer to your question is positive and, for example, follows immediately from Corollary 4 of C. A. León and F. Perron (2003), Extremal properties of sums of Bernoulli random variables, Statistics and Probability Letters, vol. 62, 345–354. A slight specialization of the corollary states: Let $\newcommand{\E}{\mathbb E} Y = Y_1 + \cdots + Y_n$ be a sum of iid random variables taking values in $[0,1]$ with mean $\E Y_i = \mu$ and let $X \sim \mathrm{Bin}(n,\mu)$. 
For any convex function $g : [0,n] \to \mathbb R$, $$\E g(Y) \leq \E g(X) \ .$$ Your result follows by noting that $g(x,a) = -\min(x,a)$ is convex in $x$ for any $a \in \mathbb R$. The paper appeals to other references, but an easy and direct proof can be constructed, so we may as well give a version of it here as several short lemmas. In what follows below, we assume $(Y_i)$ are iid on $[0,1]$ with distribution function $F$ and mean $\mu$ and that $(X_i)$ are iid Bernoulli random variables also with mean $\mu$. The function $g$ is assumed to be an arbitrary convex function defined on the appropriate domain. Lemma 1: If $Y \sim F$ and $X \sim \mathrm{Ber}(\mu)$, then $\E g(Y) \leq \E g(X)$. Proof: $Y = 1 \cdot Y + 0 \cdot (1-Y)$, so, by convexity, $\E g(Y) \leq g(1) \E Y + g(0) \E (1-Y) = \E g(X)$. Lemma 2: $\E g(Y_1 + Y_2) \leq \E g(X_1 + X_2)$. Proof: Assume wlog that $(Y_1,Y_2)$ is independent of $(X_1,X_2)$. If $g$ is convex, then so is $g(y+\cdot)$, hence $$\E g(Y_1 + Y_2) = \int_0^1 \E g(y + Y_2) \ \mathrm dF \leq \int_0^1 \E g(y + X_2) \ \mathrm dF = \E g(Y_1 + X_2) \leq \E g(X_1 + X_2) \ .$$ Corollary: Let $Y = Y_1 + \cdots + Y_n$ and $X = X_1 + \cdots + X_n$, where $X \sim \mathrm{Bin}(n,\mu)$ as in the problem. Then, $\E g(Y) \leq \E g(X)$. Proof: Extend the previous lemma by induction. The desired result now follows by taking the aforementioned choice for $g$ with $a = n \mu$. - Thanks for the nice reference, and for taking the time to write a complete proof. – Balu Apr 16 '12 at 14:36 You might look into convex orders. I think (maybe) Y is larger than X in that ordering, and it's preserved under convolution. - $Y$ is smaller than $X$ in the convex order, by bounding from above any convex function on $[0,1]$ by the secant line passing through the points $(0,g(0))$ and $(1,g(1))$; this is Lemma 1 in cardinal's answer.
And yes, convex order inequalities between laws on the line (or a measurable vector space over the reals) "can be multiplied" with respect to convolution; this is Lemma 2 from above formulated in its natural generality. Books treating such things include \textit{Stochastic Orders} by Shaked and Shanthikumar (2007) and \textit{Comparison Methods ...} by Müller and Stoyan (2002). – Lutz Mattner Mar 2 '13 at 15:36
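The inequality $S_Y(m) \ge S_X(m)$ can also be eyeballed by simulation (a Monte-Carlo sketch, not a proof; the uniform choice for $Y_i$ below is one illustrative fractional distribution, not from the question):

```python
import random

# Compare E[min(X, k)] for Bernoulli summands against E[min(Y, k)] for
# fractional summands Y_i ~ Uniform[0, 2k/m], which share the mean k/m.
random.seed(0)
m, k, trials = 20, 4, 50_000
p = k / m

def mean_capped(draw):
    """Estimate E[min(sum of m iid draws, k)] by simulation."""
    total = 0.0
    for _ in range(trials):
        total += min(sum(draw() for _ in range(m)), k)
    return total / trials

S_X = mean_capped(lambda: 1.0 if random.random() < p else 0.0)  # Bernoulli
S_Y = mean_capped(lambda: random.uniform(0.0, 2 * p))           # fractional
print(S_X, S_Y)   # S_Y comes out larger, as the convex-order result predicts
assert S_Y >= S_X
```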
2016-05-02T23:14:51
{ "domain": "mathoverflow.net", "url": "http://mathoverflow.net/questions/94083/capped-binomial-random-variables/94132", "openwebmath_score": 0.9838100671768188, "openwebmath_perplexity": 179.7142072907256, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9787126494483641, "lm_q2_score": 0.8596637559030338, "lm_q1q2_score": 0.84136379217459 }
https://www.physicsforums.com/threads/lorentz-contraction-of-moving-line-of-charge.710182/
# Lorentz Contraction of Moving Line of Charge 1. Sep 13, 2013 ### leonardthecow 1. The problem statement, all variables and given/known data A point charge +q rests halfway between two steady streams of positive charge of equal charge per unit length λ, moving in opposite directions and each at c/3 relative to the point charge. With equal electric forces on the point charge, it would remain at rest. Consider the situation from a frame moving right at c/3. a) Find the charge per unit length of each stream in this frame. b) Calculate the electric force and the magnetic force on the point charge in this frame, and explain why they must be related the way they are. 2. The attempt at a solution I know that the charge density of each moving line of charge will change due to relativistic length contraction. I also managed to show that the new charge density is simply the original charge density multiplied by the Lorentz factor: L = L0/γ, where L is the contracted length; λ0 = Q/L0, λ = Q/L, where lambda is the new charge density. Solve each for Q, set them equal, and rearrange to obtain λ/λ0 = L0/L = γ, ∴ λ = γλ0. But here's where I'm stuck. I know I now need to find the Lorentz factor γ for each moving line of charge. I also know that γ = (1 - v^2/c^2)^(-1/2). In this formula, I need to know v, which I thought would be computed using a Lorentz transformation for velocity, namely u' = (u - v)/(1 - uv/c^2). I do have the final answer to part a, which is given as λ√(8)/3 and 5λ√(8)/12 in the textbook. However, I can't produce these values, because when I insert the values that I thought would be correct into the velocity transformation equation for u and v (namely c/3 for each), one u' would be equal to 0, yielding a Lorentz factor of 1, and the other u' would be incorrect as well. I think I understand the method behind solving the problem (although I could be wrong), but it seems like I'm having trouble conceptually trying to figure out the velocities relative to one another.
Any help would be greatly appreciated, thanks! 2. Sep 13, 2013 ### TSny I think you have the right idea. In order to see where you might be making a mistake, we need to see more details of your calculation. What did you get for the speed of each charged line in the new frame? EDIT: You are correct that you can use the relativistic formula for addition of velocities. Last edited: Sep 13, 2013 3. Sep 13, 2013 ### leonardthecow I did the following: For the line of charge moving in the same direction as the frame: Let u1 = velocity of stream of particles moving in same direction as moving frame, relative to stationary point charge. Let v = velocity of frame relative to stationary point charge. Let u1' = velocity of stream of particles moving in same direction as moving frame, relative to moving frame. u1' = (u1 - v)/(1 - u1·v/c^2) u1' = (c/3 - c/3)/(1 - (c/3)(c/3)/c^2) = 0 γ_u1 = (1 - 0/c^2)^(-1/2) = 1 ∴ λ1 = λ0. (But I know this is incorrect) Similarly, for the line of charge moving in the opposite direction as the frame: Let u2 = velocity of stream of particles moving in opposite direction as moving frame, relative to stationary point charge. Let v = velocity of frame relative to stationary point charge. Let u2' = velocity of stream of particles moving in opposite direction as moving frame, relative to moving frame. u2' = (-u2 - v)/(1 - u2·v/c^2) u2' = (-c/3 - c/3)/(1 - (c/3)(c/3)/c^2) u2' = (-2c/3)/(1 - (c^2/9)/c^2) u2' = (-2c/3)/(1 - 1/9) u2' = (-2c/3)/(8/9) = -18c/24 = -3c/4 γ_u2 = (1 - (-3c/4)^2/c^2)^(-1/2) γ_u2 = (1 - (9/16))^(-1/2) γ_u2 = (7/16)^(-1/2) ∴ λ2 = (7/16)^(-1/2)·λ0. (But I know this is incorrect) Is it a sign error in setting up the relative velocity transformations? 4. Sep 13, 2013 ### TSny This is correct. But you want to express λ1 in terms of λ rather than λ0. In the statement of the problem, the symbol λ stands for the charge density of each line in the original frame of reference (where each line is moving at c/3). I think there is a sign error in the second line. See if you can spot it. 5. Sep 13, 2013 ### leonardthecow Ah I see it now!
So, because of length contraction, λ = Q/L, L = L0/γ, λ = Qγ/L0, λ/γ = Q/L0 = λ0. For the moving line of charge, γ = (1 - (c/3)^2/c^2)^(-1/2) γ = (1 - 1/9)^(-1/2) γ = (8/9)^(-1/2) So, because λ1 = λ0, λ1 = λ/γ ∴ λ = (8/9)^(-1/2)·λ1, i.e. λ1 = √(8)λ/3. I'm not sure what the sign error would be in the second line, my logic is that v is still in the same direction, so its sign wouldn't change, but now the stream of charge I'm considering is flowing in the opposite direction as the first. Should it instead be a positive u plus a positive v in the numerator? 6. Sep 13, 2013 ### TSny Good. The numerator is fine. u1 is negative. v is positive. The sign error occurs when you substitute values in the denominator of u1' = (u1 - v)/(1 - u1·v/c^2). 7. Sep 13, 2013 ### yands I would like to jump in and help. However I really don't get what you mean by steady stream of positive charge of equal linear charge density. :( 8. Sep 13, 2013 ### yands Do you mean <<<<<<rod moving left point charge rod moving right >>>>>> 9. Sep 13, 2013 ### leonardthecow So, assuming I'm not making another dumb mistake, here's where I end up: u2' = (-c/3 - c/3)/(1 - (-c/3)(c/3)/c^2) u2' = (-2c/3)/(1 + (c^2/9)/c^2) u2' = (-2c/3)/(1 + 1/9) u2' = (-2c/3)/(10/9) = -18c/30 = -3c/5 γ_u2 = (1 - (-3c/5)^2/c^2)^(-1/2) γ_u2 = (1 - (9/25))^(-1/2) γ_u2 = (16/25)^(-1/2) γ2 = 5/4 ∴ λ = 4λ2/5 .....but that still doesn't work out. Sorry to be a bother here, I just can't see the error 10. Sep 13, 2013 ### leonardthecow And hey yands, yeah I believe the set up is meant to be a line of charge moving left, a line of charge moving right, a stationary charge in between, and a moving frame. The way I set it up was one line of positive charge moving in the -x direction, one in the +x direction, and the moving frame also moving in the +x direction 11. Sep 13, 2013 ### TSny Looks good. Oops. Can you spot the error with the left hand side of this equation? Did you really want to use the charge density λ of the lines in the original frame where they are moving at c/3? 12.
Sep 13, 2013 ### yands The rod moving with the frame will have the same value of its linear charge density since in this frame its speed is zero. The other rod would have a negative speed according to the Lorentz transformation U' = (U - v)/(1 - Uv/c^2) Last edited: Sep 13, 2013 13. Sep 13, 2013 ### yands Calculating the Lorentz factor µ for the second rod in the frame of reference of the first rod to find L' = Lµ^-1 Then since charge is invariant Q = λL = λ'L' 14. Sep 13, 2013 ### leonardthecow Ohhhhhh, okay, so let me think this through. Really, it should be something like λ2/γ2 = λ/γ1, because not only is the line of charge Lorentz contracted because it is moving at c/3, but also because it is moving relative to the moving frame, hence why both Lorentz factors are involved. I think solving for λ2 then yields the correct answer, because then you would have λ2 = λγ2/γ1, λ2 = 5√(8)λ/12. Phew!! I think all the subscripts got me lost, thanks a bunch for helping me out though, would've been lost without the guidance 15. Sep 13, 2013 Good work!
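For readers following along, the numbers in this thread are easy to confirm programmatically. The Python sketch below is our own check (with c and λ normalized to 1) and reproduces the textbook answers λ√8/3 and 5λ√8/12:

```python
from math import sqrt, isclose

c = 1.0            # work in units where c = 1
u = c / 3          # stream speed in the original frame
v = c / 3          # speed of the new frame

def gamma(w):
    # Lorentz factor for speed w
    return 1.0 / sqrt(1.0 - (w / c) ** 2)

lam = 1.0                      # λ: density of each stream in the original frame
lam0 = lam / gamma(u)          # proper (rest-frame) density λ0

# stream moving with the frame: velocity addition gives u1' = 0
u1p = (u - v) / (1 - u * v / c**2)
lam1 = gamma(u1p) * lam0       # = λ0 = √8 λ / 3

# stream moving against the frame: u2' = -3c/5 (note the -u in the denominator)
u2p = (-u - v) / (1 - (-u) * v / c**2)
lam2 = gamma(u2p) * lam0       # = 5√8 λ / 12

assert isclose(u2p, -0.6 * c)
assert isclose(lam1, sqrt(8) / 3 * lam)
assert isclose(lam2, 5 * sqrt(8) / 12 * lam)
```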
2017-10-22T07:30:14
{ "domain": "physicsforums.com", "url": "https://www.physicsforums.com/threads/lorentz-contraction-of-moving-line-of-charge.710182/", "openwebmath_score": 0.8011196255683899, "openwebmath_perplexity": 589.0182513351446, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.978712650690179, "lm_q2_score": 0.8596637541053281, "lm_q1q2_score": 0.8413637914826959 }
https://math.stackexchange.com/questions/998969/show-inclusion-of-lp-spaces-in-a-space-of-finite-measure
Show inclusion of $L^p$ spaces in a space of finite measure Let $1 \leq p_1 \leq p_2 \leq +\infty$. Show that in a space of finite measure we have that $L^{p_2} \subset L^{p_1}$. Could you give me some hints on what I could do?? Let $F = |f|^{p_1}$ and $G = 1$. Apply the Hölder inequality $||FG||_1 \leq ||F||_p ||G||_q$ where $p = p_2/p_1 > 1$ and $1/p + 1/q = 1$. Note that $||G||_q = \mu(X)^{1-p_1/p_2}$ is finite as the underlying measure space $(X,\mu)$ is finite. This now gives you the bound you're looking for. Yes? What we have so far is $$\int |f|^{p_1} d\mu = \| FG \|_1 = \| |f|^{p_1} \cdot 1 \|_1 \ \leq \ \| \, |f|^{p_1} \, \|_{p_2/p_1} \cdot \mu(X)^{1- p_1/p_2} \ \ \ \ \ --(*)$$ Now $$\| \ |f|^{p_1} \ \|_{p_2/p_1} = \left( \int \left(|f|^{p_1}\right)^{p_2/p_1} d\mu \right)^{p_1/p_2} = \left( \int |f|^{p_2} d\mu \right)^{p_1/p_2}$$ Substitute this expression back into (*) and take the $p_1$-th root of both sides, $$\left( \int |f|^{p_1} d\mu \right)^{1/p_1} \ \leq \ \left( \int |f|^{p_2} d\mu \right)^{1/p_2} \cdot \left(\mu(X)^{1- p_1/p_2}\right)^{1/p_1}$$ That is $$\| f \|_{p_1} \leq C \| f \|_{p_2}$$ where $C = \left(\mu(X)^{1- p_1/p_2}\right)^{1/p_1}$. • Could you explain to me why we take $p=p_2/p_1$?? Also, how did you find $||G||_q$?? – Mary Star Nov 10 '14 at 10:57 • To show $L^{p_2} \subset L^{p_1}$ we want to show that $f \in L^{p_2} \rightarrow f \in L^{p_1}$. So if we can show that $$\| f \|_{p_1} \geq C \| f \|_{p_2}$$ for some constant $C$ we're done. The choice of $F$ and $G$ I've listed gives it to us. It's not obvious, but it works. $\|G\|_q$ is going to be just the measure of the space $X$ raised to the power $1/q = 1 - p_1/p_2$ because $$\int_X 1^q \ d\mu = \mu(X) \rightarrow \|G\|_q = \left( \int_X 1^q \ d\mu \right)^{1/q} = \mu(X)^{1/q} = \mu(X)^{1 - p_1/p_2}$$ – Simon S Nov 10 '14 at 13:19 • Could you explain to me further why we have to show that $$||f||_{p_1}\geq C||f||_{p_2}$$ ??
– Mary Star Nov 10 '14 at 15:37 • Ugh, sorry reversed the direction of the inequality there. We want to show $\|f\|_{p_2}$ bounded implies $\|f\|_{p_1}$ bounded. That is $$\|f\|_{p_1} \leq C \|f\|_{p_2}$$ – Simon S Nov 10 '14 at 15:39 • I got it!! What I still don't understand is how we show the inequality $||f||_{p_1}\leq C ||f||_{p_2}$... Replacing $F$ and $G$ in Hölder's inequality we get $$\| |f|^{p_1}\|_{1}\leq \| |f|^{p_1}\|_{p_2/p_1} \, \mu(X)^{1-p_1/p_2}$$ how do we continue?? – Mary Star Nov 10 '14 at 15:59 Apply Hölder's inequality with conjugate exponents $p_{2}/p_{1}$ and $p_{2}/(p_{2}-p_{1})$. When $p_{2} < \infty$, this gives, for $f \in L^{p_{2}}$ $$\Vert f \Vert_{p_{1}}^{p_{1}} = \int |f|^{p_{1}} \cdot 1 \leq \Vert |f|^{p_{1}} \Vert_{p_{2}/p_{1}} \Vert 1 \Vert_{p_{2}/(p_{2}-p_{1})} = \Vert f \Vert_{p_{2}}^{p_{1}} \mu(X)^{(p_{2}-p_{1})/p_{2}}$$ It's even easier to show when $p_{2} = \infty$. Other than that, I'm not sure how you would show that the containments are proper, though there are straightforward examples for $L^{1}$ and $L^{2}$. • Could you explain why you take these conjugate exponents in Hölder's inequality?? Also you say for $f\in L^{p_2}$ and then you calculate the norm $p_1$, why?? – Mary Star Nov 10 '14 at 16:37
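Note that $\left(\mu(X)^{1-p_1/p_2}\right)^{1/p_1}=\mu(X)^{1/p_1-1/p_2}$. As an illustration (not part of the proof), the inequality can be spot-checked on a finite measure space consisting of finitely many atoms; the particular values in this Python sketch are arbitrary assumptions:

```python
import random

random.seed(0)
# a finite measure space: 50 atoms, each of measure 0.3, and an arbitrary f
atoms = [(random.random(), 0.3) for _ in range(50)]   # (value of f, measure of atom)
mu_X = sum(m for _, m in atoms)                       # total measure, finite

def lp_norm(p):
    # ||f||_p on the atomic measure space
    return sum(abs(fv) ** p * m for fv, m in atoms) ** (1.0 / p)

p1, p2 = 2.0, 5.0
C = mu_X ** (1.0 / p1 - 1.0 / p2)
assert lp_norm(p1) <= C * lp_norm(p2) + 1e-12         # ||f||_{p1} <= C ||f||_{p2}
```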
2019-11-18T21:09:53
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/998969/show-inclusion-of-lp-spaces-in-a-space-of-finite-measure", "openwebmath_score": 0.90184086561203, "openwebmath_perplexity": 266.7642702726622, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9787126463438263, "lm_q2_score": 0.8596637559030338, "lm_q1q2_score": 0.8413637895057313 }
https://math.stackexchange.com/questions/2646186/how-many-different-3-digit-numbers-can-be-formed-if-the-digits-1-2-2
How many different $3$-digit numbers can be formed if the digits $1$, $2$, $2$, $3$, $4$ are placed on separate cards? Any suggestions on solving this problem to get rid of double/over counting? The digits $1$, $2$, $2$, $3$, and $4$ are placed on separate cards. How many different $3$-digit numbers can be formed by arranging the cards? I tried the case where all the digits are distinct, like $1,2,3,4,5$, to make a $3$-digit number, and that gives $5 \cdot 4 \cdot 3 = 60$ possibilities. Now I need to correct for over counting as $2$ is a repeated digit. Any help is appreciated. • Welcome to MathSE. Here is a tutorial on how to typeset mathematics on this site. – N. F. Taussig Feb 11 '18 at 18:07 Consider cases: Three different digits are used: There are four ways to select the hundreds digit, three ways to select the tens digit, and two ways to select the units digit. Hence, there are $4 \cdot 3 \cdot 2 = 24$ such numbers. Two different digits are used: There must be a repeated $2$. Choose two of the three positions for the two $2$s. Choose one of the other three digits for the free position. There are $\binom{3}{2}\binom{3}{1} = 9$ such numbers. Total: Since the two cases above are mutually exclusive and exhaustive, there are $24 + 9 = 33$ three-digit numbers that can be formed with the given digits. Imagine you have $\color{red}{2}$ and $\color{blue}{2}$, so that the 2s are distinct. Every solution with a 2 in it has a corresponding solution where red and blue are swapped. These pairs of solutions should be considered the same; this will get rid of double-counting. So, to avoid double-counting, you need to add up (1) the number of solutions without any twos, plus (2) half the number of solutions containing twos. There are $3! = 6$ solutions that don't contain twos, and there are $60-6=54$ solutions containing twos. Hence the correct number of solutions, avoiding double-counting, is: $$6 + \frac{1}{2}(54) = 6 + 27 = 33$$
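Both counting arguments can be confirmed by brute force: treat the five cards as distinct, enumerate every ordered choice of three, and count the distinct strings that result. A quick Python check:

```python
from itertools import permutations

cards = ['1', '2', '2', '3', '4']          # the five cards, two of them showing 2
ordered = list(permutations(cards, 3))     # 5*4*3 = 60 ordered selections of cards
numbers = {''.join(p) for p in ordered}    # dedupe: swapping the two 2s gives the same number

assert len(ordered) == 60
assert len(numbers) == 33
```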
2019-12-12T03:40:45
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2646186/how-many-different-3-digit-numbers-can-be-formed-if-the-digits-1-2-2", "openwebmath_score": 0.7702440619468689, "openwebmath_perplexity": 213.00600709554945, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9787126457229186, "lm_q2_score": 0.8596637559030338, "lm_q1q2_score": 0.8413637889719595 }
https://math.stackexchange.com/questions/965444/evaluating-an-indefinite-integral-with-an-inverse-trigonometric-function
# Evaluating an indefinite integral with an inverse trigonometric function I'm really stumped on a homework problem asking me to evaluate $\int \frac{\ln 6x\ \sin^{-1}(\ln 6x)}{x}dx$, and after a few hours of trying different approaches I'd definitely be appreciative for a bump in the right direction. As a caveat, I should add that I've already received credit for the assignment, I'm simply looking to fully understand how to complete the integral. Here's what I've done so far: I noticed a good u-substitution, so I let $u = \ln 6x$ and $du = \frac{1}{x}dx$ So I rewrote my integral as $\int u\ \sin^{-1}(u)\ du$ This particular section is allowing me to use formulas for integration so I've chosen: $$\int x^n \sin^{-1}x\ dx = \frac{1}{n+1} \left(x^{n+1}\sin^{-1}x-\int\frac{x^{n+1}dx}{\sqrt{1-x^2}}\right)$$ Which gets me: $$\frac{1}{2} \left(u^2\sin^{-1}u-\int\frac{u^2du}{\sqrt{1-u^2}} \right)$$ Now I use a second formula for integration which states: $$\int \frac{x^2}{\sqrt{a^2-x^2}}dx = -\frac{x}{2}\sqrt{a^2-x^2} + \frac{a^2}{2}\sin^{-1}\frac{x}{a}+C$$ So this brings me to: \begin{align} &\frac{1}{2} \left(u^2\sin^{-1}u-\left(-\frac{u}{2}\sqrt{1-u^2}+\frac{1}{2}\sin^{-1}u\right)\right)+C \\ & = \frac{1}{4} \left( u \, \sqrt{1- u^{2}} + (2 u^{2} -1) \, \sin^{-1}(u) \right) + C \end{align} However, even when I replace $u$ with $\ln 6x$ I can't seem to find a way to get to the answer, which is: $$\frac{1}{4}\left((2 \, \ln^2(6x)-1) \, \sin^{-1}(\ln(6x)) + \ln(6x) \, \sqrt{1-\ln^2(6x)}\right)+C$$ Is my fundamental approach flawed or am I simply missing something in the latter stages of simplification? • When you get to $\int\frac{u^2}{\sqrt{1-u^2}}du$, try making a trig substitution. – user84413 Oct 9 '14 at 17:06 • There were two minor errors, but it was easier to correct your result than type an almost identical result. In general, the process is correct. – Leucippus Oct 9 '14 at 17:51 • Thank you both, this helped a lot.
– user153085 Oct 9 '14 at 20:03 Hints: for the second integral, try the substitution $$u=\sin(t)$$ When you get $$\sin^2(t)$$ in the integrand, use the identity $$\sin^2(t)=\frac12 -\frac12 \cos(2t)$$
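A handy way to double-check the final antiderivative without redoing the algebra is to differentiate it numerically and compare with the integrand. A small Python sketch of our own (valid where $|\ln 6x|<1$, so the sample points below are chosen inside that interval):

```python
from math import log, asin, sqrt

def F(x):
    # the antiderivative quoted above, with u = ln(6x)
    u = log(6 * x)
    return 0.25 * ((2 * u * u - 1) * asin(u) + u * sqrt(1 - u * u))

def integrand(x):
    u = log(6 * x)
    return u * asin(u) / x

h = 1e-6
for x in (0.1, 0.2, 0.3, 0.4):
    # central difference approximation to F'(x)
    numerical_derivative = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numerical_derivative - integrand(x)) < 1e-5
```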
2019-10-14T13:48:09
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/965444/evaluating-an-indefinite-integral-with-an-inverse-trigonometric-function", "openwebmath_score": 0.9940322041511536, "openwebmath_perplexity": 189.96377343603348, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9787126500692716, "lm_q2_score": 0.8596637505099168, "lm_q1q2_score": 0.8413637874300498 }
https://proofwiki.org/wiki/Definition:Set_Partition/Definition_2
# Definition:Set Partition/Definition 2 ## Definition Let $S$ be a set. A partition of $S$ is a set of non-empty subsets $\Bbb S$ of $S$ such that each element of $S$ lies in exactly one element of $\Bbb S$. ## Also defined as Some sources do not impose the condition that all sets in $\Bbb S$ are non-empty. This is more likely an accidental omission than a deliberate attempt to allow $\O$ to be an element of a partition. The point is minor; proofs of partitionhood usually include a demonstration that all elements of such a partition are indeed non-empty.
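For finite sets the definition translates directly into a checkable predicate. The following Python sketch (the function name is ours, purely illustrative) tests exactly the "non-empty blocks, each element in exactly one block" condition:

```python
def is_partition(S, blocks):
    # True iff blocks is a partition of S: non-empty subsets of S such that
    # each element of S lies in exactly one block
    S = set(S)
    blocks = [set(b) for b in blocks]
    if any(not b for b in blocks):          # every block must be non-empty
        return False
    union = set()
    for b in blocks:
        union |= b
    covers = (union == S)                   # every element lies in some block
    once = (sum(len(b) for b in blocks) == len(S))   # ...and in only one
    return covers and once

assert is_partition({1, 2, 3, 4}, [{1, 2}, {3}, {4}])
assert not is_partition({1, 2, 3}, [{1, 2}, {2, 3}])       # 2 lies in two blocks
assert not is_partition({1, 2, 3}, [{1, 2, 3}, set()])     # empty block
```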
2020-12-03T07:59:04
{ "domain": "proofwiki.org", "url": "https://proofwiki.org/wiki/Definition:Set_Partition/Definition_2", "openwebmath_score": 0.9527775049209595, "openwebmath_perplexity": 289.7059598196392, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9787126513110865, "lm_q2_score": 0.8596637487122111, "lm_q1q2_score": 0.8413637867381557 }
https://mathhelpboards.com/threads/systems-of-equations-further-understanding.7180/
# Systems of equations - further understanding #### Yankel ##### Active member Hello again, I have a few more questions regarding systems of equations; I will collect them all here in one post since they are small. 1. The first is the following system: x+2y-3z=a 3x-y+2z=b x-5y+8z=c I need to determine the relation between a, b and c for which the system has infinitely many solutions, a unique solution, or no solution. I did some row operations and got: $\begin{pmatrix} 1 &2 &-3 &a \\ 0 &-7 &11 &b-3a \\ 0 &0 &0 &2a-b+c \end{pmatrix}$ I conclude that when 2a-b+c=0 there are infinitely many solutions, and when it isn't equal to 0, there is no solution. A unique solution is not possible. However, Maple got the same matrix but claims that there is no solution either way...is it a computer bug or am I mistaken? 2. A is a matrix over the field R with dimensions 3x4. The rank of A is 1. How many degrees of freedom (parameters, i.e. t, s, ...) does the family of solutions of Ax=0 have? 3. If Ax=b has infinitely many solutions, then Ax=c has infinitely many solutions or no solution. True or False? Thanks a lot! #### Petrus ##### Well-known member Hello again, I have a few more questions regarding systems of equations; I will collect them all here in one post since they are small. 1. The first is the following system: x+2y-3z=a 3x-y+2z=b x-5y+8z=c I need to determine the relation between a, b and c for which the system has infinitely many solutions, a unique solution, or no solution. I did some row operations and got: $\begin{pmatrix} 1 &2 &-3 &a \\ 0 &-7 &11 &b-3a \\ 0 &0 &0 &2a-b+c \end{pmatrix}$ I conclude that when 2a-b+c=0 there are infinitely many solutions, and when it isn't equal to 0, there is no solution. A unique solution is not possible. However, Maple got the same matrix but claims that there is no solution either way...is it a computer bug or am I mistaken? 2. A is a matrix over the field R with dimensions 3x4. The rank of A is 1. How many degrees of freedom (parameters, i.e. t, s, ...)
does the family of solutions of Ax=0 have? 3. If Ax=b has infinitely many solutions, then Ax=c has infinitely many solutions or no solution. True or False? Thanks a lot! Hello, 1. For there to be infinitely many solutions you want the equations to be linearly dependent. 2. dim ker(A) tells you how many parameters there are. Edit: 1. Yes, what you said looks correct to me; note that I have not checked your progress! Regards, $$\displaystyle |\pi\rangle$$ Last edited: #### Deveno ##### Well-known member MHB Math Scholar Umm...don't trust computers, they lie to you. OBVIOUSLY, there is the solution (0,0,0) when a = b = c = 0. Perhaps not as obviously, there are also the solutions of the form: t(-1,11,7) for any real number t, when a = b = c = 0. Thus given some vector (a,b,c) for which 2a - b + c = 0 (like, for example: (1,1,-1)), we can conclude we have the infinite number of solutions: (2/7,13/7,1) + t(-1,11,7), since: A(2/7,13/7,1) = (2/7 + 26/7 - 3, 6/7 - 13/7 + 2, 2/7 - 65/7 + 8) = (1,1,-1) and A(t(-1,11,7)) = t(A(-1,11,7)) = t(0,0,0) = (0,0,0) So clearly Maple is wrong about the number of solutions. For #2, the rank-nullity theorem tells you that: rank(A) + nullity(A) = 4 (the number of columns), where nullity(A) = dim(ker(A)), so the solution family has 4 - 1 = 3 free parameters. For #3: on these types of problems it's good to play with some simple examples. Try using: $A = \begin{bmatrix}1&0\\0&0 \end{bmatrix}$ $b = \begin{bmatrix}1\\0 \end{bmatrix}$ and $c = \begin{bmatrix}2\\0 \end{bmatrix}$ or $c = \begin{bmatrix}0\\2 \end{bmatrix}$ Now suppose the statement is false: this means that we have a UNIQUE solution x0 of Ax = c, but infinitely many of Ax = b. Pick two DIFFERENT solutions of Ax = b, say x = x1, x2. Since these are different solutions, x1 - x2 ≠ 0, so x1 - x2 + x0 ≠ x0. Now A(x1 - x2 + x0) = A(x1) - A(x2) + A(x0) = b - b + c =....?
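Deveno's claims for question 1 can be verified with exact rational arithmetic; a short Python check of our own, using his particular solution $(2/7, 13/7, 1)$ and null-space direction $(-1, 11, 7)$:

```python
from fractions import Fraction as Fr

A = [[1, 2, -3],
     [3, -1, 2],
     [1, -5, 8]]

def matvec(M, x):
    # exact matrix-vector product
    return [sum(m * v for m, v in zip(row, x)) for row in M]

# (-1, 11, 7) spans the solution line of Ax = 0
assert matvec(A, [-1, 11, 7]) == [0, 0, 0]

# particular solution for (a, b, c) = (1, 1, -1), which satisfies 2a - b + c = 0
x0 = [Fr(2, 7), Fr(13, 7), Fr(1)]
assert matvec(A, x0) == [1, 1, -1]
```

So for any right-hand side with 2a - b + c = 0 the solution set is a whole line, confirming that Maple's "no solution either way" is wrong.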
2021-09-22T14:42:46
{ "domain": "mathhelpboards.com", "url": "https://mathhelpboards.com/threads/systems-of-equations-further-understanding.7180/", "openwebmath_score": 0.7576556205749512, "openwebmath_perplexity": 703.2218095518988, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9787126488274565, "lm_q2_score": 0.8596637487122111, "lm_q1q2_score": 0.841363784603069 }
https://math.stackexchange.com/questions/7643/produce-an-explicit-bijection-between-rationals-and-naturals
# Produce an explicit bijection between rationals and naturals? I remember my professor in college challenging me with this question, which I failed to answer satisfactorily: I know there exists a bijection between the rational numbers and the natural numbers, but can anyone produce an explicit formula for such a bijection? • Do you need a formula or does the picture and explanation in en.wikipedia.org/wiki/… suffice? See also en.wikipedia.org/wiki/… – lhf Oct 24 '10 at 3:10 • I wasn't familiar with pairing functions, so let me look at that more closely. My professor insisted, though, that I come up with a formula, and of course that would also require that equivalent pairs (in the rational number sense) shouldn't get counted more than once. – Alex Basson Oct 24 '10 at 4:55 • @lhf. Maybe you should post your comment as an answer; otherwise, it's not unlikely that this question remains unanswered. – d.t. Oct 24 '10 at 5:21 • Could you provide a list of features that you consider legitimate to include in your formula? Often when these questions are posed, responses are met with "that doesn't count as a formula." – Douglas S. Stones Oct 24 '10 at 5:43 • Don't know if it would count as "explicit" but every rational number occurs exactly once in the Calkin-Wilf sequence en.wikipedia.org/wiki/Calkin%E2%80%93Wilf_tree – Jyotirmoy Bhattacharya Oct 24 '10 at 6:42 We will first find a bijection $$h_{+}:\mathbb Z^+\to \mathbb Q^+$$. From there, we easily get a bijection $$h:\mathbb Z\to \mathbb Q$$ by defining: $$h(n)=\begin{cases}h_{+}(n)&n>0\\ 0&n=0\\ -h_{+}(-n)&n<0 \end{cases}$$ From there, we can use any of the bijections $$\mathbb N\to\mathbb Z$$ to get our bijection between $$\mathbb N$$ and $$\mathbb Q$$. (We'll need a specific such bijection below, $s$.)
Now, every positive integer can be written uniquely as $$p_1^{a_1}p_2^{a_2}\cdots$$, where $p_1=2,p_2=3,p_3=5,\dots$ is the sequence of all primes, and the $a_i$ are non-negative integers, non-zero for only finitely many $i$s. Similarly, every positive rational number can be written uniquely as $$p_1^{b_1}p_2^{b_2}\cdots$$ where the $b_i$ are integers and only finitely many of the $b_i$ are non-zero. So define $s:\mathbb N\to\mathbb Z$ (where we take $\mathbb N$ to include $0$): $$s(n)= (-1)^n\left\lfloor\frac{n+1}{2}\right\rfloor$$ The sequence $s(0),s(1),s(2),s(3),\dots$ would be $0,-1,1,-2,2,\dots$, and this is a bijection from $\mathbb N$ to $\mathbb Z$. The only properties we really need for $s$ are that $s$ is a bijection and $s(0)=0$. Then for any $n=p_1^{a_1}p_2^{a_2}\cdots\in\mathbb Z^+$, define $$h_{+}(n)=p_1^{s(a_1)}p_2^{s(a_2)}\cdots$$ This then defines our bijection $h_{+}:\mathbb Z^+\to \mathbb Q^{+}$. A potentially interesting feature of $h_+$ is that it is multiplicative - that is, if $\gcd(m,n)=1$ then $h_{+}(mn)=h_+(m)h_{+}(n).$ • Why hasn't this been upvoted more? Good answer! (+1) – fancynancy Feb 16 '15 at 17:46 • For your bijection between $\mathbb{N}$ and $\mathbb{Q}$, why have you defined it as $\rho_1$ instead of just $\eta$ or something else (I don't see anything to indicate significance of the index)? – fancynancy Feb 16 '15 at 17:52 • @fancynancy I think just because it was a variant of $\rho$. I think the function I now call $\rho_1$ was just called $\rho$, but then I realized I needed the intermediate function, which was more "important" in some way, so I called that $\rho$ and renamed this $\rho_1$. It's just a function name. – Thomas Andrews Feb 16 '15 at 18:13 • That's what I thought--just wanted to make sure. Thanks for clarifying. – fancynancy Feb 16 '15 at 18:17 • Because in each case, only finitely many of the exponents can be non-zero.
@RFZ – Thomas Andrews Nov 7 '17 at 22:52 We will first create a bijection from $\mathbb{N}$ to $\mathbb{Q}^{+}$ and then use this to create a bijection from $\mathbb{N}$ to $\mathbb{Q}$. Step One: Let us first define Stern's diatomic series. This process formalizes the Stern-Brocot tree mentioned above. $a_{1} = 1 \\ a_{2k}=a_{k} \\ a_{2k+1}=a_{k}+a_{k+1}$ To get a feel for this series, let us list out the first few terms. $a_{1}=1 \\ a_{2}=a_{1}=1 \\ a_{3}=a_{1}+a_{2}=1+1=2 \\ a_{4}=a_{2}=1 \\ a_{5}=a_{2}+a_{3}=1+2=3 \\ a_{6}=a_{3}=2 \\ a_{7}=a_{3}+a_{4}=2+1=3 \\ a_{8}=a_{4}=1$ Now to obtain the $n^{th}$ rational number, we define $f: \mathbb{N} \rightarrow \mathbb{Q}^{+}$, by $f(n)= \dfrac{a_{n}}{a_{n+1}}$. Let us list out the first few terms. $f(1)= a_{1}/a_{1+1} = 1/1 \\ f(2)= a_{2}/a_{2+1} = 1/2 \\ f(3)= a_{3}/a_{3+1} = 2/1 \\ f(4)= a_{4}/a_{4+1} = 1/3 \\ f(5)= a_{5}/a_{5+1} = 3/2 \\ f(6)= a_{6}/a_{6+1} = 2/3 \\ f(7)= a_{7}/a_{7+1} = 3/1$ This function enables us to say that the $6^{th}$ rational number is $2/3$. Moreover, this function is a bijection. For proof of this, see Theorem 5.1 here http://faculty.plattsburgh.edu/sam.northshield/08-0412.pdf. Since $f$ is a bijection this implies that $f^{-1}$ exists. That means given a rational number we can find the corresponding natural number. For example suppose you have a fraction, say it is $1/4$. Can we determine the $n$ such that $f(n)=1/4$? The answer is a resounding yes. Given a positive rational number, $q \in \mathbb{Q}$, the $n$ such that $f(n)=q$ is found by $n=f^{-1}(q)$. This function, $f^{-1}$, is given as follows: $f^{-1}(1)=1 \\ f^{-1}(q)= 2f^{-1} \bigg(\dfrac{q}{1-q} \bigg) ~ \text{if} ~ q<1 \\ f^{-1}(q) = 2f^{-1}(q-1)+1 ~\text{if}~ q>1$ As an example, we see from above that $f(5)={3/2}$. Let us plug $(3/2)$ into $f^{-1}$ and see if we get 5. 
$f^{-1}(3/2)=2f^{-1} \bigg(\dfrac{3/2}{1-(3/2)} \bigg)+1=2f^{-1} \bigg(\dfrac{1}{2} \bigg)+1.$ A quick calculation yields that $f^{-1} \bigg(\dfrac{1}{2} \bigg)=2$ and so we get $f^{-1}(3/2)=2f^{-1} \bigg(\dfrac{1}{2} \bigg)+1=2(2)+1=5$. Step Two: We showed there exists a bijection between $\mathbb{N}$ and $\mathbb{Q}^{+}$. We now attempt to show there exists an explicit bijection between $\mathbb{N}$ and $\mathbb{Q}$. Using the work done in Step One, it appears easier to first create a bijection between $\mathbb{Z}$ and $\mathbb{Q}$. The reason for doing so is because we have already created a bijection from the positive integers (natural numbers) to the positive rationals. So it only seems natural that by adding in the negative integers, we can map them to the negative rationals and thus obtain a bijection. We do this as follows: $$g(z) = \begin{cases} \dfrac{a_{z}}{a_{z+1}}, & \text{if } z>0 \\ \\ - \dfrac{a_{-z}}{a_{-(z-1)}}, & \text{if } z<0 \\ \\ 0, & \text{if } z=0 \end{cases}$$ where the $a_{i}$ term refers to the $i^{th}$ term in Stern's diatomic series. We already referenced a proof by Northshield showing that $g(z)=\dfrac{a_{z}}{a_{z+1}}$ if $z>0$ is a bijection from $\mathbb{N} \rightarrow \mathbb{Q}^{+}$. Equivalently, we may write this as $g$ is a bijection from $\mathbb{Z}^{+}$ to $\mathbb{Q}^{+}$ for $z>0$. Now, it follows by the symmetry of the problem that $g(z)=- \dfrac{a_{-z}}{a_{-(z-1)}}$ is a bijection from $\mathbb{Z}^{-}$ to $\mathbb{Q}^{-}$ if $z<0$. That is, $g$ is a bijection between the negative integers and the negative rationals. So we have covered all the positive and negative rationals. The only element in the rationals that is not accounted for is the zero element. So we shall have the integer $0$ mapping to the rational number $0$. However, $g$ is a bijection from the integers to the rationals. We wish to find a bijection from the natural numbers to the rationals. 
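As an aside, Step One is easy to sanity-check in code before moving on: the Python sketch below (our own) memoizes Stern's diatomic series and confirms that $f$ and $f^{-1}$ invert each other on an initial segment.

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n):
    # Stern's diatomic series: a(1) = 1, a(2k) = a(k), a(2k+1) = a(k) + a(k+1)
    if n == 1:
        return 1
    return a(n // 2) if n % 2 == 0 else a(n // 2) + a(n // 2 + 1)

def f(n):
    # the n-th positive rational
    return Fraction(a(n), a(n + 1))

def f_inv(q):
    # the inverse map, following the recursion quoted above
    if q == 1:
        return 1
    if q < 1:
        return 2 * f_inv(q / (1 - q))
    return 2 * f_inv(q - 1) + 1

assert f(5) == Fraction(3, 2) and f_inv(Fraction(3, 2)) == 5
assert f_inv(Fraction(1, 4)) == 8           # so 1/4 is the 8th positive rational
values = [f(n) for n in range(1, 1001)]
assert len(set(values)) == 1000             # no repeats on this initial segment
assert all(f_inv(f(n)) == n for n in range(1, 1001))
```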
So we shall now define the well-known bijection from the natural numbers to the integers. $$h(n) = \begin{cases} \dfrac{n}{2}, & \text{if }n\text{ is even} \\ -\dfrac{n-1}{2}, & \text{if }n\text{ is odd} \end{cases}$$ You may check for yourself that $h$ is a bijection. It follows that $g~\circ~ h: \mathbb{N} \rightarrow \mathbb{Q}$ is a bijection since the composition of two bijections is a bijection. Thus, we have an explicit bijection from $\mathbb{N}$ to $\mathbb{Q}$. However, given a rational number, can we find what this rational number maps to in the set of natural numbers? Although I do not prove it, the answer is yes and is given by the following piecewise-defined function, which is an extension of the function defined in Step One. We first define $g^{-1}: \mathbb{Q} \rightarrow \mathbb{Z}$ as $$g^{-1}(q) = \begin{cases} 2f^{-1}(q-1)+1, & \text{if } q>1 \\ 1, & \text{if } q=1 \\ 2f^{-1} \bigg(\dfrac{q}{1-q} \bigg), & \text{if } 0<q<1 \\ 0, & \text{if } q=0 \\ -2 f^{-1} \bigg(\dfrac{-q}{1+q}\bigg), & \text{if } -1<q<0 \\ -1, & \text{if } q=-1 \\ -\left(2f^{-1}(-q-1)+1\right), & \text{if } q<-1 \end{cases}$$ We now define the function $h^{-1}: \mathbb{Z} \rightarrow \mathbb{N}$ as follows: $$h^{-1}(z)= \begin{cases} 2z, & \text{if } z>0 \\ 1, & \text{if } z=0 \\ -2z+1, & \text{if } z<0 \end{cases}$$ Then $h^{-1} \circ g^{-1}: \mathbb{Q} \rightarrow \mathbb{N}$ is the bijection we are looking for. • Superb exposition! ... but I think you've made a little typo in the bit where you tell how to do the explicit inverse of f for arbitrary input: in the actual example you've put $\operatorname{f^{-1}}({3\over2})=2\operatorname{f^{-1}}\left(\frac{{3\over2}}{1-{3\over2}}\right)+1$; & I think it ought to be $2\operatorname{f^{-1}}({3\over2}-1)+1$.
– AmbretteOrrisey Dec 12 '18 at 6:38 • What this boils down to as an algorithm is: commence the Euclidean algorithm on the numerator & denominator, & represent the quotients as run lengths of bits from right to left, beginning with 0 if the fraction <1 & 1 if >1. Also proceed all the way to the remainder of 0, rather than only to a remainder of 1. The very last step is to OR the leftmost bit with 1. This algorithm has indeed already been expounded by Calkin & Wilf ... though I do not have a specific reference handy. – AmbretteOrrisey Dec 17 '18 at 8:04 # Preliminaries I will use the continued fraction concept. First, let us consider only rationals that are less than 1. So $$q < 1, \quad q\in\mathbb{Q}$$ So every rational $q$ can be written as a continued fraction: $$q = \cfrac{1}{a_1 + \cfrac{1}{a_2 +\cfrac{1}{a_3 + ...}}} := [a_1, a_2, a_3, ...]$$ Note that none of the $a_i$ is zero and for every $q\in\mathbb{Q}$ its q.f. is of finite length. Also note that we use only the kind of q.f.'s in which all numerators are 1's. # Formula Let us construct a bijection $\Phi$ between rationals and naturals as follows: $$\Phi: q \mapsto \prod_{i=1}^{n_q}p_{i}^{a_i - 1},$$ where $n_q$ is the length of the q.f. for $q$ and $p_i$ is the $i$th prime number. The inverse is straightforward. # Example $$\Phi\Big(\frac{30}{43}\Big) = 2^03^15^27^3 = 25725$$ This is because $$\frac{30}{43} = \cfrac{1}{1+\cfrac{1}{2+\cfrac{1}{3+\cfrac{1}{4}}}} := [1,2,3,4]$$ And vice-versa: $$\Phi^{-1}(225) = \frac{10}{13} = \cfrac{1}{1+\cfrac{1}{3+\cfrac{1}{3}}}$$ This is because $$225 = 2^03^25^2$$ Of course this works iff there is a bijection between this kind of continued fraction and the rationals. But it is not too hard to prove. P.S. I feel that I may be missing something. Please verify. • It's very good! It provides a bijection between $\mathbb{Q} \cap (0,1)$ and the integers $>0$. Indeed, any such positive integer corresponds to an element of finite support in $\prod_{p} \mathbb{N}$.
Any such element of finite support: increase by $1$ all the components up to the last $\ne 0$ element, then compose the continued fraction from it (implicitly you use that the continued fraction does not end with $1$ as a last quotient). – Orest Bucicovschi Oct 11 '17 at 16:59 • @orangeskid, It is impossible (nonsense) for a q.f. to end with 1. – LRDPRDX Oct 11 '17 at 17:55 • @Wolfgang Very interesting, and thank you! This second example, however, confuses me. As you observe, $\frac{10}{13} := [1, 3, 3]$. So shouldn't $\Phi(\frac{10}{13}) = 2^{1-1}3^{3-1}5^{3-1} = 2^03^25^2 = 225$? – Alex Basson Oct 12 '17 at 0:50 • @AlexBasson, Good catch! Of course, it should. – LRDPRDX Oct 12 '17 at 3:13 Recently I was reading some papers by Don Zagier and found this one most interesting. Here, you can get not only a satisfactory proof of the bijection, but also the notion of the rational number immediately after, or before, a given number, which we don't have in Cantor's proof. Theorem: The map $$S(x)=\frac{1}{2\lfloor x\rfloor-x+1}$$ has the property that, among the sequence $$S(0),\,S(S(0)),\,S(S(S(0))),\cdots$$ every positive rational number appears once and only once. Therefore if we write $$S^n(x)$$ for the $$n^{th}$$ iterate of $$S$$, then we obtain an explicit bijection $$F:\mathbb{N}\to \mathbb{Q}^{+}$$ by $$F(n)=S^n(0)$$. The proof is explained in the link I have mentioned above. This is a bijection between the Stern-Brocot tree and the tree of natural numbers. Every left node is given by $L_n = [2 P_n ]$ and every right one by $R_n= [2 P_n +1 ]$, where $P_n$ is the value of the parent node and $P_0=[1]$. We have the sequence of transformations $P_n \rightarrow [ L_n , R_n ]$, $L_n \rightarrow P_{n+1}$, $R_n \rightarrow P^\prime_{n+1}$. In list notation for the tree (count the brackets) this is $$n = 1 \mapsto [1,[2],[3]]$$ $$n = 2 \mapsto [1,[2,[4], [5]], [3,[6], [7]]]$$ $$n = 3 \mapsto [1,[2,[4,[8],[9]],[5,[10],[11]]],[3,[6,[12],[13]],[7,[14],[15]]]]$$ and so on.
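Zagier's map above is easy to experiment with using exact rational arithmetic. This is a minimal Python sketch (the names `S` and `seen` are mine, and `fractions.Fraction` stands in for exact rationals):

```python
from fractions import Fraction

def S(x):
    # S(x) = 1 / (2*floor(x) - x + 1), computed exactly on Fractions
    floor_x = x.numerator // x.denominator
    return 1 / (2 * floor_x - x + 1)

# iterate from 0: S(0), S(S(0)), S(S(S(0))), ...
x = Fraction(0)
seen = []
for _ in range(20):
    x = S(x)
    seen.append(x)

# the first iterates are 1, 1/2, 2, 1/3, 3/2, 2/3, 3, ...
assert seen[:7] == [Fraction(1), Fraction(1, 2), Fraction(2), Fraction(1, 3),
                    Fraction(3, 2), Fraction(2, 3), Fraction(3)]
# and no rational repeats in the window, as the theorem promises
assert len(set(seen)) == len(seen)
```

Since every positive rational is reached exactly once, $F(n)=S^n(0)$ enumerates $\mathbb{Q}^{+}$ without repetition.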
The method used here cobbles together parts and pieces of Euler's totient function to create a sequence that bijectively covers all the rational numbers. The function is implemented in the Python programming language, but the interested reader can figure out what is happening by inspecting the output. Here is the program:

#--------*---------*---------*---------*---------*---------*---------*---------*
# Desc: Define a bijection of natural numbers to the rational numbers
#--------*---------*---------*---------*---------*---------*---------*---------*
import sys
import math       # math.gcd; fractions.gcd was removed in Python 3.9
import fractions

def moreTicks(curTick):
    # collect the phi(curTick) new reduced fractions k/curTick
    for k in range(1, curTick):
        if math.gcd(curTick, k) == 1:
            moreTicksList.append(fractions.Fraction(k, curTick))
    return curTick + 1

#--------*---------*---------*---------*---------*---------*---------*---------#
while True:  # M A I N L I N E #
#--------*---------*---------*---------*---------*---------*---------*---------#
    # initialize state of function machine
    step = 0
    negSide = 0
    posSide = 0
    curTick = 2
    phiList = []
    print(0, ', ', end='')
    while True:
        #print('expand by phi(n) count on working range')
        moreTicksList = []
        curTick = moreTicks(curTick)
        for i in range(negSide, posSide + 1):
            for f in moreTicksList:
                print(f + fractions.Fraction(i, 1), ', ', end='')
        for f in moreTicksList:
            phiList.append(f)
        negSide = negSide - 1
        print(negSide, ', ', end='')
        for f in phiList:
            print(f + fractions.Fraction(negSide, 1), ', ', end='')
        posSide = posSide + 1
        print(posSide, ', ', end='')
        for f in phiList:
            print(f + fractions.Fraction(posSide, 1), ', ', end='')
        step = step + 1
        if step == 7:
            print('...', end='')
            sys.exit()

Here is the output sequence: 0 , 1/2 , -1 , -1/2 , 1 , 3/2 , -2/3 , -1/3 , 1/3 , 2/3 , 4/3 , 5/3 , -2 , -3/2 , -5/3 , -4/3 , 2 , 5/2 , 7/3 , 8/3 , -7/4 , -5/4 , -3/4 , -1/4 , 1/4 , 3/4 , 5/4 , 7/4 , 9/4 , 11/4 , -3 , -5/2 , -8/3 , -7/3 , -11/4 , -9/4 , 3 , 7/2 , 10/3 , 11/3 , 13/4 , 15/4 , -14/5 , -13/5 , -12/5
, -11/5 , -9/5 , -8/5 , -7/5 , -6/5 , -4/5 , -3/5 , -2/5 , -1/5 , 1/5 , 2/5 , 3/5 , 4/5 , 6/5 , 7/5 , 8/5 , 9/5 , 11/5 , 12/5 , 13/5 , 14/5 , 16/5 , 17/5 , 18/5 , 19/5 , -4 , -7/2 , -11/3 , -10/3 , -15/4 , -13/4 , -19/5 , -18/5 , -17/5 , -16/5 , 4 , 9/2 , 13/3 , 14/3 , 17/4 , 19/4 , 21/5 , 22/5 , 23/5 , 24/5 , -23/6 , -19/6 , -17/6 , -13/6 , -11/6 , -7/6 , -5/6 , -1/6 , 1/6 , 5/6 , 7/6 , 11/6 , 13/6 , 17/6 , 19/6 , 23/6 , 25/6 , 29/6 , -5 , -9/2 , -14/3 , -13/3 , -19/4 , -17/4 , -24/5 , -23/5 , -22/5 , -21/5 , -29/6 , -25/6 , 5 , 11/2 , 16/3 , 17/3 , 21/4 , 23/4 , 26/5 , 27/5 , 28/5 , 29/5 , 31/6 , 35/6 , -34/7 , -33/7 , -32/7 , -31/7 , -30/7 , -29/7 , -27/7 , -26/7 , -25/7 , -24/7 , -23/7 , -22/7 , -20/7 , -19/7 , -18/7 , -17/7 , -16/7 , -15/7 , -13/7 , -12/7 , -11/7 , -10/7 , -9/7 , -8/7 , -6/7 , -5/7 , -4/7 , -3/7 , -2/7 , -1/7 , 1/7 , 2/7 , 3/7 , 4/7 , 5/7 , 6/7 , 8/7 , 9/7 , 10/7 , 11/7 , 12/7 , 13/7 , 15/7 , 16/7 , 17/7 , 18/7 , 19/7 , 20/7 , 22/7 , 23/7 , 24/7 , 25/7 , 26/7 , 27/7 , 29/7 , 30/7 , 31/7 , 32/7 , 33/7 , 34/7 , 36/7 , 37/7 , 38/7 , 39/7 , 40/7 , 41/7 , -6 , -11/2 , -17/3 , -16/3 , -23/4 , -21/4 , -29/5 , -28/5 , -27/5 , -26/5 , -35/6 , -31/6 , -41/7 , -40/7 , -39/7 , -38/7 , -37/7 , -36/7 , 6 , 13/2 , 19/3 , 20/3 , 25/4 , 27/4 , 31/5 , 32/5 , 33/5 , 34/5 , 37/6 , 41/6 , 43/7 , 44/7 , 45/7 , 46/7 , 47/7 , 48/7 , -47/8 , -45/8 , -43/8 , -41/8 , -39/8 , -37/8 , -35/8 , -33/8 , -31/8 , -29/8 , -27/8 , -25/8 , -23/8 , -21/8 , -19/8 , -17/8 , -15/8 , -13/8 , -11/8 , -9/8 , -7/8 , -5/8 , -3/8 , -1/8 , 1/8 , 3/8 , 5/8 , 7/8 , 9/8 , 11/8 , 13/8 , 15/8 , 17/8 , 19/8 , 21/8 , 23/8 , 25/8 , 27/8 , 29/8 , 31/8 , 33/8 , 35/8 , 37/8 , 39/8 , 41/8 , 43/8 , 45/8 , 47/8 , 49/8 , 51/8 , 53/8 , 55/8 , -7 , -13/2 , -20/3 , -19/3 , -27/4 , -25/4 , -34/5 , -33/5 , -32/5 , -31/5 , -41/6 , -37/6 , -48/7 , -47/7 , -46/7 , -45/7 , -44/7 , -43/7 , -55/8 , -53/8 , -51/8 , -49/8 , 7 , 15/2 , 22/3 , 23/3 , 29/4 , 31/4 , 36/5 , 37/5 , 38/5 , 39/5 , 43/6 , 47/6 , 50/7 , 51/7 , 
52/7 , 53/7 , 54/7 , 55/7 , 57/8 , 59/8 , 61/8 , 63/8 , ... The sequence 'calibrates' the rational number 'tick marks' on our ideal 'measuring rod'.
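The "tick mark" picture can be checked directly: every value printed has the form $i + k/d$ with a reduced fractional part, so no rational is ever produced twice. A small Python sketch of that check (the function name `rational_ticks` is mine, independent of the program above):

```python
from fractions import Fraction
from math import gcd

def rational_ticks(max_den, span):
    # every rational i + k/d with |i| <= span, 1 <= d <= max_den, gcd(k, d) = 1
    ticks = [Fraction(i) for i in range(-span, span + 1)]
    for d in range(2, max_den + 1):
        for k in range(1, d):
            if gcd(k, d) == 1:
                ticks += [i + Fraction(k, d) for i in range(-span, span + 1)]
    return ticks

ticks = rational_ticks(6, 10)
# 21 integers plus (phi(2)+phi(3)+phi(4)+phi(5)+phi(6)) * 21 = 11 * 21 fractions
assert len(ticks) == 252
# each rational tick appears exactly once
assert len(ticks) == len(set(ticks))
```

Growing `max_den` and `span` together, every rational eventually appears, which is the bijection the program's output is spelling out.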
http://mathhelpforum.com/algebra/99298-finding-numbers.html
1. ## finding numbers i have a question to do with quadratic equations but what i'm asking isn't really about quadratics but i need it to complete the question. basically i've been set the question factorise $54-15x-25x^2$. i know that you have to start by multiplying -25 and 54 which gives -1350. i then have to find two numbers that multiply together to give -1350 but also add together to give -15. is there a method i should use for finding the numbers i'm looking for? thanks 2. Originally Posted by mark i have a question to do with quadratic equations but what i'm asking isn't really about quadratics but i need it to complete the question. basically i've been set the question factorise $54-15x-25x^2$. i know that you have to start by multiplying -25 and 54 which gives -1350. i then have to find two numbers that multiply together to give -1350 but also add together to give -15. is there a method i should use for finding the numbers i'm looking for? thanks Let x and y be the 2 numbers .. xy=-1350 x+y=-15 Then solve the simultaneous equation . 3. can you show me how to solve that simultaneous equation please? 4. Originally Posted by mark can you show me how to solve that simultaneous equation please? Ok you have 2 equations here . $xy=-1350$--- 1 $x+y=-15$ --- 2 From 2 , make x the subject , then we get $x=-15-y$---3 Now we substitute 3 into 1 , we get $(-15-y)(y)=-1350$ $ -15y-y^2+1350=0 $ Solve for y , i got $y=-45 , 30$ When $y=-45$, $x-45=-15$ , $x=30$ and when $y=30$ , $x+30=-15$ , $x=-45$ So there are 2 sets of values for x and y which are -45 , 30 and 30 , -45 which are the same . So when u say x is 30 , then y will be -45 and vice versa Hope this helps . 5. thanks, i understood that up to $-15y - y^2 + 1350 = 0$ then you said "solve for y". how do you solve for y from that? Mark 6. Originally Posted by mark thanks, i understood that up to $-15y - y^2 + 1350 = 0$ then you said "solve for y". how do you solve for y from that?
Mark Make it positive so that it's easier to solve .. $y^2+15y-1350=0$ $ (y+45)(y-30)=0 $ y=-45 , y=30 7. Originally Posted by mark i have a question to do with quadratic equations but what i'm asking isn't really about quadratics but i need it to complete the question. basically i've been set the question factorise $54-15x-25x^2$. i know that you have to start by multiplying -25 and 54 which gives -1350. i then have to find two numbers that multiply together to give -1350 but also add together to give -15. is there a method i should use for finding the numbers i'm looking for? thanks mathaddict has provided the solution you sought. Just curious, but could you post what you are going to do with those values? 8. i would factorise them into $(54 - 45x) + (30x - 25x^2)$ which would then go to $9(6 - 5x) + x(30 - 25x)$ then $9(6 - 5x) + 5x(6 - 5x)$ then $(9 + 5x) (6 - 5x)$ 9. Hello, mark! Factor: $54-15x-25x^2$. I know that you have to start by multiplying -25 and 54 which gives -1350. i then have to find two numbers with a product of -1350 and a sum of -15. Is there a method i should use for finding the numbers i'm looking for? I would factor out a -1: $-\,(25x^2 + 15x - 54)$ and disregard the leading minus-sign for now. Multiply the first and last coefficients: $25\cdot54 \:=\:1350$ Note the sign of the last term of the quadratic, $25x^2 + 15x - 54$: If it is "+", we want a sum. If it is "-", we want a difference. We have "-", so we factor 1350 into two parts whose difference is the middle coefficient, 15. How do we factor 1350 into two parts? Divide 1350 by 1, 2, 3, . . . and keep the ones that "come out even". $\begin{array}{cc} \text{Factors} & \text{Difference} \\ \hline 1\cdot1350 & 1349 \\ 2\cdot675 & 673 \\ 3\cdot450 & 447 \\ 5\cdot270 & 265 \\ 6\cdot225 & 219 \\ 9\cdot150 & 141 \\ 10\cdot135 & 125 \end{array}$
$\begin{array}{ccc}15\cdot90 & 75 \\ 18\cdot75 & 57 \\ 25\cdot54 & 29 \\ 30\cdot45 & 15 & \Leftarrow\:\text{ There!} \end{array}$ We want the middle term to be $+15x$, so we will use $-30x \text{ and }+45x$ We have: $25x^2 -30x + 45x - 54$ Factor: $5x(5x-6) + 9(5x-6)$ Factor: $(5x-6)(5x+9)$ Restore the leading minus-sign: $-(5x-6)(5x+9)$ The answer can also be written: $(6-5x)(9+5x)$ 10. thank you soroban, that was actually very useful
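For completeness, the two routes taken in this thread — scanning factor pairs as in Soroban's table, and mathaddict's quadratic in $y$ — can both be checked with a short Python sketch (the name `factor_pair` is mine):

```python
import math

def factor_pair(product, total):
    """Find integers p, q with p*q == product and p + q == total,
    by trial division -- the 'divide and keep the even ones' table method."""
    for d in range(1, math.isqrt(abs(product)) + 1):
        if product % d == 0:
            for p in (d, -d):
                q = product // p
                if p + q == total:
                    return p, q
    return None

# the pair from this thread: product -1350, sum -15
assert set(factor_pair(-1350, -15)) == {30, -45}

# the quadratic-formula route: y^2 + 15y - 1350 = 0
b, c = 15, -1350
disc = b * b - 4 * c            # 225 + 5400 = 5625, a perfect square
r = math.isqrt(disc)            # 75
assert {(-b + r) // 2, (-b - r) // 2} == {30, -45}
```

Either way the split $-30x + 45x$ drops out, and the grouping steps above finish the factorisation.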
https://math.stackexchange.com/questions/1871530/shortcut-for-finding-cube-of-the-numbers
# Shortcut for finding the cube of a number Is there a shortcut for finding the cube of a particular number, like $68^3$? If anyone knows how to solve for two- and three-digit numbers, can you please share the answer? • there's a shortcut? – Gregory Grant Jul 26 '16 at 10:59 • I found some links by googling: youtube.com/watch?v=FktRm6Ts8w0 and burningmath.blogspot.com/2013/10/… – Gregory Grant Jul 26 '16 at 11:03 • Hint: Use the expansion of $(70-2)^3$ – Peter Jul 26 '16 at 11:15 • Another hint: Notice that the linked method above, as well as the solutions posted by callculus and me, all employ the 3rd row of Pascal's Triangle: 1 3 3 1 – Grey Matters Jul 28 '16 at 17:30 • See the book "The Trachtenberg Speed System Of Basic Mathematics" by Jakow Trachtenberg. Or the Wikipedia article about it. – DanielWainfleet Jul 28 '16 at 22:24 You can apply the binomial theorem for a more comfortable calculation: $$(a+b)^n=\sum_{k=0}^n~{n\choose k} \cdot a^{n-k} \cdot b^k$$ First note that $68=70-2$. Therefore in your case it is $$(70-2)^3=\sum_{k=0}^3~{3\choose k} \cdot 70^{3-k} \cdot (-2)^k$$ $=1\cdot 70^3\cdot 1+3\cdot 70^2\cdot (-2)+3\cdot 70\cdot (-2)^2+1\cdot 1 \cdot (-2)^3$ $=343,000-6\cdot 4900+12\cdot 70-8$ $=343,000-29,400+840-8=313,600+832=314,432$ In this case it can be calculated without using a calculator. I think the following: if the number has small prime factors, you can first factor them out and then apply the binomial theorem appropriately with respect to a multiple of 10, or first calculate the square of the large prime factors. For example, for your $68$ one can do $$68=2^2\cdot 17\rightarrow68^3=2^6\cdot17^3=2^6\cdot17(20-3)^2$$ Perhaps for three-digit numbers it may be convenient to write, instead of $\overline{abc}$, the expression $$\overline{abc}=10\cdot\overline{ab}+c$$ after factorization (when it is easy, of course, such as the $2^2$ for $68$), which besides could lead to a two-digit number, such as $237$ with its factor $3$.
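callculus's binomial-theorem calculation generalizes to any two-digit number: write $x = z + d$ with $z$ the nearest multiple of 10 and expand. A minimal Python sketch (the function name is mine):

```python
def cube_near_ten(x):
    # x = z + d, with z the nearest multiple of 10 and |d| <= 5
    z = 10 * ((x + 5) // 10)
    d = x - z
    # (z + d)^3 = z^3 + 3*z^2*d + 3*z*d^2 + d^3
    return z**3 + 3 * z**2 * d + 3 * z * d**2 + d**3

# 68^3 = (70 - 2)^3 = 343,000 - 29,400 + 840 - 8
assert cube_near_ten(68) == 314432
# the expansion is an identity, so it holds for every two-digit number
assert all(cube_near_ten(x) == x**3 for x in range(10, 100))
```

The point of the shortcut is that $z^3$, $3z^2d$, $3zd^2$, and $d^3$ are each easy to compute mentally when $z$ is a multiple of 10 and $|d|\le 5$.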
Arthur Benjamin wrote a paper called "Squaring, Cubing, and Cube Rooting" which contains a great shortcut for cubing. It's similar to the method posted by callculus, but the math is broken up in a different way. Let's refer to the number to be cubed as $x$. The closest multiple of 10 to the given number will be called $z$, and we'll define the difference, $d$, as $x-z$. Note that, since $z$ is the closest multiple of 10, $d$ will always range from $-5$ to $5$. The following formula is worked from the inside out: $$z(z(z+3d)+3d^{2})+d^{3}$$ A couple of things to remember that help make things quicker: a) $3d^{2}$ will always be positive, and b) since $d$ can only be $\pm1, \pm2, \pm3, \pm4,$ or $\pm5,$ then $3d^{2}$ can only ever be $3, 12, 27, 48,$ or $75$ respectively. Let's try this with your example, $68$. So, $x=68, z=70,$ and $d=-2$. Start with the $3d^{2}$. You need to get to the point where you know the answers from memory. Since we're dealing with $d=-2$, this is equal to 12. Multiply this by $z$: $$12\times70=840$$ Next, add $d^{3}$ to this total. Since $d=-2$, then $d^{3}=-8$: $$840-8=832$$ After this, there are just two more steps, and they won't affect the last 2 digits, so you can just write down the last two digits ($32$ in this case), and remember only the remaining digits ($8$ in this case). The next step is to multiply $(z+3d)\times(z^{2})$. Since $z=70$ and $d=-2$, then $70+(-6)=64$. To make things easier, just multiply that by $(\frac{1}{10}z)^{2}$. In this case, you'd be multiplying by 49 instead of 4,900. So, for this step: $$64(49) \\ 64(50-1) \\ 3,200-64 \\ 3,136$$ Finally, recall the 8 (or whatever digits they happen to be) from earlier. Simply add that amount to this total: $$3,136+8=3,144$$ Now, write this number down to the left of the digits you wrote down earlier (the $32$ in this case), and you have your answer!
$$68^{3}=314,432$$ Practice this by cubing smaller numbers to get used to the pattern, and then work your way up to higher numbers as you get more comfortable with the process. This works well for 2-digit numbers. It can be used for 3-digit numbers, as well, by defining $z$ as the nearest multiple of 100, but you obviously need to be comfortable with quickly squaring and cubing 2-digit numbers first. After linking to the above article, Colin Beveridge of http://www.flyingcoloursmaths.co.uk/ shared the following shortcut with me, used for cubing 2-digit numbers ending in $5$. Given a number of the form $10n+5$, the thousands digits can be calculated as: $$((n(n+1)(2n+1))/2)+\left \lfloor n/4 \right \rfloor$$ That $\left \lfloor n/4 \right \rfloor$ represents the floor function (divide, and always round down to the nearest whole number). The last 3 digits cycle in a pattern:

n:                  0   1   2   3   4   5   6   7   8   9
(10n+5)^3 mod 1000: 125 375 625 875 125 375 625 875 125 375

If you prefer not to memorize this table, you can work out the following formula to get the same result: $$125((2(n \ mod \ 4))+1)$$ For example, what is $65^3$? In this case, $n=6$. Start by calculating $((n(n+1)(2n+1))/2)$, keeping in mind that either $n$ or $n+1$ will always be even: $$((6)(7)(13))/2 \\ (3)(7)(13) \\ (21)(13) \\ 273$$ Work out $\left \lfloor n/4 \right \rfloor$ for $n=6$: $$\left \lfloor 6/4 \right \rfloor \\ \left \lfloor 1.5 \right \rfloor \\ 1$$ $$273+1=274$$ $$65^{3}=274,625$$
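Both shortcuts in this answer are easy to verify mechanically. Here is a Python sketch (function names are mine) of the $z(z(z+3d)+3d^{2})+d^{3}$ formula and of the rule for numbers ending in 5:

```python
def benjamin_cube(x):
    # z = nearest multiple of 10, d = x - z
    z = 10 * ((x + 5) // 10)
    d = x - z
    # worked from the inside out, exactly as described above
    return z * (z * (z + 3 * d) + 3 * d * d) + d ** 3

def cube_ending_5(n):
    # (10n + 5)^3: thousands part plus the 125/375/625/875 cycle
    thousands = n * (n + 1) * (2 * n + 1) // 2 + n // 4
    last3 = 125 * (2 * (n % 4) + 1)
    return 1000 * thousands + last3

assert benjamin_cube(68) == 314432
# the formula expands to (z + d)^3, so it is exact for all two-digit x
assert all(benjamin_cube(x) == x**3 for x in range(10, 100))

assert cube_ending_5(6) == 274625               # the 65^3 example above
assert all(cube_ending_5(n) == (10 * n + 5)**3 for n in range(10))
```

The first function is exact because $z(z(z+3d)+3d^{2})+d^{3}=z^3+3z^2d+3zd^2+d^3=(z+d)^3$.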
https://www.hpmuseum.org/forum/thread-10782.html
MC: Ping-Pong Cubes 05-22-2018, 03:16 PM Post: #1 Joe Horn Senior Member Posts: 1,460 Joined: Dec 2013 MC: Ping-Pong Cubes You always thought that Ping-Pong™ only involved planes and spheres, right? Here's a programming mini-challenge that involves Ping-Pong Cubes! Definition 1: A Ping-Pong Number is any multi-digit integer whose consecutive digits always alternate between even and odd. For example, 18505 is a Ping-Pong Number because its consecutive digits are odd, even, odd, even, odd. 16850 is not (because 6 and 8 are both even). Any pair of even digits, or pair of odd digits, anywhere in the number, disqualifies it from being a Ping-Pong Number. The name of course comes from the concept of a rally on a Ping-Pong table marked like this: Note: Dotted lines are not necessarily to scale; on a perfect table, the probability of hitting any digit is the same. Definition 2: A Ping-Pong Cube is a Ping-Pong Number which is the cube of another Ping-Pong Number. The smallest Ping-Pong Cube is 5832, because it is a Ping-Pong Number and it is the cube of 18 which is also a Ping-Pong Number. The first 5 Ping-Pong Cubes are: 5832 = 18^3 12167 = 23^3 614125 = 85^3 658503 = 87^3 1030301 = 101^3 Your mini-challenge, should you choose to accept it, is to write an HP calculator program that finds the first ten Ping-Pong Cubes. Don't worry; the 10th one is not very large. A much bigger challenge, which has eluded me thus far, is to find the 11th Ping-Pong Cube. It must be very large, if it even exists. <0|ɸ|0> -Joe- 05-22-2018, 03:52 PM (This post was last modified: 05-23-2018 05:04 AM by Didier Lachieze.) Post: #2 Didier Lachieze Senior Member Posts: 1,142 Joined: Dec 2013 RE: MC: Ping-Pong Cubes (05-22-2018 03:16 PM)Joe Horn Wrote:  Your mini-challenge, should you choose to accept it, is to write an HP calculator program that finds the first ten Ping-Pong Cubes. Don't worry; the 10th one is not very large. 
A much bigger challenge, which has eluded me thus far, is to find the 11th Ping-Pong Cube. It must be very large, if it even exists.
On the Prime the following program will solve the mini-challenge:
Code:
EXPORT PPCube()
BEGIN
 PRINT();
 FOR N FROM 10 TO 10000 DO
  IF ΠLIST(CONCAT(ΔLIST(ASC(STRING(N,1))),ΔLIST(ASC(STRING(N^3,1)))) MOD 2) THEN PRINT(N); END;
 END;
END;
05-22-2018, 04:05 PM Post: #3 Joe Horn Senior Member Posts: 1,460 Joined: Dec 2013 RE: MC: Ping-Pong Cubes
(05-22-2018 03:52 PM)Didier Lachieze Wrote:  On the Prime the following program will solve the mini-challenge...
Wow! It finds all 10 instantaneously. Very cool!
<0|ɸ|0> -Joe-
05-22-2018, 08:46 PM (This post was last modified: 06-23-2018 09:09 PM by Valentin Albillo.) Post: #4 Valentin Albillo Senior Member Posts: 386 Joined: Feb 2015 RE: MC: Ping-Pong Cubes
. Hi, Joe:
(05-22-2018 03:16 PM)Joe Horn Wrote:  The first 5 Ping-Pong Cubes are: [...] Your mini-challenge, should you choose to accept it, is to write an HP calculator program that finds the first ten Ping-Pong Cubes. Don't worry; the 10th one is not very large.
Looking for all such numbers less than 10,000 takes less than 5 seconds but finds no additional ones. Quote:A much bigger challenge, which has eluded me thus far, is to find the 11th Ping-Pong Cube. It must be very large, if it even exists. My program would find it in principle but it's limited to the 12-digit native floating point results so searching for N>=10,000 would take multiprecision computations. I'll give it a try if I find the time. Regards. V. . P.S.: Edited to feature an improved version (4 lines instead of 5 and also somewhat faster). 05-23-2018, 12:14 PM (This post was last modified: 05-23-2018 01:22 PM by Didier Lachieze.) Post: #5 Didier Lachieze Senior Member Posts: 1,142 Joined: Dec 2013 RE: MC: Ping-Pong Cubes A version of the mini-challenge for the 42S in 39 steps / 79 bytes: Code: 00 { 79-Byte Prgm } 01▸LBL "PPC" 02 ALL 03 CF 29 04 SF 21 05 9 06 STO "N" 07▸LBL 00 08 1 09 STO+ "N" 10 1ᴇ4 11 RCL "N" 12 X=Y? 13 RTN 14 XEQ 01 15 X≠0? 16 GTO 00 17 RCL "N" 18 RCL× "N" 19 RCL× "N" 20 XEQ 01 21 X=0? 22 VIEW "N" 23 GTO 00 24▸LBL 01 25 CLA 26 ARCL ST X 27 ATOX 28 2 29 MOD 30▸LBL 02 31 ATOX 32 X=0? 33 RTN 34 2 35 MOD 36 X≠Y? 37 GTO 02 38 1 39 END After XEQ "PPC" the program will stop at each Ping-Pong Cube found, press R/S so see the next one. If a printer is enabled (PON) the program will print the Ping-Pong Cubes without stopping. A .raw file is also attached so you can load it in Free42 or your DM42. 05-24-2018, 01:19 PM Post: #6 Werner Senior Member Posts: 349 Joined: Dec 2013 RE: MC: Ping-Pong Cubes I exhausted the 34 digits of Free42, I'm afraid. No 11th Ping Pong Cube for x^3 <= 10^34 I used the following program, that generates all Ping Pong numbers with I digits and verifies if the third power is also a Ping Pong number. 
Code: 00 { 128-Byte Prgm } 01▸LBL "PPC" 02 STO "I" 03 1.009 04 XEQ 99 05 RTN 06▸LBL 97 07 RCL IND ST X 08▸LBL 99 09 STO IND ST Y 10 IP 11 1.00902 12 + 13 2 14 MOD 15 DSE ST Y 16 GTO 99 17▸LBL 01 18 RCL "I" 19 1ᴇ3 20 0 21▸LBL 14 22 % 23 RCL IND ST Z 24 IP 25 + 26 DSE ST Z 27 GTO 14 28 STO 00 29 ENTER 30 X↑2 31 × 32 XEQ 88 33 X=0? 34 VIEW 00 35 ISG 01 36 GTO 01 37 RCL "I" 38 1 39▸LBL 95 40 X<Y? 41 ISG ST X 42 RTN 43 ISG IND ST X 44 GTO 97 45 GTO 95 46▸LBL 88 47 1 48 X<>Y 49 R↓ 50▸LBL 87 51 R↑ 52 X=0? 53 RTN 54 10 55 ÷ 56 IP 57 X<>Y 58 LASTX 59 0.2 60 MOD 61 X≠Y? 62 GTO 87 63 1 64 END Input the number of digits eg. 3 PPC will result in the 6 solutions being shown in the printout if you have PON selected (I'm running Free42 on a PC). For input 12 change line 3 to 1.002 since 1e34^(1/3) = 215 443 469 003 Cheers, Werner 05-25-2018, 08:20 PM Post: #7 DavidM Senior Member Posts: 744 Joined: Dec 2013 RE: MC: Ping-Pong Cubes An interesting challenge, Joe. This has a similar feel to some of your past challenges (Pan-prime Cube and All-odd digits come to mind). I suspect that some of the optimizations from the Pan-prime cube challenge could be helpful for this one; in particular, knowing in advance that certain root digit suffixes will create invalid results might speed things up. I didn't take it that far, though. Here's my "plain UserRPL" approach: Code: \<<   {     DUP 2. MOD SWAP 10. / IP 1.     WHILE       OVER     REPEAT       ROT PICK3 2. MOD       SWAP OVER XOR       ROT AND       ROT 10. / IP SWAP     END     NIP NIP   }   0.   10.   WHILE     OVER 10. <   REPEAT     IF       PICK3 OVER       SWAP EVAL     THEN       IF         PICK3 OVER         DUPDUP * *         SWAP EVAL       THEN         SWAP 1. + SWAP         DUP 4. ROLLD       END     END      1. +   END   DROP2 DROP \>> The code within the list brackets above is a subroutine that takes a real number as input and returns a 1 if the input was valid, otherwise a 0. 
It simply checks the parity of each digit after the first to make sure it isn't the same as the previous one. The basic approach was to execute a loop on sequential integers starting with 10, stopping only after the 10th solution is found. In each loop iteration, the root is checked for validity before proceeding to check the cube. I opted to skip some obvious code shortcuts due to their performance impact. That said, I'm sure there are much faster UserRPL implementations possible -- I hope others will post their own. The above program completes in 24.6 seconds on my 50g. For those who might like to compare a SysRPL approach, I submit the following translation of the above program. It uses the exact same approach as the UserRPL one above, and completes in less than half the time of the above at 9.6 seconds. Note: I use Debug4x, which doesn't require the code within a WHILE clause to be encapsulated in a ":: ... ;" block. You would have to add the :: and ; symbols to the WHILE clauses if you don't use Debug4x as your compiler: Code: ::    CK0NOLASTWD    ( Valid Ping Pong Check )    ' ::       DUP %2 %MOD %0<> SWAP %10 %/ %IP TRUE       BEGIN          OVER %0<>       WHILE          ROT 3PICK %2 %MOD %0<>          SWAPOVER XOR          ROTAND          ROT %10 %/ %IP SWAP       REPEAT       ROTROT2DROP    ;    ( count of solutions )    BINT0    ( bind local vars )    FLASHPTR 2LAMBIND    ( seed initial test )    %10    ( do until 10 solutions found )    BEGIN       1GETLAM BINT10 #<    WHILE       DUP 2GETEVAL IT ::          DUPDUP DUP %* %* 2GETEVAL IT ::             1GETLAM #1+ 1PUTLAM             DUP          ;       ;       %1+    REPEAT    ( drop the count )    DROP    ( abandon local vars )    ABND ; ...and finally, to show how much time is spent in that laborious ping-pong validation, I replaced the validation subroutine from the previous SysRPL example with this Saturn code object to see how much improvement I could get. 
Using this brought the total time down to 1.26 seconds: Code:    CODEM       SAVE       A=DAT1 A       AD1EX       D1+5       A=DAT1 W       ST=1 0       B=0 S       B+1 S       C=A M       CSL W       A-1 X       C&B S       D=C S       {          CSL W          C&B S          ?C=D S ->{ ST=0 0 }          D=C S          A-1 X          UPNC       }       LOAD       LA 03AC0       ?ST=1 0 ->{ LA 03A81 }       DAT1=A A       RPL    ENDCODE 05-25-2018, 10:55 PM Post: #8 ijabbott Senior Member Posts: 596 Joined: Jul 2015 RE: MC: Ping-Pong Cubes I don't know if there is a clever trick to this problem (I can't think of one, but I'm not that clever!) or if it's just a brute force search. If the latter, it's not very interesting to me, 05-25-2018, 11:45 PM Post: #9 Valentin Albillo Senior Member Posts: 386 Joined: Feb 2015 RE: MC: Ping-Pong Cubes (05-25-2018 10:55 PM)ijabbott Wrote:  I don't know if there is a clever trick to this problem (I can't think of one, but I'm not that clever!) or if it's just a brute force search. If the latter, it's not very interesting to me, There are many solutions (different approaches) for this problem and further there are two kinds of clever tricks which can be used to greatly decrease the search and/or the searching time: - on the one hand, you can avoid looping through every digit of a number to ascertain whether it's a PP or not. My solution above uses one such trick, as noted in the description, and not using loops greatly increases the speed there, almost linear time to check every number large or small. - on the other hand, you can avoid testing essentially 10$$^N$$ N-digit numbers by using a clever but simple trick, reducing the search time by orders of magnitude. I've devised four or five different solutions using none, some or all of these techniques (the one above uses just the first kind) but they will be left an an exercise for the reader as I don't see much interest, if any, in HP-71B-coded solutions. Have a nice weekend. V. 
. 05-26-2018, 06:47 AM (This post was last modified: 05-26-2018 06:50 AM by pier4r.) Post: #10 pier4r Senior Member Posts: 1,989 Joined: Nov 2014 RE: MC: Ping-Pong Cubes (05-25-2018 10:55 PM)ijabbott Wrote:  I don't know if there is a clever trick to this problem (I can't think of one, but I'm not that clever!) or if it's just a brute force search. If the latter, it's not very interesting to me, The very fact that you don't know should tell you that it is interesting, as figuring out yourself which case applies is already a challenge. There is also an hint. Werner went throughthe numbers up to magnitude 10^34 in likely some hours or days , do you estimate it can be brute force? Also little note, if it is not interesting, ignore it. There are billions of uninteresting discussions online, we don't go writing around that we are not interested. Writing that we are not interested is first and foremost uninteresting (nobody asked us) and secondly it sounds a tad confrontational. Wikis are great, Contribute :) 05-26-2018, 05:01 PM Post: #11 DavidM Senior Member Posts: 744 Joined: Dec 2013 RE: MC: Ping-Pong Cubes (05-25-2018 10:55 PM)ijabbott Wrote:  I don't know if there is a clever trick to this problem (I can't think of one, but I'm not that clever!) or if it's just a brute force search. If the latter, it's not very interesting to me, Most challenges/contests posted here are crafted by people who have already found some amount of optimization to the problem at hand prior to initiating the challenge. Sometimes there's no known shortcuts, though, and the challenge is presented in the hopes that someone may discover a useful approach to a solution that hasn't yet been thought up. In this particular case, the similarities with some other challenges provide some clues as to some possible optimizations. 
In my case, I didn't use any of them and instead simply wanted to use a "generic" solution as a basis, then refining the process with increasingly specialized translations of the code. Some here have expressed interest in SysRPL, so I thought a RPL->SysRPL->Saturn progression might show some interesting performance differences. That said, I did try several methods before settling on the "brute force" test that my RPL/SysRPL code used, including Didier's ΔLIST/ΠLIST approach. I was actually surprised to find that even some optimized list processing approaches were slower than what I ended up using. I realized that ANDing the numbers with"111.." would result in "10101..." results for ping-pong numbers, which could then be divided by 11 for interesting intermediate results, etc. But all those transformations in RPL ended up being slower than simply testing for alternating parity of each digit. That's part of the fun of challenges like this -- you learn a lot through the experimentation. "The journey is the reward." 05-26-2018, 08:19 PM Post: #12 Juan14 Junior Member Posts: 24 Joined: Jan 2014 RE: MC: Ping-Pong Cubes The first program is a subroutine that checks if the number is a ping-pong number, I called it ISPP? « DUP 2. MOD 1 WHILE PICK3 10 > OVER AND REPEAT DROP SWAP 10. / IP DUP 2. MOD ROT OVER XOR END NIP NIP » The next program uses ISPP? To check for ping-pong numbers starting with 10, if a ping-pong number is found checks if the square of that number is a ping-pong number, if so stores it in a list and the process repeat ten times in a START-NEXT loop. « { } 10 1 10 START DO 1 + UNTIL IF DUP ISPP? THEN DUP SQ ISPP? ELSE 0 END END SWAP OVER + SWAP NEXT DROP » 05-26-2018, 11:31 PM Post: #13 Valentin Albillo Senior Member Posts: 386 Joined: Feb 2015 RE: MC: Ping-Pong Cubes (05-26-2018 06:47 AM)pier4r Wrote: (05-25-2018 10:55 PM)ijabbott Wrote:  I don't know if there is a clever trick to this problem (I can't think of one, but I'm not that clever!) 
or if it's just a brute force search. If the latter, it's not very interesting to me, The very fact that you don't know should tell you that it is interesting, as figuring out yourself which case applies is already a challenge. Chill out, Pier. As you can see, ijabbott is very new to this forum, just 69 posts right now, so he probably can't know for sure whether Joe has a reputation for posting brute-force search problems or not. Not having direct experience about Joe's challenges, he does the rational thing which is to directly ask for information to older members in this forum who surely should know, instead of entering a challenge which, for all he knows, might be uninteresting. Quote:There is also an hint. Werner went throughthe numbers up to magnitude 10^34 in likely some hours or days , do you estimate it can be brute force? Of course it can. If you meant that Werner explored all numbers from, say, 10 up to 10^34, he didn't say that. He said that he explored numbers up to 215 443 469 003, which is the cube root of 10^34, and that number is only 2e11, not 1e34, and requires on the order of 1e8 tests, which is entirely doable in at most a few hours by simple brute force. Quote:Writing that we are not interested is first and foremost uninteresting (nobody asked us) and secondly it sounds a tad confrontational. Consider what I said above, think about it, and then decide who's the one being confrontational. This is no helpful or friendly way to address a fairly new member. Have a nice weekend. V. . 05-27-2018, 05:06 AM Post: #14 Joe Horn Senior Member Posts: 1,460 Joined: Dec 2013 RE: MC: Ping-Pong Cubes (05-26-2018 05:01 PM)DavidM Wrote:  ... That's part of the fun of challenges like this -- you learn a lot through the experimentation. I totally agree. Over the years, I've had the most fun writing programs which presented many possible approaches. 
And the very best ones were the ones which, while writing them, suggested FAR BETTER approaches than the current approach, which was then promptly discarded and replaced by the better approach. At first it seems that all the hours spent writing code for the now-abandoned approach were wasted, but they weren't really wasted, because the newer, better approach only came to light because of the development of the original approach. A few times that development / discovery / replacement process happened more than once during the same project, resulting in many routines on the cutting room floor and much learning along the way. Although discarding already-written code (and replacing it with better code) might be frustrating and expensive when developing commercial software, it tickles me pink when it happens while programming just for the fun of it. One of the goals of the "mini-challenges" is to share opportunities for that delightful experience. <0|ɸ|0> -Joe- 05-27-2018, 08:13 AM (This post was last modified: 05-27-2018 08:17 AM by pier4r.) Post: #15 pier4r Senior Member Posts: 1,989 Joined: Nov 2014 RE: MC: Ping-Pong Cubes (05-26-2018 11:31 PM)Valentin Albillo Wrote:  Consider what I said above, think about it, and then decide who's the one being confrontational. This is no helpful or friendly way to address a fairly new member. I see it differently. Yes, if the post count equalled the sum of all the life experiences of a person, then I would be with you. Instead I presume that here on the forum 99.9% of the userbase is 16 years old or older (I am much older than 16, I am near to EOL). So independently of who created the discussion (Joe or another member) and what the discussion states (n1), I don't find it nice to enter the discussion saying something along the lines of "hmm, I don't know, it doesn't seem interesting to me" (at least this is how I perceived it). To explain myself I propose this analogy.
There is a person, as presumed above 16 years old or older, who goes to a park. In this park there are plenty of open activities and groups form around them. This person decides to go to a random group and say "hello, may I participate?" "Sure!" "Ok what are you doing here?" "Oh, we try to build the highest lego tower" "Meh, boring" Then he does the same with other groups. I don't find it nice because one of my ears (n2) hears it as "meh, why are you wasting your time on it?". Now it is completely legit that someone finds this or that activity uninteresting or boring. What I find not ok is that, given the choice between "look, ask, decide and in that case leave or ignore" and "look, ask, decide, voice that the activity is uninteresting", one does the latter. The last part, the one about voicing that an activity is not interesting, is the one that is not nice to me. And now another example. If I entered each of your Short and Sweet challenges and said "oh, they are mostly built around the 71B, how uninteresting for me" (n3), would you consider it nice? I can surely think the sentence, but voicing it is either a lack of tact, or belittling, or confrontational. At least, according to my perception. Now it is also true that I could have decided to ignore the message. In my case I decided to answer it (and from there this discussion). n1: I am the first to create a lot of math challenges that are trivial for many here. n2: https://en.wikipedia.org/wiki/Four-sides_model n3: of course they are not. I am amazed at the 71B capabilities that you expose so nicely. Wikis are great, Contribute :) 05-27-2018, 10:05 AM Post: #16 brickviking Senior Member Posts: 330 Joined: Dec 2014 RE: MC: Ping-Pong Cubes The original reply to the question being this: Quote:I don't know if there is a clever trick to this problem (I can't think of one, but I'm not that clever!) or if it's just a brute force search.
If the latter, it's not very interesting to me, I let that one slide, as nobody's going to look pretty if they try to prosecute the guy for expressing his opinions. Jumping up and down about it won't look pretty either. So I let it slide. Yes, that's his opinion. Yes, he's expressing it in a scenario where it could be perceived to be rude. But no, nobody wins. (1) Brute force searches can be boring because there's no finesse in finding the answer, just overwhelming the problem space by hitting everywhere on the dartboard, which will of course find the response. Eventually. Sometimes this is the only feasible way of solving the problem when there's no obvious mathematical formula behind a response. (2) Searching for far faster routines can be (a) considerably challenging, and (b) extremely rewarding, especially when you can prove they exist. (3) Let's get back to people's replies about the original topic, that of routines for finding ping-pong cubes. 'nuff said from me. (Post 232) Regards, BrickViking HP-50g | Casio fx-9750G+ | Casio fx-9750GII (SH4a) 05-27-2018, 12:30 PM (This post was last modified: 05-27-2018 12:47 PM by ijabbott.) Post: #17 ijabbott Senior Member Posts: 596 Joined: Jul 2015 RE: MC: Ping-Pong Cubes To clarify my previous point, I really meant that it is not mathematically interesting to me, or that just searching for numbers that have some arbitrary property is not interesting to me. I understand some people are interested in finding out the fastest or most efficient ways to search for those numbers on their particular devices. I didn't mean to come across as confrontational. I suppose the only interesting part to me is whether there are any ping-pong cubes beyond 725^3. Intuitively, they should be less common for larger numbers because there are more digits in the cube that all need to alternate in parity. Anyway, I wrote a small C++ program with the GNU MP library for bignum support, and left it running overnight.
The first 10 results popped up in a fraction of a second, but it didn't find an 11th result.

Code:
#include <iostream>
#include <gmpxx.h>

using namespace std;

static mpz_class nextpp(mpz_class n)
{
    mpz_class a;
    mpz_class pow = 1;
    mpz_class d;

    n += 2;
    a = n;
    while ((d = (a % 10)) < 2)
    {
        pow *= 10;
        a /= 10;
        if (a >= 10)
        {
            n += pow;
            a += 1;
        }
        else
        {
            break;
        }
    }
    if (a < 10)
    {
        n = a * pow;
        if ((a % 2) == 0)
        {
            pow /= 10;
        }
        else
        {
            pow /= 100;
        }
        while (pow)
        {
            n += pow;
            pow /= 100;
        }
    }
    return n;
}

static bool ispp(mpz_class a)
{
    mpz_class parity = (a % 2);

    while (a /= 10)
    {
        if ((a % 2) == parity)
            return false;
        parity = 1 - parity;
    }
    return true;
}

int main()
{
    mpz_class i, cube;

    for (i = 10; ; i = nextpp(i))
    {
        cube = i * i * i;
        if (ispp(cube))
            cout << i << " " << cube << "\n";
    }
    return 0;
}

05-27-2018, 02:50 PM Post: #18 DavidM Senior Member Posts: 744 Joined: Dec 2013 RE: MC: Ping-Pong Cubes (05-27-2018 12:30 PM)ijabbott Wrote:  I suppose the only interesting part to me is whether there are any ping-pong cubes beyond 725^3... As I recall some of Joe's previous challenges (and/or posted observations), I suspect a big motivator for him is answering that same question. Certainly it is a curiosity that there would exist a nicely-rounded quantity of 10 solutions, in relative close proximity, but then nothing else even close. So if we simply look at answering that specific question, we very quickly find that a simple brute-force search will run for quite some time with no success.
It is inevitable that optimizations need to be applied which will both speed up the test and limit the input to have any hope of being useful. Discovering the "arbitrary properties" of the numbers that can be ruled out (or in) becomes an imperative for any hope of speeding up the search (regardless of platform), and the more advanced mathematical minds here may even discover a proof of existence/nonexistence in the process (one can dream!). I'm certainly not trying to tell you what you should find interesting, but rather trying to explain how some of us connect what you did find interesting to other aspects of the problem that you didn't. As time permits, I will continue to experiment with this challenge. It's my expectation that others will find a variety of optimizations long before I do, and I will celebrate with them when they do. That's the better part of this unique community that keeps me coming back -- the collective learning/sharing process (and the confirmation that I'm not the only person in the world who still appreciates these well-designed, geeky devices). Thanks for sharing your code and contributing to this puzzle! 05-27-2018, 03:02 PM Post: #19 DavidM Senior Member Posts: 744 Joined: Dec 2013 RE: MC: Ping-Pong Cubes (05-26-2018 08:19 PM)Juan14 Wrote:  The first program is a subroutine that checks if the number is a ping-pong number, I called it ISPP? « DUP 2. MOD 1 WHILE PICK3 10 > OVER AND REPEAT DROP SWAP 10. / IP DUP 2. MOD ROT OVER XOR END NIP NIP » That's a nicer (and faster) RPL approach to testing ping-pong validity than I came up with, Juan! Using your better routine lowered the run time in my original RPL attempt by 22%. Executing it as a separate global variable saved a bit of time as well. (05-26-2018 08:19 PM)Juan14 Wrote: Code: ...    THEN     DUP SQ ISPP? ... I did have to change the "SQ" above to "3. ^" in order to match the original problem, though. Just curious... did you find any interesting results from testing the squares?
05-27-2018, 04:48 PM Post: #20 David Hayden Member Posts: 249 Joined: Dec 2013 RE: MC: Ping-Pong Cubes (05-27-2018 05:06 AM)Joe Horn Wrote:  Although discarding already-written code (and replacing it with better code) might be frustrating and expensive when developing commercial software, it tickles me pink when it happens while programming just for the fun of it. I'm going a bit off topic here but I hope some readers might find this interesting. Sometimes discarding code is frustrating and expensive, but sometimes it's the best thing you can do. In the late 90's we totally rewrote a core piece of our software. It had grown into a mass of spaghetti code that was very hard to maintain and enhance. I think the really interesting thing about that project was that we needed that first implementation to truly understand the requirements of the program. That rewrite is still going strong nearly 20 years after going online and through almost constant enhancement. In another case, we deliberately wrote a piece of throw-away code. The throw-away couldn't handle the rapidly growing load that we anticipated, but it could be written and deployed quickly. It gave us time to write a more robust implementation that could handle the load, which increased four orders of magnitude. Dave
Ampere's Circuital Law

Ampère's circuital law, discovered by André-Marie Ampère in 1826, relates the magnetic field around a closed loop to the electric current passing through the loop: the line integral of the magnetic field B along any closed path is equal to μ₀ times the net current enclosed by that path,

∮ B · dl = μ₀ I_enc.

In point (differential) form for the static magnetic field this becomes Maxwell's curl equation, ∇ × H = J. Just as Gauss's law plays the central role in electrostatics, Ampère's circuital law plays the same role in magnetostatics: it enables an easy evaluation of the magnetic field whenever there is enough symmetry in the system.

Applications:

(i) Infinitely long straight conductor. Consider a point P at a distance r from a straight conductor carrying current I. By symmetry the field is constant in magnitude on a circle of radius r centred on the wire, so the line integral is B(2πr), the product of the field and the circumference. Applying the law, B(2πr) = μ₀I, hence B = μ₀I/(2πr). The direction of the field follows the right-hand rule for a straight wire.

(ii) Solenoid. A cylindrical coil of a large number of turns is called a solenoid: a long coil of wire closely wound in the form of a helix. Applying the law to a rectangular closed path pqrs with one side inside the coil and one side outside, only the inside side contributes, giving a uniform field B = μ₀nI inside (n turns per unit length) and B = 0 outside.

(iii) Toroid. Applying the law to a circular path of radius r inside an ideal toroid of N closely wound turns gives B(2πr) = μ₀NI, so the field inside the toroid is B = μ₀NI/(2πr), constant along the path, while the field outside is zero. Solenoids and toroids are widely used in motors, generators, transformers, electromagnets, toys and fan windings.

Ampère's circuital law in this form is a correct law of physics only in a magnetostatic situation: a system that is static except possibly for continuous steady currents within closed loops. For time-varying fields it is inconsistent; for example, during the charging or discharging of a capacitor, current flows in the wires but not between the plates, so the current "enclosed" by a loop depends on which surface bounded by the loop is chosen. Maxwell removed the inconsistency by adding the displacement current term, giving ∇ × H = J + ∂D/∂t; when the electric field does not change with time, this extra term can be neglected and the original law is recovered. This correction, derived by Maxwell in 1861, is a historical landmark in physics by virtue of uniting electricity, magnetism and optics.

Two related results: Ampère's force law for the force per unit length between two thin, straight, stationary, parallel current-carrying wires underlies the SI definition of the ampere. And both Ampère's circuital law and the Biot–Savart law can be applied to compute fields from currents, for example the magnetic flux density at the centre of a square current loop; in a magnetostatic situation the field obtained from the Biot–Savart law always obeys Ampère's circuital law and Gauss's law for magnetism (∇ · B = 0).
Amperes law of force gives the magnetic force between two current carrying circuits in an otherwise empty universe. 대칭성이 있는 문제를 다룰 때 매우 유용하게 사용한다. Use this law to obtain the expression for the magnetic field inside an air cored toroid of average radius, having ‘n’ turns per unit length and carrying a steady current I. Ampere’s Circuital Law – Free download as Word Doc. A third new equation is constructed that relates gravity to the very near field of the Earth. Remember: (1) Like conduction current displacement current is also a source of magnetic field. Ampere's law states that:The line integral of magnetic field B along a closed path due to current is equal to the product of the permeability of free space and the current enclosed by the closed path. The line integral of magnetic field is given by, For path pq, and are along the same direction, For path rs, B = 0 because outside the solenoid field is zero. The first law can be derived from the second law but I don't think the second law can be derived from the third law. In the figure below, the integral of H about closed paths a and b gives the total current I, while the integral over path c gives only that portion of the current that lies within c. (9) Chapter 3. Maxwell derived it again electrodynamically in his 1861 paper On Physical Lines of Force and it is now one of the Maxwell equations, which form the basis. Subscribe to view the full document. Now, due to symmetry, the magnetic field will be uniform (not varying) at a distance r from the wire. Use Ampere’s circuital law, to obtain the expression for the magnetic field due to current I in a long solenoid having n numbers of turns per unit length. Ampere's Law, specifically, says that the magnetic field created by an electric current is proportional to the size of that electric current with a constant of proportionality equal to the permeability of free space. Simulating Faraday's law in Matlab. 
I want to acquire conductivity and I used Ampere's circuital law. He formulated the Ampere’s circuital law in 1826 , which relates the magnetic field associated with a closed loop to the electric current passing through it. dl for a closed curve is equal to µ0 times the net current I threading through the area bounded by the curve. Apply Ampere’s Principle to infinitely long thin wire b. Hence the law needs modification. By using this law, complex problems are solved in magnetostatics. These, two first, experiments demonstrate qualitatively Ampere’s Law (Ampère's circuital law). which Ampere's law is to be applied, is known as an Amperian path (analogous to the term Gaussian surface). Această lege spune că integrarea densității câmpului magnetic (B) de-a lungul unei căi imaginare închise este egală cu produsul curentului închis de calea și permeabilitatea mediului. Ampere's Circuital Law Ampere's law is is analogous to Gauss's law in electrostatics. apparent power. This equation applies to situations where the electric current is constant. Or / Describe the working of a moving coil galvanometer. This course is the introductory course in electromagnetic theory. Show through an example, how this law enables an easy evaluation of this magnetic field when there is a symmetry in the system? (ii) What does a toroid consist of? Show that for an ideal toroid of closely wound turns, the magnetic field. Ampere’s magnetic circuital law 255. Links are added to Ampere's circuital law and Lorentz force and Biot-Savart law. There is, therefore, a need to include this current ‘ flowing’ across the ‘gap’. amplifier - general purpose inverting amplifier. Ampere’s law can be valuable when calculating magnetic fields of current distributions with a high degree of symmetry. Line integral of the magnetic field B around any closed curve is equal to 0 times the net current i threading through the area enclosed by the curve i. 
Gauss's law: Gauss's law, also known as Gauss's flux theorem, is a law relating the distribution of electric charge to the resulting electric field. In physics, Ampère's Circuital law, discovered by André-Marie Ampère, relates the circulating magnetic field in a closed loop to the electric current passing through the loop. Answer: Ampere’s Circuital Law states the relationship between the current and the magnetic field created by it. Contact Us. but experimental tests actually show that ∇* B = dE/dtc 2. ----> ampere's law : which is to be used while finding magnetic fields inside the enclosed surface. Newton's second law states that the rate of change of momentum is proportional. From Ampere’s Circuital law which is applicable to Steady Magnetic fields. Electrical and Electronic Theorems. Sources of magnetic field: 1- Permanent magnet. It depends on the point of view what you consider the fundamental laws of nature. inAmpère's Circuital Law. Maxwell derived it again electrodynamically in his 1861 paper On Physical Lines of Force and it is now one of the Maxwell equations, which form the basis. In the 1820's, Ampere first identified that all magnetic effects are caused by the charged particles in motion, i. Current that does not go through “Amperian Loop” does not contribute to the integral 2. The law is valid in the magnetostatic approximation, and is consistent with both Ampère's circuital law and Gauss's law for magnetism. The first law can be derived from the second law but I don't think the second law can be derived from the third law. In the field line. d\vec{l} = \mu_{0} I _{encl} Following the integration path, we have:. 602 176 634 × 10 −19 C. By symmetry all points at distance r will be on a circle of radius R. Ampere’s circuital law – the integration of around any closed path is equal to the net current enclosed by that path. 
Ampère's circuital law is now known to be a correct law of physics in a magnetostatic situation: The system is static except possibly for continuous steady currents within closed loops. In was derived using hydrodynamics in 1861 by James Clerk Maxwell. The Biot–Savart law, Ampère's circuital law, and Gauss's law for magnetism. In Ampere's circuital law, what is the purpose of an 'Amperian Path'? - Published on 05 Oct 15. This law states that the integral of magnetic field density (B) along an imaginary closed path is equal to the product of current enclosed by the path and permeability of the medium. The law is valid in the magnetostatic approximation, and is consistent with both Ampère's circuital law and Gauss's law for magnetism. , ), this extra term can be neglected. (a) State Ampere's circuital law, expressing it in the integral form. Lecture 10 - Ampere's Law Overview. It was discovered in 1826 by Andrew-Marie Ampere [1]. As the direction of current is from north to south represented by thumb, the direction iof magnetic field is vertically upwards in east direction of wire. It follows therefore from these three tests, that the repulsive force. The direction of the magnetic field follows the right hand rule for the straight wire. Ampere’s law states that magnetic fields are related to the electric current produced in them. 5 = 4 × 10-6 T The direction of magnetic field can be found out by applying right hand thumb rule. Now, using Ampere's circuital law to this path, we have Therefore, B = 0. Application of Ampere's circuital law to two and three dimensional finite element analysis Abstract: A postprocessor has been developed for two- and three-dimensional magnetic fields that calculates the magnetomotive force (MMF) drop along an arbitrary path of finite elements. [6] Click "show" in the box below for an outline of the proof.   Apply Ampere's circuital law to find magnetic field inside and outside of a toroidal solenoid. 
Links are added to Ampere's circuital law and Lorentz force and Biot-Savart law. Infinitely Long Line Current. The Biot-Savart law explains how currents produce magnetic fields, but it is difficult to use. Electrical and Electronic Theorems. state ampere s circuital law - Physics - TopperLearning. [ 9 ], §528). 3)andgoingthroughthe. 2- Flow of current in conductors. inside a toroid (a toroidal solenoid) with a total of N turns. The integral form of Ampère's Law uses the concept of a line integral. Stationary charges produce electric fields proportional to the magnitude of the charge. We shall see later that we can rescue Ampère's circuital law by adding an extra term involving a time derivative to the right-hand side of the field equation. pdf), Text File. By symmetry all points at distance r will be on a circle of radius R. Ampère's circuital law explained. Search Result for ampere s circuital law. Further, Ampere’s circuital law is analyzed from the particle point of view using the electric-magnetic field relation. Whst is ampere circuital law. Account this problem concept of displacement current was introduced by Maxwell. Without getting into tedious mathematical equations, we are going to understand what the law is, how Ampere was defined, and how this path breaking law changed physics at that time. View and Download PowerPoint Presentations on Ampere S Circuital Law PPT. Ampere’s Circuital Law. Ampere Circuital Law (contd. 0 C ³ B r dl I P • Amperes law states that the line integral of 𝐵( ) around a closed contour C is proportional to the total current I flowing through this closed contour (𝐵( ) is not conservative!). inside a solenoid with n turns per unit length. Ampere's Circuital Law. State Ampere’s circuital law and prove it for the magnetic field produced by a straight curre. 8 Magnetic Vector Potential 163. Computation of magnetic field intensity. In all other cases the law is incorrect unless Maxwell's correction is included (see below). 
Alternatively: this observations shows that during charging/ discharging, the circuit is (momentarily) complete and there is a ‘current flow’ between the capacitor plates also. In its original form, the current enclosed by the loop only refers to free current caused by moving charges, causing several issues regarding the conservation of electric charge and the.
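The infinite-wire result lends itself to a quick numerical sanity check. The sketch below (plain Python; the tangential field $B=\mu_0 I/(2\pi r)$ from the symmetry argument is taken as given) discretizes a circular Amperian loop and confirms that the circulation $\oint\vec{B}\cdot d\vec{l}$ reproduces $\mu_0 I$ independently of the loop radius:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, in T*m/A

def loop_integral(current, radius, segments=10_000):
    """Circulation of B around a circular Amperian loop of the given
    radius, centred on an infinite straight wire carrying `current`.
    On the loop, B is tangential with magnitude mu0*I/(2*pi*r)."""
    b_mag = MU0 * current / (2 * math.pi * radius)
    dl = 2 * math.pi * radius / segments  # arc length of one segment
    # B is parallel to dl on every segment, so B . dl = |B| * dl
    return sum(b_mag * dl for _ in range(segments))

I = 3.0  # amperes
for r in (0.01, 0.5, 2.0):
    assert math.isclose(loop_integral(I, r), MU0 * I, rel_tol=1e-9)
```

Because $\vec{B}$ is everywhere tangential with constant magnitude on the loop, each segment contributes $|B|\,dl$ and the sum is $\mu_0 I$ whatever $r$ is: the circulation is fixed by the enclosed current, not by the size of the loop.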
http://math.stackexchange.com/questions/644861/if-both-integers-x-and-y-can-be-represented-as-a2-b2-4ab-prove-that
# If both integers $x$ and $y$ can be represented as $a^2 + b^2 + 4ab$, prove that $xy$ can also be represented like this … There is a set $Q$ which contains all integral values that can be represented by $$a^2 + b^2 + 4ab$$, where $a$ and $b$ are also integers. If some integers $x$ and $y$ exist in this set, prove that $xy$ does too. I really have no idea how I can go about solving this. I tried simple multiplication of the two, assuming one to be $(a^2 + 4ab + b^2)$ and the other $(c^2 + 4cd + d^2)$, but ultimately it leads to a long equation I can make neither head nor tail of :/ Any help whatsoever would be greatly appreciated - I have updated your post to LaTeX. Please see that the updates are correct. –  Jeel Shah Jan 20 '14 at 12:48 @hardmath Fixed! Thanks for the catch! –  Jeel Shah Jan 20 '14 at 13:01 ## 2 Answers Since $a^2+b^2+4ab=(a+2b)^2-3b^2$, your numbers are exactly the numbers of the form $x^2-3y^2$. Now $x^2-3y^2$ is the norm of the algebraic number $x+y\sqrt{3}$, so you have the identity $$(x^2-3y^2)(u^2-3v^2)=(xu+3yv)^2-3(xv+yu)^2$$ (multiplicativity of norms). - Thank you so much, now I can finally sleep with this homework done. –  skatter Jan 20 '14 at 12:54 To make the resulting identity explicit in terms of $a, b, c, d$, if $f(x,y) = x^2 + 4xy + y^2$, then $$f(ac-bd,ad+4bd+bc) = f(a,b) f(c,d).$$ –  heropup Jan 20 '14 at 13:13 Generalization: $$(a^2+nb^2)(c^2+nd^2)=(ac\pm nbd)^2+n(ad\mp bc)^2$$ $$n=-m\implies (a^2-mb^2)(c^2-md^2)=(ac\mp mbd)^2-m(ad\mp bc)^2$$ -
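Both steps — completing the square and the multiplicativity of the norm, including heropup's explicit identity — are easy to spot-check numerically. A sketch in plain Python (nothing assumed beyond the formulas in the answers):

```python
def q(a, b):
    """The quadratic form from the question: a^2 + b^2 + 4ab."""
    return a*a + b*b + 4*a*b

def norm(x, y):
    """Norm form x^2 - 3y^2; completing the square gives q(a,b) == norm(a+2b, b)."""
    return x*x - 3*y*y

def product_rep(x, y, u, v):
    """Brahmagupta-style composition: norm(x,y)*norm(u,v) == norm(*result)."""
    return (x*u + 3*y*v, x*v + y*u)

# completing the square: a^2 + 4ab + b^2 = (a + 2b)^2 - 3 b^2
for a in range(-5, 6):
    for b in range(-5, 6):
        assert q(a, b) == norm(a + 2*b, b)

# multiplicativity of the norm on a few sample pairs
for x, y, u, v in [(1, 2, 3, -1), (4, 1, -2, 5), (7, -3, 2, 2)]:
    p, r = product_rep(x, y, u, v)
    assert norm(x, y) * norm(u, v) == norm(p, r)

# heropup's identity directly in terms of the original form
for a, b, c, d in [(1, 1, 1, 1), (2, 1, 1, 3), (3, -2, 4, 5)]:
    assert q(a*c - b*d, a*d + 4*b*d + b*c) == q(a, b) * q(c, d)
```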
https://math.stackexchange.com/questions/2287686/how-many-ways-of-arranging-given-7-two-digit-positive-integers-so-that-the-sum-o
# How many ways of arranging given 7 two-digit positive integers so that the sum of every four consecutive integers is divisible by 3? In how many ways can I arrange the numbers 21, 31, 41, 51, 61, 71, 81 such that the sum of every four consecutive numbers is divisible by three? Though I am not an expert on modulo math, I do know that if we were to take MOD 3 on all of the numbers in the list, I would get the following in respective order: $0_{21}, 1_{31}, 2_{41}, 0_{51}, 1_{61}, 2_{71}, 0_{81}$ (the subscript correlates to the original number it represents), and clearly if we were to match the values so that the sum of the residues is a multiple of three, the original numbers added up would also be a multiple of three. But upon realizing that the numbers must be consecutive and that any four consecutive numbers in a set of 7 terms must qualify, I got stuck here and do not know how to proceed. • Could you give an example of what you're looking for? There are some ambiguities in your question (specifically, do you mean digits or numbers?) – Michael Burr May 19 '17 at 12:01 • The OP explicitly talks about consecutive "numbers", and also his (good) start points in that direction. – drhab May 19 '17 at 12:05 • There is no ambiguity. The question asks about arranging the numbers $21,...,81$ with every four consecutive numbers having some property. There is no mention of digits. – Especially Lime May 19 '17 at 12:07 If you want the sums $a_1+a_2+a_3+a_4$ and $a_2+a_3+a_4+a_5$ to both be multiples of $3$, then you must have $a_1\equiv a_5$ mod $3$. Similarly $a_2\equiv a_6$, $a_3\equiv a_7$. This means that you need to pair these numbers off in pairs which are equal mod $3$, and the other one, $a_4$, must be $0$ mod $3$ (because there are three numbers which are $0$ mod $3$). So your sequence, mod $3$, must be one of the following: • $0,1,2,0,0,1,2$ • $0,2,1,0,0,2,1$ • $1,0,2,0,1,0,2$ • $1,2,0,0,1,2,0$ • $2,0,1,0,2,0,1$ • $2,1,0,0,2,1,0$. All of these work. 
Once you've chosen one of these sequences you can fill it in by replacing the $0$s with $21,51,81$ in some order, the $1$s with $31,61$ in some order, and the $2$s with $41,71$ in some order. There are therefore $6\times6\times2\times2=144$ ways to do this in total. • Why is it $6\times6\times2\times2$? I understand all of the potential sequences that you have laid out but I don't understand the multiplication – John Rawls May 23 '17 at 15:45 • The first $6$ is the six different sequences of $0$s, $1$s and $2$s. The second $6$ is $3!$ for the number of different orders $21,51,81$ can go in (they have to go in the three $0$ places, but you can put them in those places in any order). Then there are $2$ ways to put in $31$ and $61$ (either $31$ goes in the first $1$ place and $61$ goes in the second, or vice versa), and similarly there are $2$ ways to put in $41$ and $71$ in the two $2$ places. Since all of these are independent choices you multiply them together. – Especially Lime May 23 '17 at 21:44
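The count of $144$ can also be confirmed by brute force over all $7! = 5040$ arrangements — a quick sketch in plain Python, using nothing beyond the problem statement:

```python
from itertools import permutations

numbers = [21, 31, 41, 51, 61, 71, 81]

count = 0
for p in permutations(numbers):
    # every window of four consecutive entries must sum to a multiple of 3
    if all(sum(p[i:i+4]) % 3 == 0 for i in range(4)):
        count += 1

print(count)  # 144, matching 6 * 3! * 2! * 2!
```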
http://math.stackexchange.com/questions/325648/discrete-math-set-theory
# Discrete Math: Set Theory Can anyone help me check if my solution is correct? Link here, sorry it kinda looks too messy when I tried to paste. d) A class has 175 students. The following table shows the number of students studying one or more of the following subjects.

| Subject | No. of students |
| --- | --- |
| Mathematics | 100 |
| Physics | 70 |
| Chemistry | 46 |
| Mathematics and Physics | 30 |
| Mathematics and Chemistry | 28 |
| Physics and Chemistry | 23 |
| Mathematics, Physics and Chemistry | 18 |

(i) How many students are enrolled in Mathematics alone, Physics alone and Chemistry alone? (ii) Are there students who have not been offered any one of these subjects? Provide your explanation using a Venn diagram. SOLUTION: Solution temporarily hidden to avoid plagiarism by other students (edited by question owner) Therefore, students who took Mathematics alone is 62, students who took Physics alone is 37 and students who took Chemistry alone is 31. Thanks to Brian for pointing out the arithmetic error :) - I think you needed to provide more context and actually write out where you are having issues. Some people find "check my homework" to be rude behavior, and items are better posted as questions asking for guidance and help. Regards –  Amzoti Mar 9 '13 at 17:39 @Natsume I've tried to add information from your paste to the post; I hope the formatting is acceptable. You can get some basic help by clicking on the question-mark icon when editing and also here. –  Martin Sleziak Mar 9 '13 at 18:23 @MartinSleziak thanks! –  Natsume Mar 10 '13 at 3:23 @Amzoti it is part of my coursework too, but at least I tried to make a solution and asked here if my solution is correct. If it's wrong, I can learn where the error is, like Brian pointed out –  Natsume Mar 10 '13 at 3:30 @Natsume: I am not the police, I just saw that your post had been down-voted several times and was trying to point out why that could have happened. 
It is okay to post such things, but understand that it needs the appropriate context - that is the only point I was trying to make. Regards –  Amzoti Mar 10 '13 at 3:32 Your approach is correct, but you made an arithmetic mistake near the beginning that threw everything off: $30-18=12$, not $28$, so $e=12$. I get $60$, $35$, and $13$, respectively, for the numbers of students taking only mathematics, only physics, and only chemistry.
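The corrected numbers — and the answer to part (ii) — follow mechanically from inclusion–exclusion on the figures in the table. A small sketch in plain Python:

```python
total = 175
M, P, C = 100, 70, 46       # Mathematics, Physics, Chemistry
MP, MC, PC = 30, 28, 23     # pairwise overlaps
MPC = 18                    # all three

# "subject alone" = subject minus both pairwise overlaps, adding back
# the triple overlap (which was subtracted twice)
only_M = M - MP - MC + MPC  # 60
only_P = P - MP - PC + MPC  # 35
only_C = C - MC - PC + MPC  # 13

# inclusion-exclusion for "at least one subject"
at_least_one = M + P + C - MP - MC - PC + MPC  # 153
none = total - at_least_one                    # 22

print(only_M, only_P, only_C, none)  # 60 35 13 22
```

So for part (ii): yes, $175 - 153 = 22$ students take none of the three subjects.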
https://math.stackexchange.com/questions/2794966/perpendicular-distance-from-ax-by-1-to-origin
# perpendicular distance from $ax-by = 1$ to origin I have a problem from a basic number theory book that asks for the perpendicular distance from the line $ax - by = 1$ to the origin. My approach was to find the area of the triangle formed by the line and the axes, find the base of the triangle situated at the diagonal, then use those to find the height of the triangle, which would be the perpendicular distance. I found that the $x$ intercept is $\frac1a$, and the $y$ intercept is $-\frac1b$. So if you consider the base to be the diagonal, it is $\sqrt{\frac{1}{a^2} + \frac{1}{b^2}}$. Then the area of the triangle is $\frac{1}{2ab}$. Thus the height is $\frac{1}{2ab\sqrt{\frac{1}{a^2} + \frac{1}{b^2}}} = \frac{1}{2\sqrt{a^2 + b^2}}$, which I thought would be the answer. However, the book states that the answer is $\frac{1}{\sqrt{a^2 + b^2}}$, so I'm off by $\frac12$. Did I make a mistake with the area of the triangle? If not, where did I go wrong? Thank you! ## 3 Answers You forgot to include the factor of $1/2$ in the expression of the area using the diagonal and altitude to the origin. That will cancel the $1/2$ in the other expression when you solve for $h$. Note that the height is $2\,\text{Area}/\text{Base}$ and you have lost here a factor of 2. As an alternative, by similarity of right triangles we obtain $$\frac{d}{\frac1b}=\frac{\frac1a}{\sqrt{\frac1{a^2}+\frac1{b^2}}}\implies d=\frac{\frac1{ab}}{\sqrt{\frac{a^2+b^2}{a^2b^2}}}\implies d=\frac1{\sqrt{a^2+b^2}}$$ Minimum Distance Since \begin{align} 1 &=\left|\,ax-by\,\right|\\ &=\left|\,(a,-b)\cdot(x,y)\,\right|\\ &\le\left|\,(a,-b)\,\right|\left|\,(x,y)\,\right| \end{align} we have \begin{align} \left|\,(x,y)\,\right| &\ge\frac1{\left|\,(a,-b)\,\right|}\\ &=\frac1{\sqrt{a^2+b^2}} \end{align} If $(x,y)=\frac{(a,-b)}{a^2+b^2}$, then $ax-by=1$ and $\left|\,(x,y)\,\right|=\frac1{\sqrt{a^2+b^2}}$ Thus, the minimum of $\left|\,(x,y)\,\right|$ is $\frac1{\sqrt{a^2+b^2}}$. 
Perpendicular Distance If $ax_1-by_1=1$ and $ax_2-by_2=1$, then $$(a,-b)\cdot(x_1-x_2,y_1-y_2)=1-1=0$$ Thus, $(a,-b)$ is perpendicular to the line containing $(x_1,y_1)$ and $(x_2,y_2)$; i.e. the line $ax-by=1$. This means the vector from the origin to $\frac{(a,-b)}{a^2+b^2}$ is perpendicular to the line $ax-by=1$ and the point $\frac{(a,-b)}{a^2+b^2}$ is on the line $ax-by=1$. Thus, the perpendicular distance from the origin to the line is $$\left|\frac{(a,-b)}{a^2+b^2}\right|=\frac1{\sqrt{a^2+b^2}}$$
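The derivations are easy to cross-check numerically. A sketch in plain Python (the values $a=3$, $b=4$ are an arbitrary example): the point $(a,-b)/(a^2+b^2)$ lies on the line and sits at distance $1/\sqrt{a^2+b^2}$ from the origin, and the OP's triangle computation gives the same height once the dropped factor of $2$ is restored.

```python
import math

a, b = 3.0, 4.0

# foot of the perpendicular from the origin, as in the last answer
fx, fy = a / (a*a + b*b), -b / (a*a + b*b)
assert math.isclose(a*fx - b*fy, 1.0)        # the point lies on ax - by = 1

d_formula = 1.0 / math.hypot(a, b)           # 1/sqrt(a^2 + b^2)
assert math.isclose(math.hypot(fx, fy), d_formula)

# triangle computation: area = (1/2) * base * height with the diagonal as base
area = 0.5 * (1/a) * (1/b)
base = math.hypot(1/a, 1/b)
height = 2 * area / base                     # the factor of 2 the OP dropped
assert math.isclose(height, d_formula)
```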
https://math.stackexchange.com/questions/3127675/how-to-find-the-total-number-of-pages-which-a-book-has-when-the-clues-given-indi
# How to find the total number of pages which a book has when the clues given indicate a range? This problem doesn't seem very complicated, but I got stuck trying to understand the meaning of the last clue involving an integer and a range. Can somebody help me? The problem is as follows: Marina is reading a novel. The first day she read a third of the book, the second day she read a fourth of what was left, the third day she read half of what was left to read, the fourth day she read a fifth of what was still left to read, and the fifth day she decided to finish the novel and found that fewer than $$70$$ pages were left. If she always read an integer number of pages and never read fewer than $$14$$ pages, how many pages did the novel have? The alternatives given in my book were as follows: $$\begin{array}{ll} 1.&\textrm{360 pages}\\ 2.&\textrm{240 pages}\\ 3.&\textrm{180 pages}\\ 4.&\textrm{300 pages}\\ 5.&\textrm{210 pages}\\ \end{array}$$ I'm lost with this problem. What would be the correct way to go? So far what I attempted to do was the following: I let the number of pages of the book be $$x$$. Since it is said that the first day she read a third of the book, I defined it as: $$\frac{1}{3}x$$ On the second day it is said that she read a fourth of what was left, so that would account for: $$\frac{1}{4}\left(x-\frac{1}{3}x\right)=\frac{1}{4}\left(\frac{2x}{3}\right)=\frac{x}{6}$$ The third day: $$\frac{1}{2}\left(x-\frac{x}{6}\right)=\frac{1}{2}\left(\frac{5x}{6}\right)=\frac{5x}{12}$$ The fourth day: $$\frac{1}{5}\left(x-\frac{5x}{12}\right)=\frac{1}{5}\left(\frac{7x}{12}\right)=\frac{7x}{60}$$ The fifth day: she decides to finish reading the novel, but what was left was less than $$70$$ pages. So this would translate as: $$x-\frac{7x}{60}<70$$ This would become: $$\frac{53x}{60}<70$$ Therefore: $$53x<4200$$ $$x<\frac{4200}{53}$$ However this fraction is not an integer. 
There is also another piece of information which mentioned that she always read no fewer than $$14$$ pages. If during the first day she read a third of the novel then this would be: $$\frac{1}{3}x>14$$ So $$x>42$$ But, on the fourth day she read: $$\frac{7x}{60}>14$$ Therefore: $$x>120$$ How come $x$ can be greater than $$42$$ and at the same time greater than $$120$$? Am I understanding this correctly? If I were to select the greatest value and put it in the range which I found earlier: $$120<x<\frac{4200}{53}$$ and round to the nearest integer: $$120<x<79$$ which doesn't make sense. If it were $$\frac{5x}{12}>14$$ $$x>33$$ (rounded to the nearest integer), which would be: $$33<x<79$$ But again this doesn't produce a reasonable answer within the specified range in the answers. Did I overlook something or perhaps not understand something right? Can somebody help me with this inequality problem? Compute the fractions of the book read per day: • On day 1, $$\frac13$$ of the novel was read, leaving $$\frac23$$. • On day 2, $$\frac23\times\frac14=\frac16$$ was read, leaving $$\frac23\times\frac34=\frac12$$. • On day 3, $$\frac12\times\frac12=\frac14$$ was read, the same fraction being left. • On day 4, $$\frac14\times\frac15=\frac1{20}$$ was read, leaving $$\frac14\times\frac45=\frac15$$ that was finished off on day 5. Letting $$x$$ be the number of pages in the book, because fewer than 70 pages were left on day 5 we have $$\frac15x<70$$ or $$x<350$$. Because at least 14 pages were read per day, including day 4, we have $$\frac1{20}x\ge14$$ or $$x\ge280$$. Only option 4 satisfies both inequalities, so the novel had 300 pages. 
– Chris Steinbeck Bell Mar 2 at 21:33 Your method is good, but you made a mistake on the third day. Indeed, she read a third of what was left. So she read : $$\frac{1}{2}\left( x - \frac{1}{3}x-\frac{1}{6}x \right) = \frac{1}{2}\frac{1}{2}x = \frac{1}{4}x$$ So on the fourth day, she read $$\frac{1}{5}\left(1 - \frac{1}{2} - \frac{1}{4}\right)x = \frac{1}{5}\frac{1}{4}x = \frac{1}{20}x$$ So on the fifth days, there's : $$x\left(1 - \frac{1}{3} - \frac{1}{6} - \frac{1}{4} - \frac{1}{20}\right) = \frac{1}{5}{x}$$ Pages left. To sum up : First day : $$\frac{1}{3}x$$ Second day : $$\frac{1}{6}x$$ Third day : $$\frac{1}{4}x$$ Fourth day : $$\frac{1}{20}x$$ You want $$\frac{x}{20} \ge 14$$ so $$x\ge 280$$. Furthermore, you need $$\frac{1}{5}x< 70$$ so $$x<350$$. The only possibility now is answer 4 : 300 pages. • Thanks for the confidence boost. I really needed it. I think the source of my confusion was that I did not consider the passage mentions each day individually and not the number of pages that had been read until that day. Hence for each day I need to account what it was read on the prior day. Because of this for the third day $\frac{1}{2}\left(x-\left(\frac{x}{3}+\frac{x}{6}\right)\right)$ as $\frac{1x}{3}$ accounts for day 1 and $\frac{1x}{6}$ accounts for day two, so they must be summed up and so on until get to the last day. Did I understood this part correctly? – Chris Steinbeck Bell Mar 2 at 21:12 • Now I get to some confusion, why do you use $\leq$ and $\geq$ with $<$ and $>$ almost interchangeably? By the time I got to $\frac{x}{5}$ it is obviously stated in the problem that is less than $70$. But the passage also mentions that everyday she read no less than $14$ pages wouldn't this meant that I can use the other found quantities as well?. Let's say $\frac{1}{4}x\geq 14$ hence $x\geq 56$ isn't it?. But also meant that on the first day $\frac{1}{3}x\geq 14$ so $x \geq 42$. 
I'm kind of confused: don't these inequalities produce contradictory results? – Chris Steinbeck Bell Mar 2 at 21:24 • Or, just because they're inequalities, is it possible to have different answers? It turns out that I may need to look for the one which produces the highest number, so I can reduce the boundary and find the number of pages they're asking for. This problem in particular offered alternatives, but could this have been solved without any, by only concluding something from what is given? – Chris Steinbeck Bell Mar 2 at 21:25 • Yes, you understood correctly! I use both < and \le because I don't really care about an exact boundary, I just need a good enough one to answer the question, but there is no logic in the use of one or the other. – aleph0 Mar 3 at 20:44 • Thanks for that, but I have a question. Does it mean that, given the conditions by themselves, the number of pages cannot be found without checking the alternatives given? – Chris Steinbeck Bell Mar 5 at 22:29
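The five alternatives can also be screened directly against all three conditions — a whole number of pages each day, never fewer than $14$ pages, and fewer than $70$ pages left for day five. A sketch in plain Python using exact fractions (the per-day fractions of the remaining pages are $1/3$, $1/4$, $1/2$, $1/5$, as in the answer):

```python
from fractions import Fraction

def pages_per_day(x):
    """Pages read on each of the five days for a book of x pages,
    applying the per-day fractions of the REMAINING pages from the
    problem statement: 1/3, 1/4, 1/2, 1/5, then the rest on day 5."""
    left = Fraction(x)
    days = []
    for frac in (Fraction(1, 3), Fraction(1, 4), Fraction(1, 2), Fraction(1, 5)):
        read = frac * left
        days.append(read)
        left -= read
    days.append(left)  # day 5: she finishes whatever remains
    return days

def valid(x):
    days = pages_per_day(x)
    return (all(d.denominator == 1 for d in days)  # whole pages each day
            and all(d >= 14 for d in days)         # never fewer than 14
            and days[-1] < 70)                     # fewer than 70 left on day 5

print([x for x in (360, 240, 180, 300, 210) if valid(x)])  # [300]
```

Only $300$ survives: $360$ leaves $72$ pages for day five, $240$ and $180$ give fewer than $14$ pages on day four, and $210$ gives a non-integer page count on day three.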
https://math.stackexchange.com/questions/1874159/how-can-i-answer-this-putnam-question-more-rigorously/1874166
# How can I answer this Putnam question more rigorously?

Given real numbers $a_0, a_1, ..., a_n$ such that $\dfrac {a_0}{1} + \dfrac {a_1}{2} + \cdots + \dfrac {a_n}{n+1}=0,$ prove that $a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n=0$ has at least one real solution.

My solution: Let $$f(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$$ $$\int f(x)\,dx = \dfrac {a_0}{1} x + \dfrac {a_1}{2}x^2 + \cdots + \dfrac {a_n}{n+1} x^{n+1} + C$$ $$\int_0^1 f(x)\,dx = \left[ \dfrac {a_0}{1} + \dfrac {a_1}{2} + \cdots + \dfrac {a_n}{n+1} \right]-0$$ $$\int_0^1 f(x)\,dx = 0$$ Since $f$ is continuous, by the area interpretation of integration, it must have at least one zero. My question is, is this rigorous enough? Do I need to prove the last statement, perhaps by contradiction using Riemann sums? Is this a theorem I can/should quote?

• There's a standard lemma that if $f$ is a strictly positive (negative) function, then $\int f$ is strictly positive (negative). I suppose you could prove this easily with Riemann sums. Then apply the contrapositive to conclude that $f$ changes sign (or is identically zero), and invoke continuity. – user296602 Jul 28, 2016 at 17:59
• I just want to compliment you on your beautiful solution. Did you find it yourself? What is your mathematical level (undergrad, grad school, etc.)? Jul 29, 2016 at 4:00
• @user230452 Thank you very much. I did find it myself, but this problem was posed as a challenge in my calculus book in an integration chapter (Calculus by Larson/Edwards, $5^{th}$ ed., Page $346$ #$175$) so that was a giveaway. Once I saw that the integral had coefficients like the series, I remembered another problem I saw before somewhere else: "Find the sum of the coefficients of $f(x)=(x+3)^{30} (x+1)^{10}$". The answer is $4^{30} \cdot 2^{10}$; it is found by evaluating $f(1)$ (for any function). This led me to set one of the limits of integration to $1$, and $0$ was the first thing – Ovi Jul 29, 2016 at 5:32
• @user230452 I tried for the other limit.
I am an undergrad who just finished discrete math, and I am about to start the more exciting classes like analysis and number theory :) – Ovi Jul 29, 2016 at 5:33
• @Ovi Well done, Ovi... Can you tell me which book you used for your discrete math course? Was it Kenneth Rosen or such? I like discrete math a lot. Problems do become easier when you know their context, like you did here. Now that you've done it once, always look at a sum and ask yourself if the terms are the coefficients of a series of its derivative or integral. It's a common trick with sums and products. Keep up the hard work! Jul 29, 2016 at 6:09

Why not write it the other way round? The polynomial function $$F(x)=\sum_{k=0}^n\frac{a_k}{k+1}x^{k+1}$$ is a differentiable function $\Bbb R\to\Bbb R$ with derivative $$F'(x)=\sum_{k=0}^na_kx^k.$$ We are given that $F(1)=0$, and clearly $F(0)=0$. Hence by Rolle's theorem, there exists $x\in(0,1)$ such that $F'(x)=0$, as was to be shown.

• Indeed, this is the rigorous proof of the area interpretation described in the OP. Jul 28, 2016 at 22:32

Your proof looks fine. If you wanted to expand, you could add the following: Suppose $f(x)>0$ for all $x\in(0,1)$. Then we must have $$\int_0^1f(x)\ dx>0,$$ but we have already shown that $\int_0^1f(x)\ dx=0$, a contradiction. If we assume $f(x)<0$ for all $x\in(0,1)$, we arrive at a similar contradiction.

You can also prove it using the mean value theorem. You showed that $$\int_{0}^{1}f\left(x\right)dx=0$$ and since $f$ is continuous, by the mean value theorem for integrals there exists some $c\in\left(0,1\right)$ such that $$f\left(c\right)=\int_{0}^{1}f\left(x\right)dx$$ so $$f\left(c\right)=0$$ as wanted.
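For readers who want to see the theorem in action, here is a small numeric illustration (my own addition, not from the thread): the coefficients $a=(1,-4,3)$ satisfy the hypothesis $\frac{a_0}{1}+\frac{a_1}{2}+\frac{a_2}{3}=0$, and bisection locates the promised root of $f(x)=1-4x+3x^2$ inside $(0,1)$.

```python
def f(a, x):
    # evaluate a0 + a1*x + ... + an*x^n
    return sum(c * x**k for k, c in enumerate(a))

# coefficients chosen so that a0/1 + a1/2 + a2/3 = 0:  1 - 4/2 + 3/3 = 0
a = [1, -4, 3]
assert sum(c / (k + 1) for k, c in enumerate(a)) == 0

# f(x) = 1 - 4x + 3x^2 = (1 - x)(1 - 3x) changes sign on (0, 1/2), so bisect there
lo, hi = 0.0, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    if f(a, lo) * f(a, mid) <= 0:
        hi = mid
    else:
        lo = mid
root = (lo + hi) / 2
assert 0 < root < 1 and abs(f(a, root)) < 1e-9  # the root is 1/3
```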
https://math.stackexchange.com/questions/2488025/concerning-the-solution-to-the-non-homgeneous-second-order-ode/2488045
# Concerning the solution to the non-homogeneous second-order ODE

Does every second-order linear differential equation have two linearly independent solutions? What about the non-homogeneous DE of the form $$y''+ay'+by=f(x)$$ I know that it has as solution $$y=c_1y_1+c_2y_2+y_p$$ where $$c_1y_1+c_2y_2$$ is the solution to the homogeneous part, while $$y_p$$ is the particular solution due to the non-homogeneous part. My question: Does this mean that the equation has three independent solutions $y_1, y_2, y_p$?

• $y_1$ and $y_2$ are not solutions to the equation you have, only to the associated homogeneous equation. Furthermore, you have an infinite number of solutions to your equation parametrized by $c_1$ and $c_2$. – Tony Oct 24 '17 at 18:31
• No. Actually, you have a 2-dimensional $\mathbb{R}$-linear space spanned by $y_1$ and $y_2$ translated by $y_p$ (affine space). – Wang Oct 24 '17 at 18:42

No, because there isn't an arbitrary constant in front of $y_p$. Roughly speaking, the magnitude of the particular solution $y_p$ is determined by the "source term" $f(x)$; you can't double $y_p$ and still have a solution as you can with the independent homogeneous solutions $y_1$ and $y_2$. As an example, consider the first-order ODE $$y' + y = A (\sin x + \cos x)$$ for some constant $A \neq 0$. The homogeneous solution (only one, since this is a first-order equation) is $$y_1(x) = e^{-x}$$ while the* particular solution is $$y_p(x) = A \sin x.$$ The general solution is therefore $$y(x) = c_1 e^{-x} + A \sin x.$$ But there is still only one free parameter in the solution, since this is a first-order ODE. In particular, it's not too hard to show that $$\tilde{y}(x) = c_1 y_1(x) + 2 y_p(x)$$ is not a solution of the original ODE; just plug it in to the ODE to see this. In fact, it's not really great to talk about the space of solutions of a non-homogeneous ODE as being "linearly independent", since this implicitly invokes the idea that they are vectors that we can add together.
For a homogeneous linear ODE, this is valid, since the linear combination of any two solutions is also a solution. But the set of solutions of a non-homogeneous linear ODE does not form a vector space. (In particular, this set will not contain the element $y(x) = 0$.) As noted by @Wang in the comments, the space of solutions is an affine space rather than a vector space. (Aside: I should also note that there isn't a unique particular solution to a non-homogeneous ODE. Given a particular solution $y_p(x)$, it's always possible to find another particular solution $\tilde{y}_p(x)$ that differs from $y_p(x)$ by a combination of the homogeneous solutions. This doesn't affect the overall structure of the solution, since these differences can just be absorbed into the coefficients of the homogeneous solutions in the complete general solution. But it can confuse students who are learning the material when they find a particular solution for an ODE, compare it to the answer in the back of the book, discover that their answer is "wrong", and not realize that their solution differs from the back-of-the-book solution by a multiple of one of the homogeneous solutions.) As said by @Tony, $y_1$ and $y_2$ are not solutions to the non-homogeneous equation. The general solution can be seen as an "affine" combination of some solution of the NH-ODE and two linearly independent solutions of the H-ODE, with arbitrary coefficients. It represents a double infinity of solutions, from which you can select a member by specifying two constraints, which determine the two unknown coefficients. If you want three particular independent solutions, you can pick $y_p, y_p+y_1$ and $y_p+y_2$ (a "canonical basis"), but there are many other possibilities. Note that $y_p$ is certainly linearly independent of $y_1$ and $y_2$, otherwise the ODE would be homogeneous. 
• So the rule that says "the general solution of a second-order ODE contains 2 independent solutions" is not correct unless we are talking about a homogeneous ODE. Right? – MCS Oct 24 '17 at 20:50
• @Sousa: I wouldn't say that. A general solution of a non-homogeneous linear ODE can always be written in the form $y = y_p + c_1 y_1 + c_2 y_2$. It would be misleading to say that this solution does not "contain two independent solutions". However, the statement "the general solution of a second-order ODE is the linear combination of two independent solutions" is only true for linear homogeneous ODEs. – Michael Seifert Nov 5 '17 at 18:13
• @Sousa: no, wrong. – Yves Daoust Nov 5 '17 at 19:10
• @YvesDaoust: I'm agreeing with you here; my comment was meant to correct @Sousa's statement, by pointing out a common statement about homogeneous ODEs that isn't true for non-homogeneous ones. Sorry if that wasn't clear. – Michael Seifert Nov 5 '17 at 21:33

Let's see... So we have a non-homogeneous linear ODE in the form $y'' + p(x)y' + q(x)y = r(x)$. The general solution of $y'' + p(x)y' + q(x)y = r(x)$ is the sum of (1) the general solution of the homogeneous linear ODE $y'' + p(x)y' + q(x)y = 0$ and (2) a particular solution of $y'' + p(x)y' + q(x)y = r(x)$.

So let's start with (1), the general solution of $y'' + p(x)y' + q(x)y = 0$. We know that the solution is $y = c_1y_1 + c_2y_2$. We will designate this as $y_h(x)$.

Now, for (2), a solution of $y'' + p(x)y' + q(x)y = r(x)$: we will designate $y_p(x)$ as any solution of $y'' + p(x)y' + q(x)y = r(x)$. We get a particular solution of $y'' + p(x)y' + q(x)y = r(x)$ when specific values are assigned to the arbitrary constants of $y_h(x)$ in $y(x) = y_h(x) + y_p(x)$.

The relations between the solutions are as follows:
1. The sum of a solution of $y'' + p(x)y' + q(x)y = r(x)$ and a solution of $y'' + p(x)y' + q(x)y = 0$ [that is, $y_p(x) + y_h(x)$] is a solution of $y'' + p(x)y' + q(x)y = r(x)$.
2. If there are two solutions of $y'' + p(x)y' + q(x)y = r(x)$, their difference is a solution of $y'' + p(x)y' + q(x)y = 0$.

So, I do not think $y_1$ is an independent solution of the non-homogeneous linear ODE, and $y_2$ is not either. However, $y_p$ is a solution of the non-homogeneous linear ODE on an open interval, containing no arbitrary constants.
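To make the first-order example above concrete, here is a small numeric check (my own sketch, not from the thread): any constant $c_1$ works in $y=c_1e^{-x}+A\sin x$, but doubling the particular part $A\sin x$ breaks the equation.

```python
import math

A = 3.0  # an arbitrary nonzero source amplitude, chosen for illustration

def residual(y, dy, x):
    # left side minus right side of  y' + y = A (sin x + cos x)
    return dy(x) + y(x) - A * (math.sin(x) + math.cos(x))

# y = c1 e^{-x} + A sin x solves the ODE for every choice of c1
for c1 in (-2.0, 0.0, 5.0):
    y  = lambda x, c=c1: c * math.exp(-x) + A * math.sin(x)
    dy = lambda x, c=c1: -c * math.exp(-x) + A * math.cos(x)
    assert all(abs(residual(y, dy, x)) < 1e-9 for x in (0.0, 0.7, 2.5))

# but doubling the particular solution is NOT a solution:
y2  = lambda x: 2 * A * math.sin(x)
dy2 = lambda x: 2 * A * math.cos(x)
assert abs(residual(y2, dy2, 0.0)) > 1.0  # residual is A, not 0
```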
https://math.stackexchange.com/questions/1856019/if-f-g-are-quadratic-forms-over-mathbbr-and-f-is-positive-definite-can
# If $f,g$ are quadratic forms over $\mathbb{R}$ and $f$ is positive definite, can you reduce them both simultaneously to sums of squares?

If $f,g$ are quadratic forms over $\mathbb{R}$ and $f$ is positive definite, can you reduce them both simultaneously to sums of squares? This question came from a friend of mine, and I did not understand the relation between being positive definite and being simultaneously diagonalizable. I appreciate any help! Thanks

• Wouldn't that at least require that $g$ is also posdef? – Hagen von Eitzen Jul 11 '16 at 15:00
• @Hagen, I don't know; the question only asks for $f$ to be posdef. – L.F. Cavenaghi Jul 11 '16 at 15:02

I'm assuming that we are talking about a finite-dimensional real vector space $V$. You can use the positive definite form $f$ in order to install a scalar product $\langle\cdot,\cdot\rangle$ on $V$ such that $f(x)=\langle x,x\rangle$ for all $x$. Then the matrix of $f$ with respect to any orthonormal basis is simply the identity matrix. By the spectral theorem for real symmetric matrices you then can choose an orthonormal basis of $V$ which diagonalizes $g$. With respect to this basis both $f$ and $g$ are diagonalized.

• thank you so much! What a wonderful answer. – L.F. Cavenaghi Jul 11 '16 at 15:59

A relatively direct consequence of the (nontrivial) spectral theorem is the following (Lang, Algebra, chap. XV, corollary 7.3): Let $E$ be a non-zero finite dimensional vector space over the reals, with a positive definite symmetric form $f$. Let $g$ be another symmetric form on $E$. Then there exists a basis of $E$ which is orthogonal for both $f$ and $g$. In this basis, $f$ and $g$ would then have the following forms $$f(x) = \sum_{i=1}^{\dim E} \lambda_i\,x_i^2 \qquad g(x)=\sum_{i=1}^{\dim E} \mu_i\, x_i^2.$$ Because $f$ is positive definite, $\lambda_i > 0$ and you could modify the basis so that $\lambda_i = 1$. But of course, there is no way to change the signs of the $\mu_i$.
Alternatively, you could change the basis so that $\mu_i \in \{-1,0,1\}$, but you have to make a choice: you cannot have a simple form for both the $\lambda_i$'s and the $\mu_i$'s. I hope this falls under your definition of "reducing simultaneously" the forms to "sums of squares". Anyway, there isn't really hope for a better statement.

• thank you so much! Nice presentation, but for simplicity, I chose Christian's answer. – L.F. Cavenaghi Jul 11 '16 at 16:00

Positive definiteness probably simplifies the presentation, but all you really need is for one of them (the Hessian or Gram matrices) to be invertible. I like the discussion in the first edition of Horn and Johnson; see case II.b.1 in the table. Here is a good example, in which the initial approach gives a not-quite diagonal pair of matrices, and an extra step is needed (but guaranteed to work): Congruence and diagonalizations
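The construction behind the accepted answer can be carried out numerically. The sketch below (my own addition, not from the thread; a $2\times2$ example in plain Python, with the example matrices and helper names being my assumptions) uses a Cholesky factor of the matrix of $f$ plus one Jacobi rotation to produce a basis in which $f$ becomes the identity and $g$ becomes diagonal.

```python
import math

def matmul(X, Y):
    # product of two 2x2 matrices stored as nested lists
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

A = [[2.0, 1.0], [1.0, 3.0]]   # matrix of the positive definite form f
B = [[0.0, 4.0], [4.0, -1.0]]  # matrix of an arbitrary symmetric form g

# Cholesky factorization A = L L^T, written out for the 2x2 case
l11 = math.sqrt(A[0][0])
l21 = A[1][0] / l11
l22 = math.sqrt(A[1][1] - l21 * l21)
Linv = [[1 / l11, 0.0], [-l21 / (l11 * l22), 1 / l22]]

# M = L^{-1} B L^{-T} is symmetric; diagonalize it with one rotation
M = matmul(matmul(Linv, B), transpose(Linv))
theta = 0.5 * math.atan2(2 * M[0][1], M[0][0] - M[1][1])
Q = [[math.cos(theta), -math.sin(theta)], [math.sin(theta), math.cos(theta)]]

# change-of-basis matrix S = L^{-T} Q
S = matmul(transpose(Linv), Q)
FA = matmul(matmul(transpose(S), A), S)  # should be the identity
FB = matmul(matmul(transpose(S), B), S)  # should be diagonal

assert abs(FA[0][0] - 1) < 1e-9 and abs(FA[1][1] - 1) < 1e-9
assert abs(FA[0][1]) < 1e-9 and abs(FB[0][1]) < 1e-9
```

In the new coordinates $f(x)=x_1^2+x_2^2$ and $g(x)=\mu_1 x_1^2+\mu_2 x_2^2$ with $\mu_i$ the diagonal entries of `FB`, exactly as in Lang's corollary quoted above.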
https://mathhelpboards.com/threads/sets-and-venn-diagrams.5946/
# Sets and Venn diagrams

#### bergausstein ##### Active member

the associative axioms for the real numbers correspond to the following statements about sets: for any sets A, B, and C, we have $(A\cup B)\cup C=A\cup (B\cup C)$ and $(A\cap B)\cap C=A\cap (B\cap C)$. Illustrate each of these statements using Venn diagrams. Can you show me how to draw the first one, with the union of the sets? After that I'll try to illustrate the second statement. I just want to get an idea how to go about it. Thanks!

#### Evgeny.Makarov ##### Well-known member MHB Math Scholar

The task to "illustrate" is not really mathematical. It basically says, "Draw a picture that you believe would be helpful for grasping associativity of set union". But what people believe to be helpful or relevant may differ. I would draw it like this. The idea is that the red union is done first and then one adds the blue set. In the end we are interested in the colored region, which is the same in both cases.

#### bergausstein ##### Active member

This is what I tried for the associativity of intersection. The black circle is A, the red one is B and the blue one is C.

#### bergausstein ##### Active member

Follow-up question: can we also illustrate the distributive law of intersection over union? And union over intersection? How would that look?

#### LATEBLOOMER ##### New member

Yes we can. A = BLACK, B = RED, C = BLUE. For the intersection over union I would illustrate it like this. Here's my try. Is this correct?

#### caffeinemachine ##### Well-known member MHB Math Scholar

This is what I tried for the associativity of intersection. The black circle is A, the red one is B and the blue one is C. View attachment 1151

Good job!

- - - Updated - - -

Here's my try. View attachment 1154 Is this correct?

To me this is okay. But as Evgeny.Makarov pointed out, this might not be okay to someone else, because he might say 'no, this does not illustrate the identity correctly' and no one can do anything about it.
Don't give it too much importance. Just be sure to understand why $A\cup(B\cap C)=(A\cup B)\cap (A\cup C)$ is true. Can you show this without a diagram?

#### bergausstein ##### Active member

Let's say $A=\{1,2,3\}$, $B=\{4,5,6\}$, $C=\{6,7,8\}$. Then $A\cup (B\cap C)=(A\cup B)\cap (A\cup C)$ becomes $\{1,2,3\}\cup \{6\}=\{1,2,3,4,5,6\}\cap \{1,2,3,6,7,8\}$, i.e. $\{1,2,3,6\}=\{1,2,3,6\}$. But I know there's a more general way to show why that is true. Can you show me your work? Thanks! I'm weak when it comes to generalizing.

#### caffeinemachine ##### Well-known member MHB Math Scholar

The general way of showing that $X=Y$ is to show that $X\subseteq Y$ and $Y\subseteq X$. Here $X=A\cup(B\cap C)$ and $Y=(A\cup B)\cap (A\cup C)$. Let $x\in X$. Can you show that $x$ is in $Y$ too?

#### bergausstein ##### Active member

If $X\subseteq Y$ and $Y\subseteq X$, it follows that $X=Y$, and we may conclude that X and Y have precisely the same elements. We can now say that $x\in Y$, given that $x\in X$. Am I right? And for educational purposes, can anybody show me your complete work for proving the statement $A\cup(B\cap C)=(A\cup B)\cap (A\cup C)$? Thanks!
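Since the last post asks for a complete argument, the usual element-chasing proof runs: if $x\in A\cup(B\cap C)$, then either $x\in A$, in which case $x$ belongs to both $A\cup B$ and $A\cup C$, or $x\in B\cap C$, in which case $x$ again belongs to both; the reverse inclusion is similar. As an editorial addition (not from the thread), the identity can also be checked exhaustively over a small universe in Python:

```python
from itertools import chain, combinations

U = {1, 2, 3, 4}

def subsets(s):
    # all 2^|s| subsets of s
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

# check A ∪ (B ∩ C) == (A ∪ B) ∩ (A ∪ C) for every triple of subsets of U
for A in subsets(U):
    for B in subsets(U):
        for C in subsets(U):
            assert A | (B & C) == (A | B) & (A | C)
            assert A & (B | C) == (A & B) | (A & C)  # the dual law
print("distributive laws hold on all", len(subsets(U)) ** 3, "triples")
```

This is not a proof, of course, but it is a useful sanity check before writing the inclusion argument out formally.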
https://mathematica.stackexchange.com/questions/83614/how-can-i-plot-histogram-with-the-same-number-of-values-in-every-bin
# How can I plot histogram with the same number of values in every bin?

For example, I have a 100-value sample. I'd like to build a histogram in which every bin contains, for example, 10 values. How can I do that? Thanks.

You can use the values of the quantiles of your sample as bin delimiters for your histogram. You can think of $n$-quantiles as those threshold values that divide your data set into $n$ equal-sized subsets. Let's generate some sample data and set your requirements, i.e. the number of points per bin:

SeedRandom[10]
sample = RandomVariate[NormalDistribution[], 200];
datapointsperbin = 10;
numberofbins = IntegerPart[Length[sample]/datapointsperbin];

This is what a regular histogram with evenly spaced bins would look like for that sample:

Histogram[sample]

Now we use Quantile to calculate numberofbins quantiles for your distribution, then we use those values as bin delimiters for your histogram.

Histogram[
 sample,
 {Table[Quantile[sample, i/numberofbins], {i, 1, numberofbins - 1}]}
]

You can see from the vertical axis of the histogram that each bin contains 10 samples, as specified by the value of datapointsperbin. Having done this, however, I still wonder why you need such a histogram. Of course, if what you needed was to calculate the intervals that would accomplish such binning, given your sample, the magic is all in the Quantile function, so you can get those values directly as well:

Table[Quantile[sample, i/numberofbins], {i, 1, numberofbins - 1}]

{-1.8614, -1.42414, -1.21859, -0.971859, -0.905122, -0.707023, -0.470983, -0.274088, -0.163548, 0.0100698, 0.122639, 0.271601, 0.383704, 0.475579, 0.608299, 0.873699, 1.03975, 1.33463, 1.81741}

• Wow, thanks for the help. One more question: how do I change the y-axis value from 'number of values in each bin' to 'probability density function value for each bin'?
– instajke May 16 '15 at 18:02
• Histogram can do that for you: just add "PDF" as the bin height specification, as follows: Histogram[sample, {Table[Quantile[sample, i/numberofbins], {i, 1, numberofbins - 1}]}, "PDF"]. – MarcoB May 16 '15 at 18:16

You can also define a function that produces the required bin list:

ClearAll[bF]
bF[n_] := {Quantile[#, Range[# - 1]/# &[Quotient[Length@#, n]]]} &

where we used the fact that the second argument of Quantile can be a List.

data = RandomVariate[NormalDistribution[], 200];
Row[Histogram[data, bF[10][data], #, PlotLabel -> Style[#, 16, "Panel"], ChartElementFunction -> "GlassRectangle", ImageSize -> 400, ChartStyle -> 63] & /@ {"PDF", "Count"}]

Update: With multiple datasets, we can specify the bin lengths in a number of ways. Using one of the data sets as the source for specifying the bin lengths, we get an interesting comparative histogram of the two data sets:

datab = RandomVariate[NormalDistribution[], {2, 200}];
Row[Histogram[datab, bF[10][First@datab], #, PlotLabel -> Style[#, 16, "Panel"], ChartElementFunction -> "GlassRectangle", ImageSize -> 400, ChartStyle -> {Red, Blue}, ChartLegends -> Placed[{"data1", "data2"}, Bottom]] & /@ {"PDF", "Count"}]
Row[Histogram[datab, bF[10][Last@datab], #, PlotLabel -> Style[#, 16, "Panel"], ChartElementFunction -> "GlassRectangle", ImageSize -> 400, ChartStyle -> {Red, Blue}, ChartLegends -> Placed[{"data1", "data2"}, Bottom]] & /@ {"PDF", "Count"}]

Specifying bin lengths based on Joined data sets, and on the Union of data sets, we get the following:

Row[Histogram[datab, bF[20][Join @@ datab], #, PlotLabel -> Style[#, 16, "Panel"], ChartElementFunction -> "GlassRectangle", ImageSize -> 400, ChartStyle -> {Red, Blue}, ChartLegends -> Placed[{"data1", "data2"}, Bottom]] & /@ {"PDF", "Count"}]
Row[Histogram[datab, bF[10][First@Union@datab], #, PlotLabel -> Style[#, 16, "Panel"], ChartElementFunction -> "GlassRectangle", ImageSize -> 400, ChartStyle -> {Red, Blue},
ChartLegends -> Placed[{"data1", "data2"}, Bottom]] & /@ {"PDF", "Count"}] • That's absolutely amazing. Thank You a lot. – instajke May 16 '15 at 21:01 • @instajke, my pleasure. – kglr May 16 '15 at 21:14 • Very nice and great idea. – Algohi May 16 '15 at 21:14 • Thank you @Algohi. – kglr May 16 '15 at 21:19
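The same equal-frequency idea ports easily to other environments. Here is a rough Python (standard library) analogue of the Quantile-based binning above; it is my own sketch, not from the thread, and it assumes the sample has no repeated values:

```python
import random

random.seed(10)
sample = [random.gauss(0, 1) for _ in range(200)]
per_bin = 10
nbins = len(sample) // per_bin  # 20 bins of 10 points each

# Bin edges are empirical quantiles: after sorting, every per_bin-th
# value splits the data into equal-count bins.
xs = sorted(sample)
edges = [xs[i * per_bin] for i in range(1, nbins)]

# Count how many points fall into each half-open bin [lo, hi)
bounds = [float("-inf")] + edges + [float("inf")]
counts = [sum(lo <= x < hi for x in sample)
          for lo, hi in zip(bounds, bounds[1:])]

assert len(counts) == nbins
assert all(c == per_bin for c in counts)  # every bin holds exactly 10 values
```

The bin *widths* vary, but each bin's count is constant, which is exactly what the Mathematica `Quantile` construction achieves.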
https://math.stackexchange.com/questions/2092647/how-many-possible-numbers-do-i-have/2092650
# How many possible numbers do I have?

Stupid question from stupid non-math-orientated person here. I have a list of four-digit sequences. These sequences consist of and iterate through a letter of the alphabet followed by a range of numbers from 100-999. So the list starts at A100, followed by A101, A102... A999, B100, B101... right up to Z999. Assuming each number in the list is unique and there are no repeats, how many permutations does that result in? How would I calculate it? I had initially thought it was as simple as: $$26 \times 899 = 23{,}374 \text{ numbers}$$ ...but on looking deeper into the maths behind permutations and combinations I feel like I may have made a stupid assumption there. If my initial calculation was wrong, how exactly would I go about doing this?

## 2 Answers

You are right except for the $899$! Note that $100$ is the first number, $101$ is the second, ..., $199$ is the 100th, ..., $999$ is the 900th! So there are $26 \cdot 900$ items in the list.

• Damn, great catch. You also have no idea how glad I am that I'm not as bad at math as I thought, lol. Thanks so much. – Hashim Jan 10 '17 at 23:53
• @Hashim maths can be quite intuitive! As long as you use your brain to think, you won't do as badly as you think you will – RGS Jan 10 '17 at 23:56

Remember that the first number is $100$, so the $900^{th}$ number is $999$. You were right except for this, so: Letters $(26)\times$ range $(900)$: $$26(900) = 23{,}400 \ \mathrm{solutions}$$ Keep in mind that your range and total aren't the same thing. You have a total of $900$ numbers for each letter, even though your range is 100-999 (inclusive). The generally proper way to express your range is 100-1000, as it's generally accepted that the last number is excluded from a set. This is also known as the fence post problem, explained here

• Your answer is incorrect: there are not 899 numbers but 900. – RGS Jan 10 '17 at 23:45
• @RSerrao fixed. I read the question, and still did it wrong myself.
– Travis Jan 10 '17 at 23:46 • Haha, I'm glad to hear it wasn't just me. Thanks for the extra info about the range vs total. I also didn't know that about the last number being excluded - is there any particular reason for that? Seems like it could be confusing. – Hashim Jan 11 '17 at 0:02 • @Hashim, it makes the math part less confusing, $1000-100 = 900$ while $999-100 = 899$, which leads to the exact mistake you made. – Travis Jan 11 '17 at 0:04
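The count is small enough to verify by brute force. This quick Python check (an editorial addition, not from the thread) enumerates every sequence directly:

```python
# Enumerate the sequences A100 ... Z999 and count them
letters = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
ids = [f"{letter}{n}" for letter in letters for n in range(100, 1000)]

assert len(ids) == 26 * 900 == 23400
assert ids[0] == "A100" and ids[-1] == "Z999"
assert len(set(ids)) == len(ids)  # all entries are unique
```

Note `range(100, 1000)`: Python's half-open ranges match the "100-1000 with the last number excluded" convention mentioned in the second answer, and they sidestep the fence-post error automatically.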
https://math.stackexchange.com/questions/909871/expressing-12-sin-omega-t-10-in-cosine-form
# Expressing $12\sin( \omega t - 10)$ in cosine form

$$12\sin( \omega t - 10)$$ I understand how it's solved when using the graphical method; however, I'm having trouble understanding something about the trigonometric identities method. The solution in the textbook goes like this (it wants positive amplitudes; all angles are in degrees): $$12\cos( \omega t - 10 - 90)$$ $$12\cos( \omega t - 100)$$ I know that in order to convert from a sine to a cosine angle you either add or subtract $90$ degrees. What I don't understand is whether I should add or subtract to get the equivalent with positive amplitude. The way I approach this is that I imagine the graph where $+\cos \omega t$ is the positive $x$-axis, $-\cos \omega t$ is the negative $x$-axis, $+\sin \omega t$ is the negative $y$-axis and $-\sin \omega t$ is the positive $y$-axis. Since I want to change from positive-amplitude sine to positive-amplitude cosine, I add $90$ degrees. But apparently that is incorrect. Please explain this to me.

• Maybe this Wikipedia article might help you to understand the sine and cosine phases en.wikipedia.org/wiki/Cosine#Unit-circle_definitions There are nice graphs on the right. – Matthias Aug 26 '14 at 15:14
• You're applying the transformation wrong. The typical relation is $\sin(\alpha)=\cos(90°-\alpha)$, so you have to subtract the original argument from 90 (and you're doing the opposite). – cjferes Aug 26 '14 at 15:14

The identities you can use are: \begin{align} \sin x&=\cos(90°-x)\\ \cos x&=\cos(-x) \end{align} Therefore $$\sin(\omega t-10°)=\cos(90°-(\omega t-10°))= \cos(100°-\omega t)=\cos(\omega t-100°).$$ Of course, you could also directly use $$\sin x=\cos(90°-x)=\cos(x-90°).$$

• What about $-10\cos(\omega t+50)$ to sine? $-10\sin(90-(\omega t+50))$ becomes $-10\sin(40-\omega t)$, which becomes $10\sin(\omega t -40)$, correct?
– Osama Qarem Aug 26 '14 at 16:40

Look at the graphs of $\cos,\sin$: you can see that $\cos x = \sin (x+ {\pi \over 2})$ or, equivalently, $\sin x = \cos (x- {\pi \over 2})$ (use the appropriate change in degrees if you prefer). In your case, let $x=\omega t -10°$; then $\sin (\omega t -10°) = \cos(\omega t -10°-90°)$.

• Which is $\sin\left(x\right)=\cos\left(x-90\right)$ here – Matthias Aug 26 '14 at 15:15
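Both identities in this thread are easy to spot-check numerically. The following Python snippet (my own addition, not from the original answers) samples several angles and confirms $12\sin(\omega t-10°)=12\cos(\omega t-100°)$ as well as the follow-up conversion from the comments:

```python
import math

def deg(x):
    # convert degrees to radians for the math-module trig functions
    return math.radians(x)

# 12 sin(wt - 10°) equals 12 cos(wt - 100°) at every sampled angle
for wt in range(0, 720, 37):
    lhs = 12 * math.sin(deg(wt - 10))
    rhs = 12 * math.cos(deg(wt - 100))
    assert abs(lhs - rhs) < 1e-12

# the comment's conversion: -10 cos(wt + 50°) = 10 sin(wt - 40°)
for wt in range(0, 720, 37):
    assert abs(-10 * math.cos(deg(wt + 50)) - 10 * math.sin(deg(wt - 40))) < 1e-12
```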
http://mathematica.stackexchange.com/questions/13580/proving-a-recurrence-in-mathematica
# Proving a recurrence in Mathematica I have $$j_n=\int_0^1 x^{2n} \sin(\pi x)dx.$$ How do I show that $$j_{n+1}= \frac{1}{\pi^2}(\pi- (2n+1)(2n+2)j_n)\, ?$$ I keep getting a recurring integration by parts and I can't simplify it. Please tell me where I'm going wrong. - Welcome to Mathematica.SE! I suggest the following: 1) As you receive help, try to give it too, by answering questions in your area of expertise. 2) Read the FAQs! 3) When you see good Q&A, vote them up by clicking the gray triangles, because the credibility of the system is based on the reputation gained by users sharing their knowledge. ALSO, remember to accept the answer, if any, that solves your problem, by clicking the checkmark sign –  chris Oct 26 '12 at 16:27 Integration by parts gives $$\int_0^1 x^k \sin(\pi x) dx = \frac{1}{\pi} + \frac{k}{\pi}\int_0^1 x^{k-1} \cos(\pi x) dx$$ and $$\int_0^1 x^k \cos(\pi x) dx = - \frac{k}{\pi}\int_0^1 x^{k-1} \sin(\pi x) dx.$$ Applying these rules in succession to $j_{n+1}$ immediately gives $$j_{n+1} = \frac{1}{\pi} - \frac{(2n+1)(2n+2)}{\pi^2} j_n$$ which clearly is equal to the desired formula. Notice that this works for any real $n \gt -1$ (so that the integrals converge). Brute-force attempts at integration produce error functions (Erf), Dawson's integrals (DawsonF), or generalized hypergeometric functions (HypergeometricPFQ), and these do not readily simplify. Let's try something more elementary. Having seen that the mathematical solution above requires two integrations by parts, we ought to be interested in the second derivative of $x^{2n+2}\sin(\pi x)$: D[x^(2 n + 2) Sin[Pi x], {x, 2}] $(2 n+1) (2 n+2) x^{2 n} \sin (\pi x)-\pi ^2 x^{2 n+2} \sin (\pi x) + \color{Red}{2 \pi (2 n+2) x^{2 n+1} \cos (\pi x)}$ The first two terms are exactly what we would be integrating when evaluating $(2n+1)(2n+2)j_{n} - \pi^2 j_{n+1}$ in order to check the equality we're trying to prove. The last term is a problem: how to make it go away? 
Well, still motivated by integrations by parts, we ought to examine a similar derivative, D[x^(2 n + 1) Sin[Pi x], x] $(2 n+1) x^{2 n} \sin (\pi x) + \color{Red}{\pi x^{2 n+1} \cos (\pi x)}$ The second term is a constant multiple of what we're trying to get rid of and the first one is not worrisome: it's proportional to the integrand for $j_n$. So, an appropriate linear combination of these two derivatives ought to clear things up a bit: Expand[D[x^(2 n + 2) Sin[Pi x], {x, 2}] - 2 (2 + 2 n) D[x^(2 n + 1) Sin[Pi x], x] , x] $-(2 n+1) (2 n+2) x^{2 n} \sin (\pi x)-\pi ^2 x^{2 n+2} \sin (\pi x)$ This can be stated in reverse: the indefinite integral of the result is what we differentiated, so let's just take one less derivative to find out what it is! D[x^(2 n + 2) Sin[Pi x], x] - 2 (2 + 2 n) x^(2 n + 1) Sin[Pi x] $\pi x^{2 n+2} \cos (\pi x)-(2 n+2) x^{2 n+1} \sin (\pi x)$ By the Fundamental Theorem of Calculus, we can find $-(2n+1)(2n+2)j_n - \pi^2 j_{n+1}$ by evaluating this antiderivative at the endpoints $0$ and $1$: FullSimplify[% /. {x -> #} & /@ {0, 1} // Differences, Assumptions -> n > -1] $\{-\pi \}$ That is, these Mathematica manipulations have demonstrated that \eqalign{ -(2n+1)(2n+2)j_n - \pi^2 j_{n+1} &= \int_0^1 \left(-(2n+1)(2n+2)x^{2n}\sin{\pi x} - \pi^2 x^{2n+2}\sin(\pi x)\right) dx \\ &= \left(\pi x^{2 n+2} \cos (\pi x)-(2 n+2) x^{2 n+1} \sin (\pi x)\right)\mid_0^1 \\ &= -\pi. } This obviously is algebraically equivalent to the desired result. But in the process we have obtained an indefinite integral for the integrands associated with this linear combination of $j_n$ and $j_{n+1}$. - I have more (mathematica) questions than answers. 
Let us define J[n_] := Integrate[ x^(2 n) Sin[Pi x], {x, 0, 1}] (* (Pi*HypergeometricPFQ[{1 + n}, {3/2, 2 + n}, -Pi^2/4])/(2 + 2*n) *) and Clear[JJ]; JJ[n_] := JJ[n]=1/Pi^2 (Pi - (2 n ) (2 n -1) JJ[n - 1]) // Simplify JJ[0] = J[0]; We have Table[J[n] - JJ[n], {n, 0, 6}] // Simplify (* {0,0,0,0,0,0,0} *) which suggests the recursion is 'likely' (Thanks to KennyColnago for pointing out an offset in the recursion) My question would be: why Table[J[n],{n,0,4}]//FindSequenceFunction does not provide the recursion? And I don't understand why Clear[j]; eqn = j[n] == b j[n - 1] + c /. Solve[Table[J[n] == b J[n - 1] + c, {n, 1, 2}] // Simplify, {b,c}][[1]] Table[eqn /. j -> J, {n, 1, 3}] // Simplify produces (* {True,True,False} *) - There is an index problem. Your definition of JJ[n_] has JJ[n-1] on the right-hand side, and so should have the $n$ there as $n-1$. That is, JJ[n_]:= 1/Pi^2 (Pi - (2 (n-1) + 2) (2 (n-1) + 1) JJ[n - 1]). Thus I believe the recursion is correct. –  KennyColnago Oct 24 '12 at 18:27 @KennyColnago oops!!! –  chris Oct 24 '12 at 18:30 Why are you asking a question as an answer? You should ask a new question instead –  rm -rf Oct 24 '12 at 21:07 @rm-rf I did provide a partial answer; I wanted to have a clean proof and, having been told about FindSequenceFunction not long ago on this mathematica.stackexchange.com/questions/11042/… I was expecting it to work. –  chris Oct 25 '12 at 7:39 Quite a manual process : Solve the indefinite integral, then calculate the definite one : indefInt[n_, x_] = Simplify[Integrate[x^(2 (n)) Sin[Pi x], x], Assumptions -> {0 <= x <= 1, n \[Element] Integers}]; int[n_] = Assuming[n \[Element] Integers, Limit[indefInt[n, x], x -> 1, Direction -> +1] - Limit[indefInt[n, x], x -> 0, Direction -> -1]]; Now define an identity regarding the incomplete Gamma function : recursion = Gamma[n_, z_] -> (n - 1) Gamma[n - 1, z] + z^(n - 1) Exp[-z]; Finally put everything together : en = FullSimplify[int[n] /. 
recursion, Assumptions -> {n \[Element] Integers, n > 0}] ; enp1 = FullSimplify[int[n + 1] /. recursion /. recursion /. recursion, Assumptions -> {n \[Element] Integers, n > 0}]; FullSimplify[enp1 - 1/Pi^2 (Pi - (2 n + 1) (2 n + 2) en), Assumptions -> {n \[Element] Integers, n > 0}] (* 0 *) ` - do you know why mma does not provide your closed form as the naive integration solution? Do you know why it does not find the recursion either? –  chris Oct 24 '12 at 19:19 @chris It gives a closed form solution for the integral but it involves hypergeometric functions and I am not aware of similar recursions. –  b.gatessucks Oct 24 '12 at 19:24
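The recurrence can also be sanity-checked numerically outside Mathematica. Here is a minimal Python sketch (the helper names `simpson` and `j` are ours, and a hand-rolled composite Simpson rule stands in for `NIntegrate`):

```python
import math

def simpson(f, a, b, m=2000):
    """Composite Simpson rule on [a, b] with 2*m subintervals."""
    h = (b - a) / (2 * m)
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, m + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, m))
    return s * h / 3

def j(n):
    """j_n = integral_0^1 x^(2n) sin(pi x) dx, evaluated numerically."""
    return simpson(lambda x: x ** (2 * n) * math.sin(math.pi * x), 0.0, 1.0)

# Check j_{n+1} = (pi - (2n+1)(2n+2) j_n) / pi^2 for the first few n.
for n in range(5):
    lhs = j(n + 1)
    rhs = (math.pi - (2 * n + 1) * (2 * n + 2) * j(n)) / math.pi ** 2
    assert abs(lhs - rhs) < 1e-8
```

This is only a numerical check for small $n$, of course, not a proof; the proof is the integration-by-parts argument above.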
http://math.stackexchange.com/questions/235350/what-is-the-difference-between-kernel-and-null-space
# What is the difference between kernel and null space? What is the difference, if any, between kernel and null space? I previously understood the kernel to be of a linear map and the null space to be of a matrix: i.e., for any linear map $f : V \to W$, $$\ker(f) \cong \operatorname{null}(A),$$ where • $\cong$ represents isomorphism with respect to $+$ and $\cdot$, and • $A$ is the matrix of $f$ with respect to some source and target bases. However, I took a class with a professor last year who used $\ker$ on matrices. Was that just an abuse of notation or have I had things mixed up all along? - ## 2 Answers The terminology "kernel" and "nullspace" refer to the same concept, in the context of vector spaces and linear transformations. It is more common in the literature to use the word nullspace when referring to a matrix and the word kernel when referring to an abstract linear transformation. However, using either word is valid. Note that a matrix is a linear transformation from one coordinate vector space to another. Additionally, the terminology "kernel" is used extensively to denote the analogous concept for morphisms of various other algebraic structures, e.g. groups, rings, and modules, and in fact we have a definition of kernel in the very abstract context of abelian categories. - The mapping could in general be a homomorphism of algebraic structures. What else do you have in mind? –  Manos Nov 12 '12 at 14:15 "Was that just an abuse of notation or have I had things mixed up all along?" Neither. Different courses/books will maintain/not maintain such a distinction. If a matrix represents some underlying linear transformation of a vector space, then the kernel of the matrix might mean the set of vectors sent to 0 by that transformation, or the set of lists of numbers (interpreted as vectors in $\mathbb{R}^n$) representing those vectors in a given basis, etc.
The context should make things clear, and every claim about, say, dimensions of kernels/nullspaces should still hold despite the ambiguity. As Manos said, "kernel" is used more generally, whereas "nullspace" is used essentially only in linear algebra. -
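Whatever one calls it, the object is computed the same way. Below is a minimal Python sketch of finding a kernel/nullspace basis by Gauss–Jordan elimination over the rationals; the function name `null_space` is ours, and in practice one would reach for `sympy.Matrix.nullspace` or `scipy.linalg.null_space` instead:

```python
from fractions import Fraction

def null_space(A):
    """Return a basis for {x : Ax = 0} via Gauss-Jordan elimination."""
    rows = [[Fraction(v) for v in row] for row in A]
    m, n = len(rows), len(rows[0])
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if rows[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column: free variable
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [v / rows[r][c] for v in rows[r]]      # scale pivot row to 1
        for i in range(m):
            if i != r and rows[i][c] != 0:
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for fc in free:                       # one basis vector per free column
        x = [Fraction(0)] * n
        x[fc] = Fraction(1)
        for i, pc in enumerate(pivots):
            x[pc] = -rows[i][fc]
        basis.append(x)
    return basis

# ker of [[1, 2, 3], [2, 4, 6]] is spanned by (-2, 1, 0) and (-3, 0, 1).
print(null_space([[1, 2, 3], [2, 4, 6]]))
```

The same routine computes $\ker(f)$ for an abstract linear map once a matrix of $f$ has been fixed, which is exactly the identification in the question.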
https://physics.stackexchange.com/questions/295239/rigid-body-equilibrium-word-problem
# Rigid body equilibrium word problem In my 100-level university physics course, we are just starting to touch on rigid bodies and tension. While I am fairly certain that I approached this the right way, I would appreciate if someone could look at my work and confirm that this is a valid solution to the following problem: When you arrive at your favorite restaurant, you are greeted by a large wooden sign. The left end of the sign is held by a bolt, the right end is tied to a rope that makes an angle of $20^\circ$ with the horizontal. If the sign is uniform, $3.2m$ long, and has mass of $16kg$, what is (a) the tension in the rope and (b) the magnitude and direction of force, $\vec P$, exerted by the bolt? Starting by listing the sums of forces per direction: $\sum \vec F_x = \vec P_x = \vec T \cos \theta$ $\sum \vec F_y = \vec P_y + \vec T \sin \theta = \vec F_g \rightarrow \vec P_y = \vec F_g - \vec T \sin \theta$ $\sum \tau = \vec P_y (0L) + \vec T \sin \theta (L) = \vec F_g (\frac{1}{2} L) \rightarrow \vec T = \frac{\vec F_g}{2 \sin \theta}$ NOTE: I use $g = 10 m/s^2$ for simplicity. $\vec T = \frac{\vec F_g}{2 \sin \theta} = \frac{mg}{2 \sin \theta} = \frac{(16 kg)(10 m/s^2)}{2 \sin 20^\circ} = 234 N$ $\vec P_x = \vec T \cos \theta = (234 N) \cos 20^\circ = 220 N$ $\vec P_y = \vec F_g - \vec T \sin \theta = mg - \vec T \sin \theta = (16 kg)(10 m/s^2) -(234 N) \sin 20^\circ = 80 N$ $|\vec P| = \sqrt{(\vec P_x)^2 + (\vec P_y)^2} = \sqrt{(220 N)^2 + (80 N)^2} = 234 N$ $\theta_P = \arctan(\frac{P_y}{P_x}) = \arctan(\frac{80 N}{220 N}) = 20^\circ$ (a) The tension in the rope is $234 N$. (b) The magnitude of the force exerted by the bolt is $234 N$ while the direction is $20^\circ$ to the horizontal. I suppose what has me second-guessing myself is that the magnitude and direction of the force exerted by the bolt are the same as the tension, just flipped about the y-axis. 
Assuming this is correct, is this because the sign is in static equilibrium and the net forces must be zero, so the only way for that to be possible is for forces along the x- and y-axes to balance out, and it just so happens that the tension in the y-direction is half of the force of gravity in this problem? • Asking us to check your work is not a good question here. This is not a homework site. – sammy gerbil Nov 28 '16 at 0:08 • @sammygerbil And yet, there is a "homework-and-exercises" tag. – Jordan Nov 28 '16 at 0:30 • Yes, and to go with it there is a homework (and exercises) policy. The main points are : 1. show your attempt (which you have done), and 2. ask about a conceptual difficulty. Your question in the final paragraph could count as the latter. I apologise, I overlooked that. – sammy gerbil Nov 28 '16 at 0:43 • @sammygerbil - if you read all the way to the end of the question, there's a conceptual question here: "is it coincidence that these things balance out, or is that supposed to happen?" – Floris Nov 28 '16 at 0:44 • @sammygerbil - we were writing at the same time... and we agree. – Floris Nov 28 '16 at 0:47 The left hand diagram shows the three forces acting on the sign: the attractive force due to the Earth (weight), the force exerted by the rope (tension) and the force on the sign due to the bolt. Because this is a static equilibrium situation the lines of action of the three forces must all meet at a point which is $X$ in the diagram. The right hand diagram is the vector addition of the three forces which must be zero as the sign is in static equilibrium. This is sometime called the "triangle of forces". The symmetry of the situation shows that the angle that the line of action of the force on the sign due to the bolt relative to the horizontal is $20^\circ$. I hope that this also shows that this sort of problem can be solved in a few lines using the sine rule. 
$\dfrac{16\;g}{\sin 40}= \dfrac{\text{tension}}{\sin 70}= \dfrac{\text{bolt force }}{\sin 70}$. • Yay for using diagrams. Funny thing is, I had a different picture in mind (where the sign was between two walls). Yours makes much more sense. Aha moment. – Floris Dec 1 '16 at 1:32 Your statement in the last paragraph is correct. When there is static equilibrium, we know that both the forces and torques must balance. To balance the forces, we know that the horizontal components of force must be equal and opposite. The sum of the vertical components must equal the force of gravity. And balancing torques means that the two vertical forces must be equal (so the board does not start rotating). All these things together mean that the two forces must be equal, except for a sign flip in Y.
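The arithmetic in the question is easy to reproduce in code. A short Python sketch (taking $g = 10\ \mathrm{m/s^2}$ as the post does; the variable names are ours) confirms that the bolt force equals the tension in magnitude and makes the same $20^\circ$ angle with the horizontal:

```python
import math

m, g, theta = 16.0, 10.0, math.radians(20)

T = m * g / (2 * math.sin(theta))   # torque balance about the bolt
Px = T * math.cos(theta)            # horizontal force balance
Py = m * g - T * math.sin(theta)    # vertical force balance
P = math.hypot(Px, Py)              # magnitude of the bolt force
angle = math.degrees(math.atan2(Py, Px))

print(round(T), round(Px), round(Py), round(P), round(angle))
# prints: 234 220 80 234 20
```

Since torque balance forces $T\sin\theta = \tfrac{1}{2}mg$, the vertical components are automatically equal, which is exactly the symmetry the answers describe.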
https://math.stackexchange.com/questions/2288217/2-8-2-stephen-abbott-absolute-convergence-for-double-series
# 2.8.2 Stephen Abbott : Absolute convergence for double series So here is the problem. Given absolute convergence for a double series (the infinite sum over $|a_{ij}|$), show the double series $(a_{ij})$ converges. The proof strategy is: 1) keep one index fixed - so given that $i$ is fixed, we know the series over $j$ converges absolutely to some $b_i$, i.e. $\Sigma_j |a_{ij}|$ converges to $b_i$, so the actual series $\Sigma_j a_{ij}$ must converge to some $b'_i$. Also we know $b'_i \leq b_i$ (because $a_{ij} \leq |a_{ij}|$). Now if we knew $b'_i \geq 0$, then the comparison test applies (2.7.4). And we are done (just have to take the infinite sum over $b_i$). But - it is not obvious to me why $b'_i \geq 0$ (I know $b_i \geq 0$ for sure as it's the limit of the absolute values). I am missing something obvious - HELP! Based on T. Gunn's response, we have $|b'_i| \leq b_i$, so $|b'_i| \geq 0$, and the comparison test shows $\Sigma_i b'_i$ converges absolutely - hence $\Sigma_i b'_i$ converges. • Time to learn some math typesetting. – zhw. May 19 '17 at 19:12 • just did! would help if someone can figure this out – pythOnometrist May 19 '17 at 19:18 • Your notion of double series convergence is $\lim_{N\to \infty} \sum_{i=1}^{N}\sum_{j=1}^{N} a_{ij}$ exists and is finite? – zhw. May 19 '17 at 19:22 • yeah using rectangles/ squares etc. – pythOnometrist May 19 '17 at 19:27 Suppose $\sum_i \sum_j |a_{ij}|$ converges. Then for each $i$, $\sum_j |a_{ij}|$ converges. Hence $\sum_j a_{ij}$ converges. The key observation from here is the "Infinite Triangle Inequality": $$\left\lvert \sum_{j = 1}^\infty a_{ij} \right\rvert \le \sum_{j = 1}^\infty |a_{ij}|.$$ • Neat - is there a reference I can go over the derivation - the condition I suppose holds when the series inside the absolute value on the LHS converges and the RHS series converges. Is that it? – pythOnometrist May 19 '17 at 19:28 • Well if we're using sums over squares as our starting point, I'd say you have more work left to do. – zhw.
May 19 '17 at 19:50 The comparison test can only be used if the terms of the series are nonnegative. Suppose the series $\sum_{i,j}|a_{ij}|$ converges. Let $b_{ij} = \max(a_{ij},0)$ and $c_{ij} = \max(-a_{ij},0)$. Since $0 \leqslant b_{ij} \leqslant |a_{ij}|$ and $0 \leqslant c_{ij} \leqslant |a_{ij}|$ we can now apply the comparison test to conclude that $\sum_{i,j}b_{ij}$ and $\sum_{i,j}c_{ij}$ converge. Therefore, we have convergence of $$\sum_{i,j}a_{ij} = \sum_{i,j}(b_{ij} - c_{ij}) = \sum_{i,j}b_{ij} - \sum_{i,j}c_{ij}$$ Also be aware that convergence of a double series (to $S$) in the strictest sense is that for any $\epsilon >0$ there exists a positive integer $N$ such that for all $n,m > N$ we have $$\left|\sum_{i=1}^m \sum_{j=1}^n a_{ij} - S \right| < \epsilon$$ • actually T. Gunn's answer addresses the non-negativity requirement. – pythOnometrist May 19 '17 at 19:55 Let $$S_n = \sum_{i,j=1}^{n} a_{ij}, \,\,T_n = \sum_{i,j=1}^{n} |a_{ij}|.$$ For $m<n,$ define $A(m,n) = \{(i,j): 1\le i,j\le n\} \setminus \{(i,j): 1\le i,j\le m\}.$ Then $$\tag 1 |S_n - S_m| = |\sum_{i,j \in A(m,n)}a_{ij}| \le \sum_{i,j \in A(m,n)}|a_{ij}| = T_n- T_m.$$ Because $T_n$ converges, it is Cauchy. $(1)$ then shows $S_n$ is Cauchy. This implies $S_n$ converges, which is the desired conclusion. By assumption for any $i \in \{1, 2, 3, \cdots \}$, $$\sum_{j = 1}^{\infty} |a_{ij}|$$ converges, i.e. $$\sum_{j = 1}^{\infty} a_{ij}$$ converges absolutely. So, $$-\sum_{j = 1}^{\infty} |a_{ij}| \leq \sum_{j = 1}^{\infty} a_{ij} \leq \sum_{j = 1}^{\infty} |a_{ij}|, \\ |\sum_{j = 1}^{\infty} a_{ij}| \leq \sum_{j = 1}^{\infty} |a_{ij}|.$$ Let $$b_i := \sum_{j = 1}^{\infty} |a_{ij}|, \\ c_i := \sum_{j = 1}^{\infty} a_{ij}.$$ Then, $$|c_i| \leq b_i.$$ $$\sum_{i=1}^{\infty} b_i$$ converges by assumption. So, $$\sum_{i=1}^{\infty} |c_i|$$ also converges, i.e. $$\sum_{i=1}^{\infty} c_i$$ converges absolutely. $$\sum_{i=1}^{\infty} c_i = \sum_{i=1}^{\infty} \sum_{j = 1}^{\infty} a_{ij}$$ converges.
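The row-wise comparison argument can be watched in action on a concrete absolutely convergent double series. The following Python sketch uses a toy example of our own, $a_{ij} = (-1)^{i+j}/2^{i+j}$, truncated at a finite size, to check the inequalities $|c_i| \le b_i$ used above:

```python
N = 30  # truncation point; the tails of this example are geometrically small

def a(i, j):
    return (-1) ** (i + j) / 2 ** (i + j)

# b_i = sum_j |a_ij| and c_i = sum_j a_ij, truncated at N terms.
b = [sum(abs(a(i, j)) for j in range(1, N)) for i in range(1, N)]
c = [sum(a(i, j) for j in range(1, N)) for i in range(1, N)]

# Row-wise "Infinite Triangle Inequality": |sum_j a_ij| <= sum_j |a_ij|.
assert all(abs(ci) <= bi + 1e-15 for bi, ci in zip(b, c))

# sum_i |c_i| is dominated by sum_i b_i, so the iterated series converges.
assert sum(abs(ci) for ci in c) <= sum(b) + 1e-15
```

Numerics like this only illustrate the mechanism, of course; the proof itself is the comparison argument in the answers.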
https://math.stackexchange.com/questions/3040030/find-all-pythagorean-triples-x2y2-z2-where-x-21
# Find all Pythagorean triples $x^2+y^2=z^2$ where $x=21$ Consider the following theorem: If $$(x,y,z)$$ are the lengths of a Primitive Pythagorean triangle, then $$x = r^2-s^2$$ $$y = 2rs$$ $$z = r^2+s^2$$ where $$\gcd(r,s) = 1$$ and $$r,s$$ are of opposite parity. According to the previous theorem, my try is the following: since $$x = r^2-s^2$$, $$x$$ is a difference of two squares, implying that $$x \equiv 0 \pmod 4$$. But $$x=21 \not \equiv 0 \pmod 4$$. Hence, there are no triangles having such $$x$$. Is that right? • I don't understand your argument at all. It is simply not true that the difference of two squares is always $\equiv 0 \pmod 4$. – lulu Dec 15 '18 at 0:10 • It was understandable before, but it is wrong. There are primitive triples with $21$. $(21,20,29)$, say – lulu Dec 15 '18 at 0:18 • The hint I wrote out in an earlier comment is close to a complete solution. You should be able to follow it to list all the triples with $21$. – lulu Dec 15 '18 at 0:20 • Squares are either $0 \text{ mod } 4$ or $1 \text{ mod } 4$, and hence there is at least one square which is $1 \text{ mod } 4$ and another which is $0 \text{ mod } 4$, whose difference will be $1 \text{ mod } 4$. – AlexanderJ93 Dec 15 '18 at 0:20 • @MagedSaeed Do not mind the mistakes, that's the way we learn a lot! Your question was fine and properly posted. Bye – gimusi Dec 15 '18 at 0:33 Recall that $$3^2+4^2=5^2 \implies (3\cdot 7)^2+(4\cdot 7)^2=(5\cdot 7)^2$$ and note that $$(21, 220, 221)$$ is a primitive triple.
Your criterion doesn't work because the remainders of squares $$\pmod 4$$ are $$0,1$$, therefore we can't conclude that $$z^2-y^2\equiv 0 \pmod 4$$. What we need to solve is $$21^2=441=3^2\cdot 7^2=(z+y)(z-y)$$ that is, we need to try with • $$z-y=1 \quad z+y=441\implies (x,y,z)=(21,220,221)$$ • $$z-y=3 \quad z+y=147\implies (x,y,z)=(21,72,75)$$ • $$z-y=7 \quad z+y=63\implies (x,y,z)=(21,28,35)$$ • $$z-y=9 \quad z+y=49\implies (x,y,z)=(21,20,29)$$ • My method works if the question asks for primitive triangle. Right? – Maged Saeed Dec 15 '18 at 0:10 • @MagedSaeed Note that also $(21, 220, 221)$ is a primitive triple. – gimusi Dec 15 '18 at 0:13 • Can you find a general form of the solutions please. Refer to the question title. – Maged Saeed Dec 15 '18 at 0:18 • @MagedSaeed Your criterion doesn't work since we can have $z\equiv 1 \pmod 4$ and $y\equiv 0 \pmod 4$. – gimusi Dec 15 '18 at 0:20 • @MagedSaeed I've added something more! You are welcome, Thanks Bye – gimusi Dec 15 '18 at 0:31 We have $$21=x=k(m^2-n^2),\, y=2kmn,\, z=k(m^2+n^2)$$ where $$m,n, k \in \Bbb N$$ with $$\gcd (m,n)=1$$ and $$m,n$$ not both odd. So $$(m^2-n^2,k)\in \{(1,21),(3,7),(7,3),(21,1)\}.$$ Now $$m^2-n^2=1$$ is impossible, so $$(m,n,k)\in \{(2,1,7), (4,3,3),(11,10,1),(5,2,1)\},$$ giving $$(x,y,z)\in \{ (21,28,35), (21,72, 75),(21,220, 221),(21, 20, 29)\}.$$ We have $$m\leq 11$$ because if $$m\geq 12$$ then $$x\geq m^2-n^2\geq m^2-(m-1)^2=2m-1\geq 23>21...$$ There are 2 solutions $$(11,10)$$ and $$(5,2)$$ to $$m^2-n^2=21.$$
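The factor-pair search in the first answer is easy to mechanize. A short Python sketch (the function name is ours) runs over the factorizations $x^2 = (z-y)(z+y)$ with both factors of the same parity and recovers exactly the four triples found above:

```python
def triples_with_leg(x):
    """All (x, y, z) with x^2 + y^2 = z^2 and y, z > 0, from x^2 = (z-y)(z+y)."""
    out = []
    for d in range(1, x + 1):          # d = z - y; d < z + y forces d <= x
        if x * x % d == 0:
            s = x * x // d             # s = z + y
            if d < s and (s - d) % 2 == 0:   # need y, z to be integers
                y, z = (s - d) // 2, (s + d) // 2
                out.append((x, y, z))
    return sorted(out, key=lambda t: t[1])

print(triples_with_leg(21))
# -> [(21, 20, 29), (21, 28, 35), (21, 72, 75), (21, 220, 221)]
```

The same search works for any leg, including even ones, where both factors must be even.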
https://math.stackexchange.com/questions/3345771/find-kth-power-of-a-square-matrix
# Find $k^{th}$ power of a square matrix I am trying to find the $$A^{k}$$, for all $$k \geq 2$$ of a matrix, $$\begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix}$$ My approach: $$A^{2}=\begin{pmatrix} a^2 & ab+b \\ 0 & 1 \end{pmatrix}$$ $$A^{3}=\begin{pmatrix} a^3 & a^{2}b+ab+b \\ 0 & 1 \end{pmatrix}$$ $$A^{4}=\begin{pmatrix} a^4 & a^{3}b+a^{2}b+ab+b \\ 0 & 1 \end{pmatrix}$$ $$A^{5}=\begin{pmatrix} a^5 & a^{4}b+a^{3}b+a^{2}b+ab+b \\ 0 & 1 \end{pmatrix}$$ Continuing this way, we obtain $$A^{k}=\begin{pmatrix} a^k & (a^{k-2}+a^{k-3}+a^{k-4}+.....+1)b \\ 0 & 1 \end{pmatrix}$$ I am stuck here! I was wondering if you could give me some hints to move further. I appreciate your time. • Your new calculations are correct. It should be easy to spot a pattern for induction now. – user460426 Sep 6, 2019 at 0:35 • Your formula for $A^k$ doesn't quite match what you've written so far. Sep 6, 2019 at 1:46 • Any hints about how to apply induction in matrix setting? Sep 6, 2019 at 2:09 • @Barsal For the base case, verify that the proposed formula for $A^k$ works when $k = 1$. That is, sub in $k = 1$ into the formula, and make sure you get $A$. For the inductive step, multiply the formula for $A^k$ by $A$. It should hopefully simplify to the formula for $A^{k+1}$. Sep 6, 2019 at 2:37 • Thank you so much! Sep 6, 2019 at 2:47 Hint: Write $$A=D+B$$ here $$D$$ is diagonal. Use that $$B^2=0$$, $$DB=aB$$, $$BD=B$$. Writing $$A^n$$ as $$\begin{bmatrix}a^n & b_n\\ 0 & 1\end{bmatrix}$$. Expanding $$A^{n+1} = AA^n$$ leads to a recurrence relation of the form: $$b_{n+1} = a b_n + b$$ Since $$b_1 = b$$, solving the recurrence relation will lead to $$b_n = (a^{n-1}+ a^{n-2} + \cdots + 1)b = \begin{cases} \frac{a^n-1}{a-1} b, & a \ne 1\\ nb, & a = 1\end{cases}$$ Not the best way, but you could also try diagonalisation. The caveat with diagonalisation is that for certain values of $$a$$ and $$b$$ (in particular, if $$a = 1$$ and $$b \neq 0$$), the matrix won't be diagonalisable. 
However, if we make the assumption that $$a \neq 1$$, then we should end the process with a perfectly valid expression for $$A^n$$ that will work for all $$a \neq 1$$, and by continuity, we can conclude that it works for $$a = 1$$ too (or take the formula, and prove by induction). Also, I know you want hints, so I hid everything behind spoiler boxes. The eigenvalues are $$1$$ and $$a$$, and we will assume they are different. We have, $$A - I = \begin{pmatrix} a - 1 & b \\ 0 & 0 \end{pmatrix},$$ with eigenvector $$(-b, a - 1)$$. Also, $$A - aI = \begin{pmatrix} 0 & b \\ 0 & 1 - a \end{pmatrix},$$ with eigenvector $$(1, 0)$$. So, let $$P = \begin{pmatrix} 1 & -b \\ 0 & a - 1\end{pmatrix},$$ giving us $$P^{-1} = \frac{1}{a - 1}\begin{pmatrix} a - 1 & b \\ 0 & 1\end{pmatrix}.$$ We should have $$A = P\begin{pmatrix} a & 0 \\ 0 & 1\end{pmatrix}P^{-1},$$ so \begin{align*} A^n &= P\begin{pmatrix} a^n & 0 \\ 0 & 1\end{pmatrix}P^{-1} \\ &= \frac{1}{a - 1}\begin{pmatrix} 1 & -b \\ 0 & a - 1\end{pmatrix}\begin{pmatrix} a^n & 0 \\ 0 & 1\end{pmatrix}\begin{pmatrix} a - 1 & b \\ 0 & 1\end{pmatrix} \\ &= \frac{1}{a - 1}\begin{pmatrix} 1 & -b \\ 0 & a - 1\end{pmatrix}\begin{pmatrix} a^{n+1} - a^n & ba^n \\ 0 & 1\end{pmatrix} \\ &= \frac{1}{a - 1}\begin{pmatrix} a^{n+1} - a^n & ba^n - b \\ 0 & a - 1\end{pmatrix} \\ &= \begin{pmatrix} a^n & b \frac{a^n - 1}{a - 1} \\ 0 & 1\end{pmatrix} \\ &= \begin{pmatrix} a^n & b(1 + a + a^2 + \ldots + a^{n-1}) \\ 0 & 1\end{pmatrix}.\end{align*} That last formula must hold (at least) for $$a \neq 1$$, but due to the continuity of matrix powers, it must also hold at $$a = 1$$ too. Hint: Use Cayley–Hamilton: $$A^2-(a+1)A+aI=0$$. • I got the same results using Caley-Hamilton. Then what will be next step? Sep 6, 2019 at 0:18 • I guess, this method is not a good one for finding large powers of A. Sep 6, 2019 at 0:19 • @Barsal, it is. Just reduce $A^2$ every time. Try the first few powers and you'll see a pattern. 
– lhf Sep 6, 2019 at 0:25 • @Barsal You can use Cayley-Hamilton to show that $A^k=\lambda I+\mu A$ and then use that this also holds when you substitute an eigenvalue of $A$ for $A$ in that equation to solve for the coefficients. – amd Sep 6, 2019 at 2:04 To add on to lhf's hints, do you know induction? If so, try to find patterns in each of the 4 entries individually once you have corrected your examples. • Yeah! I am looking for some pattern to apply induction. Sep 6, 2019 at 0:11 • You should find patterns in the upper left, lower left, and lower right corners. Can you see those? The upper right is a bit trickier Sep 6, 2019 at 0:22 • That's true! I am thinking something like, $A^{k}=\begin{pmatrix} a^k & a^{k-2}b+a^{k-3}b+....+2ab+b \\ 0 & 1 \end{pmatrix}$ Sep 6, 2019 at 0:27 • That looks close! Maybe retry on $A^3$ again though? I'm getting something different Sep 6, 2019 at 0:28 • $A^{k}=\begin{pmatrix} a^k & a^{k-2}b+a^{k-3}b+a^{k-4}b+.....+b \\ 0 & 1 \end{pmatrix}$ Is it okay now? Sep 6, 2019 at 0:36 I'm partial to the eigendecomposition for this. The eigenvalues are $a$ and $1$ associated with eigenvectors $(1,0)$ and $(b,1-a)$ respectively. So, we get $$\begin{bmatrix} a & b \\ 0 & 1 \end{bmatrix}^n= \begin{bmatrix} 1 & b \\ 0 & 1-a \end{bmatrix} \begin{bmatrix} a^n & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & \frac{-b}{1-a} \\ 0 & \frac{1}{1-a} \end{bmatrix} = \begin{bmatrix} a^n & \frac{b(1-a^n)}{1-a} \\ 0 & 1 \end{bmatrix}$$
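All of the approaches above land on the same closed form, which is quick to verify against brute-force multiplication. A Python sketch with plain 2×2 matrix products (no libraries; the names are ours):

```python
def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def power(A, n):
    """A^n by repeated multiplication, starting from the identity."""
    P = [[1, 0], [0, 1]]
    for _ in range(n):
        P = matmul(P, A)
    return P

def closed_form(a, b, n):
    # A^n = [[a^n, b(1 + a + ... + a^(n-1))], [0, 1]]
    s = sum(a ** k for k in range(n))
    return [[a ** n, b * s], [0, 1]]

a, b = 3, 5
for n in range(1, 8):
    assert power([[a, b], [0, 1]], n) == closed_form(a, b, n)

# The a = 1 case, where the geometric sum degenerates to n*b:
assert power([[1, 4], [0, 1]], 6) == [[1, 24], [0, 1]]
```

Using exact integer arithmetic here sidesteps the $a = 1$ division issue that the diagonalization answers have to argue around by continuity.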
https://math.stackexchange.com/questions/972857/integrations-by-parts
# Integrations by parts Calculate $\int \cos (x) (1- \sin x)^2 dx$. Can you integrate the different products separately? Does it have something to do with integration by parts? I have tried letting $u=(1- \sin x)^2$ but I don't think I'm heading in the right direction! Can anyone help? Thanks • You have asked several questions and you have only accepted one of them. Maybe you should start accepting answers so that way people will be willing to help you in the future. By the way, you get $2$ points when you accept an answer. – user139708 Oct 14 '14 at 5:48 • i did not know that. @HoracioOliveira. thank you – ojando Oct 14 '14 at 6:09 Put $u = 1 - \sin x \implies du = -\cos x \, dx$. Hence $$\int \cos x (1 - \sin x)^2 dx = - \int u^2 du = -\frac{u^3}{3} + C = -\frac{(1-\sin x)^3}{3} + C.$$ Therefore, you don't need to use integration by parts. • why is it negative $\cos x \, dx$? – ojando Oct 15 '14 at 1:00 • how do you find c? @HoraciaOliveira – ojando Oct 15 '14 at 1:02 Try substituting $u = \sin(x)$ and $du = \cos(x) \, dx$: $$\int \cos (x) (1-\sin x)^2 dx = \int (1-u)^2 du.$$ You should be able to integrate that. Hint: $\cos x$ is the derivative of which function ? Let $\cos x = {1 \over 2} (e^{ix}+e^{-ix})$, $\sin x = {1 \over 2i} (e^{ix}-e^{-ix})$. Then we obtain (by expanding and then simplifying using the same rules): $\int \cos x (1 - \sin x)^2 dx = {1 \over 4} \int (5 \cos x - 4 \sin (2x) -\cos(3x) )dx$. • Why the downvote? – copper.hat Oct 14 '14 at 14:30 • what down vote? – ojando Oct 15 '14 at 1:00 • I have two upvotes and one downvote. I was just wondering why the downvote? – copper.hat Oct 15 '14 at 2:16
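Either substitution leads to the antiderivative $-\frac{(1-\sin x)^3}{3} + C$, which can be checked numerically. A small Python sketch (names are ours) compares a finite-difference derivative of the candidate antiderivative against the integrand:

```python
import math

def F(x):
    """Candidate antiderivative: -(1 - sin x)^3 / 3."""
    return -(1 - math.sin(x)) ** 3 / 3

def integrand(x):
    return math.cos(x) * (1 - math.sin(x)) ** 2

# F'(x) should match the integrand (central finite difference).
h = 1e-6
for x in [-1.0, 0.0, 0.5, 1.0, 2.0]:
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - integrand(x)) < 1e-6

# And the definite integral over [0, pi/2] is F(pi/2) - F(0) = 1/3.
assert abs((F(math.pi / 2) - F(0)) - 1 / 3) < 1e-12
```

This is just a sanity check; differentiating $F$ by hand via the chain rule reproduces the integrand directly.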
http://mathhelpforum.com/geometry/5263-two-circles-centre-each-others-circumference-print.html
# Two circles with the centre on each others circumference • Sep 1st 2006, 12:58 AM a4swe Two circles with the centre on each others circumference Problem: Two circles, both with the radius of R (and in the same plane) intersect so that the centre of one circle lies on the circumference of the other circle. Calculate the area inside BOTH of the circles. • Sep 1st 2006, 01:59 AM Glaysher If this is the case and the centre of one circle (circle 1) lies on the circumference of the other (circle 2) then since the radi are the same the centre of circle 2 lies on the circumference of circle 1 Area of circle 1 = $\pi r^2$ = Area of circle 2 Two circles considered seperately = $2\pi r^2$ The trick is to find the area of the intersection The part where the intersection is looks like the image attached By finding the angle in the diagram I can find the area of the secto between the two radi of circle 1 By drawing a vertical line downwards from the top point down to the horizontal line I create two right angled triangles with hypotenuse $r$ and adjacent (to half the angle I want) $\frac{r}{2}$. Call the angle I want $\theta$ Then $\cos \frac{\theta}{2} = \frac{\frac{r}{2}}{r}$ $= \frac{1}{2}$ $\frac{\theta}{2}= \frac{\pi}{3}$ in radians $\theta = \frac{2\pi}{3}$ Area Of Sector = $\frac{1}{2}r^2 \frac{2\pi}{3}$ Now you can do the rest Find the area of the segment by taking away the area of the triangle formed by the radi The area of the segment is half the area of the intersection so multiply by two and take away from $2\pi r^2$ EDIT: You will probably need to draw more diagrams to follow what I've done. It's not easy to draw them all so that I can post them here with the software I have. • Sep 1st 2006, 04:44 AM a4swe Thanks for the answer, there may be some more questions later. :) • Sep 1st 2006, 06:27 AM Soroban Hello, a4swe! 
Quote: Two circles, both with the radius of R (and in the same plane) intersect so that the centre of one circle lies on the circumference of the other circle. Calculate the area inside both of the circles. The intersection is a lens-shaped region. (ASCII diagram: the lens-shaped overlap of the two circles, with a 120° angle at the centre of one circle.) In Glaysher's excellent diagram, we see two equilateral triangles. . . Hence, we have a 120° sector plus two 60° segments. Since the sector occupies one-third of the circle: . $A_{\text{sector}} \:=\:\frac{1}{3}\pi R^2$ The area of a segment is: . $\text{(Area of 60}^\circ\text{ sector)} - \text{(Area of triangle)}$ . . . $A_{\text{segment}}\;= \;\frac{1}{6}\pi R^2 - \frac{\sqrt{3}}{4}R^2 \;= \;\frac{2\pi - 3\sqrt{3}}{12}R^2$ Hence, the area of the two segments is: . $A_{\text{segments}} \:=\:\frac{2\pi-3\sqrt{3}}{6}R^2$ Therefore, the area of the intersection is: . . $A \;= \;\frac{1}{3}\pi R^2 + \frac{2\pi-3\sqrt{3}}{6}R^2 \;= \;\boxed{\frac{4\pi - 3\sqrt{3}}{6}R^2}$ • Sep 4th 2006, 03:34 AM malaygoel A somewhat similar question Hello to all of you! I have a question relating to circles. It is like this: There is a grass field in the circular shape, with the fence along the boundary. A goat is tied to the fence (at a stationary point) with a rope such that she can graze half of the area of the field. The problem is to find the ratio of the length of the rope to the radius of the circular field. (Can it be done without using calculus?) • Sep 4th 2006, 04:46 AM CaptainBlack Quote: Originally Posted by malaygoel Hello to all of you! I have a question relating to circles. It is like this: There is a grass field in the circular shape, with the fence along the boundary.
A goat is tied to the fence(at a stationary point) with a rope such that she can graze half of the area of the field. The problem is to find the ratio of the length of the rope to the radius of the circular field. (Can it be done without using calculus?) This has to be done numerically, you end up with a mixed algebraic/transcendental equation to solve, which I believe has no known elementary solution. It can be solved numerically to whatever precision you want using the binary chop or search algorithm, which does not require any calculus. RonL • Sep 4th 2006, 05:27 AM Soroban Hello, Malay! Welcome back! As the Cap'n pointed out, it is a classic problem: "The Half-Pastured Goat". Even with Calculus, it cannot be solved by elementary methods. As RonL said, the solution can be approximated by numerical means. • Sep 4th 2006, 05:43 AM CaptainBlack Quote: Originally Posted by Soroban Hello, Malay! Welcome back! As the Cap'n pointed out, it is a classic problem: "The Half-Pastured Goat". Even with Calculus, it cannot be solved by elementary methods. As RonL said, the solution can be approximated by numerical means. Not only is it a classic problem it has more variants than you can shake a stick at. See here for a discussion of the problem and follow the links from there (and the links from the links ...) for variants. RonL
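Soroban's boxed formula is easy to test numerically: with both circles of radius $R$ and centres a distance $R$ apart, the lens is twice the circular segment of one circle cut off by the chord $x = R/2$. A Python sketch with a midpoint-rule integral (names are ours):

```python
import math

def lens_area(R, steps=100000):
    """Area common to two radius-R circles whose centres are R apart."""
    a, b = R / 2, R          # chord through the intersection points is x = R/2
    h = (b - a) / steps
    # quarter of the lens: integral of sqrt(R^2 - x^2) from R/2 to R
    quarter = sum(math.sqrt(R * R - (a + (k + 0.5) * h) ** 2)
                  for k in range(steps)) * h
    return 4 * quarter       # two segments, each twice the quarter

R = 1.0
exact = (4 * math.pi - 3 * math.sqrt(3)) / 6 * R ** 2
assert abs(lens_area(R) - exact) < 1e-6
```

The agreement confirms the closed form $\frac{4\pi - 3\sqrt{3}}{6}R^2 \approx 1.228\,R^2$; the goat-grazing variant, by contrast, has no such closed form and genuinely needs a numerical root-finder.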
http://math.stackexchange.com/questions/287932/convergence-of-ratio-test-implies-convergence-of-the-root-test
# Convergence of Ratio Test implies Convergence of the Root Test

In Elias Stein and Rami Shakarchi's Complex Analysis textbook, we have the following exercise:

Show that if $\{a_n\}_{n=0}^\infty$ is a sequence of complex numbers such that $$\lim_{n\to\infty}\frac{|a_{n+1}|}{|a_n|}=L,$$ then $$\lim_{n\to\infty}|a_n|^{1/n}=L.$$

I've been trying to prove this with no luck. The only thing I've thought of doing is $$\lim_{n\to\infty}\left(\frac{|a_{n+1}|^n}{|a_n|^n}\right)^{1/n},$$ but this hasn't led me anywhere except dead ends. Will someone provide a hint for me about how to proceed? Thanks!

Minor update: I don't know if it's helpful yet, but I know we can write the limit as $$\lim_{n\to\infty}\left(\frac{|a_{n+1}a_n\cdots a_0|}{|a_n\cdots a_0|}\cdot\frac{1}{|a_n|}\right).$$ This reminds me a lot of the geometric mean, which even has the exponents I'm trying to get...

- As I recall, you divide the $a_n$ into two parts: a final part in which the ratio is within $\epsilon$ of $L$, and an initial part which, because its length is bounded, can be shown not to affect the result. There are a lot of limit-type results which are proved this way. –  marty cohen Jan 27 '13 at 7:24
- @martycohen: Do you mean something like $\left|L-|a_{N+1}/a_N|\right|<\varepsilon$? I'm not sure I follow what you mean by dividing $a_n$ into two parts if that isn't what you mean. –  Clayton Jan 27 '13 at 7:29
- Which question is this? Which number/chapter? –  leo May 1 '13 at 2:15
- @leo: Chapter $1$, Exercise $17$. –  Clayton May 1 '13 at 2:20

By definition of limit, for all $\varepsilon>0$ there exists $N$ s.t.
$$n>N \implies \left| \left| \frac{a_{n+1}}{a_n} \right|-L \right|<\varepsilon.$$ So, for $n>N$, $$|a_n|=\frac{|a_n|}{|a_{n-1}|}\cdots \frac{|a_{N+1}|}{|a_N|}\, |a_N|<(L+\varepsilon)^{n-N} |a_N|.$$ Taking the $n$th root of both sides of the inequality, we get $$\sqrt[n]{|a_n|} <(L+\varepsilon)^{1-N/n}\sqrt[n]{|a_N|}.$$ Since, with $N$ fixed, $(L+\varepsilon)^{1-N/n}\to L+\varepsilon$ and $\sqrt[n]{|a_N|}\to 1$ as $n\to\infty$, we get $$\limsup_{n\to\infty}\sqrt[n]{|a_n|} \le L+\varepsilon.$$ Since $\varepsilon$ is arbitrary, we get $\limsup_{n\to\infty}\sqrt[n]{|a_n|} \le L.$ Likewise, using the lower bound $(L-\varepsilon)^{n-N}|a_N|<|a_n|$ for $0<\varepsilon<L$, we get $\liminf_{n\to\infty}\sqrt[n]{|a_n|} \ge L$ (trivial when $L=0$), so the limit exists and equals $L$.
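The statement is easy to sanity-check numerically (not a proof; the sequence $a_n = n\,2^n$ is my own example, chosen so that both limits equal $2$):

```python
# Sanity check: for a_n = n * 2^n the ratio |a_{n+1}/a_n| = 2(n+1)/n -> 2,
# and the theorem predicts |a_n|^(1/n) = 2 * n^(1/n) -> 2 as well.
def a(n):
    return n * 2.0**n

n = 400
ratio = a(n + 1) / a(n)   # 2 * (n+1)/n, close to 2
root = a(n) ** (1.0 / n)  # 2 * n^(1/n), also close to 2
print(ratio, root)
assert abs(ratio - 2) < 0.01 and abs(root - 2) < 0.05
```

The slower convergence of the root is visible here: the ratio is within $0.01$ of $2$ at $n=400$, while the root still carries the $n^{1/n}$ factor.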
https://math.stackexchange.com/questions/1243068/solving-for-probability-of-dependent-events
# Solving for probability of dependent events

I was reading A First Course in Probability by Sheldon Ross. I read one of the problems and then tried building the logic for it. Then I read the book's solution, which was completely different. So I am wondering whether my logic is wrong.

Problem: An ordinary deck of 52 playing cards is randomly divided into 4 piles of 13 cards each. Compute the probability that each pile has exactly 1 ace.

Solution given in the book: Define the events:

$E_1$ = {the ace of spades is in any one of the piles}
$E_2$ = {the ace of spades and the ace of hearts are in different piles}
$E_3$ = {the aces of spades, hearts and diamonds are all in different piles}
$E_4$ = {all 4 aces are in different piles}

The desired probability is $P(E_1E_2E_3E_4) = P(E_1)P(E_2|E_1)P(E_3|E_1E_2)P(E_4|E_1E_2E_3)$.

Now $P(E_1) = 1$.

To find $P(E_2|E_1)$, consider the pile that contains the ace of spades. The probability that the ace of hearts is among the remaining 12 cards of that pile is $12/51$. Thus $$P(E_2|E_1) = 1 - \frac{12}{51} = \frac{39}{51}$$ Similarly, given that the aces of spades and hearts are in different piles, the set of the remaining 24 cards of these two piles is equally likely to be any set of 24 of the remaining 50 cards. The probability that the ace of diamonds is one of these 24 is $24/50$. Thus, $$P(E_3|E_1E_2) = 1 - \frac{24}{50} = \frac{26}{50}$$ Similarly, $$P(E_4|E_1E_2E_3) = 1 - \frac{36}{49} = \frac{13}{49}$$ Thus, $$P(E_1E_2E_3E_4) = \frac{39\cdot 26\cdot 13}{51\cdot 50\cdot 49} \approx .105$$ Thus there is a 10.5 percent chance that each pile will contain an ace.

My logic was as below, and I find it more intuitive and natural:

1. The first pile will contain 13 cards out of 52, so the sample space is $^{52}C_{13}$. The first pile is to contain 1 of the 4 aces, and the remaining 12 cards can be any of the remaining 48. Thus the event space is $^4C_1\,^{48}C_{12}$, and the probability that the first pile contains one of the aces is $$\frac{^4C_1*^{48}C_{12}}{^{52}C_{13}}$$ 2.
Similarly for the second pile, the probability of containing one of the aces is $$\frac{^3C_1*^{36}C_{12}}{^{39}C_{13}}$$ 3. Similarly for the third pile, the probability of containing one of the aces is $$\frac{^2C_1*^{24}C_{12}}{^{26}C_{13}}$$ 4. Lastly, for the fourth pile, the probability of containing the last of the aces is $$\frac{^1C_1*^{12}C_{12}}{^{13}C_{13}}$$ So the desired probability is: $$\frac{^4C_1*^{48}C_{12}}{^{52}C_{13}} * \frac{^3C_1*^{36}C_{12}}{^{39}C_{13}} * \frac{^2C_1*^{24}C_{12}}{^{26}C_{13}} * \frac{^1C_1*^{12}C_{12}}{^{13}C_{13}}$$

What I don't understand is:

1. The book's solution says $E_4$ = {all 4 aces are in different piles}. But I feel this is what we need to find, so the desired probability is $P(E_4)$. The book, however, says it is $P(E_1E_2E_3E_4)$. Is it that the probability $P(E_4)$ cannot stand alone, and we cannot specify/calculate $P(E_4)$ independently? Or is $P(E_4) = P(E_1E_2E_3E_4)$?

2. The book's solution says: "To find $P(E_2|E_1)$, consider the pile that contains the ace of spades. The probability that the ace of hearts is among the remaining 12 cards is $12/51$." I am not completely clear about how the $\frac{12}{51}$ came about.

3. Is there any logical correspondence between the two approaches? Do they align with each other in the way of thinking they follow? If yes, can you please put into words how this correspondence exists? Also, finding and stating the correspondence in terms of similarities in the algebraic equations may help me get it more clearly.

4. Is the book's approach a sort of recursive definition of the events? If yes, does this mean that a problem involving any sequence of dependent events can be solved in the way given in the book, thus effectively defining the desired probability recursively and solving down to the probability of a base condition (in this particular problem the base condition might be that the first pile contains only one ace, that is, $E_1$)?
I am just trying to generalize the pattern of the question, so that I will know this method is applicable when I am confronted with a similar question.

5. What are the differences between the two approaches? And can both be used equally to solve problems?

6. All I am trying to do here is make the book's approach as logically intuitive and structured to me as the other approach feels, so that I can form equations in the book's style very easily and without confusion. Anything in that direction will be appreciated. The book follows a very formula/equation-oriented way of explaining the solutions, not the logical way.

PS: Maybe I am thinking unnecessarily, or the question may sound very vague or rambling. But please bear with me: probability has always confused me, and I am giving it one last try.

• What is the numerical value of your solution? The logic of the book's solution is clearer if you draw the contingency tree for the problem. Also, simulation gives a result close to the book answer; that is not proof that it is right, but it is a reality check. Your answer, if it differs from the book's, should at least be close to it if you are right and the book wrong. – Conrad Turner Apr 20 '15 at 8:21
• The answer comes to 0.105498199279712, as can be seen in the output at the bottom of this C# code. – anir123 Apr 20 '15 at 17:13
• Which agrees with the book answer to 15 significant figures ... – Conrad Turner Apr 21 '15 at 9:13

## 1

But I feel this is what we need to find. So the desired probability is $\mathsf P(E_4)$. But the book says it is $\mathsf P(E_1E_2E_3E_4)$. Is it that the probability $\mathsf P(E_4)$ cannot stand alone, and we cannot specify/calculate $\mathsf P(E_4)$ independently? Or is $\mathsf P(E_4)=\mathsf P(E_1E_2E_3E_4)$?

The event is the conjunction of the four events, $E_4 = E_1 E_2 E_3 E_4$, because each subsequent event in the list is a subset of the previous one: $E_4\subseteq E_3\subseteq E_2\subseteq E_1$. Did the textbook skip this justification?
It uses the conjunction to make it clear where the product comes from. $$\mathsf P(E_4)= \mathsf P(E_1E_2E_3E_4) = \mathsf P(E_1)\;\mathsf P(E_2\mid E_1)\;\mathsf P(E_3\mid E_1E_2)\;\mathsf P(E_4\mid E_1E_2E_3)$$

## 2

I am not completely clear about how the $12/51$ came about.

Given that the ace of spades is in one of the four piles of $13$ cards, there are $51$ places the ace of hearts could be, but $12$ of these are places you don't want it to be (namely, the other cards in the same pile as the ace of spades). Thus the probability that the ace of hearts is not in the same pile as the ace of spades is $1-\tfrac{12}{51}$.

• Hi Graham, sorry for the super late reply; I came across this problem during revision and ended up looking here. I think the book did not specify the first point you made. So is this the case with all probabilities specified by the multiplication rule? That is, is it always the case that $P(E_1E_2E_3...E_n)=P(E_1)P(E_2|E_1)P(E_3|E_1E_2)...P(E_n|E_1...E_{n-1})=P(E_n)$? That is, in the multiplication rule, is $E_n$ always a subset of the other events? – anir123 Aug 3 '15 at 20:28
• @Mahesha999 Not always. You have to set it up. It's the case whenever $E_4\subseteq E_3\subseteq E_2\subseteq E_1$; that is, if you are seeking to find $P(E_n)$ and $E_n$ is a subset of event $E_{n-1}$, it might be that $P(E_{n-1})$ and $P(E_n\mid E_{n-1})$ are easier to find. And so on. – Graham Kemp Aug 3 '15 at 22:19
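The book's conditional product and the questioner's pile-by-pile counting can be cross-checked against each other. Below is a small Python sketch (my own check, not from the thread) using `math.comb` for the binomial coefficients and `fractions.Fraction` so the comparison is exact rather than floating-point:

```python
from math import comb
from fractions import Fraction as F

# Book's conditional product: P(E1) P(E2|E1) P(E3|E1E2) P(E4|E1E2E3).
book = F(39, 51) * F(26, 50) * F(13, 49)

# The questioner's pile-by-pile counting argument.
mine = (F(comb(4, 1) * comb(48, 12), comb(52, 13))
      * F(comb(3, 1) * comb(36, 12), comb(39, 13))
      * F(comb(2, 1) * comb(24, 12), comb(26, 13))
      * F(comb(1, 1) * comb(12, 12), comb(13, 13)))

assert book == mine   # the two derivations agree exactly
print(float(book))    # about 0.1055, the value quoted in the comments
```

So the two approaches are not merely close: the binomial ratios telescope to the same fraction $\frac{39\cdot 26\cdot 13}{51\cdot 50\cdot 49}$.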
https://math.stackexchange.com/questions/3522388/in-how-many-ways-can-20-distinct-students-be-placed-in-four-distinct-dorms-if
# In how many ways can $20$ distinct students be placed in four distinct dorms if each dorm needs to have at least one student?

Question: $20$ distinct students are to be placed into four distinct dorms named A, B, C, D. In how many ways can they be assigned to the four dorms, with the restriction that each dorm needs to have at least one student?

My attempt: The question says that each dorm must have at least one student. So here is my first attempt: since there are four dorms, the first dorm has $20$ choices to take in one of the $20$ students, and the second dorm then has $19$ choices to take in one student. The third dorm has $18$ choices, and the fourth has $17$ choices. Now each dorm has one student, and that leaves $16$ students who may enter any one of the dorms, so the arrangement is $16^4$. So there are $20 \times 19 \times 18 \times 17 \times 16^4$ arrangements.

However, it seems that I could first distribute the $16$ students into the dorms with $16^4$ arrangements, then distribute the remaining $4$ students with $4!$ arrangements, so that each dorm has at least one student in it. So the total arrangement is $16^4 \times 4! < 20 \times 19 \times 18 \times 17 \times 16^4$. This doesn't seem right.

I would appreciate it very much if anyone could help me out with this problem. Thank you.

• Just to be clear: the rooms are distinguishable, and the students are distinguishable. Is that correct? Label the students 1,...,20. If students 1 and 2 swap dorms, that changes the arrangement? And if all the students in dorm A move to dorm B and vice versa, that changes the arrangement? – almagest Jan 25 '20 at 20:15
• @almagest yes, that's correct. All students and dorms are distinct. – TerminatorOfTerminators Jan 25 '20 at 20:25
• Your method overcounts badly. If, say, you had $5$ students in $A$, then you count that arrangement at least $5$ times (once for every student in $A$), and so on.
– lulu Jan 25 '20 at 20:26
• The usual method is to go by Inclusion-Exclusion. That is, we start with $4^{20}$, which would be the answer if we allowed empty dorms. Now subtract the ways to do it with one specified empty dorm, namely $4\times 3^{20}$, add back the ways with two specified empty dorms, and so on. – lulu Jan 25 '20 at 20:31

As has been remarked, your method probably should have led you to write $20\times 19\times 18\times 17\times 4^{16}$, but this is not correct. The problem is that, if multiple students are in a given dorm, then there is no way to tell which one of them is the "special" one that you put in first, so you end up counting that arrangement once for every student in that dorm. Indeed, that answer is $>4^{20}$, which would be the correct answer if we allowed empty dorms, so the correct answer must be significantly smaller.

To illustrate the problem, suppose you had three students, $x,y,z$, in two dorms, $A,B$. Now the correct answer is clearly $6$. Why? Well, if you allowed empty dorms there would be $2^3=8$ arrangements, as each student has two choices. We then exclude the two cases $((x,y,z), \emptyset)$ and $(\emptyset, (x,y,z))$. Indeed the list of solutions is just $$((x,y),z)\quad ((x,z), y)\quad ((y,z),x)\quad (x,(y,z))\quad (y,(x,z))\quad (z,(x,y))$$ but your method would give us $3\times 2\times 2^1=12$.

The usual way to do this is via Inclusion-Exclusion. There would be $4^{20}$ ways if we allowed empty dorms. We first correct this by subtracting off the cases in which one specified dorm is empty, to get a correction of $-\binom 41\times (4-1)^{20}$, and then we add back those cases with two specified empty dorms, and finally subtract the cases with three specified empty dorms. Thus the answer is $$\sum_{i=0}^3(-1)^i\times \binom 4i\times (4-i)^{20}=1,085,570,781,624$$

You are looking for the number of surjections from a set with $20$ elements into a set with $4$ elements.
By the Twelvefold way, that expression is $$4!\left\{{20\atop4}\right\}=1\ 085\ 570\ 781\ 624$$ where the expression in brackets is a Stirling number of the second kind.

There are $4^{20}$ ways to assign students to dorms if we allow dorms to be empty. If a given dorm is empty, there are $3^{20}$ ways to assign students to the remaining 3 dorms, and there are 4 dorms that could be the empty one. But subtracting these over-counts the cases where 2 dorms are empty. Inclusion-Exclusion: $$4^{20} - {4\choose 1} 3^{20} + {4\choose 2} 2^{20} - {4\choose 3} 1^{20}$$
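The inclusion-exclusion sum and the Twelvefold-way expression can be checked against each other. Below is a short Python sketch (my own check); the Stirling numbers are computed from the standard recurrence $S(n,k) = k\,S(n-1,k) + S(n-1,k-1)$:

```python
from math import comb, factorial

# Inclusion-exclusion count of surjections from 20 students onto 4 dorms.
incl_excl = sum((-1)**i * comb(4, i) * (4 - i)**20 for i in range(4))

def stirling2(n, k):
    """Stirling numbers of the second kind via S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    S = [[0] * (k + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[n][k]

print(incl_excl)  # 1085570781624
assert incl_excl == factorial(4) * stirling2(20, 4)
```

The $4!$ factor accounts for the dorms being distinguishable: $\left\{{20\atop 4}\right\}$ counts partitions of the students into 4 unlabeled blocks, and each partition can be matched to the labeled dorms in $4!$ ways.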
http://oqxl.mayoume.pl/fourier-series-of-piecewise-function.html
# Fourier Series Of Piecewise Function In fact, for periodic with period , any interval can be used, with the choice being one of convenience or personal preference (Arfken 1985, p. the Gibbs phenomenon, the Fourier series of a piecewise continuously differentiable periodic function behaves at a jump discontinuity. 32) x t e dt T x t e dt T a T jk T t T jk t k ∫ ∫ = 1 ( ) − w 0 = 1 ( ) − (2p /) (3. series (plural series). equations and fourier integral representation. The Fourier series representation of f (x) is a periodic function with period 2L. fast fourier transform. Find the Fourier series for f(x) = x2 4; π 1 = { 1 2 if n = 1 0 if n > 1. The fourier transform; Fourier transform properties; Convolution and correlation; Fourier series and sampled waveforms Chapter 1 deals with the preliminary remarks of Fourier series from general point of view. Fourier Series. Loosely speaking, the Fourier series of converges pointwise to the Fourier periodic extension of. 4 Complex Fourier Series 12. Differential equations arising from L-R and R-C series circuits Examples of differential equations involving piecewise functions Laplace transforms of piecewise periodic functions. Library of functions for 2D graphics - runtime files. In this lecture we consider the Fourier Expansions for Even and Odd functions, which give rise to cosine and sine half range Fourier Expansions. Mark the statements as T(true) or F(false). Added matrix determinant calculation. folks-common (0. In other words, the analysis breaks down general functions into sums of simpler, trigonometric functions ; The Fourier series tells you what the amplitude and the frequency of the. This requires fto be periodic on [0;2ˇ]. All of our no cost ebooks are Lawfully Accredited in your Reassurance. 
Download Introduction To Fourier Analysis And Wavelets books, This book provides a concrete introduction to a number of topics in harmonic analysis, accessible at the early graduate level or, in some cases, at an upper undergraduate level. Produces the result Note that function must be in the integrable functions space or L 1 on selected Interval as we shown at theory sections. In particular, if L > 0then the functions cos nˇ L t and sin nˇ L t, n =1, 2, 3, are periodic with fundamental. Differential equations arising from L-R and R-C series circuits Examples of differential equations involving piecewise functions Laplace transforms of piecewise periodic functions. Piecewise[{{val1, cond1}, }, val] uses default value val if none of the condi apply. SERIES IN OPTICS AND OPTOELECTRONICS Series Editors: Robert G W Brown, University of California, Irvine, USA E Roy Pike, Kings This is now known as the Fourier series representation of a periodic function. The main goal is to have a Fourier series function able to work in exact mode for piecewise signals. 4 Complex Fourier Series 12. In this section we assume that the piecewise smooth function is defined on with a jump-discontinuity. 26 5 To use series solution methods and special functions like Bessels. 2 - Fourier Series and Convergence • State the definition of a Piecewise Continuous function. Sage has some rudimentary support for Fourier series, as part of the “piecewise-defined function” class, but it seems to be very slow and very flaky. Find more Mathematics widgets in Wolfram|Alpha. Theorem: L2 convergence. DEFINITION 12. Fourier series is one of the most intriguing series I have met so far in mathematics. net, 1fichier, Uptobox, Filefactory, Putlocker, mega. If f(x) is an odd function with period , then the Fourier Half Range sine series of f is defined to be. 3 Half-Range Expansions. The Fourier series of a function $f\in L^2([0,1])$ converges to $f$ in the $L^2$ norm. 
If the first argument contains a symbolic function, then the second argument must be a scalar. The following advice can save you time when computing. Fourier Sine and. libasan4 (7. They introduced so called “concentration factors” in order to improve the convergence rate. To start with, Amazon chose the wrong flag: the. Sine series. If I compute the antiderivative of the piecewise version of the abs function. Electrical Engineering: Ch 18: Fourier Series (10 of 35) The Dirichlet Conditions. functions in the series are discontinuous. 1, and Theorem 2. Where the coefficients a’s and b’s are given by the Euler-Fourier formulas: ∫ − = L L m dx L m x f x L a π ( )cos 1, m = 0, 1, 2. Exercises for MAT3320 Fabrizio Donzelli 1 Fourier Series 1. In this article, f denotes a real valued function on which is periodic with period 2L. Moving from the continuous to the discrete world. Generally speaking, we may find the Fourier series of any (piecewise continuous - see the tips) function on a finite interval. 1, lecture notes) Symmetry considerations. (Redirected from Piecewise-linear function). In other words, if is a continuous function, then. FOURIER SERIES When the French mathematician Joseph Fourier (1768–1830) was trying to solve a problem in heat conduction, he needed to express a function f as an infinite series of sine and cosine functions: 兺 共a f 共x兲 苷 a 0 1 n cos nx bn sin nx兲 n苷1 苷 a 0 a1 cos x a2 cos 2x a3 cos 3x b1 sin x b2 sin 2x b3 sin 3x. We shall shortly state three Fourier series expansions. fourier does not transform piecewise. The Fourier series of a piecewise continuous function with 8 segments and no discontinuities can be found from the above applet with Fn = 1. “≈” means that the Fourier series converges to f(x) under rather mild conditions. Test your coefficient function by using , and , with. The Fourier series is named after the French Mathematician and Physicist Josephs Fourier (1768 – 1830). 
For functions of two variables that are periodic in both variables, the. In this instance, the Fourier coefficients can be computed in closed form, segment by segment. Theorem Let f be a piecewise smooth function on the interval [0, L]. Find the Fourier series of the following functions. Fourier Series Calculator is a Fourier Series on line utility, simply enter your function if piecewise, introduces each of the parts and calculates the Fourier coefficients may also represent up to 20 coefficients. I'm trying to do problem 3, section 24. Recall how a convolutional layer overlays a kernel on a section of an image and performs bit-wise multiplication with all of the values at that location. In this section we assume that the piecewise smooth function is defined on with a jump-discontinuity. More generally, if fis p-periodic and piecewise continuous. With simpy like : p = Piecewise((sin(t), 0 < t),(sin(t), t < pi), (0 , pi < t), (0, t < 2*pi)) fs = fourier_series(p, (t, 0, 2*pi)). Ø Complex Exponential Fourier Series. Then fb= bg ⇒ f = g. The following advice can save you time when computing. Return the n -th cosine coefficient of the Fourier series of the periodic function f extending the piecewise-defined function self. For example, we can see that the series y(x,t) = X∞ n=1 sin nπx L An cos nπct L +Bn. Fourier series : Fourier series is able to represent any piecewise regular function in the range [0,2L] Dirichlet conditions: f(x) has only a finite number of discontinuities and only a finite number of extreme values (maximum and minimum). Both are used for designing electrical circuits, solving differential and integral equations. Fourier series is one of the most intriguing series I have met so far in mathematics. In this worksheet we will examine the Fourier Series expansions of several functions. When a function is discontinuous, its Fourier series doesn't necessarily equal the function. There are multiple uses for the fast Fourier transform algorithm. 
10 DEFINITION (Fourier series). Since the function F (x) is continuous, we have for any because of the main convergence Theorem relative to Fourier series. Relation Between Trigonometric and Exponential Fourier Series. Find the Fourier series of the following piecewise defined function, on the interval [-1, 1]: h (x) = (-1-x if-1 ≤ x < 0 1-x if 0 < x ≤ 1 x. 92]: If f(x) is piecewise smooth on the interval F. Note that the Fourier coe cients X nare complex valued. So far, I have used 7 = 2L as my period, where L = 7/2 and have started solving for a0, an, and bn. In particular, we demonstrate that finite-dimensional Fourier frame ap-proximations of a piecewise-analytic function can be reprojected onto Gegenbauer polynomials in order to recover a pointwise exponen-. 2] Remark:The most notable missing conclusion in the theorem is uniform pointwise convergence. Matt Henry in doubt for West Indies series after injuring right thumb. 2 - Fourier Series and Convergence • State the definition of a Piecewise Continuous function. The function is f(x) = 1 if 0 < x < pi/2 and f(x. Assume that f is a 2π-periodic function which is piecewise smooth. Fourier analysis has been applied to stock trading, but research examining the technique has found little to no evidence that it is useful in practice. Before looking at further examples of Fourier series it is useful to distinguish two classes of functions for which the Euler-Fourier formulas for the coefficients can be simplified. A function f(x) is piecewise smooth on some interval if and only if f(x) is continuous and f0(x) is continuous on a nite collection of sections of the given interval. There is a small store, via DFTBA, including the viewer-requested plush pi creatures, socks displaying mathematical objects which live most naturally on a cylinder, the knot theory tie, and and another math merchandise. 
1 Note: sinc (infinity) 1 & Max value of sinc(x) 1/x Note: First zero occurs at Sinc (+/-pi) Use the Fourier Series Table (Table 4. Fourier series summation and symbolic representation for algebraic functions. For convenience we use both common definitions of the fourier transform using the standard for this website variable f and the also used. 33) is referred to asanalysis equation. ODD AND EVEN FUNCTIONS. Compute Fourier Series Representation of a Function. This is an updated version of a package originally published in the Maple Application Center (2000). (Redirected from Piecewise-linear function). For example. 0-6ubuntu2) [universe]. The Fourier series converges to f (x)isthemean-squaresensein (a, b) provided only that f (x) is any function for which Z b a. This function is often used as an example of the application of Fourier series, and, therefore, it is convenient to take this function for comparative analysis of a traditional Fourier series expansion and the suggested method. De–nition of Fourier Series Suppose that L>0 and fis a function that is piecewise continuous on [ L;L]:The Fourier Series of frelative to [ L;L] is the sequence of functions fs ng1 n=1 given by S n(x) = A 0 + Xn k=1 A kcos kˇx L +B ksin kˇx L for all real numbers xwhere A 0 = 1 2L Z L L f(x)dx; A k= 1 L Z L L f(x)cos kˇx L dxfor k= 1;2. A graph often helps determine continuity of piecewise functions, but we should still examine the algebraic representation to verify graphical evidence. A Fourier cosine series has $$df/dx = 0$$ at $$x=0$$, and the Fourier sine series has $$f(x=0)=0$$. As an odd function, this has a Fourier sine series f(x) ˘. Daileda Fourier Series Introduction Periodic functions Piecewise smooth functions Inner products Definition 1: We say that f(x) is piecewisecontinuousif f has only finitely many discontinuities in any interval, and f(c+) and f(c−) exist for all c in the domain of f. Fourier Series Expansions of Functions. a) True b) False View Answer. 
Again, using MathView to handle the detailed manipulation allows Let's have a look at a simple notebook example where the Fourier series approximates a unit step function at x=0 and calculate the coefficients. Then the function. determines a well-de ned function f(x) which again is in Per L(R). Get the free "Fourier Series of Piecewise Functions" widget for your website, blog, Wordpress, Blogger, or iGoogle. 1 Baron Jean Baptiste Joseph Fourier (1768−1830). Thus, when f is considered extended to the whole real line, it is continuous everywhere, and is a 2-periodic function on R. 2) Uniform convergence and the Gibbs phenomenon (1. I'm taking a Fourier Analysis course using Churchill 's Fourier Series and Boundary Value Problems, 6th ed. SERIES IN OPTICS AND OPTOELECTRONICS Series Editors: Robert G W Brown, University of California, Irvine, USA E Roy Pike, Kings This is now known as the Fourier series representation of a periodic function. Fourier series In the following chapters, we will look at methods for solving the PDEs described in Chapter 1. Cosine Series. Line of Duty series six: Vicky McClure and Martin Compston risk wrath of Superintendent Hastings as they mess around in his office during filming. In case of the even function, for example x 2, coefficients b n were zero, because the integrand x 2 sin n π x - is odd function. Necessary cookies help make a website usable by enabling basic functions like page navigation and access to secure areas of the website. ""The Fourier series of an even function is a cosine series and the Fourier series of an odd function is a sine series"" this is shown in this video lecture. Daileda Fourier Series Introduction Periodic functions Piecewise smooth functions Inner products Definition 1: We say that f(x) is piecewisecontinuousif f has only finitely many discontinuities in any interval, and f(c+) and f(c−) exist for all c in the domain of f. Functions - What Does the Pharynx Do. 
The discrete-time Fourier transform is an example of Fourier series. Decompose the following function in terms of its Fourier series. It is noted that, like and , the weighted average is discontinuous at if. The function is f(x) = 1 if 0 < x < pi/2 and f(x) = 0 if pi/2 < x < pi. The only discontinuities allowed are jump discontinuities. f ( x) ∼ a 0 + ∑ n = 1 ∞ [ a n cos. If f(x) is an odd function with period , then the Fourier Half Range sine series of f is defined to be. Then its Fourier series converges everywhere (pointwise) to f. 1 Orthogonal Functions 12. 0-6ubuntu2) [universe]. Returns a piecewise function from a list of (interval, function) pairs. Классы интегрируемых функций. In this section we assume that the piecewise smooth function is defined on with a jump-discontinuity. 584 Chapter 9 Fourier Series Methods DEFINITION Fourier Series and Fourier Coefficients Let f(t) be a piecewise continuous function of period 2yr that is defined for all t. Just as the Fourier series expansion of the Bernoulli functions are useful in computing the special values of Dirichlet L-functions, we would like to see some applications to a certain generalization of Dirichlet L-functions and higher-order generalized Bernoulli numbers in near future. Ø Complex Exponential Fourier Series. In practice, one can only use a finite linear combination. This function is often used as an example of the application of Fourier series, and, therefore, it is convenient to take this function for comparative analysis of a traditional Fourier series expansion and the suggested method. Find the Fourier series of h (x) = x on the interval [-π, π]. Fourier Series: Summary December 4, 2007 Fix L>0 and let I := [ L;L], that is, the set of real numbers xsuch that L x L. 
Under some additional conditions (such as piecewise differentiability), this Fourier series of an arbitrary function by the orthogonal system with Fourier coefficients converges to on an interval at the points of continuity of , and to at the points of discontinuity of , where ). The piecewise linear function based on the floor function of time t, is an example of a sawtooth wave with period 1. The period is taken to be 2 Pi, symmetric around the origin, so the. I'm taking a Fourier Analysis course using Churchill 's Fourier Series and Boundary Value Problems, 6th ed. Fourier Convergence Theorem. Find the Fourier series of the following functions. Write a function m-file named coef_fourier. (Piecewise Smooth) A function is said to be piecewise smooth if it is continuous and its derivative is defined everywhere except possibly for a discrete set of points. Periodic extension of functions (1. per_f= piecewisea< xand x< b, f(x),. For now we are just saying that associated with any piecewise continuous function on [ ˇ;ˇ] is a certain series called a Fourier series. Sine series. Find more Mathematics widgets in Wolfram|Alpha. Library of functions for 2D graphics - runtime files. This makes it possible to apply the Poisson summation formula to describe the Fourier series expansion of a b-spline in terms of its Fourier transform. $\endgroup$ – Greg Martin yesterday $\begingroup$ (to guarantee the convergence to the function we need Dini's Criterion, stronger than continuity). Interpolation and Approximation - Astro Temple June 27th, 2020 | 641 | No Comments » Interpolation and Approximation by Rational Functions in the. On-Line Fourier Series Calculator is an interactive app to calculate Fourier Series coefficients (Up to 10000 elements) for user-defined piecewise functions up to 5 pieces, for example. Limit calculation added. The kernel is then shifted to another section of the. 2) Convergence of Fourier series. 
In mathematics and statistics, a piecewise linear (PL, or segmented) function is a real-valued function of a real variable whose graph is composed of straight-line segments. A related special function is sinc(x) = sin(x)/x: its maximum value 1 is attained in the limit x -> 0, its first zeros occur at x = +/- pi, and tables of Fourier series and transform pairs typically list it. Although Fourier series or integrals of piecewise smooth functions may be slowly convergent, sometimes it is possible to accelerate their speed of convergence by adding and subtracting suitable combinations of known functions whose series are available in closed form. A period-2L function f(t) is piecewise smooth if there are only finitely many points 0 <= t1 < t2 < ... < tn <= 2L where f is not differentiable, and if at each of these points the left- and right-hand limits exist (although they might not be equal). Instead of sines and cosines, a function can equivalently be decomposed into terms of the form e^{inx}. Near a jump, truncated series exhibit the Gibbs phenomenon: the partial sums overshoot the function by a fixed fraction of the jump no matter how many terms are taken. Generally speaking, we may find the Fourier series of any piecewise continuous function on a finite interval. In this article, f denotes a real-valued function which is periodic with period 2L. For floor-function constructions, note that in Maple the integer part [y] is computed with the command floor(y). To make such computations run reasonably efficiently, it is often better to have a CAS such as Sage do numerical, rather than symbolic, integrals.
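The Gibbs overshoot can be measured directly on the square-wave series sign(x) ~ (4/pi) sum_{k>=1} sin((2k-1)x)/(2k-1). A sketch assuming NumPy:

```python
import numpy as np

def square_partial(x, N):
    # Partial sum of the square-wave series:
    # (4/pi) * sum_{k=1}^{N} sin((2k-1)*x) / (2k-1)
    m = 2 * np.arange(1, N + 1) - 1
    return (4 / np.pi) * np.sum(np.sin(np.outer(x, m)) / m, axis=1)

x = np.linspace(1e-3, 0.5, 4000)
overshoot = square_partial(x, 100).max()
print(overshoot)  # about 1.18; the overshoot does not shrink as N grows
```

The limiting overshoot is (2/pi) Si(pi) ~ 1.179, about 9% of the half-jump, which is why adding more terms narrows the spike near the jump but never lowers it.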
For any a > 0 the functions cos(at) and sin(at) are periodic with period 2 pi / a. If you are a student in one of the mathematical, physical, or engineering sciences, you will almost certainly find it necessary to learn Fourier series. A modern application: given the Fourier series coefficients of a function on a rectangular domain in R^d, and assuming the function is piecewise smooth, one can approximate the function by piecewise high-order spline functions. An infinite sum as in formula (1) is called a Fourier series, after the French mathematician and engineer Joseph Fourier, who first considered properties of these series. If f is continuous and its Fourier coefficients are absolutely summable, then the Fourier series converges uniformly. For functions that are not periodic, the Fourier series is replaced by the Fourier transform. Both are used for designing electrical circuits and for solving differential and integral equations, and both apply to piecewise continuous functions. So the question is: given f, can we write f(x) = a0 + sum_{k=1}^inf b_k cos(kx) + sum_{k=1}^inf c_k sin(kx)? When the coefficients of a particular expansion are known in closed form, the need for numerical integration is eliminated, and a program plotting the partial sums will run much more quickly than the general form for Fourier series expansions.
Fourier series are used in many applications. A function is called C^1-piecewise on an interval I = [a, b] if there exists a partition of I on each piece of which the function is continuously differentiable. Exercise: find the Fourier series of the following piecewise-defined function on the interval [-1, 1]: h(x) = -1 - x if -1 <= x < 0, and h(x) = 1 - x if 0 < x <= 1. Assume that f is a 2 pi-periodic function which is piecewise smooth. For more serious applications, pointwise convergence not known to be uniform is often useless, which is why uniform-convergence criteria matter. A function is said to be piecewise continuous (some say sectionally continuous) if it is continuous except at a discrete set of jump points, at each of which it has finite one-sided limits. The fundamental result on pointwise convergence of Fourier series, due to Dirichlet, applies to such functions: let f(x) be an arbitrary piecewise continuous (and piecewise smooth) function on a finite interval (a, b); then its Fourier series converges to f at points of continuity and to the average of the one-sided limits at jumps. When a function is discontinuous, its Fourier series therefore doesn't necessarily equal the function everywhere. Given an integer n >= 0, the n-th cosine coefficient of the Fourier series of f is defined by a_n = (1/L) * integral from -L to L of f(x) cos(n pi x / L) dx, where L is the half-period of f. A useful experiment is to activate a plot of the partial sums and increase, or decrease, the number of terms; Fourier series also arise in differential equations involving piecewise forcing functions.
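For the two-piece exercise above, each coefficient splits into one integral per piece. Since h is odd, only the sine coefficients survive; a sketch assuming SymPy:

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', positive=True, integer=True)

# h(x) = -1 - x on [-1, 0) and 1 - x on (0, 1] is odd, so a_n = 0.
# With half-period L = 1, b_n is the sum of two integrals, one per piece.
bn = (sp.integrate((-1 - x) * sp.sin(n * sp.pi * x), (x, -1, 0))
      + sp.integrate((1 - x) * sp.sin(n * sp.pi * x), (x, 0, 1)))
bn = sp.simplify(bn)
print(bn)  # simplifies to 2/(pi*n)
```

So h has the sine series sum_{n>=1} (2/(pi n)) sin(n pi x), converging to 0 (the jump midpoint) at x = 0.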
When the French mathematician Joseph Fourier (1768-1830) was trying to solve a problem in heat conduction, he needed to express a function f as an infinite series of sine and cosine functions: f(x) = a0 + sum_{n=1}^inf (a_n cos nx + b_n sin nx) = a0 + a1 cos x + a2 cos 2x + a3 cos 3x + ... + b1 sin x + b2 sin 2x + b3 sin 3x + ... In fact, for f periodic with period 2 pi, any interval of that length can be used for the coefficient integrals, with the choice being one of convenience or personal preference (Arfken 1985, p. 760). A standard quiz fact follows from symmetry: the Fourier series of a real periodic function has only cosine terms if the function is even, and only sine terms if it is odd. The convergence behavior described above, with the series taking the average value at jumps, is typical of the Fourier series of piecewise continuous functions; later sections state this as a theorem without proof. Fourier series of functions with an arbitrary period: for an expansion on the interval [-L, L] we assume that the function f(x) is piecewise continuous on [-L, L]. In heat-conduction problems, the initial condition T(x, 0) is a piecewise continuous function on the interval [0, L] that is zero at the boundaries. Computer algebra systems support symbolic computation of Fourier series, including summation and symbolic representation for algebraic functions, and even/odd symmetry can be exploited to halve the work.
Software apps allow the user to define a piecewise function, calculate the coefficients of the trigonometric Fourier series expansion, and plot the approximation. For example, a piecewise polynomial function is a function that is defined by a polynomial on each of its sub-domains, but possibly by a different polynomial on each; piecewise functions are written using the common functional notation, where the body of the function is an array of expressions and associated conditions. Earlier than Fourier, Daniel Bernoulli and Leonhard Euler had already used such series. Conversely, the Fourier sine series of a function f : [0, L] -> R is the Fourier series of its odd extension. In particular, at each point x where f is continuous we have f(x) = a0/2 + sum_{n=1}^inf [a_n cos(n pi x / L) + b_n sin(n pi x / L)]. The Fourier transform of a function can be defined in several equivalent ways as an integral, and the discrete transforms carry the same ideas from the continuous to the discrete world. The statement that any reasonable (piecewise continuous) function of period 2 pi has exactly one expression as a Fourier series can be analysed precisely; given the Fourier series of two functions, one can also form a weighted average of the two expansions, with Fourier coefficients combined accordingly. Theorem (L^2 convergence): the Fourier series of a square-integrable function converges to the function in the L^2 norm.
Using the substitution x = Ly/pi (-pi <= x <= pi), a function given on [-L, L] can be transformed into one on [-pi, pi], so results proved for period 2 pi transfer to arbitrary periods. Applying the periodicity observations above to the functions sin t and cos t, each with fundamental period 2 pi, gives the basic facts used throughout this chapter. In Maple, we can define and plot a piecewise smooth function directly: > restart: > g := x -> piecewise(x<=0, 0, x<=3, x): Convergence of Fourier series: the period-2L function f(t) is called piecewise smooth if there are only a finite number of points 0 <= t1 < t2 < ... < tn <= 2L where f(t) is not differentiable, and if at each of these points the left- and right-hand limits exist (although they might not be equal). Then its Fourier series converges everywhere (pointwise), and to f at points of continuity; in other words, if f is a continuous piecewise smooth periodic function, the series recovers f exactly. If we are given a function f(x) on an interval [0, L] and we want to represent f by a Fourier series, we have two natural choices: the cosine series of its even extension or the sine series of its odd extension. A classic reference for this material is Davis, Fourier Series and Orthogonal Functions (Dover).
The Fourier series of a periodic continuous-time signal can be written compactly in exponential form as x(t) = sum_{k=-inf}^{+inf} a_k e^{jk(2 pi / T)t} = sum_{k=-inf}^{+inf} a_k e^{jk w0 t}, with w0 = 2 pi / T. In Sage, Piecewise(list_of_pairs, var=None) returns a piecewise function from a list of (interval, function) pairs; it is good practice to test a coefficient routine on a known example and compare with tabulated results. The Fourier series for a number of piecewise smooth functions are listed in tables in standard references, together with the theorem identifying what the sums of those series are. There are countless types of symmetry, but the ones we want to focus on are evenness and oddness. At each point x where f is continuous the series converges to f(x), and the constants a0, a_m, b_m are uniquely determined by f. There is also a relevant differentiation theorem: given a continuous and piecewise smooth function f, differentiating the Fourier series for f term by term gives the Fourier series of f'; the only thing to be careful about is the constant (a0) term created or destroyed by these operations. The coefficients for Fourier series expansions of a few common functions are given in Beyer (1987, pp. 411-412). In math, a piecewise function (or piecewise-defined function) is a function whose definition changes depending on the value of the independent variable. Matrix-valued applications arise too, for instance when the entries of a system matrix are only piecewise continuous in time, with discontinuities in between.
Typical course exercises ask one to verify the convergence of a series on a given interval and to find the Fourier series of specific functions. The Fourier series of f(t) is the series a0/2 + sum_{n=1}^inf (a_n cos nt + b_n sin nt), (18), where the Fourier coefficients a_n and b_n are defined by means of the usual integral formulas. In each example below we start with a function defined on an interval, plotted in blue; then we present the periodic extension of this function, plotted in red; then we present the Fourier partial sums of this function, plotted in green. In SymPy one can attempt fourier_series(p, (t, 0, 2*pi)).truncate(8) for a piecewise p, though the Piecewise conditions must partition the period correctly for this to work. The Dirichlet conditions make the convergence claim precise: a Fourier series can represent any piecewise regular function on the range [0, 2L] provided f(x) has only a finite number of discontinuities and only a finite number of extreme values (maxima and minima). When a function is discontinuous, its Fourier series doesn't necessarily equal the function at the jumps. A Fourier series is analogous to a Taylor series, which represents functions as possibly infinite sums of monomial terms. A natural modelling question in applications: can we use linear piecewise functions in order to model the QRS complex of an electrocardiogram?
(In a forum exchange on such modelling questions, the reply was that a piecewise fit could work, but the Fourier series approach would be more appropriate for periodic data.) The Fourier integral is a natural extension of the Fourier trigonometric series in the sense that it represents a piecewise smooth function whose domain is semi-infinite or infinite: a periodic function f(x) defined on a finite interval (-L, L) can be expressed as a Fourier series, and by extending this concept, non-periodic functions defined on the whole line are handled by the Fourier integral. There is also a decay estimate: if f extends holomorphically to a strip of width r > 0 centered around the real axis, then |a_n| = O(exp(-rn)). For differentiation, note that f'(t+) and f'(t-) are both finite, by definition of "piecewise smooth"; more precisely, the differentiated series converges to f'(t) if f' is continuous at t, and to (f'(t+) + f'(t-))/2 at a jump of f'. A function f(x) is piecewise smooth on some interval if and only if f(x) and f'(x) are continuous on a finite collection of subintervals of the given interval. A typical syllabus unit covers: periodic functions; Fourier series of periodic functions; Euler's formulae; functions having arbitrary period; change of interval; even and odd functions; half range sine and cosine series; video lectures treat the Fourier cosine series of a piecewise function and introduce Fourier sine and cosine series. Finally, the Fourier series of a function f in L^2([0,1]) converges to f in the L^2 norm.
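The L^2 statement can be sanity-checked numerically through Parseval's identity. For the sawtooth f(t) = t - floor(t) on [0, 1), the mean is a0 = 1/2 and the sine coefficients are b_n = -1/(pi n), so the identity reads 1/3 = a0^2 + (1/2) sum b_n^2. A sketch assuming NumPy:

```python
import numpy as np

# Parseval for f(t) = t - floor(t) on [0, 1):
#   lhs = integral of f^2 over one period = 1/3
#   rhs = a0^2 + (1/2) * sum_n b_n^2, with a0 = 1/2 and b_n = -1/(pi*n)
n = np.arange(1, 200_000)
lhs = 1.0 / 3.0
rhs = 0.25 + 0.5 * np.sum(1.0 / (np.pi * n) ** 2)
print(lhs, rhs)  # agree to about six decimal places at this truncation
```

The truncation error is the tail of sum 1/n^2, roughly 1/N, scaled by 1/(2 pi^2), so two hundred thousand terms leave a discrepancy of a few times 1e-7.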
The Fourier transform (unitary, in ordinary frequency) extends these ideas beyond periodic functions. In Sage, the method fourier_series_cosine_coefficient returns the n-th cosine coefficient of the Fourier series of the periodic function extending a piecewise-defined function. Introduction to complex Fourier series: Fourier series come in two flavors, the real sine-cosine form and the complex exponential form. We will also define the odd extension of a function and work several examples finding the Fourier sine series of a function. Before looking at further examples of Fourier series it is useful to distinguish two classes of functions for which the Euler-Fourier formulas for the coefficients can be simplified: even and odd functions. Let P be the set of piecewise continuous functions from I to R (a linear subspace of the vector space of all such functions). As a concrete discontinuous example, the function f(x) = x on [-pi, pi], extended periodically to the real line, is discontinuous at x = (2k + 1) pi for all integer values of k.
We use piecewise functions to describe situations in which a rule or relationship changes as the input value crosses certain "boundaries." A Fourier cosine series has df/dx = 0 at x = 0, and a Fourier sine series has f(x = 0) = 0; this dictates which expansion suits which boundary condition. If f : R -> C is a piecewise continuous 2 pi-periodic function, then the numbers c_k(f) = (1/(2 pi)) * integral from -pi to pi of f(x) e^{-ikx} dx, k in Z, (9), are called the Fourier coefficients of f, and the series sum_{k=-inf}^{inf} c_k(f) e^{ikx} is called the Fourier series for f. For example, in the vibrating-string problem we can see that the solution y(x, t) = sum_{n=1}^inf sin(n pi x / L) [A_n cos(n pi c t / L) + B_n sin(n pi c t / L)] is built from Fourier sine coefficients. A chapter on transform methods is typically concerned with generalized functions and their Fourier transforms, and many problems are posed as: given the Fourier coefficients of f, recover properties of f; uniqueness holds, since if the transforms agree then f = g. A typical table of contents for the real and complex theory: real Fourier series on [-pi, pi]; computing real Fourier coefficients for polynomials, step functions, and piecewise linear functions; differentiating real Fourier series; the relation between (co)sine series and real series; complex Fourier series. The complex notation clearly suggests the much simpler form x(t) = sum_{n=-inf}^{+inf} X_n e^{in(2 pi f0)t}, (14), with the coefficients given by X_n = (1/T) * integral from -T/2 to T/2 of x(t) e^{-in(2 pi f0)t} dt. (15) Here the Fourier series is written for a complex periodic function x(t) with arbitrary period T = 1/f0.
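Formula (15) can be approximated by a Riemann sum. For x(t) = cos(2 pi t) with T = 1 (so f0 = 1), the only nonzero coefficients should be X_1 = X_{-1} = 1/2. A sketch assuming NumPy:

```python
import numpy as np

# Riemann-sum approximation of X_n = (1/T) * integral of x(t) e^{-i n 2 pi f0 t} dt
# over one period, for x(t) = cos(2*pi*t) with T = 1.
T = 1.0
t = np.linspace(0.0, T, 100_000, endpoint=False)
x = np.cos(2 * np.pi * t)

def X(n):
    # np.mean over an equispaced grid on [0, T) is exactly (1/T) * Riemann sum.
    return np.mean(x * np.exp(-1j * 2 * np.pi * n * t / T))

print(abs(X(1)), abs(X(2)))  # approximately 0.5 and 0.0
```

On an equispaced grid with endpoint=False, the discrete orthogonality of complex exponentials makes these low-order coefficients exact to floating-point precision.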
A 7-term expansion (a0, b1, b3, b5, b7, b9, b11) already approximates a square wave well (Figure 5). The Fourier expansion of the square wave is a linear combination of sinusoids: if we remove the DC component by centering the wave, the square wave becomes an odd function composed of odd harmonics of sine functions. Theorem: let f be a piecewise smooth function on the interval [0, L]; then f has both a Fourier cosine and a Fourier sine expansion there. The sequence (e_n) of complex exponentials turns out to be an orthonormal basis for L^2. When a system's matrix entries are known only at sample times, the Fourier integral must be calculated numerically, by sampling the matrix at different instants of time. Fourier series is one of the most intriguing topics in analysis. The Fourier representation of a piecewise smooth function f is the identity f(x) = a0/2 + sum_{k=1}^inf a_k cos(kx) + sum_{k=1}^inf b_k sin(kx). We take it for granted for now that the series converges and that the identity holds at all points x where f is continuous. Definition 1 (Daileda, Fourier series notes): we say that f(x) is piecewise continuous if f has only finitely many discontinuities in any interval, and f(c+) and f(c-) exist for all c in the domain of f.
Remember that for a function defined piecewise across the period you are not computing coefficients for two different functions: you are computing the coefficients of one function, except each Fourier coefficient becomes a sum of two integrals, one per piece. The fractional part x - floor(x) is again the standard example, and the same idea, pushed to non-periodic functions, is the Fourier transform. A worked Sage example, the Fourier series of the cosine function: sage: f = piecewise([((0, 2*pi), cos(x))]). To get the cosine coefficient of order n of the Fourier series, one calls the method fourier_series_cosine_coefficient with n and the half-period as arguments. In the case of an even function, for example x^2, the coefficients b_n are zero, because the integrand x^2 sin(n pi x) is an odd function. It is easier than one might expect to say what the Fourier series does exactly at a discontinuity: it converges to the midpoint of the jump. Some singular functions can be handled by writing the singularity as the sum of a piecewise polynomial function and a function which is continuously differentiable up to the specified order.
In order to justify the use of Fourier series to model functions and explore the various applications of Fourier analysis, we must first investigate whether the Fourier series is, indeed, a good approximation; what the series does on either side of a discontinuity is the interesting part. For functions on unbounded intervals, the analysis and synthesis analogies are the Fourier transform and inverse transform; on a bounded interval, numerical integration routines (replacing a calculator's built-in integrator) make the coefficient computations fast. Convergence of Fourier series for 2T-periodic functions: the Fourier series of a 2T-periodic piecewise smooth function f(x) is a0 + sum_{n=1}^inf [a_n cos(n pi x / T) + b_n sin(n pi x / T)], where a0 = (1/(2T)) * integral from -T to T of f(x) dx, a_n = (1/T) * integral from -T to T of f(x) cos(n pi x / T) dx, and b_n = (1/T) * integral from -T to T of f(x) sin(n pi x / T) dx. The series converges to f(x) at points of continuity of f and to (f(x+) + f(x-))/2 at jumps. Note that f is continuous and 2T-periodic if and only if its periodic extension is. Shifting a function's argument or its value affects the Fourier series in a predictable way, so if you can find the Fourier series of the shifted function, you can easily convert it to the Fourier series of the original. A simple notebook exercise along these lines is to approximate a unit step at x = 0 and calculate the coefficients.
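The 2T-periodic coefficient formulas translate directly into a midpoint-rule routine. A sketch assuming NumPy, checked on f(x) = |x| on [-1, 1], where a0 = 1/2 and a_n = 2((-1)^n - 1)/(n^2 pi^2) (so a_n = -4/(n^2 pi^2) for odd n and 0 for even n):

```python
import numpy as np

def fourier_coeffs(f, T, nmax, samples=20_000):
    """Midpoint-rule approximation of a0, a_n, b_n for a 2T-periodic f on [-T, T]."""
    x = -T + (np.arange(samples) + 0.5) * (2 * T / samples)  # midpoints
    dx = 2 * T / samples
    fx = f(x)
    a0 = np.sum(fx) * dx / (2 * T)
    a = np.array([np.sum(fx * np.cos(n * np.pi * x / T)) * dx / T
                  for n in range(1, nmax + 1)])
    b = np.array([np.sum(fx * np.sin(n * np.pi * x / T)) * dx / T
                  for n in range(1, nmax + 1)])
    return a0, a, b

a0, a, b = fourier_coeffs(np.abs, 1.0, 3)
print(a0, a)  # about 0.5 and [-4/pi^2, 0, -4/(9*pi^2)]
```

Since |x| is even, the b_n come out numerically zero, illustrating the symmetry rule stated earlier.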
By using these expansions and choosing suitable values of x (usually 0, pi/2, or pi), one can derive formulas for the sums of numerical series such as sum 1/n^2. Sines and cosines are the most fundamental periodic functions. In SymPy, a half-rectified sine wave can be set up as a Piecewise and expanded, e.g. p = Piecewise((sin(t), t < pi), (0, True)); fs = fourier_series(p, (t, 0, 2*pi)); fs.truncate(8). (The commonly posted variant with overlapping conditions such as (sin(t), 0 < t), (sin(t), t < pi), ... does not work, because the conditions fail to partition the period.) As the symmetry rule says: the Fourier series of an even function is a cosine series, and the Fourier series of an odd function is a sine series. Related topics in a typical course include Sturm-Liouville problems and Bessel and Legendre series.
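A corrected, runnable version of the SymPy snippet above, with the overlapping conditions replaced by a proper partition of the period; this assumes fourier_series accepts a Piecewise integrand, which it handles by integrating piece by piece:

```python
import sympy as sp

t = sp.symbols('t')
# Half-rectified sine on [0, 2*pi): sin(t) for t < pi, 0 afterwards.
p = sp.Piecewise((sp.sin(t), t < sp.pi), (0, True))
fs = sp.fourier_series(p, (t, 0, 2 * sp.pi))
print(fs.truncate(8))  # leading terms 1/pi + sin(t)/2 - 2*cos(2*t)/(3*pi) - ...
```

The classical closed form is 1/pi + sin(t)/2 - (2/pi) sum_{k>=1} cos(2kt)/(4k^2 - 1), so the truncation should evaluate close to 1 at t = pi/2.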
The Fourier series is used to approximate a periodic function on a given interval using only whole multiples of the base frequency; a Taylor series, by contrast, does not include terms with negative powers and is local rather than periodic. The proof of the convergence of a Fourier series is out of the scope of this text; however, from the convergence theorem one can derive two further important results (Haberman). Because complex exponentials are eigenfunctions of LTI systems, it is often useful to represent signals using a set of complex exponentials: the continuous-time Fourier series synthesis formula expresses a continuous-time periodic function as the sum of continuous-time, discrete-frequency complex exponentials. There is a simple relation between the trigonometric and exponential Fourier series: for n > 0 the exponential coefficients are c_n = (a_n - i b_n)/2, so each form's coefficients are evaluated from the other's. For the triangle wave, the amplitudes of the harmonics drop off much more rapidly: they go as 1/n^2, which is faster than the 1/n decay seen in the pulse-function Fourier series above. In problems of this type we are given a piecewise function and must find the Fourier series associated with it; when the function is odd, it has a pure Fourier sine series.
Figures 5 and 6 show the even and the odd extension, respectively, for a function given on its half-period. Let f be a piecewise continuous function defined on [-1, 1] with full Fourier series a0/2 + sum_{k=1}^inf (a_k cos(k pi x) + b_k sin(k pi x)). We investigate two periodic extensions of y = x to the interval [-L, L] along with their Fourier series: the even (triangle-wave) extension and the odd (sawtooth) extension. Such expansions even appear in machine learning, for example in TensorFlow layers built from piecewise Lagrange polynomials and Fourier series. A typical signals exercise: find the fundamental period, and deduce and plot the magnitude and phase of the exponential Fourier series coefficients D_n, for periodic signals such as (1) x(t) = cos(2t) + e-3 sin(4t) + 2 sin(2t + pi/4) and (2) the signal a(t) shown in Fig.
Baron Jean Baptiste Joseph Fourier $$\left( 1768-1830 \right)$$ introduced the idea that any periodic function can be represented by a series of sines and cosines which are harmonically related. It looks like the whole Fourier Series concept is working. Find the Fourier series of h (x) = x on the interval [-π, π]. We, therefore, followed the clade model with a series of branch-site models, which allow one clade at a time to be designated as a set of "foreground" branches and test whether this clade has experienced episodes of positive selection compared to the remaining sets of "background" branches (ωforeground. Hence, any piecewise continuous function f(x) on [0;L] can be represented both as Fourier cosine series and a Fourier sine series. So the question is, can we write f(x) = a 0 + X1 k=1 b kcos(kx) + X1 k=1 c ksin(kx). A function is called C 1 -piecewise on some interval I= [a;b] if there exists a partition. larity as the sum of a piecewise polynomial function and a function which is continuously differentiable up to the specified order. 20 3 To apply effective mathematical tools for the solutions of first order ordinary differential equations. m that is similar to coef_legen and has signature function [z,s,c]=coef_fourier(func,n) % [z,s,c]=coef_fourier(func,n) % more comments % your name and the date to compute the first coefficients of the Fourier series using Equation. The Fourier series of a piecewise continuous function with 8 segments and no discontinuities can be found from the above applet with Fn = 1. Fourier series i. Chapter 2 is concerned with the generalized functions and their Fourier transforms. Using the substitution x = Ly π (−π ≤ x ≤ π), we can transform it into the function. u n(x) := cos(nˇx L) for n= 0;1;2;:::. Object must have a datetime-like index (DatetimeIndex, PeriodIndex, or TimedeltaIndex), or pass datetime-like values to the on or level keyword. 2) The entries are only piecewise continuous in time, with discontinuities in between. 
This email address is being protected from spambots. The spectrum contains only terms with b n. REFERENCES [1]. An improvement of the Beurling-Helson theorem. Fourier Series Of Piecewise Function 1 Orthogonal Functions 12. fourier does not transform piecewise. Fourier analysis has been applied to stock trading, but research examining the technique has found little to no evidence that it is useful in practice. It is also quite easy to show that if f(x) is piecewise smooth, then also is F(x). If f is piecewise continuous with piecewise continuous derivative on [0,L ), then its sine Fourier series converges to the odd periodic extension of f modified at discontinuities using averages. Before looking at further examples of Fourier series it is useful to distinguish two classes of functions for which the Euler-Fourier formulas for the coefficients can be simplified. function f (x) =π, π∈[]−π, π, , extended periodically on the real line; this function is discontinuous at x =(2k +1)π for all interger values of k. There are countless types of symmetry, but the ones we want to focus on are. November 2019; Issues properties of discrete and continuous finite Fourier series. Piecewise Constant Function. Travel and explore the world of cinema. Derivative numerical and analytical calculator. Free piecewise functions calculator - explore piecewise function domain, range, intercepts, extreme points and asymptotes step-by-step This website uses cookies to ensure you get the best experience. Sine series. In order to incorporate general initial or boundaryconditions into oursolutions, it will be necessary to have some understanding of Fourier series. In this article, f denotes a real valued function on which is periodic with period 2L. Again, using MathView to handle the detailed manipulation allows Let's have a look at a simple notebook example where the Fourier series approximates a unit step function at x=0 and calculate the coefficients. 
list_of_pairs is a list of pairs (I, fcn), where fcn is a Sage function (such as a polynomial over RR, or functions using the lambda notation), and I is an interval such as I = (1,3). In this section we will define piecewise smooth functions and the periodic extension of a function. introduce one of the many ways that Fourier series are used in applications. The main goal is to have a Fourier series function able to work in exact mode for piecewise signals. SERIES IN OPTICS AND OPTOELECTRONICS Series Editors: Robert G W Brown, University of California, Irvine, USA E Roy Pike, Kings This is now known as the Fourier series representation of a periodic function. over an x- range of three periods of the Fourier series. Fourier Series Of Piecewise Function 1 Orthogonal Functions 12. Just as the Fourier series expansion of the Bernoulli functions are useful in computing the special values of Dirichlet L-functions, we would like to see some applications to a certain generalization of Dirichlet L-functions and higher-order generalized Bernoulli numbers in near future. 3 Fourier Cosine and Sine Series 12. Let f(x) be a piecewise C1 function in Per L(R). Aug 30, 2020 an introduction to laplace transforms and fourier series springer undergraduate mathematics series Posted By Ry?tar? ShibaPublic Library TEXT ID 298292c4 Online PDF Ebook Epub Library AN INTRODUCTION TO LAPLACE TRANSFORMS AND FOURIER SERIES SPRINGER UNDERGRADUATE MATHEMATICS SERIES INTRODUCTION : #1 An Introduction To Laplace. Tensorflow layers using piecewise Lagrange polynomials and Fourier series. Riemann-Lebesgue lemma (1. equations and fourier integral representation. FOURIER SERIES When the French mathematician Joseph Fourier (1768–1830) was trying to solve a problem in heat conduction, he needed to express a function f as an infinite series of sine and cosine functions: 兺 共a f 共x兲 苷 a 0 1 n cos nx bn sin nx兲 n苷1 苷 a 0 a1 cos x a2 cos 2x a3 cos 3x b1 sin x b2 sin 2x b3 sin 3x. 
2 - Fourier Series and Convergence • State the definition of a Piecewise Continuous function. Let f ( x) be a function, which is twice differentiable, such that f ( x ), f ' ( x ), and f '' ( x) are piecewise continuous on the interval. Fourier Series Summary. $\endgroup$ - Eweler Sep 28 '14 at 20:59. For functions on unbounded intervals, the analysis and synthesis analogies are Fourier transform and inverse transform. Notice that if the periodic extension of is a continuous function, then the Fourier periodic extension of coincides with the periodic extension of. Limit calculation added. The fundamental result on convergence of Fourier series, due to Dirichlet, states: Theorem. In particular, if L > 0then the functions cos nˇ L t and sin nˇ L t, n =1, 2, 3, are periodic with fundamental. Is there any way to solve that? Perhaps an alternative? Many thanks. Introduction to Fourier sine series and Fourier cosine series - Duration: 17:54. Mathematica for Fourier Series and Transforms Fourier Series Periodic odd step function Use built-in function "UnitStep" to define. Do exponential fourier series also have fourier coefficients to be evaluated. For a distribution in a continuous variable x the Fourier transform of the probability density. Then f has. There are countless types of symmetry, but the ones we want to focus on are. < tn ≤ 2L where f (t) is not differentiable, and if at each of these points the left and right-hand limits lim f (t) and lim f (t) exist (although they might not be equal). (Reversibility of Fourier transform for continuous functions) Let f and g be real- or complex-valued functions which are continuous and piecewise smooth on the real line, and suppose that they are absolutely integrable. 2 Uniform convergence of classical Fourier series Let2 fbe piecewise smooth on ( 1;1), continuous on [ 1;1], with f( 1) = f(1). Fourier transform unitary, ordinary frequency. 
I'm taking a Fourier Analysis course using Churchill 's Fourier Series and Boundary Value Problems, 6th ed. Recall how a convolutional layer overlays a kernel on a section of an image and performs bit-wise multiplication with all of the values at that location. It is noted that, like and , the weighted average is discontinuous at if. We present an algorithm for the evaluation of the Fourier transform of piecewise constant functions of two variables. In math, a piecewise function (or piecewise-defined function) is a function whose definition changes depending on the value of the independent variable. , we use ˘and not =. Fourier series. Piecewise Functions 2 Page 1 - Cool Math has free online cool math lessons, cool math games and fun math activities. City Of Laredo Solid Waste Schedule. piecewise smooth function f and an interval [ L;L], the Fourier series of f converges to either f (if f is continuous) or the average of f on [ L;L]. The Fourier series for a number of piecewise smooth functions are listed in Table l of §21, and Theorem 2. Proposition (i) The Fourier series of an odd function f : [−L,L] → R coincides with its Fourier sine series on [0,L]. Signal Processing : Fourier transform is the process of breaking a signal into a sum of. Fn = 2 shows the special case of the segments approximating a sine. This requires fto be periodic on [0;2ˇ]. Find the Fourier series of the following piecewise defined function, on the interval [-1, 1]: h (x) = (-1-x if-1 ≤ x < 0 1-x if 0 < x ≤ 1 x. Three halves. The segments are set by the parameters 'a' to 'h'. Fourier series has its application in problems pertaining to Heat conduction, acoustics, etc. The decomposition of non-periodic functions is accomplished with the Fourier. Fourier Series. Fn = 2 shows the special case of the segments approximating a sine. The Fourier Transform can, in fact, speed up the training process of convolutional neural networks. 
Find the Fourier coecients and the Fourier series of the square-wave function f dened by. • Let P be the set of piecewise continuous fuctions from I to R (a linear subspace of the vector space of all such functions). When a function is discontinuous, its Fourier series doesn't necessarily equal the function. 3 Fourier Cosine and Sine Series 12. Exercises for MAT3320 Fabrizio Donzelli 1 Fourier Series 1. the Gibbs phenomenon, the Fourier series of a piecewise continuously differentiable periodic function behaves at a jump discontinuity. As an odd function, this has a Fourier sine series f(x) ˘. On-Line Fourier Series Calculator is an interactive app to calculate Fourier Series coefficients (Up to 10000 elements) for user-defined piecewise functions up to 5 pieces, for example. The fourier series is used to approximate a periodic function on a given interval using only whole multiples of the base frequency. An improvement of the Beurling-Helson theorem. We shall shortly state three Fourier series expansions. By using this information and choosing suitable values of θ (usually 0, or s), derive the following formulas for the sums of numerical series. Compute Fourier Series Representation of a Function. 30 The Fourier series of a piecewise smooth, 2π-periodic func- tion f(x) converges uniformly to f(x) on [−π,π]. Both of those shifts will affect the fourier series in a predictable way, so that if you can find the fourier series for the shifted function, you can easily convert to the fourier series of the original function. If I compute the antiderivative of the piecewise version of the abs function. If f is piecewise continuous with piecewise continuous derivative on [0,L ), then its sine Fourier series converges to the odd periodic extension of f modified at discontinuities using averages. Even and Odd Functions, 76 ix fc.
https://mathhelpboards.com/threads/3-3-003-find-the-equation-of-the-curve-that-passes-through-the-point-1-2.27646/
# [SOLVED]3.3.003 Find the equation of the curve that passes through the point (1,2)

#### karush ##### Well-known member
Find the equation of the curve that passes through the point $(1,2)$ and has a slope of $(3+\dfrac{1}{x})y$ at any point $(x,y)$ on the curve.
ok this is weird... I would assume the curve would be a parabola and an IVP solution...

#### MarkFL Staff member
You are essentially being asked to solve the IVP:
$$\displaystyle \d{y}{x}=\left(3+\frac{1}{x}\right)y$$
where $$y(1)=2$$.

#### karush ##### Well-known member
$\dfrac{1}{y}dy =\left(3+\dfrac{1}{x}\right) dx + C$
then $\int$ both sides??

#### MarkFL Staff member
I'd suggest you wait until you integrate to introduce a constant of integration. Or use definite integrals with the boundaries as the limits.

#### karush ##### Well-known member
$\ln y = 3x+\ln x$
ok so we got $y(1)=2$, $\quad x=1\quad y=2$
$\ln 2 = 3(1) + \ln 1 + C$
so $\ln 2-3 =C$
ummmmm!!
Last edited:

#### MarkFL Staff member
You are essentially being asked to solve the IVP:
$$\displaystyle \d{y}{x}=\left(3+\frac{1}{x}\right)y$$
where $$y(1)=2$$.
I would next separate the variables, and in doing so we are dividing by $$y$$, thereby eliminating the trivial solution: $$\displaystyle y\equiv0$$

And so we have:
$$\displaystyle \frac{1}{y}\,dy=\left(3+\frac{1}{x}\right)\,dx$$
Integrate, using the boundaries as limits:
$$\displaystyle \int_2^y \frac{1}{u}\,du=\int_1^x \left(3+\frac{1}{v}\right)\,dv$$
$$\displaystyle \left[\ln|u|\right]_2^y=\left[3v+\ln|v|\right]_1^x$$
$$\displaystyle \ln|y|-\ln(2)=(3x+\ln|x|)-(3+\ln(1))$$
$$\displaystyle \ln|y|=3x+\ln|x|-3+\ln(2)$$
This implies:
$$\displaystyle y(x)=2xe^{3(x-1)}$$

We could also have written the ODE as:
$$\displaystyle \d{y}{x}-\left(3+\frac{1}{x}\right)y=0$$
Compute the integrating factor:
$$\displaystyle \mu(x)=\exp\left(-\int \left(3+\frac{1}{x}\right)\,dx\right)=\frac{e^{-3x}}{x}$$
And the ODE becomes:
$$\displaystyle \frac{e^{-3x}}{x}\d{y}{x}-\frac{e^{-3x}}{x}\left(3+\frac{1}{x}\right)y=0$$
$$\displaystyle \frac{d}{dx}\left(\frac{e^{-3x}}{x}y\right)=0$$
$$\displaystyle \frac{e^{-3x}}{x}y=c_1$$
$$\displaystyle y(x)=c_1xe^{3x}$$
$$\displaystyle y(1)=c_1e^3=2\implies c_1=2e^{-3}$$
Hence:
$$\displaystyle y(x)=2xe^{3(x-1)}$$

#### karush ##### Well-known member
so I didn't use the limits properly... wow, that was a great help. appreciate all the steps

#### skeeter ##### Well-known member MHB Math Helper
$\ln y = 3x+\ln x$ ok so we got $y(1)=2$, $\quad x=1\quad y=2$ $\ln 2 = 3(1) + \ln 1 + C$ so $\ln 2-3 =C$ ummmmm!!
$\ln{y} = 3x + \ln{x} + \ln{2} - 3$
$\ln{y} = 3(x-1) + \ln(2x)$
$y = 2x \cdot e^{3(x-1)}$

#### karush ##### Well-known member
so I stopped too soon! have to admit that was an interesting problem
Last edited:
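Both derivations above arrive at $y(x) = 2xe^{3(x-1)}$. As a quick numerical sanity check (a minimal sketch in plain Python, not from the thread), one can verify the initial condition and compare a central-difference derivative against the slope field $(3 + 1/x)\,y$:

```python
import math

def y(x):
    """Candidate solution y(x) = 2x e^{3(x-1)}."""
    return 2 * x * math.exp(3 * (x - 1))

def slope_field(x, y_val):
    """Right-hand side of the ODE dy/dx = (3 + 1/x) y."""
    return (3 + 1 / x) * y_val

# initial condition y(1) = 2
assert abs(y(1) - 2) < 1e-12

# central-difference check that y'(x) matches the slope field
h = 1e-6
for x in (0.5, 1.0, 2.0, 3.0):
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(dydx - slope_field(x, y(x))) < 1e-4 * max(1.0, abs(dydx))
```

The relative tolerance absorbs both the $O(h^2)$ truncation error of the central difference and floating-point cancellation; the check passes at every sample point.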
http://mathhelpforum.com/calculus/26553-calculate-tangent-line-2-circles.html
# Thread: Calculate Tangent Line to 2 Circles

1. ## Calculate Tangent Line to 2 Circles

I am trying to calculate the equation of a line tangent to 2 circles. I have attached a picture to give a better idea. Circle 1 has a center at (3,4) and radius of 5. Circle 2 has a center at (9,19) and radius of 11. So far I have written the equations of the 2 circles, and found the distance between the points the tangent lines touch on each circle. My picture is not the most accurate one, it is just to give an idea. Thanks for the help

2. ## Geometry of tangents

First, remember that a tangent to a circle is perpendicular to a radial line segment to the point of tangency. Thus, recalling how we define the distance between a point and a line, the distance between a line tangent to a circle and the center of that circle is the radius of that circle. For a line of the form $\displaystyle Ax+By+C=0$, the distance to the point $\displaystyle (x_0,y_0)$ is
$\displaystyle d=\frac{|Ax_0+By_0+C|}{\sqrt{A^2+B^2}}$
Since the centers of the circles are (3,4) and (9,19), and the radii are 5 and 11, respectively, you have
$\displaystyle \frac{|3A+4B+C|}{\sqrt{A^2+B^2}}=5$ and $\displaystyle \frac{|9A+19B+C|}{\sqrt{A^2+B^2}}=11$
Note you will have an unneeded degree of freedom (as you could multiply the line equation $\displaystyle Ax+By+C=0$ by a constant and still have the same line), so you can pick A,B,C to be scaled such that $\displaystyle A^2+B^2=1$, so you then can define $\displaystyle \theta$ such that $\displaystyle A=\cos\theta,\,\,B=\sin\theta$, and our equations are
$\displaystyle |3\cos\theta+4\sin\theta+C|=5$ and $\displaystyle |9\cos\theta+19\sin\theta+C|=11$
Thus you have two equations for two unknowns. Note that there will be four solutions: two will be the 'external' tangents, and the other two will be 'internal' tangents (which cross each other between the circles)
--Kevin C.

3. Originally Posted by CaliMan982
I am trying to calculate the equation of a line tangent to 2 circles.
I have attached a picture to give a better idea. Circle 1 has a center at (3,4) and radius of 5. Circle 2 has a center at (9,19) and radius of 11. So far I have written the equations of the 2 circles, and found the distance between the points the tangent lines touch on each circle. My picture is not the most accurate one, it is just to give an idea. Thanks for the help

Hello, I've attached a slightly more accurate sketch. The steps to do the construction:
1. Draw a circle around the center of the green circle with radius $\displaystyle r_{green}-r_{blue}$
2. Draw a circle around the midpoint of $\displaystyle M_{green}M_{blue}$ passing through both centers (the Thales circle over that segment)
3. Connect the intersection points of the circle of #2 with the circle of #1 with $\displaystyle M_{blue}$. The tangent you are looking for is a parallel of this line.
4. Translate this line by $\displaystyle r_{blue}$ units until it touches both circles.

Calculating the angles:
$\displaystyle \tan(\alpha) = \frac{15}6 = \frac52$
For symmetry reasons the angles $\displaystyle \beta_1$ and $\displaystyle \beta_2$ must be equal:
$\displaystyle \tan(\beta) = \frac6{15} = \frac25$
Calculating the slope of the tangents (red): Use
$\displaystyle \tan(\alpha + \beta) = \frac{\tan(\alpha) \pm \tan(\beta)}{1 \mp \tan(\alpha) \cdot \tan(\beta)}$
Now you have to calculate the coordinates of the touching points on one of the circles to complete the equation of the tangent.

4. where does the 16/5 come from?

5. Originally Posted by CaliMan982
where does the 16/5 come from?
Pardon? If you mean $\displaystyle \frac{15}{6}$ then it is the slope between the 2 centres of the circles: $\displaystyle \frac{19-4}{9-3}=\frac{15}{6}$

6. How would I calculate a point on the circle the tangent line touches? I just calculated the slope.

7. Originally Posted by CaliMan982
How would I calculate a point on the circle the tangent line touches? I just calculated the slope.
Have a look here: http://www.mathhelpforum.com/math-he...348-post1.html

8.
## Complete solution to the four tangent lines of two circles Tangents to Two Circles gives complete expressions for cos theta, sin theta, and "C" in terms of the coordinates of the centers and radii of the two circles, which you may find helpful.
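The two distance equations from post 2 can also be solved mechanically. Here is a small sketch (plain Python; the function name and the sign enumeration are my own framing, not from the thread) that treats the two absolute-value conditions as signed distances, so each choice of signs reduces to one equation of the form $dx\cos t + dy\sin t = k$ that is solved in closed form:

```python
import math

def common_tangents(c1, r1, c2, r2):
    """All lines A*x + B*y + C = 0 with A^2 + B^2 = 1 at distance r1 from
    center c1 and r2 from center c2.  Enumerates the signs of the two
    signed distances; (A, B, C) and (-A, -B, -C) describe the same line."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    rho = math.hypot(dx, dy)          # distance between the centers
    base = math.atan2(dy, dx)
    lines = []
    for s1 in (+1, -1):
        for s2 in (+1, -1):
            # subtracting the signed-distance equations gives
            # dx*cos(t) + dy*sin(t) = s2*r2 - s1*r1 = k
            k = s2 * r2 - s1 * r1
            if abs(k) > rho:
                continue              # no tangent with this sign pattern
            for t in (base + math.acos(k / rho), base - math.acos(k / rho)):
                A, B = math.cos(t), math.sin(t)
                C = s1 * r1 - A * c1[0] - B * c1[1]
                lines.append((A, B, C))
    return lines

tangents = common_tangents((3, 4), 5, (9, 19), 11)
```

For these circles the distance between the centers is $\sqrt{261} \approx 16.16$, just over $r_1 + r_2 = 16$, so all four tangents (two external, two internal) exist; the eight tuples returned come in $(A,B,C)$ / $(-A,-B,-C)$ pairs describing those four lines, one of which is the vertical tangent $x = -2$.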
https://math.stackexchange.com/questions/3530949/solve-sin-x-cos-x-sin-x-cos-x
# Solve $\sin x + \cos x = \sin x \cos x.$

I have to solve the equation: $$\sin x + \cos x = \sin x \cos x$$

This is what I tried:
$$\hspace{1cm} \sin x + \cos x = \sin x \cos x \hspace{1cm} ()^2$$
$$\sin^2 x + 2\sin x \cos x + \cos^2 x = \sin^2 x \cos^2x$$
$$1 + \sin(2x) = \dfrac{4 \sin^2 x \cos^2x}{4}$$
$$1 + \sin(2x) = \dfrac{\sin^2(2x)}{4}$$
$$\sin^2(2x) - 4 \sin(2x) -4 = 0$$
Here we can use the notation $$t = \sin(2x)$$ with the condition that $$t \in [-1,1]$$.
$$t^2-4t-4=0$$
Solving this quadratic equation we get the solutions:
$$t_1 = 2+ 2\sqrt{2} \hspace{3cm} t_2 = 2 - 2\sqrt{2}$$
I managed to prove that $$t_1 \notin [-1, 1]$$ and that $$t_2 \in [-1, 1]$$. So the only solution is $$t_2 = 2 - 2\sqrt{2}$$. So we have:
$$\sin(2x) = 2 - 2\sqrt{2}$$
From this, we get:
$$2x = \arcsin(2-2\sqrt{2}) + 2 k \pi \hspace{3cm} 2x = \pi - \arcsin(2-2\sqrt{2}) + 2 k \pi$$
$$x = \dfrac{1}{2} \arcsin(2-2\sqrt{2}) + k \pi \hspace{3cm} x = \dfrac{\pi}{2} - \dfrac{1}{2}\arcsin(2 - 2\sqrt{2}) + k \pi$$
Is this solution correct? It's such an ugly answer that I kind of feel like it can't be right. Did I do something wrong?

• You have found some extraneous solutions by squaring both sides of the original equation. Check which of your solutions are valid in the original equation. Feb 1 '20 at 22:50
• The math does seem correct, and just because it is an ugly answer doesn't mean it is not correct. Feb 1 '20 at 23:47
• @PeterForeman Can you please expand on what you said a little bit? I don't see how squaring both sides added extra solutions. What do you mean by "Check which of your solutions are valid in the original equation"? What should I check and how? – user592938 Feb 2 '20 at 1:14
• Your extraneous solutions come from the equation $\sin x+\cos x=-\sin x\cos x$. By squaring, you combine the solutions to $\sin x+\cos x=\sin x\cos x$ and the solutions to $\sin x+\cos x=-\sin x\cos x$ into your final results.
Feb 2 '20 at 1:30 Note that when you square the equation $$(\sin x + \cos x)^2 = (\sin x \cos x)^2$$ which can be factorized as $$(\sin x + \cos x - \sin x \cos x)(\sin x + \cos x + \sin x \cos x)=0$$ you effectively introduced another equation $$\sin x + \cos x =- \sin x \cos x$$ in the process beside the original one $$\sin x + \cos x = \sin x \cos x$$. The solutions obtained include those for the extra equation as well. Normally, you should plug the solutions into the original equation to check and exclude those that belong to the other equation. However, given the complexity of the solutions, it may not be straightforward to do so. Therefore, the preferred approach is to avoid the square operation. Here is one such approach. Rewrite the equation $$\sin x + \cos x = \sin x \cos x$$ as $$\sqrt2 \cos(x-\frac\pi4 ) = \frac12 \sin 2x = \frac12 \cos (2x-\frac\pi2 )$$ Use the identity $$\cos 2t = 2\cos^2 t -1$$ on the RHS to get the quadratic equation below $$\sqrt2 \cos(x-\frac\pi4) = \cos^2 (x-\frac\pi4 ) -\frac12$$ or $$\left( \cos(x-\frac\pi4) - \frac{\sqrt2-2}2\right)\left( \cos(x-\frac\pi4) - \frac{\sqrt2+2}2\right)=0$$ Only the first factor yields real roots $$x = 2n\pi + \frac\pi4 \pm \cos^{-1}\frac{\sqrt2-2}2$$ As your error has been pointed out, I am providing a different way to tackle the problem without introducing extra solutions. 
From the given equation, we have $$1=(1-\sin x)(1-\cos x)$$, which is equivalent to $$1=\Biggl(1-\cos\left(\frac{\pi}2-x\right)\Biggr)\left(2\sin^2 \frac{x}{2}\right)=4\sin^2\left(\frac{\pi}{4}-\frac{x}{2}\right)\sin^2\frac{x}{2}$$ That is $$2\sin\left(\frac{\pi}{4}-\frac{x}{2}\right)\sin\frac{x}{2}=\pm 1.\tag{1}$$ This means $$\cos\left(\frac{\pi}{4}-x\right)-\cos\frac{\pi}{4}=\pm 1.$$ Therefore $$\cos\left(\frac{\pi}{4}-x\right)=\frac{1\pm\sqrt{2}}{\sqrt{2}}.$$ But $$\frac{1+\sqrt2}{\sqrt2}>1$$, so $$\cos\left(\frac{\pi}{4}-x\right)=\frac{1-\sqrt 2}{\sqrt2}.\tag{2}$$ Therefore $$2n\pi+\left(\frac{\pi}{4}-x\right) = \pm \arccos \frac{1-\sqrt 2}{\sqrt2}$$ for some integer $$n$$. This gives us $$x=\left(2n+\frac14\right)\pi \pm \arccos \frac{1-\sqrt 2}{\sqrt2}.\tag{3}$$ In fact there are also complex solutions to $$(1)$$, and they are given by $$x=\left(2n+\frac{1}{4}\right)\pi\pm i\operatorname{arccosh} \frac{1+\sqrt 2}{\sqrt2}.\tag{4}$$ Note that $$\operatorname{arccosh} \frac{1+\sqrt 2}{\sqrt2}=\ln\left(\frac{1+\sqrt2+\sqrt{1+2\sqrt2}}{\sqrt2}\right).$$ All real and complex solutions to the original equation are given by $$(3)$$ and $$(4)$$. Note that $$\frac\pi4 + \arccos \frac{1-\sqrt 2}{\sqrt2}=\frac12\arcsin(2-2\sqrt2)+\pi$$ and $$\frac\pi4 -\arccos \frac{1-\sqrt 2}{\sqrt2}=\frac{\pi}{2}-\frac12\arcsin(2-2\sqrt2)-\pi.$$ So your solutions only work for odd $$k$$. Even values of $$k$$ do not give solutions. There is a subtle way to get rid of the extra solutions that uses the work you've done. When you render $$\sin 2x=2\sin x\cos x =2-2\sqrt{2}$$ you then have with $$u=\sin x, v=\cos x$$: $$uv=1-\sqrt{2}$$ $$\color{blue}{u+v=uv=1-\sqrt{2}}$$ where the blue equation reimposes the original requirement and it's goodbye extraneous roots. 
Using the Vieta formulas for a quadratic polynomial, this is solved by rendering $$u$$ and $$v$$ as the two roots of the quadratic equation $$w^2-(1-\sqrt2)w+(1-\sqrt2)=0$$ The roots of this equation are obtained by the usual methods, and because of the symmetry of the original equation between $$\sin x$$ and $$\cos x$$ you may take either root as the sine and the other as the cosine. Note that with the negative product the values must be oppositely signed, which informs us of quadrant location. $$\sin x=\dfrac{1-\sqrt2+\sqrt{2\sqrt2-1}}{2},\cos x=\dfrac{1-\sqrt2-\sqrt{2\sqrt2-1}}{2}$$ ($$x$$ in 2nd quadrant) $$\sin x=\dfrac{1-\sqrt2-\sqrt{2\sqrt2-1}}{2},\cos x=\dfrac{1-\sqrt2+\sqrt{2\sqrt2-1}}{2}$$ ($$x$$ in 4th quadrant) Then with the quadrant information above we may render the correct roots for $$x$$: $$x=\pi-\arcsin(\dfrac{1-\sqrt2+\sqrt{2\sqrt2-1}}{2})+2n\pi$$ $$x=\arcsin(\dfrac{1-\sqrt2-\sqrt{2\sqrt2-1}}{2})+2n\pi$$

Using an auxiliary angle, we first rewrite the equation $$\sin x+\cos x=\sin x \cos x \tag*{(*)}$$ in the equivalent form $$\sqrt{2} \cos \left(x-\frac{\pi}{4}\right)= \frac{\sin 2 x}{2}$$ Putting $$y=x-\dfrac{\pi}{4}$$ gives a quadratic equation in $$\cos y$$ $$\sqrt{2} \cos y=\frac{1}{2}\left(2 \cos ^{2} y-1\right)$$ Solving the quadratic equation yields $$\cos y=\frac{\sqrt{2}-2}{2} \text { or } \frac{\sqrt{2}+2}{2} \text { (rejected) }$$ Hence the general solution of $$(*)$$ is $$x=\frac{(8 n+1) \pi}{4} \pm \cos ^{-1}\left(\frac{\sqrt{2}-2}{2}\right),$$ where $$n\in Z.$$
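As a final numerical check (plain Python; the helper names are mine), the closed-form roots from the answers do satisfy the original equation, while among the candidate families obtained by squaring, only the odd-$k$ members survive and the even-$k$ members solve the mirror equation $\sin x + \cos x = -\sin x \cos x$ instead, exactly as noted above:

```python
import math

def lhs(x):
    """sin x + cos x"""
    return math.sin(x) + math.cos(x)

def rhs(x):
    """sin x * cos x"""
    return math.sin(x) * math.cos(x)

# closed-form roots: x = pi/4 +/- arccos((sqrt(2) - 2) / 2)   (mod 2 pi)
t = math.acos((math.sqrt(2) - 2) / 2)
roots = [math.pi / 4 + t, math.pi / 4 - t]
for x in roots:
    assert abs(lhs(x) - rhs(x)) < 1e-12

# candidate families from the squared equation: only odd k survives
s = math.asin(2 - 2 * math.sqrt(2))
for k in (0, 1):
    for x in (s / 2 + k * math.pi, math.pi / 2 - s / 2 + k * math.pi):
        solves_original = abs(lhs(x) - rhs(x)) < 1e-9
        solves_mirror = abs(lhs(x) + rhs(x)) < 1e-9
        assert solves_original == (k == 1)
        assert solves_mirror == (k == 0)
```

Shifting either closed-form root by $2\pi$ recovers the odd-$k$ candidates, so the two descriptions of the solution set agree.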