url | text | date | metadata
---|---|---|---
stringlengths 14 to 2.42k | stringlengths 100 to 1.02M | stringlengths 19 to 19 | stringlengths 1.06k to 1.1k
https://www.khanacademy.org/math/precalculus/precalc-matrices/solving-equations-with-inverse-matrices/e/writing-systems-of-equations-as-matrix-equations
|
# Represent linear systems with matrix equations
Represent systems of two linear equations with matrix equations by determining A and b in the matrix equation A*x=b.
### Problem
The following system of equations is represented by the matrix equation $A\vec{x}=\vec{b}$.
$\vec{b} =$
Represent each row and column in the order in which the variables and equations appear.
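For illustration, here is how a generic system of two linear equations is rewritten in the form $A\vec{x}=\vec{b}$ (the numbers below are hypothetical, not the ones from this exercise):
$$\begin{cases}2x+3y=7\\ 5x-y=-4\end{cases}\quad\Longrightarrow\quad\underbrace{\begin{bmatrix}2&3\\ 5&-1\end{bmatrix}}_{A}\underbrace{\begin{bmatrix}x\\ y\end{bmatrix}}_{\vec{x}}=\underbrace{\begin{bmatrix}7\\ -4\end{bmatrix}}_{\vec{b}}$$
The rows of $A$ and the entries of $\vec{b}$ follow the order of the equations, and the columns of $A$ follow the order of the variables.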
|
2016-05-26 00:36:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6623906493186951, "perplexity": 594.5617878707222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275429.29/warc/CC-MAIN-20160524002115-00051-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://earthscience.stackexchange.com/questions/143/how-does-remote-sensing-of-ocean-currents-work
|
# How does remote sensing of ocean currents work?
I am aware that techniques exist for measuring surface currents using HF radar, either from land-based installations or from space.
How do these work? I have assumed in the past that it's some sort of doppler system, but if I'm right about that, how are currents separated out from waves?
• Not a complete answer, but we can measure salinity, winds, and waves from space, but only at the surface.
– gerrit
Apr 16 '14 at 17:23
• With regard to HF radar, did you notice codar.com/intro_hf_radar.shtml? Perhaps Paduan and Graber (1997) is of interest as well. Apr 16 '14 at 22:03
• @TorbjørnT. hah, you're quite right, there's a decent explanation on the site that I linked! I shall summarise as an answer... Apr 17 '14 at 5:40
## 2 Answers
I'm not familiar with land-based methods, but for global measurements, one method is to use satellite altimetry (I'm more familiar with the geodesy side, but many of the same satellites are used). I think many of the current methods interpolate global or regional currents from a sparse network of buoys. As more radar satellites are launched, however, satellite measurements of sea surface currents will become more common.
Global-scale satellite-based radar altimetry measures the average elevation of the ocean's surface over a few kilometers (the exact amount depends on the wavelength of the radar band being used). Over this distance, waves cancel out, so the measurement is accurate to within a few centimeters. (This is aided by using the motion of the satellite to enhance the image, referred to as Synthetic Aperture Radar (SAR).)
Surface currents in the ocean are directly related to the slope of the ocean's surface. (Again, at the scales we're working with, wind-formed waves cancel out.)
Once you have an accurate snapshot of the elevation of the ocean, you can calculate the direction and magnitude of the surface currents. (Note that this applies only to surface currents! Deep ocean currents are a different matter altogether.)
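As a sketch of how that calculation is usually done (the standard assumption is geostrophic balance, where the pressure gradient from the tilted sea surface is balanced by the Coriolis force; the answer doesn't name the method, so treat this as the conventional approach rather than a quote): writing $\eta(x,y)$ for the sea-surface height relative to the geoid (the dynamic topography), the surface velocity components are
$$u=-\frac{g}{f}\frac{\partial\eta}{\partial y},\qquad v=\frac{g}{f}\frac{\partial\eta}{\partial x},$$
where $g$ is gravitational acceleration and $f=2\Omega\sin(\mathrm{latitude})$ is the Coriolis parameter. A sea-surface slope of a few centimeters over tens of kilometers maps directly onto a current speed, which is why centimeter-level altimetry matters.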
This is also how we coarsely map the ocean floor from space. (For example, Sandwell & Smith's extensive work: http://topex.ucsd.edu/WWW_html/mar_topo.html)
The average elevation of the ocean's surface with respect to the center of the Earth follows the geoid (by definition).
Seamounts on the ocean floor cause a perturbation in the geoid above them. Water effectively "bunches up" around seamounts (or rather, the surface of gravitational equipotential "bunches up" above the seamount).
By repeatedly measuring the ocean's surface (over several years) we can remove the effect of surface currents and get an accurate picture of what the geoid looks like over the ocean. You can then use this to predict water depths over the oceans on a spatial scale of ~1 kilometer.
In fact, to calculate surface currents, you need this information to begin with. The surface currents are calculated as deviations from this measurement of the geoid.
Joe Kington has provided an excellent answer re sensing from space. Torbjørn T. pointed out in a comment that the site that I linked for context actually has a good explanation... so after blushing slightly, I shall summarise it here. People with more specialist knowledge are, of course, welcome to elaborate or correct any misunderstanding. Incidentally, it came as a surprise to me to realise that the land-based and space-based methods are entirely different techniques - so perhaps this should have been two questions.
There's a good explanation at http://www.codar.com/intro_hf_radar.shtml. In summary:
The surface of the sea, complete with waves, acts in a manner analogous to a diffraction grating. When illuminated with HF radar, only a specific frequency is returned in the direction of the transmitter, and that specific frequency corresponds to waves of a particular wavelength that are travelling directly towards or away from the transmitter. This wavelength is one that is always present in the ocean (and, presumably, always present at some level in every direction of travel?).
Assuming deep water, because we know the wavelength and frequency of the wave, we can calculate its speed. We can also obtain its speed from the Doppler shift in the radar return. Any difference between the two must be due to surface currents.
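To put numbers on that reasoning (a sketch using the standard Bragg-scattering and deep-water dispersion relations, summarised here rather than quoted from the CODAR page): the radar return is dominated by scattering from ocean waves whose wavelength is half the radar wavelength, $\lambda_{\text{ocean}}=\lambda_{\text{radar}}/2$. In deep water such a wave has phase speed
$$c_{\text{theory}}=\sqrt{\frac{g\,\lambda_{\text{ocean}}}{2\pi}},$$
while the measured Doppler shift $\Delta f$ gives the observed radial speed of those waves, $c_{\text{observed}}=\Delta f\,\lambda_{\text{radar}}/2$. The radial component of the surface current is then the residual $v_{r}=c_{\text{observed}}-c_{\text{theory}}$.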
|
2021-12-05 05:12:03
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8063361048698425, "perplexity": 890.2668831763635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363135.71/warc/CC-MAIN-20211205035505-20211205065505-00039.warc.gz"}
|
https://www.poemas-de-amor.net/id/involutory-matrix-eigenvalues-e6701f
|
# Involutory matrix eigenvalues
A matrix $A$ is involutory if it is its own inverse, that is, $A^2=I$. If $\lambda$ is an eigenvalue of $A$, then $\lambda^2$ is an eigenvalue of $A^2=I$, so $\lambda^2=1$ and every eigenvalue of an involutory matrix is $1$ or $-1$. Similar observations hold for the singular values and the coneigenvalues of (skew-)coninvolutory matrices.
A related exam question (from a linear algebra final): $A$ is a real $n\times n$ matrix and it is its own inverse; prove that $A$ is diagonalizable. The solutions found online rely on the minimal polynomial, which had not been covered in the course; the method used in the course is to find a basis consisting of eigenvectors, i.e. to show that the eigenvectors of $A$ span $\mathbb{R}^n$. A hint from the discussion: all eigenvalues of $A$ are $1$ or $-1$, and for any vector $x$ one can consider $z=x+Ax$.
Another exercise of the same kind: if $A$ is an involutory matrix ($A^2=I$) of order 10 with $\operatorname{trace}(A)=-4$, what is the value of $\det(A+2I)$?
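A sketch of the eigenvector-basis argument hinted at above (a completion of the $z=x+Ax$ idea, so an assumption about where that hint was heading rather than a solution quoted from the thread): for any $x\in\mathbb{R}^n$,
$$x=\tfrac{1}{2}(x+Ax)+\tfrac{1}{2}(x-Ax),\qquad A\left(\tfrac{1}{2}(x+Ax)\right)=\tfrac{1}{2}(Ax+A^2x)=\tfrac{1}{2}(Ax+x),\qquad A\left(\tfrac{1}{2}(x-Ax)\right)=\tfrac{1}{2}(Ax-x).$$
So the first summand is either zero or an eigenvector for $\lambda=1$, and the second is either zero or an eigenvector for $\lambda=-1$. Every vector in $\mathbb{R}^n$ is therefore a sum of eigenvectors of $A$ (plus possibly zero vectors), the eigenvectors span $\mathbb{R}^n$, and $A$ is diagonalizable.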
|
2021-06-18 00:38:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.947515070438385, "perplexity": 391.51739443967256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487634576.73/warc/CC-MAIN-20210617222646-20210618012646-00296.warc.gz"}
|
https://stats.stackexchange.com/questions/202605/nonstationary-solutions-for-stationary-arma-equations
|
# Nonstationary solutions for stationary ARMA equations
By "stationary" I mean "weakly stationary".
Consider a "stationary" AR(1) equation:
$$X_t=\varphi X_{t-1}+\varepsilon_t,$$ where $t\in\mathbb{Z}$ are discrete time moments, $\varepsilon_t$ a zero-mean white noise (just some iid sequence), $\varphi\in(-1,1)$. It is well known that there is a stationary solution (that is, a discrete time series satisfying the equation). Denote it by $X_t.$ However, we can introduce another time series $Y_t=X_t+\varphi^t$, which appears to be a nonstationary solution for the "stationary" equation (clearly, $\mathbb{E}[Y_t]$ is not free of $t$, since $X_t$ is evidently zero-mean).
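Indeed, $Y_t$ satisfies the same recursion (a quick check of the claim, since it is only asserted here): $$\varphi Y_{t-1}+\varepsilon_t=\varphi\left(X_{t-1}+\varphi^{t-1}\right)+\varepsilon_t=\left(\varphi X_{t-1}+\varepsilon_t\right)+\varphi^{t}=X_t+\varphi^t=Y_t,$$ while $\mathbb{E}[Y_t]=\varphi^t$ depends on $t$.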
Given more general stationary AR($p$) process, is it possible to somehow damage the weak stationarity property? Or, in general, is it true that any stationary discrete time AR (or even ARMA) equation has a nonstationary solution?
• Could expand a little bit? Could you explain how $Y_t=X_t+\phi^t$ appears to be a nonstationary solution for $X_t=\varphi X_{t-1}+\varepsilon_t$? (Perhaps a better practice would be not to use $\phi$ and $\varphi$ in the same exercise because both are "phi", which may make it confusing.) – Richard Hardy Mar 20 '16 at 11:32
• What do you mean by a "solution", what kind of object is that? (Like a constant, a stochastic process, ...) Could you elaborate on that, perhaps expand that section of the post? – Richard Hardy Mar 20 '16 at 12:35
• Richard, the solution is assumed to be a time series, of course. I've added it to the post. – Nikita Mar 20 '16 at 12:39
• Could you show fully or partly that $Y_t$ is a (nonstationary) solution? Also, maybe I am being picky, but I am not that used to the terminology and so having $X_t$ in the general form AR(1) equation AND as a solution to it is a bit confusing. Could we somehow distinguish between the two notationally? (But perhaps it is standard to use such notation, then just ignore my comment.) – Richard Hardy Mar 20 '16 at 12:46
• I meant that the fact $Y_t$ is a solution is easy to check. – Nikita Mar 20 '16 at 12:55
If you let your process run, then you'll notice how the term $\varphi^t$ disappears: $$\lim_{t\to\infty}\varphi^t=0$$
So, $$E[Y_t]=E[X_t]+\varphi^t=\varphi^t\to 0\text{ as }t\to\infty,$$ even though for finite $t$ the right-hand side does depend on $t$.
So the answer to your question is that your process $Y_t$ is not non-stationary. Hence, it doesn't serve as a counter-example.
Additional thoughts. You formulated your question in terms of solutions of the stochastic processes. Look at what is the solution of the AR(1) process.
For instance, if you forecast $\tau$ steps ahead you get: $$X_{t+\tau}=\varphi^\tau (X_t+\sum_{s=1}^\tau \varepsilon_{t+s}\varphi^{-s})$$
You can see how it simply collapses to the noise around zero as $\tau$ grows, no matter what was the initial $X_t$. When you add your $\varphi^\tau$ term it also disappears, so the stable solution is the same: noise around zero:
$$X_{t+\tau}+\varphi^\tau=\varphi^\tau \left(X_t+\sum_{s=1}^\tau \varepsilon_{t+s}\varphi^{-s}+1\right)$$
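(For completeness, the forecast expression above is obtained by iterating the recursion; this is a standard expansion rather than anything specific to this answer:) $$X_{t+\tau}=\varphi X_{t+\tau-1}+\varepsilon_{t+\tau}=\cdots=\varphi^{\tau}X_{t}+\sum_{s=1}^{\tau}\varphi^{\tau-s}\varepsilon_{t+s}=\varphi^{\tau}\left(X_{t}+\sum_{s=1}^{\tau}\varepsilon_{t+s}\varphi^{-s}\right)$$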
• Does that answer the questions? I have trouble seeing the connection clearly (although I do see some connection). – Richard Hardy Mar 15 '17 at 15:38
• I updated the answer. – Aksakal Mar 15 '17 at 15:39
• Thank you. I find both the question and the answer interesting, but it takes some effort to wrap my head around them. Element by element these are easy, but the relations between them can be deceptive :) – Richard Hardy Mar 15 '17 at 15:43
• @RichardHardy, I'm being lazy here. Maybe I should have describe this all in the framework of SDE, then it would be clearer. – Aksakal Mar 15 '17 at 15:44
• Also, are we allowed to consider the case $t\rightarrow\infty$ and sort of hand-wave finite $t$ (especially in the expectation in the second formula)? I can see where you are going if you look at the fixed points of SDEs, but is that really what we need here? I guess that depends on the definition of what a solution is, and the OP seems to be interested not in fixed points but in processes that satisfy some property/equation (see the comments under the OP). – Richard Hardy Mar 15 '17 at 15:46
|
2019-12-13 11:39:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.863427460193634, "perplexity": 424.2458938303464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540553486.23/warc/CC-MAIN-20191213094833-20191213122833-00031.warc.gz"}
|
https://questions.examside.com/past-years/medical/question/maximum-deviation-from-ideal-gas-is-expected-from-neet-chemistry-gaseous-state-eqhcfpcepdjoyksl
|
1
### NEET 2013
Maximum deviation from ideal gas is expected from
A
CH4(g)
B
NH3(g)
C
H2(g)
D
N2(g)
## Explanation
The compressibility factor is the quantity measured for a gas to study its deviation from ideal behaviour. Compressibility factor,
$Z = {{PV} \over {nRT}}$
The greater the difference of Z from 1, the greater the deviation of the gas from ideal behaviour.
Among the given molecules, NH3 is an easily liquefiable gas, so it deviates from ideal behaviour to the maximum extent.
2
### NEET 2013 (Karnataka)
What is the density of N2 gas at 227°C and 5.00 atm pressure? (R = 0.082 L atm K$^{-1}$ mol$^{-1}$)
A
1.40 g/mL
B
2.81 g/mL
C
3.41 g/mL
D
0.29 g/mL
## Explanation
PV = nRT
$\Rightarrow$ PV = ${W \over M}RT$
$\Rightarrow$ $P = {W \over M} \times {{RT} \over V}$
$\Rightarrow$$P = {{dRT} \over M}$ [Density = ${{Mass} \over {Volume}}$]
$\Rightarrow d = {{PM} \over {RT}} = {{5 \times 28} \over {0.0821 \times 500}} = 3.41\,g/ml$
3
### AIPMT 2012 Mains
Equal volumes of two monoatomic gases, A and B at same temperature and pressure are mixed. The ratio of specific heats (Cp/Cv) of the mixture will be
A
0.83
B
1.50
C
3.3
D
1.67
## Explanation
For a monoatomic gas, $C_P = {5 \over 2}R$; mixing equal volumes of two monoatomic gases at the same temperature and pressure gives a mixture that is still effectively a monoatomic gas, so $C_P = {5 \over 2}R$
$\therefore$ $C_V = C_P - R = {3 \over 2}R$
$\Rightarrow {{{C_P}} \over {{C_V}}} = {{{5 \over 2}R} \over {{3 \over 2}R}} = {5 \over 3} = 1.67$
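More explicitly (a general check, not part of the original solution): equal volumes at the same temperature and pressure contain equal numbers of moles, so for $n$ moles of each gas
$$C_{V,\text{mix}}=\frac{n\,C_{V,1}+n\,C_{V,2}}{2n}=\frac{3}{2}R,\qquad C_{P,\text{mix}}=\frac{5}{2}R,\qquad \gamma=\frac{C_{P,\text{mix}}}{C_{V,\text{mix}}}=\frac{5}{3}\approx 1.67$$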
4
### AIPMT 2012 Mains
For real gases van der Waals equation is written as
$\left( {p + {{a{n^2}} \over {{V^2}}}} \right)$ (V $-$ nb) = n RT
where $a$ and $b$ are van der Waals constants. Two sets of gases are
(I) O2, CO2, H2 and He
(II) CH4, O2 and H2
The gases given in set-I in increasing order of b and gases given in set-II in decreasing order of $a$, are arranged below. Select the correct order from the following
A
(I) He < H2 < CO2 < O2 (II) CH4 > H2 > O2
B
(I) O2 < He < H2 < CO2 (II) H2 > O2 > CH4
C
(I) H2 < He < O2 < CO2 (II) CH4 > O2 > H2
D
(I) H2 < O2 < He < CO2 (II) O2 > CH4 > H2
## Explanation
The van der Waals constant '$a$' represents the intermolecular force of attraction between gas molecules, and the constant 'b' represents the effective size of the molecules. Therefore the order should be
(I) H2 < He < O2 < CO2 (II) CH4 > O2 > H2
|
2021-12-08 10:29:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999094009399414, "perplexity": 10946.155128067086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363465.47/warc/CC-MAIN-20211208083545-20211208113545-00232.warc.gz"}
|
http://zh.wikipedia.org/wiki/%E6%98%93%E8%BE%9B%E6%A8%A1%E5%9E%8B
|
# Ising model
## Definition
$E=-\sum_{ij} J_{ij}s_i s_j-H\sum_i s_i$
$J_{ij}$ is the coupling matrix.
$J_{ij} > 0$ means the spin interaction is ferromagnetic;
$J_{ij} < 0$ means the spin interaction is antiferromagnetic;
$J_{ij} = 0$ means there is no interaction between the spins.
## One-dimensional Ising model
$E = -J \sum _{i=1}^N s_i s_{i+1}-H \sum _{i=1}^N s_i$
$M (H,T)=\frac{\sinh (\beta H )}{\sqrt{\sinh ^2(\beta H )+e^{-4\beta J }}}$.
$M (0,T)=0$.
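The closed form for $M(H,T)$ above comes from the standard transfer-matrix treatment; as a brief sketch (standard textbook material, not taken from this page): with $\beta=1/k_BT$, the partition function of the periodic chain is $Z=\operatorname{Tr}\,T^N$ with
$$T=\begin{pmatrix}e^{\beta(J+H)}&e^{-\beta J}\\ e^{-\beta J}&e^{\beta(J-H)}\end{pmatrix},\qquad\lambda_{\pm}=e^{\beta J}\cosh(\beta H)\pm\sqrt{e^{2\beta J}\sinh^{2}(\beta H)+e^{-2\beta J}},$$
and in the thermodynamic limit $M=\frac{1}{\beta}\,\frac{\partial\ln\lambda_{+}}{\partial H}=\frac{\sinh(\beta H)}{\sqrt{\sinh^{2}(\beta H)+e^{-4\beta J}}}$, which recovers $M(0,T)=0$ at $H=0$.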
|
2013-12-05 10:16:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39654240012168884, "perplexity": 10587.477014155493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163043499/warc/CC-MAIN-20131204131723-00015-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://codereview.stackexchange.com/questions/56406/how-to-generate-valid-number-of-combinations-basis-on-the-rules
|
# How to generate valid number of combinations basis on the rules?
I have a list of datacenters, which are dc1, dc2 and dc3. And each datacenter has six ids as of now (can be 11 in each datacenter as well or uneven number for each datacenter).
dc1 - h1, h2, h3, h4, h5, h6
dc2 - h1, h2, h3, h4, h5, h6
dc3 - h1, h2, h3, h4, h5, h6
Now I am trying to generate combinations of datacenter and its id so the rules are -
1. In each pass, we will have each datacenter and alternate IDs. For example:-
dc1=h1, dc2=h2, dc3=h3
If you see above, each datacenter has unique Id, meaning dc1 has h1, dc2 has h2 and dc3 has h3. We cannot have same id for each datacenter in any of the passes, like we cannot have like this:
dc1=h1, dc2=h1, dc3=h3
If you see above, h1 appears twice in dc1 and dc2 which is wrong.
2. In the next pass, we should not use same id for that datacenter. Meaning if below is the first pass:
dc1=h1, dc2=h2, dc3=h3
then second pass can be as below. If you see below, dc1 has h4 in the second pass, which has some other ID apart from h1, as h1 was used in the first pass for dc1 datacenter and same with dc2 and dc3.
dc1=h4, dc2=h5, dc3=h6
We cannot have same ID for same datacenter in the next pass if it was used earlier. So we cannot have this in the second pass:
dc1=h4, dc2=h2, dc3=h6
as h2 was used earlier for dc2 in the first pass.
Problem Statement:
Now I have got all the above logic working fine in my below code:
public static List<Map<String, String>> createDatacenterIdMappings(Map<String, List<String>> input, boolean debug) {
// Output holder
ArrayList<Map<String, String>> output = new ArrayList<Map<String, String>>();
// Initialize variables
int szRaw = input.size();
int minReq = -1;
for (List<String> dataCenters : input.values()) {
minReq = dataCenters.size() > minReq ? dataCenters.size() : minReq; // Find minimum size required
}
minReq = input.size() > minReq ? input.size() : minReq;
// Flip map from datacenters : id to id : datacenters if
// required
boolean flipped = false;
if (minReq == input.size()) {
Map<String, List<String>> temp = new LinkedHashMap<String, List<String>>();
Set<String> dataCenters = input.keySet();
for (String dataCenter : dataCenters) {
List<String> id = input.get(dataCenter);
for (String machine : id) {
if (!temp.containsKey(machine)) {
temp.put(machine, new ArrayList<String>());
}
temp.get(machine).add(dataCenter);
}
}
input = temp;
flipped = true;
}
ArrayList<String> keys = new ArrayList<String>(input.keySet());
int k = 0;
// For each row in output process
for (int i = 0; i < minReq; i++) {
// A map to hold output, and another to track status, TODO: can be
// reduced to one
Map<String, String> map = new LinkedHashMap<String, String>();
Map<String, String> outputMap = new LinkedHashMap<String, String>();
for (int j = 0; j < keys.size(); j++) {
List<String> list = input.get(keys.get(j));
int sz = list.size();
if (sz == 0)
continue;
// Find next eligible value for current key (if key is
// datacenter then valid value)
String valueToUse = null;
k = k % sz;
int initK = k;
boolean found = false;
do {
valueToUse = list.get(k);
if (!map.values().contains(valueToUse)) {
found = true;
break;
}
k = (k + 1) % sz;
} while (k != initK);
if (!found)
continue;
map.put(keys.get(j), valueToUse);
if (flipped)
outputMap.put(valueToUse, keys.get(j));
else
outputMap.put(keys.get(j), valueToUse);
list.remove(k);
}
output.add(outputMap);
}
return output;
}
For the below inputs -
List<String> hosts1 = new ArrayList<String>(Arrays.asList("h1", "h2", "h3", "h4"));
List<String> hosts2 = new ArrayList<String>(Arrays.asList("h1", "h2", "h3", "h4"));
List<String> hosts3 = new ArrayList<String>(Arrays.asList("h1", "h2", "h3", "h4"));
Map<String, List<String>> maps = new LinkedHashMap<String, List<String>>();
maps.put("dc1", hosts1);
maps.put("dc2", hosts2);
maps.put("dc3", hosts3);
boolean debug = false;
List<Map<String, String>> mappings = createDatacenterIdMappings(maps, debug);
// print mappings here
I get output as:
[{dc1=h1, dc2=h2, dc3=h3},
{dc1=h4, dc2=h1, dc3=h2},
{dc1=h3, dc2=h4, dc3=h1},
{dc1=h2, dc2=h3, dc3=h4}]
Other input parameters, can be:
List<String> hosts1 = new ArrayList<>(Arrays.asList("h1", "h2", "h3", "h4", "h5", "h6",
"h7", "h8", "h9", "h10", "h11"));
List<String> hosts2 = new ArrayList<>();
List<String> hosts3 = new ArrayList<>();
Map<String, List<String>> maps = new LinkedHashMap<String, List<String>>();
maps.put("dc1", hosts1);
maps.put("dc2", hosts2);
maps.put("dc3", hosts3);
And output for above input comes as:
[{dc1=h1},
{dc1=h2},
{dc1=h3},
{dc1=h4},
{dc1=h5},
{dc1=h6},
{dc1=h7},
{dc1=h8},
{dc1=h9},
{dc1=h10},
{dc1=h11}]
or below is another input:
List<String> hosts1 = new ArrayList<>(Arrays.asList("h1", "h2", "h3"));
List<String> hosts2 = new ArrayList<>(Arrays.asList("h1", "h2", "h3", "h4", "h5", "h6",
"h7", "h8", "h9", "h10", "h11"));
List<String> hosts3 = new ArrayList<>(Arrays.asList("h1", "h2", "h3"));
Map<String, List<String>> maps = new LinkedHashMap<String, List<String>>();
maps.put("dc1", hosts1);
maps.put("dc2", hosts2);
maps.put("dc3", hosts3);
And output for above input comes as:
[{dc1=h1, dc2=h2, dc3=h3},
{dc1=h2, dc2=h1},
{dc1=h3, dc2=h4, dc3=h2},
{dc2=h5, dc3=h1},
{dc2=h3},
{dc2=h6},
{dc2=h7},
{dc2=h8},
{dc2=h9},
{dc2=h10},
{dc2=h11}]
I am opting for code review to see whether we can simplify this code or improve it slightly. Since it looks slightly complicated to me, there should be some way of simplifying this code for sure.
Below is the code I use to print out the mappings. Just keep on changing the input parameters -
public static void main(String[] args) {
List<String> hosts1 = new ArrayList<String>(Arrays.asList("h1", "h2", "h3", "h4"));
List<String> hosts2 = new ArrayList<String>(Arrays.asList("h1", "h2", "h3", "h4"));
List<String> hosts3 = new ArrayList<String>(Arrays.asList("h1", "h2", "h3", "h4"));
Map<String, List<String>> maps = new LinkedHashMap<String, List<String>>();
maps.put("dc1", hosts1);
maps.put("dc2", hosts2);
maps.put("dc3", hosts3);
boolean debug = false;
List<Map<String, String>> mappings = createDatacenterIdMappings(maps, debug);
printOutput(mappings);
}
public static void printOutput(List<Map<String, String>> mappings) {
System.out.println(mappings.toString().replaceAll("},", "},\n") + "\n==================================");
}
Below are the inputs and outputs -
Example - 1
List<String> hosts1 = new ArrayList<String>(Arrays.asList("h1", "h2", "h3", "h4"));
List<String> hosts2 = new ArrayList<String>(Arrays.asList("h1", "h2", "h3", "h4"));
List<String> hosts3 = new ArrayList<String>(Arrays.asList("h1", "h2", "h3", "h4"));
Map<String, List<String>> maps = new LinkedHashMap<String, List<String>>();
maps.put("dc1", hosts1);
maps.put("dc2", hosts2);
maps.put("dc3", hosts3);
Output -1
[{dc3=h3, dc2=h2, dc1=h1},
{dc3=h4, dc2=h1, dc1=h2},
{dc3=h1, dc2=h4, dc1=h3},
{dc3=h2, dc2=h3, dc1=h4}]
Example -2
List<String> hosts1 = new ArrayList<>(Arrays.asList("h1", "h2", "h3", "h4", "h5", "h6", "h7", "h8", "h9",
"h10", "h11"));
List<String> hosts2 = new ArrayList<>();
List<String> hosts3 = new ArrayList<>();
Map<String, List<String>> maps = new LinkedHashMap<String, List<String>>();
maps.put("dc1", hosts1);
maps.put("dc2", hosts2);
maps.put("dc3", hosts3);
Output -2
[{dc1=h1},
{dc1=h2},
{dc1=h3},
{dc1=h4},
{dc1=h5},
{dc1=h6},
{dc1=h7},
{dc1=h8},
{dc1=h9},
{dc1=h10},
{dc1=h11}]
Example -3
List<String> hosts1 = new ArrayList<>(Arrays.asList("h1", "h2", "h3", "h4", "h5", "h6",
"h7", "h8", "h9", "h10", "h11"));
List<String> hosts2 = new ArrayList<>(Arrays.asList("h1", "h2", "h3", "h4", "h5", "h6",
"h7", "h8", "h9", "h10", "h11"));
List<String> hosts3 = new ArrayList<>(Arrays.asList("h1", "h2", "h3", "h4", "h5", "h6",
"h7", "h8", "h9", "h10", "h11"));
Map<String, List<String>> maps = new LinkedHashMap<String, List<String>>();
maps.put("dc1", hosts1);
maps.put("dc2", hosts2);
maps.put("dc3", hosts3);
Output -3
[{dc1=h1, dc2=h2, dc3=h3},
{dc1=h4, dc2=h5, dc3=h6},
{dc1=h7, dc2=h8, dc3=h9},
{dc1=h10, dc2=h11, dc3=h1},
{dc1=h2, dc2=h1, dc3=h4},
{dc1=h5, dc2=h4, dc3=h7},
{dc1=h8, dc2=h7, dc3=h10},
{dc1=h11, dc2=h10, dc3=h2},
{dc1=h3, dc2=h6, dc3=h8},
{dc1=h9, dc2=h3, dc3=h5},
{dc1=h6, dc2=h9, dc3=h11}]
Example -4
And similarly for six host id's per datacenter -
List<String> hosts1 = new ArrayList<>(Arrays.asList("h1", "h2", "h3", "h4", "h5", "h6"));
List<String> hosts2 = new ArrayList<>(Arrays.asList("h1", "h2", "h3", "h4", "h5", "h6"));
List<String> hosts3 = new ArrayList<>(Arrays.asList("h1", "h2", "h3", "h4", "h5", "h6"));
Map<String, List<String>> maps = new LinkedHashMap<String, List<String>>();
maps.put("dc1", hosts1);
maps.put("dc2", hosts2);
maps.put("dc3", hosts3);
• A few othe data cases that may throw off a few people: dc1, dc2, dc3 with each having hosts h1, h2 – Vogel612 Jul 8 '14 at 22:23
• Additionally something that will throw off even more sophisticated solutions: remove one of the hosts in dc2 for additional boggling :) – Vogel612 Jul 8 '14 at 22:30
• Follow-up question – 200_success Jul 12 '14 at 11:17
Your code is so complicated that I did not understand it (I did not try that long). I decided to write my own solution since I thought it would take less time than understanding your code. My method takes about 15 lines. Also, I used good OO decomposition and wrapped this generation of configurations in a class called... ConfigurationGenerator. I defined a class in part because I make copies of the lists.
So if you want an actual code review: your code is much too complicated for the task at hand. I guess only experience allows you to recognize early on that you are on the wrong (or too complicated) path.
public static class ConfigurationGenerator<K, V> {
private Map<K, List<V>> lists = new LinkedHashMap<>();
public ConfigurationGenerator(Map<K, List<V>> lists) {
// Make copies of the lists so we can modify them without touching the original lists.
for (K key : lists.keySet())
this.lists.put(key, new ArrayList<>(lists.get(key)));
}
/**
* @returns null when all lists are exhausted.
*/
public Map<K, V> extractConfiguration() {
Map<K, V> output = new HashMap<K, V>();
Set<V> takenValues = new HashSet<>();
for (K key : lists.keySet()) {
List<V> list = lists.get(key);
V chosenValue = null;
for (V value : list) {
if (!takenValues.contains(value)) {
chosenValue = value;
break;
}
}
if (chosenValue != null) {
list.remove(chosenValue);
output.put(key, chosenValue);
takenValues.add(chosenValue);
}
}
return output.isEmpty() ? null : output;
}
}
public static void main(String[] args) {
Map<String, List<String>> lists = new LinkedHashMap<String, List<String>>();
lists.put("dc1", Arrays.asList("h1", "h2", "h3", "h4"));
lists.put("dc2", Arrays.asList("h1", "h2", "h3", "h4"));
lists.put("dc3", Arrays.asList("h1", "h2", "h3", "h4"));
ConfigurationGenerator<String, String> generator = new ConfigurationGenerator<>(lists);
Map<String, String> map = null;
while ((map = generator.extractConfiguration()) != null) {
System.out.println(map);
}
}
• What you write here is plain incorrect. Actually each line of code, except for about 8 in that 80LoC method has their full reason for existence. after I spent two hours dissecting the code and having a rename-orgy I know! Actually I am sure your code will not produce the same output for a few test cases I have come up with. I will post an answer tomorrow to prove – Vogel612 Jul 8 '14 at 22:19
Your code is very hard to understand. As I mentioned in a comment, I needed roundabout 2 hours to unwrap it, and that was just for understanding. In that process I renamed a big part of your variables, so: here goes the review, be prepared
# Naming:
Overall the naming quality of this code is between acceptable and hellishly gruesome.
int minReq = -1;
This name is a drastic shortening, and it is missing too much context to describe exactly what it does. I renamed it to requiredIterations, and suddenly the for-loop 20 LoC later was a little clearer. Every time you read minReq, you need to remember: "minReq stands for the minimum required iterations". If you read requiredIterations then you know: "this stands for the minimum required iterations".
int k = 0;
Later you use k to do some very useful, but hard to understand stuff. Do you actually know what k is? I can tell you. It's an index, that's persistent through the different for-loops you have. I thusly named it persistentIndex.
And this is where it gets fishy. Your requirements don't match the currently used code! This totally boggled me.
Map<String, String> map = new LinkedHashMap<String, String>();
Map<String, String> outputMap = new LinkedHashMap<String, String>();
These are horrible too. What is in your Map map? You comment there "tracking status in one map, output in the other". This is not a comment, it's a name! your map is better off with statusTrackingMap.
Then there is that outputMap. I confused that one with the output fairly often, because they both start with output. What do you put in there? The current data-center:host combinations. Then name it, so it contains what it says on the tin!: currentCombinationsMap is definitely better.
List<String> list = input.get(keys.get(j));
ouch. First Map map and now List list? Again rename it. I chose machines from the comment you wrote before flipped.
int sz = list.size();
or rather, now:
int sz = machines.size();
This one is the worst after k. Use machineCount or something along these lines instead!
int initK = k;
That one is also kinda evil. This is the break-marker for the following do-while loop, given the case persistentIndex rolls back to 0. I called it startingPersistentIndex, which still isn't good, but definitely better than initK.
int szRaw = input.size();
You never use that variable. That was the first thing I could remove.
# Code-Smells:
You have a few not-very-nice code-smells in that code, one of them is this here:
if (!found) {
continue;
}
//following code
This is in fact a guard clause, and not even a bad one at that. But it's overly complicated to track back. What you actually want to say is:
if(found){
//following code
}
IMO this is easier to understand.
While we're at it:
if (!map.values().contains(valueToUse)) {
found = true;
break;
}
is in fact nothing else but:
found = !map.containsValue(valueToUse);
for the following program. And here we see, that the name is wrong. It says found on the tin, but it tells you if the valueToUse was not found in the statusTrackingMap.
Aaand another rename:
if(!map.containsValue(valueToUse)){
valueToUseNotFound = true;
break;
}
And continuing from that, this could be refactored to sit somewhere different, but then your behavior changes. As I don't know how that would be seen, it might be difficult, but in fact, this is just a break-condition for your do-while loop.
It could be rewritten to:
do {
//stuff
} while (persistentIndex != startingPersistentIndex && !valueToUseNotFound)
But this includes another increment of the persistentIndex and I was reluctant to change the behavior of the code.
Your innermost for-loop then ends with:
if (valueToUseNotFound) {
statusTrackingMap.put(dataCenters.get(j), valueToUse);
if(flipped) {
currentCombinationsMap.put(valueToUse, dataCenters.get(j));
} else {
currentCombinationsMap.put(dataCenters.get(j), valueToUse);
}
machines.remove(persistentIndex);
}
And this is where I lost it, when I examined WHY IN GOD'S NAME THE HELL the expected output when calling your old method with the same input as my refactored method and asserting: assertEquals(oldOutput.toString(), newOutput.toString()) was [{}, {}, {}]
So here is your lesson for all eternity, and if it wasn't you who committed this crime, then slap that person with a boulder!
# DON'T TINKER WITH INPUT!
### Keep references intact, somebody outside may want to reuse them!
It's like you gave some person a reference to your bag of Keys, asking him to tell you how he would reorder them on different bunches, and when you check your keybag, suddenly all the keys have vanished.
</rant>
What you do is: you copy the data, and you leave the original references intact. How? like this:
Map<String, List<String>> clone = new LinkedHashMap<>();
for (String key : input.keySet()) {
clone.put(key, new ArrayList<>(input.get(key)));
}
# Declaring output variables:
The general rule in Java is: Declare variables as close as possible to their usage.
This especially applies to your output. Why do you declare it at the top of your method and use it for the first time 70 lines later? Until then, everybody will have forgotten what it actually was, let alone know its type.
~~Most~~ All of the comments you wrote were useless after I renamed the variables that were explained to meaningful names. The only comment where that was not the case, was your //TODO and that should have been deprecated before you posted to CR, so I just ignored it.
# Final Code:
Applying all this, I could break down your code from ~80 Lines to ~55 Lines, without losing any functionality while drastically increasing readability:
public static List<Map<String, String>> createDatacenterIdMappings (
final Map<String, List<String>> original) {
Map<String, List<String>> input = new LinkedHashMap<>();
for (String key : original.keySet()) {
input.put(key, new ArrayList<>(original.get(key)));
}
int requiredIterations = -1;
for (List<String> dataCenters : input.values()) {
requiredIterations = dataCenters.size() > requiredIterations
? dataCenters.size()
: requiredIterations;
}
requiredIterations = input.size() > requiredIterations
? input.size()
: requiredIterations;
ArrayList<Map<String, String>> output = new ArrayList<>();
ArrayList<String> dataCenters = new ArrayList<>(input.keySet());
int persistentIndex = 0;
for (int i = 0; i < requiredIterations; i++) {
Map<String, String> statusTrackingMap = new LinkedHashMap<>();
Map<String, String> currentCombinationsMap = new LinkedHashMap<>();
for (int j = 0; j < dataCenters.size(); j++) {
List<String> machines = input.get(dataCenters.get(j));
int machineCount = machines.size();
if (machineCount == 0) {
continue;
}
String valueToUse = null;
persistentIndex = persistentIndex % machineCount;
int persistentIndexIterationStartMarker = persistentIndex;
boolean valueToUseNotFound = false;
do {
valueToUse = machines.get(persistentIndex);
if (!statusTrackingMap.containsValue(valueToUse)) {
valueToUseNotFound = true;
break;
}
persistentIndex = (persistentIndex + 1) % machineCount;
} while (persistentIndex != persistentIndexIterationStartMarker);
if (valueToUseNotFound) {
statusTrackingMap.put(dataCenters.get(j), valueToUse);
currentCombinationsMap.put(dataCenters.get(j), valueToUse);
machines.remove(persistentIndex);
}
}
}
output.add(currentCombinationsMap);
}
return output;
}
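To illustrate the "don't tinker with input" point, here is a small, hypothetical usage sketch (the wrapper class name and the sample data are made up, not taken from your post); after the call, the caller's map still contains all of its machines because the method works on a copy:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DatacenterMappingDemo {
    public static void main(String[] args) {
        // Hypothetical sample input: two datacenters and their machines.
        Map<String, List<String>> original = new LinkedHashMap<>();
        original.put("dc1", new ArrayList<>(Arrays.asList("h1", "h2")));
        original.put("dc2", new ArrayList<>(Arrays.asList("h3")));

        List<Map<String, String>> mappings =
                DatacenterMapper.createDatacenterIdMappings(original);

        // The method clones the inner lists, so the caller's data is untouched.
        System.out.println(original); // {dc1=[h1, h2], dc2=[h3]}
        System.out.println(mappings);
    }
}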
• Thanks for help. Appreciated a lot with your detailed suggestion and feedback. One minor issue which I am noticing is, if I try with example 4 as in my question, it always misses h5 for dc3 somehow. Any thoughts why? – arsenal Jul 9 '14 at 7:09
|
2019-11-19 01:37:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28291237354278564, "perplexity": 14813.17581845647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669868.3/warc/CC-MAIN-20191118232526-20191119020526-00397.warc.gz"}
|
https://latex-tutorial.com/tips-for-professional-latex-typesetting/
|
5 Tips for Professional LaTeX Typesetting
Tip 1: Quotes in LaTeX
First of all, you should consider the fact that, although your keyboard has only one type of double quotation marks, namely ", in books there are two:
• opening quotation marks “ and,
• closing quotation marks ”.
In LaTeX, the first symbols are left-quote characters while the last are right-quote or apostrophe characters, which can be obtained using the left-quote command \lq and the right-quote command \rq; repeating the command twice creates a double quotation mark. Here is an example:
% Quotes in LaTeX
\documentclass{article}
\begin{document}
So if you want to quote something, you should never write \verb|"looks awful"|. Instead, you should write ``looks beautiful'' or \lq looks nice\rq.
\end{document}
Compiling this code yields:
Tip 2: Dashes in LaTeX
With respect to dashes, good math books have at least four kinds of symbols: a hyphen (-), an en-dash (–), an em-dash (—) and a minus sign ($-$). Hyphens are used in compound nouns such as right-quote. En-dashes are used for intervals of numbers such as 4–6. Em-dashes are used for punctuation in sentences—for instance, when you want to avoid a parenthesis at the end of a sentence. Finally, minus signs are used in formulas, usually to denote the difference between two numbers.
If you want a professionally typeset document, you should be aware of these four cases, which you can write as: hyphen -, en-dash –, em-dash — and minus sign \verb|$-$|. Here is the corresponding LaTeX code:
% Dashes in LaTeX
\documentclass{article}
\begin{document}
This is a hyphen - and this is an en-dash --
This is an em-dash --- and this is a minus sign $-$
\end{document}
Compiling this code yields:
Tip 3: Ties in LaTeX
Another important aspect of TeX and LaTeX typesetting that you probably haven’t heard about are ties. Ties help TeX know when lines should be broken, in order to avoid line breaks that may make reading a sentence difficult. In order to understand the importance of ties, let me cite the creator of TeX Donald Knuth. In the TeX book he asserts that
TeX provides a convenient way to avoid psychologically bad breaks, so that you will be able to obtain results of the finest quality by simply giving a few hints to the machine.
“Ties”— denoted by ~ in plain TeX—are the key to successful line breaking. Once you learn how to insert them, you will have graduated from the ranks of ordinary TeXnical typists to the select group of Distinguished TeXnicians. And it’s really not difficult to train yourself to insert occasional ties, almost without thinking, as you type a manuscript.
Donald Knuth
So when you type ~, it is the same as typing a space, but TeX won’t break the line at this space. You shouldn’t leave any blanks next to the ~, since they will count as additional spaces. Furthermore, since the return key in the input file is the same as a blank space in the output, you should never put a ~ at the end of an input line, since it will produce an extra space.
For example, you may want to use the tie in the following situations:
• Add an abbreviation before someone’s name, e.g., Dr.~House. In this case, the tie serves a second function that is worth mentioning. TeX puts extra space after punctuation marks in sentences, to make them look better. In fact, if you force TeX to stretch or shrink a sentence, it will mainly change these after-punctuation spaces. But when you are typing an abbreviation, you don’t want a punctuation-mark space; instead, you want a normal space. The tie, in this situation, also puts a normal space.
• In references to named parts of the document: Chapter~2, Appendix~B, Figure~3.5, Theorem~1.2, etc.
• Between a person’s forenames and between multiple surnames: Elizabeth~II, Johannes van~der~Waals, Donald~E. Knuth.
You may think that TeX should handle these particular cases by itself, but Knuth doesn’t consider this a possibility. As he says:
It would be nice to boil all of these rules down to one or two simple principles, and it would be even nicer if the rules could be automated so that keyboarding could be done without them; but subtle semantic considerations seem to be involved. Therefore it’s best to use your own judgment with respect to ties. The computer needs your help.
Donald Knuth
Although you may not believe it, sometimes TeX needs your help.
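As a quick, made-up illustration of the situations listed above, in the same format as the other examples in this tutorial:
% Ties in LaTeX
\documentclass{article}
\begin{document}
Dr.~House proved Theorem~1.2 of Chapter~2, following Donald~E. Knuth.
\end{document}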
Tip 4: Ligatures in LaTeX
Let’s now dive into what TeX automatically does to make your documents look better, and how can you change it in the (rare) case it doesn’t come out well. First of all, TeX makes ligatures, which are certain combinations of letters that are treated as a unit in book printing. For example, look closely at the first two letters in “fly” and “find” in the following illustration. The letters that clash are substituted for a single symbol, “fi” and “fl” in these cases, which look much better.
In rare occasions you are faced with a word like “shelfful”, which looks better without the ligature in the two consecutive f’s. In this case, we can fool TeX into thinking that there is no such ligature, for example using grouping: {shelf}ful and shelf{}ful.
and here is the corresponding LaTeX code:
% Ligature in LaTeX
\documentclass{article}
\begin{document}
\textbf{With Ligature:} shelfful
\textbf{Without Ligature:} {shelf}ful and shelf{}ful
\end{document}
Tip 5: Dots in LaTeX
Finally, I would like to explain some more details about the spacing rule mentioned in the ties situation. As I said, TeX puts extra space after punctuation marks, because it makes sentences look better. But this can be a problem too, since sometimes TeX may think we are ending a sentence when we are not, adding some awkward spacing. Besides the above-mentioned example, there’s a common situation where you want to prevent TeX from adding extra space and it is when you use an ellipsis of three dots in the middle of a sentence.
For instance, if I just write three dots in a row it looks awful since the dots are too close together and they leave too much space, which loses part of the ellipsis effect:
In order to prevent this undesired effect, plain TeX has the control sequence \ldots which yields the desired result as shown below:
and the corresponding LaTeX code:
% Dots in LaTeX
\documentclass{article}
\begin{document}
Three dots... in the middle of a sentence
Three dots\ldots in the middle of a sentence
\end{document}
Conclusion
And that’s all for this tutorial. I’m sure that from now on you will be more aware of your document’s fine-tuning and appearance, making it look like the work of a professional typist, but made at home on your personal computer. This is the spirit of a true TeXnician.
|
2021-10-26 05:19:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9273495674133301, "perplexity": 1584.6152127197324}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587799.46/warc/CC-MAIN-20211026042101-20211026072101-00132.warc.gz"}
|
http://exxamm.com/QuestionSolution4/Aptitude/The+cost+price+of+2+items+A+and+B+is+same+The+shopkeeper+decided+to+mark+the+price+40+more+than+the+CP+of+each+item+/2431391222
|
The cost price of 2 items A and B is same. The shopkeeper decided to mark the price 40% more than the CP of each item.
Question Asked by a Student from EXXAMM.com Team
Q 2431391222. The cost price of 2 items A and B is the same. The shopkeeper decided to mark the price of each item 40% above its CP.
A discount of 25% was given on item A and a discount of 20% was given on item B. The total profit earned on both items was Rs. 34.
Quantity I: CP of the items
Quantity II: CP of any item which was sold at 12.5% profit and profit earned on it was sold for Rs. 50
IBPS-PO 2016 Mains
A
Quantity I > Quantity II
B
Quantity I < Quantity II
C
Quantity I >= Quantity II
D
Quantity I = Quantity II
E
No relation
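One way to set up the arithmetic (a sketch, not the site's official solution; the wording of Quantity II is ambiguous as printed):
$$\text{Let each CP} = x.\quad \text{Marked price} = 1.4x,\quad SP_A = 0.75\times 1.4x = 1.05x,\quad SP_B = 0.80\times 1.4x = 1.12x$$
$$\text{Total profit} = 0.05x + 0.12x = 0.17x = 34 \;\Rightarrow\; x = \text{Rs. } 200$$
So Quantity I is Rs. 200. How Quantity II compares depends on the intended reading of its statement: if Rs. 50 is the profit earned at 12.5%, that item's CP is 50/0.125 = Rs. 400; if Rs. 50 is the selling price, its CP is 50/1.125 ≈ Rs. 44.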
|
2019-02-20 05:18:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23181810975074768, "perplexity": 5148.623144852222}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494449.56/warc/CC-MAIN-20190220044622-20190220070622-00314.warc.gz"}
|
http://www.next-toulouse.eu/news/how-disorder-can-turn-into-long-range-order-at-high-magnetic-field
|
# How disorder can turn into long-range order at high magnetic field ?
## Wednesday, July 05, 2017
In the two consecutive works published in the Physical Review Letters, experimentalists and theorists have demonstrated that, contrary to previous expectations, disorder can help ordering of quantum matter. To show this, they have studied the spin-chain based material $\mathrm{Ni}(\mathrm{Cl}_{1-x}\mathrm{Br}_x)_2-4\mathrm{SC}(\mathrm{NH}_2)_2$, also called “DTN”, which at low temperature presents a magnetic-field-induced ordered phase, described as a Bose-Einstein condensate (BEC). So far it was believed that, close to this BEC phase, chemical disorder created by doping Br impurities to substitute Cl ions would lead to localization, namely the so-called Bose-Glass state. However, building on Nuclear Magnetic Resonance experiments at high magnetic field, combined with state-of-the-art quantum Monte Carlo simulations, it has been shown that the impurity-induced localized bosonic degrees of freedom are indeed at work, but their mutual interaction plays a new unsuspected major role.
Their pairwise effective interaction leads to a global quantum coherence over the full sample, which results in a new type of BEC ordering of these impurity states, in sharp contrast with a localized Bose-Glass. This discovery is rewarding a very successful collaboration between experimental (LNCMI Grenoble) and theoretical (LPT Toulouse) teams.
|
2017-10-17 09:34:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4803329408168793, "perplexity": 5547.192371941627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187821017.9/warc/CC-MAIN-20171017091309-20171017111309-00855.warc.gz"}
|
http://d-citymusic.com/lib/category/linear/page/5
|
# CLEP® College Algebra Book + Online (CLEP Test Preparation)
## Stu Schwartz
Format: Paperback
Language: 1
Format: PDF / Kindle / ePub
Size: 14.00 MB
There are 31 matching applications in this category. It looks like we will have to multiply the first equation by 2, to get opposites on y. Not only algebra, but many other areas of mathematics (geometry, trigonometry and calculus) are used in statistics. as well as Grade 3, Grade 4, and Grade 5 under Operations and Algebraic Thinking. I asked another student the same question. In-class Theorem 2 A matrix equation has a unique solution if and only if it is consistent AND there is a pivot in every column of the coefficient matrix after row-reduction.
# Linear Algebra (Undergraduate Texts in Mathematics)
Format: Paperback
Language: 1
Format: PDF / Kindle / ePub
Size: 14.31 MB
If you'd like to try this example yourself, here is the code I used to generate the files: svdimg.m. Likeable Charles outjest Term paper helpline bastinading flexibly. While writing the instructors' solution manual, I rechecked all the solutions to exercises in the back of the text and found some more errors, which are recorded below in the errata Here is the table of contents of the text: Copyright © 2007 Springer Science+Business Media, LLC
# Green Functors and G-sets (Lecture Notes in Mathematics)
Format: Paperback
Language: 1
Format: PDF / Kindle / ePub
Size: 12.71 MB
The Linear Algebra Toolkit has been written entirely in PERL. Proposition 32.1 Two vectors $\vec{v_1}$ and $\vec{v_2}$ in $\mathbb{R}^m$ are linearly independent if and only if neither vector is a scalar multiple of the other. Modify it as you wish for your particular interest or viewpoint. .pdf files of figures that are incorporated into the lectures. The program charpoly.exe computes the characteristic polynomial of a matrix having all integer entries.
# Matrices: Theory and Applications (Graduate Texts in
Format: Paperback
Language: 1
Format: PDF / Kindle / ePub
Size: 14.26 MB
These are one-dimensional arrays so we won't get confused by the fact that Fortran expects two-dimensional arrays to be in column-major order. Pure static mathematical methods can’t produce sustainable trading results. Because linear algebra is a successful theory, its methods have been developed in other areas of mathematics. The nature of Technical Drawing is drawings made by Linear Projection. Second, when we solve a system it has to be a solution of ALL equations involved and we have not incorporated the third equation yet.
# The Moduli Space of N=1 Superspheres With Tubes and the
## Katrina Barron
Format: Paperback
Language: 1
Format: PDF / Kindle / ePub
Size: 14.77 MB
In the event that you have to have guidance with algebra and in particular with algebra 1 or algebra come pay a visit to us at Mathpoint.net. Suppose that the initial market consists of k people.000. can we determine a and b so that the distribution will be the same from year to year? Identify and compute stochastic matrices. Which meant that the note takers couldn't answer questions. The online format of this program offers the same quality curriculum and learning experience as its campus-based counterpart.
# Matemáticas para las ciencias ambientales (Álgebra lineal y
Format: Paperback
Language: 1
Format: PDF / Kindle / ePub
Size: 8.76 MB
Students explore the concept of correlation versus causation with. Linear Programming Linear programming, how to use linear programming to solve word problems. This fully updated and revised text defines the discipline’s main terms, explains its key theorems, and provides over 425 example problems ranging from the elementary to some that may baffle even the most seasoned mathematicians. As being a country out of control with a weak President and one that is Used to scare powerful people.
# Linear Algebra over Division Ring: Vector Space
Format: Paperback
Language: 1
Format: PDF / Kindle / ePub
Size: 12.64 MB
Name your program PartialPivotStability.java. At TuLyn, we created word problems on linear algebra to help you better understand linear algebra. The lowest midterm grade can be dropped and replaced by the final exam grade. Both were on "Modern Algebra," but included chapters on linear algebra. Math pizzazz pre algebra, simplifying square roots, 3 ways of simplifying radical expressions, simplifying rational expressions worksheet, Free Answer Algebra Problems Calculator.
# Groups, trees, and projective modules (Lecture notes in
## Warren Dicks
Format: Paperback
Language: 1
Format: PDF / Kindle / ePub
Size: 7.35 MB
Unit 2 Review · Unit 2 Algebra Skillz Review Video . TNT is a newer design, and will integrate the functionlaity of Lapack++, IML++, SparseLib++, and MV++ .] LAPACK++ (Linear Algebra PACKage in C++) is a software library for numerical linear algebra that solves systems of linear equations and eigenvalue problems on high performance computer architectures. This book covers the following topics: Brief introduction to Logic and Sets, Brief introduction to Proofs, Basic Linear Algebra, Eigenvalues and Eigenvectors, Vector Spaces.
# Ripples in Mathematics
## A. Jensen
Format: Paperback
Language: 1
Format: PDF / Kindle / ePub
Size: 6.09 MB
The output is similar to that of rowred.exe. Between the 16th and 17th centuries the notion of a complex number was consolidated, with which the notion of algebra began to move away from measurable quantities. And many other free math textbooks are available online. Sparse matrix algorithms are used when input matrices have such a large number of zero entries that it becomes advantageous, for storage or efficiency reasons, to “squeeze” them out of the matrix representation. Then $A + B = \begin{pmatrix} 1+0 & -2+2 & 4+(-4) \\ 2+1 & -1+3 & 3+1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 3 & 2 & 4 \end{pmatrix}$ in the set of all rational numbers, only when A and B are of the same size. ―The World’s Largest Matrix Computation: Google’s Page Rank Is an Eigenvector of a Matrix of Order 2. and Murray Browne.
# Fundamentals of Matrix Computations
## David S. Watkins
Format: Paperback
Language: 1
Format: PDF / Kindle / ePub
Size: 8.94 MB
|
2017-09-21 15:40:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3654179573059082, "perplexity": 2809.4442760227403}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687833.62/warc/CC-MAIN-20170921153438-20170921173438-00260.warc.gz"}
|
http://brickisland.net/DDGSpring2016/2016/02/26/assignment-4/
|
# Assignment 4
For your next assignment, due on Friday, March 4th, you will simply complete the exercises you previously skipped over that required use of exterior calculus. These exercises can all be found in the class notes, and include:
• Chapter 5, Exercise 13 (Hint: read very carefully the end of Section 3.2)
• Chapter 5, Exercise 16
• Chapter 6, Exercise 20
• Chapter 6, Exercise 26
You should also carefully read Section 6.3 (which leads up to Exercise 26), which provides an alternative perspective on the Laplacian to the finite element approach you derived in your last homework, and will be very useful to understand for upcoming assignments. For the exercises in Chapter 5, it may also be helpful to review your reading of Chapter 2 (on the geometry of surfaces), re-thinking these concepts in terms of differential forms. (For example, the Weingarten map $$dN$$ and differential $$df$$ of the immersion can both be viewed as vector-valued 1-forms.) These exercises are a bit more challenging than the “warm up” exercises from your readings, so please post comments if you have any trouble.
## 10 thoughts on “Assignment 4”
1. Keenan says:
For exercise 16, it may help to go review some of the discussion of curvature from Sections 2.3 and 2.4, especially the discussion of how derivatives of tangent vectors are related to curvature, and what principal curvatures are / how they’re related to mean curvature.
2. Oliver says:
What time is it due on Friday? The usual of 11 PM?
1. Keenan says:
Correct.
3. YeHan says:
What does $df$ mean in exercise 16? In the readings, there is always a unit direction vector $X$ at a point on $M$ so that we can define the one-form directional differential as $df(X)$. In the problem, it is not obvious how $X$ is defined on $M$. Do I misunderstand anything here? Thanks.
1. YeHan says:
Sorry, it’s exercise 13
2. Oliver says:
It’s not defined on $M$, it’s defined on $\partial M$. We have an integral over the surface $M$, and then somehow using Stoke’s theorem, we can instead talk about an integral along some curve on the boundary of that surface ($\partial M$). $df(X)$ is then the infinitesimal length of the steps we take along that curve, if I understand correctly.
4. Slav says:
If $\alpha$ is a 1-form, then $\alpha \wedge \alpha$ should equal 0 by anti-symmetry. On the other hand, if $\alpha$ is vector valued, and we use the cross product, then $\alpha \wedge \alpha (u,v) = 2 \alpha (u) \times \alpha (v)$ which may not equal zero. How can this discrepancy be explained?
1. Oliver says:
With a 1 form, you’re only measuring one direction, so it can only possibly be 0. With a 2 form, you’re measuring the cross product of two different directions $u, v$ (after passing through $\alpha$), which might not be equal to zero.
2. The antisymmetry property relates to the commutativity of the codomain. Most of our language pertains to scalar-valued forms, which we implicitly call forms, but here we’re working with vector-valued forms (equipped with the cross product), which is not commutative, so $\alpha \wedge \alpha$ is not necessarily $0$.
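To make the two cases explicit, here is the standard pointwise formula for the wedge of a 1-form with itself (sign conventions may differ by an overall factor):
$$(\alpha \wedge \alpha)(u,v) = \alpha(u)\,\alpha(v) - \alpha(v)\,\alpha(u) = 0 \qquad \text{(scalar-valued: the product commutes)}$$
$$(\alpha \wedge \alpha)(u,v) = \alpha(u)\times\alpha(v) - \alpha(v)\times\alpha(u) = 2\,\alpha(u)\times\alpha(v) \qquad \text{(vector-valued with the cross product, which anticommutes)}$$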
|
2019-12-13 07:34:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7178978323936462, "perplexity": 550.9273549428422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540551267.14/warc/CC-MAIN-20191213071155-20191213095155-00164.warc.gz"}
|
https://socratic.org/questions/what-is-the-equation-of-the-line-with-slope-m-1-2-that-passes-through-5-3
|
# What is the equation of the line with slope m= -1/2 that passes through (5,3) ?
Mar 24, 2018
The equation of the line is $y = - \frac{1}{2} x + \frac{11}{2}$
#### Explanation:
As you know, the equation of a line can be presented as y = mx + c (slope-intercept form). Our slope is m = $- \frac{1}{2}$, so we will have to find c (the y-intercept). The working is shown below.
$y = 3$ , $x = 5$ and $m = - \frac{1}{2} \rightarrow$ We then substitute what we were given into our equation:
$3 = \left(- \frac{1}{2}\right) \cdot 5 + c \rightarrow$ We work out what we have
$3 = \left(- \frac{5}{2}\right) + c \rightarrow$ Add $\frac{5}{2}$ to both sides, which gives us $c = \frac{11}{2}$; therefore the equation of the line is $y = - \frac{1}{2} x + \frac{11}{2}$
|
2022-08-17 15:38:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8670005202293396, "perplexity": 561.5305430137134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573029.81/warc/CC-MAIN-20220817153027-20220817183027-00554.warc.gz"}
|
http://www.zora.uzh.ch/21634/
|
Browse by:
Zurich Open Repository and Archive
Permanent URL to this publication: http://dx.doi.org/10.5167/uzh-21634
Nikeghbali, A (2006). Some random times and martingales associated with BES0(δ) processes (0<δ<2). ALEA Latin American Journal of Probability and Mathematical Statistics, 2:67-89 (electronic).
Preview
PDF (Verlags-PDF)
1375Kb
Preview
Accepted Version
PDF (Accepted manuscript, Version 2)
293Kb
Preview
Accepted Version
PDF (Accepted manuscript, Version 1)
281Kb
## Abstract
In this paper, we study Bessel processes of dimension $\delta$, with $0<\delta<2$, and some related martingales and random times. Our approach is based on martingale techniques and the general theory of stochastic processes (unlike the usual approach based on excursion theory), although for some values of $\delta$ these processes are not even semimartingales. The last time before 1 when the Bessel process hits 0 plays a key role in our study: we characterize its conditional distribution and extend Paul Lévy's arc sine law and a related result of Jeulin about the standard Brownian motion. We also introduce some remarkable families of martingales related to the Bessel process, thus obtaining in some cases a one-parameter extension of some results of Azéma and Yor in the Brownian setting: martingales which have the same set of zeros as the Bessel process and which satisfy the stopping theorem, a one-parameter extension of Azéma's second martingale, etc. Throughout our study, the local time of the Bessel process also plays a central role and we shall establish some of its elementary properties.
Other titles: Some random times and martingales associated with $BES_{0}(\delta)$ processes $(0<\delta<2)$ Journal Article, refereed, original work 07 Faculty of Science > Institute of Mathematics 510 Mathematics English 2006 20 Jan 2010 12:06 23 Nov 2012 13:58 Instituto Nacional de Matematica Pura e Aplicada, Brazil 1980-0436 http://alea.impa.br/english/index_v2.htm http://arxiv.org/abs/math/0505423 Google Scholar™
|
2014-03-10 19:49:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6440023183822632, "perplexity": 1565.973517040375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010990749/warc/CC-MAIN-20140305091630-00020-ip-10-183-142-35.ec2.internal.warc.gz"}
|
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1175.70013
|
Zbl 1175.70013
Lei, J.; Santoprete, M.
Rosette central configurations, degenerate central configurations and bifurcations.
(English)
[J] Celest. Mech. Dyn. Astron. 94, No. 3, 271-287 (2006). ISSN 0923-2958; ISSN 1572-9478/e
Summary: In this paper we find a class of new degenerate central configurations and bifurcations in the Newtonian $n$-body problem. In particular we analyze the Rosette central configurations, namely a coplanar configuration where $n$ particles of mass $m_1$ lie at the vertices of a regular $n$-gon, $n$ particles of mass $m_2$ lie at the vertices of another $n$-gon concentric with the first, but rotated by an angle $\pi /n$, and an additional particle of mass $m_0$ lies at the center of mass of the system. This system admits two mass parameters $\mu = m_0/m_1$ and $\epsilon = m_2/m_1$. We show that, as $\mu$ varies, if $n > 3$, there is a degenerate central configuration and a bifurcation for every $\epsilon > 0$, while if $n = 3$ there is a bifurcation only for some values of $\epsilon$.
MSC 2000:
*70F10 n-body problem
Keywords: bifurcations; central configurations; degenerate central configurations; $n$-body problem
|
2013-05-25 23:04:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.572267472743988, "perplexity": 1369.6344419089523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706470784/warc/CC-MAIN-20130516121430-00034-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.cureus.com/articles/3001-indices-of-regional-brain-atrophy-formulae-and-nomenclature
|
"Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has."
Technical report
peer-reviewed
## Indices of Regional Brain Atrophy: Formulae and Nomenclature
### Abstract
The pattern of brain atrophy helps to discriminate normal age-related changes from neurodegenerative diseases. Although indices of regional brain atrophy have proven to be useful parameters in the early diagnosis and differential diagnosis of some neurodegenerative diseases, indices of absolute regional atrophy still have some important limitations. We propose using indices of relative atrophy to represent how the volume of a given region of interest (ROI) changes over time in comparison to changes in global brain measures over the same time.
A second problem in morphometric studies is terminology. There is a lack of systematization naming indices and the same measure can be named with different terms by different research groups or imaging softwares. This limits the understanding and discussion of studies.
In this technological report, we provide a general description on how to compute indices of absolute and relative regional brain atrophy and propose a standardized nomenclature.
### Introduction
Brain atrophies with age. Numerous cross-sectional and longitudinal imaging studies have found an inverse correlation between increasing age and decreasing brain volumes [1-7], and these findings are substantiated by postmortem data [8-9]. Grey matter (GM) volume loss appears to be a constant, linear function of age throughout adult life, whereas white matter (WM) volume loss seems to be delayed until middle adult life [10]. In any case, it is clear that brain volume decreases in healthy aging, and this decrease can be observed in almost all brain areas over one year. Although changes are especially evident in temporal and prefrontal cortices, where the rate of annual decline is about 0.5%/year, all subcortical and ventricular regions, except the caudate nucleus and the fourth ventricle, show changes visible in this period of time [11].
A better understanding of the brain aging process may help to discriminate normal age-related changes from neurodegenerative diseases. In neurodegenerative diseases, besides physiologic atrophy, patients develop a progressive focal atrophy that grows in extent and intensity with time. For instance, atrophy in Alzheimer’s disease (AD) begins outside the hippocampus, with development of neurofibrillary tangles in the transentorhinal and entorhinal cortex, spreading subsequently to the subiculum and CA1 regions of the hippocampus and later on to other cortical areas [12-15]. A similar progression from focal to wide atrophy is seen in almost all neurodegenerative diseases. As a result, cortical atrophy and ventricular enlargement are present both in healthy aging and in all neurodegenerative diseases to some extent. However, the start point of regional atrophy, the rate of atrophy, and the pattern of atrophy progression varies between healthy aging and neurodegenerative disease and among neurodegenerative diseases.
Therefore, we maintain that, when studying neurodegenerative diseases, we should use measures of relative regional atrophy that compare how a given structure changes with respect to global brain changes. Moreover, as intracranial volume varies between individuals, all studies using absolute global or regional atrophy measures need to be normalized by intracranial volume. Along the same line, studies have found a highly significant and well-recognized effect of sex on volume, with men having larger brain volumes [2, 7]. This finding suggests that studies considering the effect of sex on cross-sectional volumes should also include a correction for head size to reduce this potential confounding effect. All these limitations are automatically solved when using relative rates of atrophy, as each subject forms his or her own control.
Magnetic resonance imaging (MRI)–based measurements of the brain have been proposed as aids in the diagnosis of AD and other neurodegenerative diseases in clinical practice. Most studies assessing brain atrophy in neurodegenerative diseases have been done using volumetric techniques, although some others--specifically addressed to clinical practice--have used linear and planimetric techniques. Visual rating scales have also been developed, although these are not objective and, therefore, are not valid for assessing progression [16]. In any case, the rationale of using relative regional atrophy applies with all methodological approaches.
A second problem in morphometric studies is terminology, the lack of a homogeneous nomenclature for parameters and indices of atrophy. The same measure can be named with different names by different research groups or imaging software programs. For instance, a widely used measure, such as the yearly rate of Whole Brain Atrophy (yrWBA), is also known as yearly brain atrophy rate (yBAR) and annualised percent brain volume change (PBVC/y) [17]. This fact is a limitation for the comparison and discussion of results between groups.
The aim of this technological report is to provide a general description on how to compute indices of absolute and relative regional brain atrophy and propose a standardized nomenclature.
### Technical Report
#### Computing indices of absolute regional brain atrophy
Computing the yearly rate of absolute regional brain atrophy is conceptually simple. First, we need to find out the volume of the target region (named Region of Interest, ROI) in the basal and follow-up MRI and compute the difference. Then all we need to do is to “annualize” this difference over the time elapsed between the two MRI studies, i.e., multiply by 12 and divide by the number of months between the basal and follow-up MRI studies. We recommend multiplying the result by 100 to avoid working with decimals.
Thus, the general formula for computing indices of absolute rate of atrophy is as follows:
$$yrA-ROI=\frac{\left ( ROI1-ROI2 \right ) \times1200}{(months\ between\ MRI\ studies)}$$
where ROI is the short name of ROI; ROI1 is the volume of the ROI in the basal MRI and ROI2 the volume of the ROI in the follow-up MRI.
Indices can be computed for each hemisphere separately, or for both brain hemispheres together (taking the addition of volumes of ROI on both hemispheres). We will use cerebral hemispheres for telencephalic and diencephalic structures while we will use cerebellar hemispheres and ipsilateral brainstem for structures in the posterior fossa.
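As a purely illustrative example with hypothetical volumes (not taken from the article): suppose a given ROI measures 3.50 cm³ at the basal MRI and 3.36 cm³ at a follow-up 18 months later. Plugging into the formula above:
$$yrA\text{-}ROI=\frac{(3.50-3.36)\times 1200}{18}\approx 9.3$$
that is, roughly 0.093 cm³ of absolute volume lost per year, scaled by 100 as recommended above.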
#### Computing indices of relative regional brain atrophy
In order to compute indices of relative atrophy, we need to find out the volume of the ROI and the volume of a brain structure used as a measure of reference (here named Ref). There are several potential Ref that can be used. The Ref is usually the whole brain volume, although it can also be other parameters, such as the cortical brain volume or ventricular volume (an inverse, indirect measure of brain atrophy). In this case, we use the lateral ventricles (I-II) with telencephalic ROI and the third ventricle (III) with diencephalic structures. When the ROI is in the posterior fossa, the Ref can be the volume of the parenchyma (brainstem plus cerebellar volumes) or the fourth ventricle (IV) as an inverse, indirect measure.
Thus, the general formula for computing indices of yearly rates of relative regional atrophy is as follows:
$$yrRA-ROI(Ref)=\frac{\left ( ROI1-ROI2 \right ) \times1200}{\left (Ref1-Ref2 \right ) \times(months\ between\ MRI\ studies)}$$
where Ref is the short name of the measure of referece and ROI is the short name of the ROI.
ROI1 is the volume of the ROI in the first MRI and ROI2 the volume of the ROI in the second MRI.
Ref1 is the reference volume in the first MRI and Ref2 the reference volume in the second MRI. When using inverse measures of global atrophy, such as ventricular volumes, we compute (Ref2−Ref1) to avoid negative values.
Again, indices can be computed for each hemisphere separately, or for both hemispheres together. With paired reference structures, the reference ipsilateral to the ROI must always be used.
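Continuing the hypothetical example above: if the same ROI (3.50 cm³ to 3.36 cm³) is referenced to a whole-brain volume that shrinks from 1100 cm³ to 1089 cm³ over the same 18 months, the formula above gives
$$yrRA\text{-}ROI(WB)=\frac{(3.50-3.36)\times 1200}{(1100-1089)\times 18}=\frac{168}{198}\approx 0.85$$
These numbers are made up purely to show how the terms enter the formula.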
#### Nomenclature
We propose a standardized, comprehensive terminology for naming the absolute and relative rates of atrophy of every ROI. The proposed nomenclature for the most frequently used ROI can be found in Table 1.
| ROI | Absolute Rate of Atrophy | Relative Rate of Atrophy |
| --- | --- | --- |
| **Telencephalic ROI** | | |
| Cortical Gray Matter | yrA-CGM | yrRA-CGM(Ref) |
| Whole Brain Hemisphere | yrA-WBHp | yrRA-WBH(Ref) |
| Brain Hemispheric-Cortical Gray Matter | yrA-BHpCGM | yrRA-BHCGM(Ref) |
| Lobar Cortical Gray Matter (frontal, temporal, parietal, occipital and insular lobes) | yrA-LCGM | yrRA-LCGM(Ref) |
| Gyri (the name of any gyrus is suitable, following international nomina anatomica) | yrA-"Gyrus name" | yrRA-"Gyrus name"(Ref) |
| Forebrain Parenchyma | yrA-FP | yrRA-FP(Ref) |
| Hippocampus | yrA-H | yrRA-H(Ref) |
| Medial temporal lobe (hippocampus + parahippocampal gyrus) | yrA-MTL | yrRA-MTL(Ref) |
| Amygdala | yrA-Amy | yrRA-Amy(Ref) |
| Caudate | yrA-Cau | yrRA-Cau(Ref) |
| Putamen | yrA-Pu | yrRA-Pu(Ref) |
| Striatum (Caudate + Putamen) | yrA-St | yrRA-St(Ref) |
| Pallidum | yrA-Pa | yrRA-Pa(Ref) |
| Lenticularis nucleus (Putamen + Pallidum) | yrA-Len | yrRA-Len(Ref) |
| **Diencephalic ROI** | | |
| Thalamus | yrA-T | yrRA-T(Ref) |
| Hypothalamus | yrA-HypoT | yrRA-HypoT(Ref) |
| Subthalamic nucleus | yrA-SubT | yrRA-SubT(Ref) |
| **ROI in the posterior fossa** | | |
| Whole Cerebellum | yrA-WCer | yrRA-Cer(Ref) |
| Whole Cerebellar Hemisphere | yrA-WCerHp | yrRA-HCer(Ref) |
| Whole Cerebellar Cortical Gray Matter | yrA-WCerCGM | yrRA-WCerCGM(Ref) |
| Cerebellar Hemispheric Cortical Gray Matter | yrA-CerHpCGM | yrRA-CerHCGM(Ref) |
| Midbrain (Mesencephalon) | yrA-MidB | yrRA-MidB(Ref) |
| Pons | yrA-Pons | yrRA-Pons(Ref) |
| Medulla | yrA-Med | yrRA-Med(Ref) |
| Olivary body | yrA-OB | yrRA-OB(Ref) |
| Substantia Nigra | yrA-Sn | yrRA-Sn(Ref) |
| Red Nucleus | yrA-Rn | yrRA-Rn(Ref) |
The first two letters denote the “temporal lapse”, this way all indices start with “yr” for “yearly rate”. Then we place the letter “A” if we are referring to an absolute rate of atrophy or the letters “RA” if we are referring to a relative rate of atrophy. Then a hyphen “-” separates the “yrA” or “yrRA” from the name of the ROI. Thus, the hyphen is followed by the initials of the anatomic structure used as ROI. When the ROI is a paired structure, the letter l (left) or r (right) must precede the initials of the structure if we are measuring side by side; otherwise, it will be understood that it is referred to both sides together. Finally, in rates of relative atrophy only, we add the initials of the referenced measure between parenthesis at the end “(Ref)”, where (Ref) is the short name of the parameter or structure used as measure of reference (Table 2).
| Brain Structure | Initials or Symbol |
| --- | --- |
| **To be used with Telencephalic ROI** | |
| Whole Brain | WB |
| Brain Hemisphere | BHp |
| Cortical Gray Matter | CGM |
| Lateral Ventricles | I-II |
| **To be used with Diencephalic ROI** | |
| Whole Brain | WB |
| Third Ventricle | III |
| **To be used with ROI in the posterior fossa** | |
| Whole Brain | WB |
| Parenchyma in the posterior fossa | Ppf |
| Cerebellar Hemisphere | CerHp |
| Fourth Ventricle | IV |
Following this nomenclature, it is easy to read any index. For instance, “yrA-Rn” reads “yearly rate of atrophy of the red nucleus” (and we understand it is a measure of absolute atrophy of both red nuclei), and “yrRA-lT(WB)” reads “yearly rate of relative atrophy of the left thalamus referred to whole brain”.
### Discussion
Regional brain atrophy has been proposed to be used in clinical practice for diagnosing neurodegenerative diseases and also as a surrogate marker of disease progression in clinical trials [16]. However, there is a lack of consensus on how to compute and nominate indices of regional brain atrophy.
Here, we describe a general approach for computing rates of absolute and relative regional atrophy--which may be extended to any ROI--while proposing a standardized nomenclature. These indices are especially thought to be used with volumetric techniques, but they can also be applied to planimetric techniques where areas of ROI are used instead of volumes. Indeed, we have developed and validated some planimetric techniques, such as the yearly rate of Medial Temporal Lobe Atrophy (yrMTA) [18]--this should be named yrRA-MTL(I-II) in accordance with the terminology described here. The yrMTA has proven some usefulness in the diagnosis of AD (Poster presented at the Alzheimer’s Association International Conference, Washington, 2015) and in correlating with memory deficits in Parkinson’s disease (PD) (Presented at the 18th Congress of Parkinson’s Disease and Movement Disorders, Stockholm, 2014 and the 10th International Congress on Non-Motor Dysfunctions in Parkinson’s Disease, Nice, 2014). We have also described in detail some volumetric indices, such as the yearly rate of relative atrophy of the thalamic nuclei (yrRAT(I-II-III)) [19], which has proved helpful in the diagnosis of multiple sclerosis and in assessing the prognosis of patients with clinically isolated syndrome (Presented at the 31st ECTRIMS Congress, Barcelona, 2015).
If the methodological approach and nomenclature proposed here were adopted by all research groups working in brain morphometry, it would ease comparing and discussing results of studies addressing the rates of regional brain atrophy.
There is much work to do before any of these parameters are ready to be used with diagnostic purposes in routine clinical practice. Extensive research is needed in both retrospective and prospective studies. Retrospective studies are relatively easy to address, particularly for some conditions such as AD and PD, where large databases of neuroimaging studies and clinical data are available to researchers worldwide. Those indices showing positive results in retrospective studies should be validated in prospective studies before they can be used in clinical practice and clinical trials.
### Conclusions
There is a lack of consensus on how to compute and nominate indices of regional brain atrophy. Here, we provide a general description on how to compute indices of absolute and relative regional brain atrophy and propose a standardized nomenclature.
If this approach were universally adopted, it would allow a direct comparison of results from different research groups.
### References
1. Gur RC, Mozley PD, Resnick SM, Gottlieb GL, Kohn M, Zimmerman R, Herman G, Atlas S, Grossman R, Berretta D: Gender differences in age effect on brain atrophy measured by magnetic resonance imaging. Proc Natl Acad Sci U S A. 1991, 88:2845–2849. 10.1073/pnas.88.7.2845
2. Blatter DD, Bigler ED, Gale SD, Johnson SC, Anderson CV, Burnett BM, Parker N, Kurth S, Horn SD: Quantitative volumetric analysis of brain MR: normative database spanning 5 decades of life. AJNR Am J Neuroradiol. 1995, 16:241–251.
3. Mueller EA, Moore MM, Kerr DC, Sexton G, Camicioli RM, Howieson DB, Quinn JF, Kaye JA: Brain volume preserved in healthy elderly through the eleventh decade. Neurology. 1998, 51:1555–1562. 10.1212/WNL.51.6.1555
4. Coffey CE, Wilkinson WE, Parashos IA, Soady SA, Sullivan RJ, Patterson LJ, Figiel GS, Webb MC, Spritzer CE, Djang WT: Quantitative cerebral anatomy of the aging human brain: a cross-sectional study using magnetic resonance imaging. Neurology. 1992, 42:527–536. 10.1212/WNL.42.3.527
5. Murphy DG, DeCarli C, Schapiro MB, Rapoport SI, Horwitz B: Age-related differences in volumes of subcortical nuclei, brain matter, and cerebrospinal fluid in healthy men as measured with magnetic resonance imaging. Arch Neurol. 1992, 49:839–845. 10.1001/archneur.1992.00530320063013
6. Mu Q, Xie J, Wen Z, Weng Y, Shuyun Z: A quantitative MR study of the hippocampal formation, the amygdala, and the temporal horn of the lateral ventricle in healthy subjects 40 to 90 years of age. AJNR Am J Neuroradiol. 1999, 20:207–211.
7. Scahill RI, Frost C, Jenkins R, Whitwell JL, Rossor MN, Fox NC: A longitudinal study of brain volume changes in normal aging using serial registered magnetic resonance imaging. Arch Neurol. 2003, 60:989–994. 10.1001/archneur.60.7.989
8. Dekaban AS: Changes in brain weights during the span of human life: relation of brain weights to body heights and body weights. Ann Neurol. 1978, 4:345–356. 10.1002/ana.410040410
9. Ho KC, Roessmann U, Straumfjord JV, Monroe G: Analysis of brain weight, II: adult brain weight in relation to body height, weight, and surface area. Arch Pathol Lab Med. 1980, 104:640–645.
10. Ge Y, Grossman RI, Babb JS, Rabin ML, Mannon LJ, Kolson DL: Age-related total gray matter and white matter changes in normal adult brain. Part I: volumetric MR imaging analysis. Neuroradiol. 2002, 23:1327–33.
11. Fjell AM, Walhovd KB, Fennema-Notestine C, McEvoy LK, Hagler DJ, Holland D, Brewer JB, Dale AM: One-year brain atrophy evident in healthy aging. J Neurosci. 2009, 29:15223–31. 10.1523/JNEUROSCI.3252-09.2009
12. Jack CR Jr, Petersen RC, Xu YC, Waring SC, O'Brien PC, Tangalos EG, Smith GE, Ivnik RJ, Kokmen E: Medial temporal atrophy on MRI in normal aging and very mild Alzheimer's disease. Neurology. 1997, 49:786–794. 10.1212/WNL.49.3.786
13. Braak H, Braak E: On areas of transition between entorhinal allocortex and temporal isocortex in the human brain. Normal morphology and lamina-specific pathology in Alzheimer's disease. Acta Neuropathol. 1985:325–332. 10.1007/BF00690836
14. Convit A, de Asis J, de Leon MJ, Tarshish CY, De Santi S, Rusinek H: Atrophy of the medial occipitotemporal, inferior, and middle temporal gyri in non-demented elderly predict decline to Alzheimer's disease. Neurobiol Aging. 2000:19–26. 10.1016/S0197-4580(99)00107-4
15. Lim HK, Jung WS, Ahn KJ, et al.: Relationships between hippocampal shape and cognitive performances in drug-naïve patients with Alzheimer's disease. Neurosci Lett. 2012:124–129. 10.1016/j.neulet.2012.03.072
16. Menéndez-González M, de Celis Alonso B, Salas-Pacheco J, Arias-Carrión O: Structural neuroimaging of the medial temporal lobe in Alzheimer's disease clinical trials. J Alzheimers Dis. 2015, 48:in press.
17. Brainmarkers.com. (2015). Accessed: July 2, 2015: http://brainatrophyindices.blogspot.com.es/p/nomenclature.html.
18. Conejo Bayón F, Maese J, Fernandez Oliveira A, Mesas T, Herrera de la Llave E, Alvarez Avellón T, Menéndez-González M: Feasibility of the Medial Temporal lobe Atrophy index (MTAi) and derived methods for measuring atrophy of the medial temporal lobe. Front Aging Neurosci. 2014, 6:305. 10.3389/fnagi.2014.00305
19. Menéndez-González M, Salas-Pacheco JM, Arias-Carrión O: The yearly rate of Relative Thalamic Atrophy (yrRTA): a simple 2D/3D method for estimating deep gray matter atrophy in Multiple Sclerosis. Front Aging Neurosci. 2014, 6:219. 10.3389/fnagi.2014.00219
### Author Information
###### Ethics Statement and Conflict of Interest Disclosures
Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
|
2020-06-05 22:31:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5347853302955627, "perplexity": 10082.277409239989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348504341.78/warc/CC-MAIN-20200605205507-20200605235507-00483.warc.gz"}
|
http://openstudy.com/updates/5126df7fe4b0dbff5b3cf066
|
appleduardo 2 years ago whats the integral of the following function?
1. appleduardo
$\int\limits_{}^{}\frac{ dx }{ x ^{2} +2x + 1}$
2. harsh314
take $x ^{2}+2x+1=(x+1)^{2}$ and proceed by substiution
3. harsh314
did u get it..............
4. appleduardo
yeah! but i forgot to type 2 before x^2, :/ so the function is:$\int\limits_{}^{}\frac{ dx }{ 2x^{2} + 2x +1 }$
5. appleduardo
what i did is this: $\frac{ 1 }{ 2 }\int\limits_{}^{}\frac{ dx }{ x ^{2} +2x +1}$ but then i get stuck!
6. harsh314
then divide and multiply the whole denominator by 2$2(x ^{2}+x+\frac{ 1 }{ 2 })$ then you need to make it into a square expression as$x ^{2}+x*2*\frac{ 1 }{ 2 }+\frac{ 1 }{ 4 }-\frac{ 1 }{ 4 }+\frac{ 1 }{ 2 }$add 1/4 and subtract 1/4 then $(x+\frac{ 1 }{ 2 })^{2}=x^{2}+x+\frac{ 1 }{ 4 }$substitute and then solve
7. appleduardo
yeah thanks, i will try and ill post when im done :D
8. appleduardo
i got: $2arc tg \frac{ x +0.5 }{ 0.5 } + c$ is that correct?
9. appleduardo
(correction)$arc tg \frac{ x +0.5 }{ 0.5 } + c$
10. harsh314
the question takes the form................$\frac{ 1 }{ 2\left[(x+\frac{ 1 }{ 2 })^{2}+(\frac{ 1 }{ 2 })^{2}\right] }$
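For completeness, carrying the completed square through (note the plus sign) confirms the answer posted above:
$$2x^{2}+2x+1 = 2\left(x+\tfrac{1}{2}\right)^{2}+\tfrac{1}{2},\qquad \int\frac{dx}{2x^{2}+2x+1}=\frac{1}{2}\int\frac{dx}{\left(x+\frac{1}{2}\right)^{2}+\left(\frac{1}{2}\right)^{2}}=\arctan(2x+1)+C$$
which matches $arc\,tg\,\frac{x+0.5}{0.5}+c$ given earlier in the thread.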
|
2015-07-05 21:15:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9702993035316467, "perplexity": 4359.802919926846}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097710.59/warc/CC-MAIN-20150627031817-00238-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://www.ias.ac.in/listing/bibliography/jess/Sugata_Hazra
|
• Sugata Hazra
Articles written in Journal of Earth System Science
• Development of grabens and associated fault-drags: An experimental study
Experiments on extensional faulting were performed with semi-brittle talc-sand beds resting on a ductile clay base. The experiments show that the development of graben in the talc-sand beds is controlled by the deformation in the ductile basement. Graben-like structures form only when there is a non-uniform stretching in the basement. Uniform extension at the basement level fails to produce any such structures. Grabens initiate as large synclinal structures (sag). The sag is generated either by a downward flexing of the talc-sand bed on a ductile basement or by non-uniform thinning of beds. Listric master faults bounding the grabens intersect the basement at high angles. The master faults that initiate as curved shear planes rotate further with continued extension. At the initial stage, the graben structures are associated with normal drags, and with progressive deformation, drag patterns change from normal to a reverse one.
• High-resolution satellite image segmentation using Hölder exponents
Texture in high-resolution satellite images requires substantial amendment of conventional segmentation algorithms. A measure is proposed to compute the Hölder exponent (HE) to assess the roughness or smoothness around each pixel of the image (a generic estimation sketch follows this listing). The localized singularity information is incorporated in computing the HE. An optimum window size is evaluated so that the HE reacts to localized singularity. A two-step iterative procedure for clustering the transformed HE image is adapted to identify the range of HE densely occupied in the kernel and to partition Hölder exponents into a cluster that matches with the range. Hölder exponent values (noise, or values not associated with the other cluster) are clubbed to the nearest possible cluster using local maximum likelihood analysis.
• Characterizing spatial and seasonal variability of carbon dioxide and water vapour fluxes above a tropical mixed mangrove forest canopy, India
The above canopy carbon dioxide and water vapour fluxes were measured by micrometeorological gradient technique at three distant stations, within the world’s largest mangrove ecosystem of Sundarban (Indian part), between April 2011 and March 2012. Quadrat analysis revealed that all the three study sites are characterized by a strong heterogeneity in the mangrove vegetation cover. At day time the forest was a sink for CO2, but its magnitude varied significantly from −0.39 to −1.33 mg m−2 s−1. The station named Jharkhali showed maximum annual fluxes followed by Henry Island and Bonnie Camp. Day time fluxes were higher during March and October, while in August and January the magnitudes were comparatively lower. The seasonal variation followed the same trend in all the sites. The spatial variation of CO2 flux above the canopy was mainly explained by the canopy density and photosynthetic efficiency of the mangrove species. The CO2 sink strength of the mangrove cover in different stations varied in the same way with the CO2 uptake potential of the species diversity in the respective sites. The relationship between the magnitude of day time CO2 uptake by the canopy and photosynthetic photon flux was defined by a non-linear exponential curve ($R^2$ ranging from 0.51 to 0.60). Water vapour fluxes varied between 1.4 and 69.5 mg m−2 s−1. There were significant differences in magnitude between day and night time water vapour fluxes, but no spatial variation was observed.
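Returning to the Hölder-exponent paper above, here is a generic, minimal sketch of the underlying idea (my addition, not the authors' algorithm; the window sizes and the `holder_exponent_map` name are arbitrary choices): the local oscillation of an image in a window of size r scales roughly like r to the power alpha, so the pointwise exponent can be estimated as the slope of log-oscillation against log-window-size.
```python
# Illustrative sketch only: per-pixel Hölder-exponent estimate from how the
# local oscillation (max - min in a window) grows with the window size.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def holder_exponent_map(image, sizes=(3, 5, 7, 9), eps=1e-6):
    image = np.asarray(image, dtype=float)
    log_r = np.log(np.array(sizes, dtype=float))
    # oscillation of the image inside each window size
    log_osc = np.stack([
        np.log(maximum_filter(image, s) - minimum_filter(image, s) + eps)
        for s in sizes
    ])
    # per-pixel least-squares slope of log(oscillation) vs log(window size)
    log_r = log_r[:, None, None]
    num = ((log_r - log_r.mean()) * (log_osc - log_osc.mean(axis=0))).sum(axis=0)
    den = ((log_r - log_r.mean()) ** 2).sum()
    return num / den   # small values = rough texture, larger values = smoother regions

# usage (hypothetical input): he = holder_exponent_map(band)  # 'band' is a 2-D array
```
A clustering step over the resulting exponent image, as in the abstract, would then partition rough and smooth regions.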
|
2019-06-20 09:41:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44466111063957214, "perplexity": 4395.6495714074945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999200.89/warc/CC-MAIN-20190620085246-20190620111246-00468.warc.gz"}
|
https://2020.ecoop.org/details/ecoop-2020-papers/25/Scala-with-Explicit-Nulls
|
ECOOP 2020
Sun 15 - Tue 17 November 2020 Online Conference
co-located with SPLASH 2020
The Scala programming language makes \emph{all} reference types \emph{implicitly nullable}. This is a problem, because $\texttt{null}$ references do not support most operations that do make sense on regular objects, leading to runtime errors. In this paper, we present a modification to the Scala type system that makes nullability \emph{explicit} in the types. Specifically, we make reference types \emph{non-nullable} by default, while still allowing for nullable types via \emph{union types}. We have implemented this design for explicit nulls as a fork of the Dotty (Scala 3) compiler. We evaluate our scheme by migrating a number of Scala libraries to use explicit nulls. Finally, we give a \emph{denotational semantics} of \emph{type nullification}, the interoperability layer between Java and Scala with explicit nulls. We show a soundness theorem stating that, for variants of System F that model Java and Scala, nullification (mostly) preserves elements of types.
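Not Scala, but a rough analogue of the idea using Python type hints (my addition; a checker such as mypy treats a plain annotation as non-nullable, and nullability must be opted into via a union with None):
```python
from typing import Optional

def greet(name: str) -> str:                    # `str` alone: non-nullable
    return "Hello, " + name.upper()

def find_user(user_id: int) -> Optional[str]:   # Optional[str] is Union[str, None]
    return "alice" if user_id == 1 else None

user = find_user(1)
# greet(user)          # a type checker rejects this call: `user` may be None
if user is not None:   # narrowing the union makes the call well-typed
    print(greet(user))
```
The union-type reading of `Optional` mirrors the abstract's use of union types for nullable references.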
|
2020-09-22 14:43:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4223836362361908, "perplexity": 4511.274333739931}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206133.46/warc/CC-MAIN-20200922125920-20200922155920-00047.warc.gz"}
|
https://openwetware.org/wiki/Physics307L:People/Ritter/Notebook/070917
|
# Physics307L:People/Ritter/Notebook/070917
## Speed of light
See link to last years lab manual for a complete lab description [1]
## Purpose
The purpose of this lab is, as the title suggests, to determine the speed of light. This will be done by measuring the time it takes for a signal to travel from an LED source to a PMT (more on both of these in the "Equipment and setup" section). This lab will build on what we learned in the previous lab on how to use an oscilloscope [[2]]. Now our oscilloscope will be triggered by a TAC, which will measure the delay between two input signals, a start and a stop, and output a signal proportional to the time it takes for our light signal to travel the length of our tube.
## Equipment and setup
see comment
Steven J. Koch 14:08, 25 September 2007 (EDT):Great description!
This lab was done along with Antonio Rivera [[3]], and a complete photo record of all equipment, as well as a setup description, may be found on his site.
A major portion of this lab involved setup. Given this was the first time this lab had been performed in a while, setup proved to be a rather extensive process. I have provided a listing of the major components involved.
1. LED - The first component of the setup was the low voltage source (which for the purposes of this lab was set between 150V and 200V) used to power our LED (for more on LEDs, visit [4] the Wikipedia page on light emitting diodes). The LED serves as a source of photons to be sent down an enclosed tube to our PMT. The "start signal" from the light source serves as the start time for our TAC.
2. PMT - At the other end of our tube is a PMT or photomultiplier tube. The PMT will take a photon emitted from our LED and convert it into an electrical signal based on its intensity. The PMT in this experiment is powered by a high voltage source set between 1800V and 2000V. The PMT signal will serve as the stop signal for our TAC. It is the difference between start and stop times that will provide our "time of flight" measurement for light.
3. Delay - Delay is necessary to provide adequate distance between the start and stop times received by the TAC. An initial assumption was that our signals would travel 1 ft in 1 ns. It was our hope to provide 10 ns of delay between the start time and the stop time. Initially we attempted to use a delay generator to provide adequate spacing for our TAC. It turned out that our delay generator was no good (see the lessons learned section for more). We then fell back on our assumption of 1 ft/1 ns and simply added more cable to the PMT side of our setup. This seemed adequate for the purposes of our experiment.
4. TAC - The time-to-amplitude converter is used to take the start and stop times of our experiment and produce an output voltage according to the magnitude of the delay between the two. This piece of equipment was the source of the most confusion in the setup process.
``` A. Set TAC up for an acceptable window of time. For the purposes of our lab we set it for
100ns.
B. Ensure a proper signal from the start and stop time signals, and also that they fall
within the 100ns time frame.
C. TAC signal is sent to an oscilloscope. Here the amplitude of the signal is read as a ratio
of the delay time divided by your preset delay window.
    Dt / Dw = Amplitude / 10 V
```
5. Oscilloscope - triggered by the TAC and used to measure the amplitude of our TAC signal as a voltage
## Data Collection
In our first attempts at collecting data, we found it difficult to find any consistency in our readings. Signals seemed to be very sensitive to the polarization of the PMT (which we were adjusting by hand).
see comment
Steven J. Koch 14:06, 25 September 2007 (EDT):Of course I do know you spent the majority of the time just getting things working! Obviously some more data points would help a lot. Some more comments: Even though we know the measurements on the meter stick are more precise and accurate than our TAC readings, you should still indicate somewhere your estimate of random error on meter stick (from parallax, etc.). E.g. "estimated +/- 0.5 cm for all measurements."
I am not sure what you mean by "zeroing" the PMT. The essential thing to try to do is rotate the PMT (and thus the polarizer) at each distance so that the magnitude of the PMT signal is the same for each TAC measurement. This isn't easy to do, though ... take a look at Anne's or Matthew D's notebook to see some data they obtained.
| measurement (cm) | TAC output (V) | travel time (ns) |
| --- | --- | --- |
| 75 | 3.8 | 38 |
| 95 | 3.7 | 37 |
| 55 | 3.9 | 39 |
Second set of data, taken by zeroing the output of the PMT rather than adjusting the polarization after every LED movement. This seemed like a much more accurate way of taking data. However, the large amounts of jitter on the oscilloscope made it very difficult to determine the exact output from the oscilloscope.
| measurement (cm) | TAC output (V) | travel time (ns) |
| --- | --- | --- |
| 55 | 2.16 | 21.6 |
| 80 | 1.98 | 19.8 |
see comment
Steven J. Koch 14:06, 25 September 2007 (EDT):"Off almost exactly by a factor of 2" = a statement about your discrepancy but not an estimate of your uncertainty. I do realize you ran out of time to take more data, but I will take off substantial points for not having an uncertainty estimate for your "practice grade." :)
The speed of light calculation from table 2 produced a value of 1.4 x 10^8 m/s, off almost exactly by a factor of 2.
Fortunately, in this lab the true value is known already. The speed of light is 299,792,458 m/s. This gives us 53% error in our data.
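As a cross-check (my addition, not part of the original notebook), the table 2 numbers and the 1 V = 10 ns TAC conversion (100 ns window over 10 V full scale) reproduce the figure above:
```python
# speed estimate from the two LED positions in table 2
d_cm  = [55, 80]        # measurement (cm)
v_tac = [2.16, 1.98]    # TAC output (V)

t_ns  = [10.0 * v for v in v_tac]                       # 21.6 ns, 19.8 ns
speed = abs(d_cm[1] - d_cm[0]) / 100.0 / (abs(t_ns[1] - t_ns[0]) * 1e-9)

print(f"estimated speed: {speed:.2e} m/s")              # ~1.4e8 m/s
print(f"percent error:   {abs(speed - 299792458) / 299792458:.1%}")  # ~53.7%, the ~53% quoted above
```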
## Error and Lessons Learned
see comment
Steven J. Koch 14:00, 25 September 2007 (EDT):Your "most important lesson" is a great lesson to have learned! The experiment using the "start" signal from the LED for both "start" and "stop" was a great idea. I agree that systematic error was the big problem ... with a little more time, I think you would have acquired data allowing you to obtain uncertainty estimates as well as get much closer to the true value. Great experimental work and good job blazing the trail for everyone on this lab!
The most important lesson learned came early on, and it is to simply test each piece of equipment separately. Initially my partner and I had the entire experiment set up. This would have been a huge disaster given our difficulties with the delay generator, the TAC and the start time signal.
The TAC was a difficult piece of equipment to use, and new to both of us. In dealing with new equipment, I learned that it is always a good idea to learn how to use it outside of the more complex setting of the experiment. At one point we removed the TAC. We T'd off the LED signal. Plugged one end into the start time input. We then plugged the other end into the stop time input with 10 feet of extra cable. This allowed us to prove that the TAC worked, that we were using it correctly, and also allowed us to check our 1ft of cable = 1ns of travel time assumption.
Given that our final result was off by a factor of 2, it seems obvious that there was some level of systematic error involved in our experiment, and more than likely it came from more than one source.
see comment
Steven J. Koch 14:00, 25 September 2007 (EDT):1) I think this is by far the biggest problem. It's tricky to do, but setting the PMT to exactly the same average level for each distance is critical.
2) Were you measuring the speed of light in a vacuum, though?
3)Good idea for next time
4) I think the LED start signal is supposed to be a negative pulse, but would like it to look a lot cleaner. From what I saw from you and others, though, is that the start pulse from LED wasn't the main source of problems (it was the PMT).
1. Inaccuracy in setting PMT output - The PMT output seemed to drastically affect the amplitude of the TAC signal. Becoming more precise in setting that at a consistent level is crucial to precision.
2. 1ft = 1ns - This didn't appear to be completely true. With our setup, a given length of approximately 10 ft produced a 13.4 ns delay time.
3. Data Points - Given time constraints we were limited in the amount of data we were able to take. It would seem that many more data points would be necessary. In addition, each data point we took was an averaged amplitude given by the oscilloscope. I would have liked to have recorded min and max points as well to help develop a better idea of the numerical value of the error involved.
4. Bad start time signal - We were never able to receive a good signal from the LED. Even though the LED was powered by a positive source the oscilloscope always gave a negative reading.
|
2017-12-11 06:11:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42436516284942627, "perplexity": 847.8136526834598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512208.1/warc/CC-MAIN-20171211052406-20171211072406-00591.warc.gz"}
|
https://stats.stackexchange.com/questions/61798/example-of-distribution-where-large-sample-size-is-necessary-for-central-limit-t?noredirect=1
|
# Example of distribution where large sample size is necessary for central limit theorem
Some books state that a sample size of 30 or higher is necessary for the central limit theorem to give a good approximation for $\bar{X}$.
I know this isn't enough for all distributions.
I wish to see some examples of distributions where even with a large sample size (perhaps 100, or 1000, or higher), the distribution of the sample mean is still fairly skewed.
I know I have seen such examples before, but I can't remember where and I can't find them.
• Consider a Gamma distribution with shape parameter $\alpha$. Take the scale as 1 (it doesn't matter). Let's say you regard $\text{Gamma}(\alpha_0,1)$ as just "sufficiently normal". Then a distribution for which you need to get 1000 observations to be sufficiently normal has a $\text{Gamma}(\alpha_0/1000,1)$ distribution. – Glen_b Jun 15 '13 at 14:13
• @Glen_b, why not make that an official answer & develop it a bit? – gung Jun 15 '13 at 14:28
• Any sufficiently contaminated distribution will work, along the same lines as @Glen_b's example. E.g., when the underlying distribution is a mixture of a Normal(0,1) and a Normal(huge value, 1), with the latter having only a tiny probability of appearing, then interesting things happen: (1) most of the time, the contamination does not appear and there is no evidence of skewness; but (2) sometimes the contamination appears and the skewness in the sample is enormous. The distribution of the sample mean will be highly skewed regardless but bootstrapping (e.g.) will usually not detect it. – whuber Jun 15 '13 at 17:02
• @whuber's example is instructive, showing that the central limit theorem can, in theory, be arbitrarily misleading. In practical experiments, I suppose one needs to ask oneself whether there could be some huge effect that occurs very rarely, and apply the theoretical result with a little circumspection. – David Epstein Jun 19 '13 at 7:40
Some books state that a sample size of 30 or higher is necessary for the central limit theorem to give a good approximation for $\bar{X}$.
This common rule of thumb is pretty much completely useless. There are non-normal distributions for which n=2 will do okay and non-normal distributions for which much larger $n$ is insufficient - so without an explicit restriction on the circumstances, the rule is misleading. In any case, even if it were kind of true, the required $n$ would vary depending on what you were doing. Often you get good approximations near the centre of the distribution at small $n$, but need much larger $n$ to get a decent approximation in the tail.
Edit: See the answers to this question for numerous but apparently unanimous opinions on that issue, and some good links. I won't labour the point though, since you already clearly understand it.
I am wanting to see some examples of distributions where even with a large sample size (maybe 100 or 1000 or higher), the distribution of the sample mean is still fairly skewed.
Examples are relatively easy to construct; one easy way is to find an infinitely divisible distribution that is non-normal and divide it up. If you have one that will approach the normal when you average or sum it up, start at the boundary of 'close to normal' and divide it as much as you like. So for example:
Consider a Gamma distribution with shape parameter $α$. Take the scale as 1 (scale doesn't matter). Let's say you regard $\text{Gamma}(α_0,1)$ as just "sufficiently normal". Then a distribution for which you need to get 1000 observations to be sufficiently normal has a $\text{Gamma}(α_0/1000,1)$ distribution.
So if you feel that a Gamma with $\alpha=20$ is just 'normal enough' -
Then divide $\alpha=20$ by 1000, to get $\alpha = 0.02$:
The average of 1000 of those will have the shape of the first pdf (but not its scale).
If you instead choose an infinitely divisible distribution that doesn't approach the normal, like say the Cauchy, then there may be no sample size at which sample means have approximately normal distributions (or, in some cases, they might still approach normality, but you don't have a $\sigma/\sqrt n$ effect for the standard error).
@whuber's point about contaminated distributions is a very good one; it may pay to try some simulation with that case and see how things behave across many such samples.
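To make the Gamma example above concrete, here is a minimal simulation sketch (my addition; the seed, shape value and replication count are arbitrary choices):
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
shape, n, reps = 0.02, 1000, 5000

# each row is one sample of size n; take the mean of every row
means = rng.gamma(shape, 1.0, size=(reps, n)).mean(axis=1)

print("simulated skewness of the sample mean:", stats.skew(means))
# Theory: the mean of n iid Gamma(shape) variables is a rescaled Gamma(n*shape),
# with skewness 2/sqrt(n*shape) = 2/sqrt(20) ~ 0.45 -- clearly not normal yet.
print("theoretical skewness:", 2 / np.sqrt(n * shape))
```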
In addition to the many great answers provided here, Rand Wilcox has published excellent papers on the subject and has shown that our typical checking for adequacy of the normal approximation is quite misleading (and underestimates the sample size needed). He makes an excellent point that the mean can be approximately normal but that is only half the story when we do not know $\sigma$. When $\sigma$ is unknown, we typically use the $t$ distribution for tests and confidence limits. The sample variance may be very, very far from a scaled $\chi^2$ distribution and the resulting $t$ ratio may look nothing like a $t$ distribution when $n=30$. Simply put, non-normality messes up $s^2$ more than it messes up $\bar{X}$.
• This is a good point to make; it's often not actually the mean that people deal with but some function of it and other things. However it's not only $s^2$ that can be messed up - you also lose independence of numerator and denominator, and that can have some surprising effects in the tails. – Glen_b Jun 16 '13 at 13:13
You might find this paper helpful (or at least interesting):
http://www.umass.edu/remp/Papers/Smith&Wells_NERA06.pdf
Researchers at UMass actually carried out a study similar to what you're asking. At what sample size do certain distributed data follow a normal distribution due to CLT? Apparently a lot of data collected for psychology experiments is not anywhere near normally distributed, so the discipline relies pretty heavily on CLT to do any inference on their stats.
First they ran tests on data that was uniform, bimodal, and one distribution that was normal. Using Kolmogorov-Smirnov, the researchers tested how many of the distributions were rejected for normality at the $\alpha = 0.05$ level.
Table 2. Percentage of replications that departed normality based on the KS-test.
| Distribution | n = 5 | n = 10 | n = 15 | n = 20 | n = 25 | n = 30 |
| --- | --- | --- | --- | --- | --- | --- |
| Normal | 100 | 95 | 70 | 65 | 60 | 35 |
| Uniform | 100 | 100 | 100 | 100 | 100 | 95 |
| Bimodal | 100 | 100 | 100 | 75 | 85 | 50 |
Oddly enough, 65 percent of the normally distributed data were rejected with a sample size of 20, and even with a sample size of 30, 35% were still rejected.
They then tested several heavily skewed distributions created using Fleishman's power method:
$Y = a + bX + cX^2 + dX^3$
Here X is a value drawn from the standard normal distribution, while a, b, c, and d are constants (note that a = -c).
They ran the tests with sample sizes up to 300
| Skew | Kurt | a | b | c | d |
| --- | --- | --- | --- | --- | --- |
| 1.75 | 3.75 | -0.399 | 0.930 | 0.399 | -0.036 |
| 1.50 | 3.75 | -0.221 | 0.866 | 0.221 | 0.027 |
| 1.25 | 3.75 | -0.161 | 0.819 | 0.161 | 0.049 |
| 1.00 | 3.75 | -0.119 | 0.789 | 0.119 | 0.062 |
They found that at the highest levels of skew and kurtosis (1.75 and 3.75), sample sizes of 300 did not produce sample means that followed a normal distribution.
Unfortunately, I don't think that this is exactly what you're looking for, but I stumbled upon it and found it interesting, and thought you might too.
• "Oddly enough, 65 percent of the normally distributed data were rejected with a sample size of 20, and even with a sample size of 30, 35% were still rejected." -- then it sounds like they're using the test wrong; as a test of normality on completely specified normal data (which is what the test is for), if they're using it right, it must be exact. – Glen_b Jun 15 '13 at 8:26
• @Glen_b: There are multiple sources of potential error here. If you read the document, you'll note that what is listed as "normal" here is actually normal random variates with mean 50 and standard deviation of 10 rounded to the nearest integer. So, in that sense, the test used is already using a misspecified distribution. Second, it still appears they have performed the tests incorrectly, as my attempts at replication show that for a sample mean using 20 such observations, the rejection probability is about 27%. (cont.) – cardinal Jun 15 '13 at 14:29
• (cont.) Third, regardless of the above, some software may use the asymptotic distribution and not the actual one, though at sample sizes of 10K this shouldn't matter too much (if ties had not been artificially induced on the data). Finally, we find the following rather strange statement near the end of that document: Unfortunately, the properties of the KS-test in S-PLUS limit the work. The p-values for the present study were all compiled by hand over the multiple replications. A program is needed to calculate the p-values and make a judgment on them compared to the alpha level chosen. – cardinal Jun 15 '13 at 14:32
• Hi @Glen_b. I don't believe the rounding will reduce the rejection rate here because I believe they were testing against the true standard normal distribution using the rounded data (which is what I meant by saying the test used a misspecified distribution). (Perhaps you were, instead, thinking of using the KS test on a discrete distribution.) The sample size for the KS test was 10000, not 20; they did 20 replications at sample size 10000 each to get the table. At least, that was my understanding of the description from skimming the document. – cardinal Jun 15 '13 at 23:02
• @cardinal - you're correct, of course, so perhaps that could be the source of a substantial chunk of the rejections at large sample sizes. Re: "The sample size for the KS test was 10000, not 20" ... okay, this is sounding increasingly odd. One is left to wonder why they'd think either of those conditions was of much value, rather than say the other way around. – Glen_b Jun 15 '13 at 23:08
|
2019-10-21 05:57:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7252841591835022, "perplexity": 488.8578931341578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987756350.80/warc/CC-MAIN-20191021043233-20191021070733-00142.warc.gz"}
|
http://experiment-ufa.ru/-8y+10=-6y-4
|
# -8y+10=-6y-4
## Simple and best practice solution for the -8y+10=-6y-4 equation. Check how easy it is, and learn it for the future. Our solution is simple and easy to understand, so don't hesitate to use it as a solution to your homework.
If it's not what you are looking for, type your own equation into the equation solver and let us solve it.
## Solution for -8y+10=-6y-4 equation:
Simplifying
-8y + 10 = -6y + -4
Reorder the terms:
10 + -8y = -6y + -4
Reorder the terms:
10 + -8y = -4 + -6y
Solving
10 + -8y = -4 + -6y
Solving for variable 'y'.
Move all terms containing y to the left, all other terms to the right.
Add '6y' to each side of the equation.
10 + -8y + 6y = -4 + -6y + 6y
Combine like terms: -8y + 6y = -2y
10 + -2y = -4 + -6y + 6y
Combine like terms: -6y + 6y = 0
10 + -2y = -4 + 0
10 + -2y = -4
Add '-10' to each side of the equation.
10 + -10 + -2y = -4 + -10
Combine like terms: 10 + -10 = 0
0 + -2y = -4 + -10
-2y = -4 + -10
Combine like terms: -4 + -10 = -14
-2y = -14
Divide each side by '-2'.
y = 7
Simplifying
y = 7
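A quick symbolic cross-check (added here for illustration; sympy is not part of the original page):
```python
from sympy import Eq, solve, symbols

y = symbols("y")
print(solve(Eq(-8*y + 10, -6*y - 4), y))   # -> [7]
```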
|
2018-06-19 18:02:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3409096896648407, "perplexity": 7850.77841459544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863109.60/warc/CC-MAIN-20180619173519-20180619193519-00197.warc.gz"}
|
https://premium-themes.forums.wordpress.com/topic/latex-display-does-not-work-with-the-theme/
|
# LaTeX display does not work with the theme
• Author
Posts
• #43159
pnique
Member
The blog I need help with is pabloecheniquerobba.com.
#43324
pnique
Member
I changed to my older theme and I realized that it converts the latex code onto:
http://s0.wp.com/latex.php?latex=p%28x%29dx+%3D+%5Cfrac%7B1%7D%7B%5Csigma%5Csqrt%7B2%5Cpi%7D%7D+%5Cexp+%5Cleft%28+-+%5Cfrac%7B%28x-%5Cmu%29%5E2%7D%7B2%5Csigma%5E2%7D+%5Cright%29dx&bg=#ffffff&fg=#383838&s=0
http://s0.wp.com/latex.php?latex=p%28x%29dx+%3D+%5Cfrac%7B1%7D%7B%5Csigma%5Csqrt%7B2%5Cpi%7D%7D+%5Cexp+%5Cleft%28+-+%5Cfrac%7B%28x-%5Cmu%29%5E2%7D%7B2%5Csigma%5E2%7D+%5Cright%29dx&bg=ffffff&fg=212121&s=0
The only difference seems to be in the lack of # in the colors directive.
Please fix, I paid 49€ for this, guys! ;)
#43355
upthemes
Member
Looking into this as well. Give us a bit to figure out what’s happening here. :-) Thanks for you patience.
#43357
pnique
Member
Thanks, you’ve been fast!
By the way, beautiful theme!
#43358
upthemes
Member
Thank you. Can you link up a post that uses LaTeX so I can see it?
#43359
pnique
Member
#43360
upthemes
Member
What happens if you embed a new LaTeX object into a post? Can you post one of the existing snippets here so I can see the format?
#43361
pnique
Member
In that same post, I have just added $latex \mu$ to the beginning, which is the way of including the mu symbol in LaTeX in wp. It also shows a black box at the moment, like the rest of items in the post.
#43363
upthemes
Member
I’m checking with the WordPress.com theme team to see if they have any insight as to why they would be displaying like that.
There is a support topic here to explain some of the LaTeX behavior:
http://en.support.wordpress.com/latex/#latex-colors
Let me speak to someone and get back to you ASAP.
#43368
pnique
Member
OK, thank you very much, but bear in mind that I did not use any explicit declaration of colours. The ones in my post seem added by default.
#43369
upthemes
Member
Pablo, we are working on a solution and will update this thread as soon as we have one in place. Thanks for your patience! The other fixes you’ve asked about will also be coming shortly. They are going through the review process right now.
#43371
pnique
Member
Great, thanks! :)
#43376
upthemes
Member
We figured out the issue and your LaTeX posts should all look right now. Thanks for your patience again! Hope you enjoy the theme.
The topic ‘LaTeX display does not work with the theme’ is closed to new replies.
|
2021-09-23 14:03:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44875243306159973, "perplexity": 858.198903922283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057424.99/warc/CC-MAIN-20210923135058-20210923165058-00404.warc.gz"}
|
https://socratic.org/questions/suppose-you-have-a-traingle-with-sides-a-b-and-c-using-pythagorean-theorem-what-#356684
|
# Suppose you have a triangle with sides: a, b and c. Using the Pythagorean theorem, what can you deduce from the following? i) a^2+b^2 = c^2 ii) a^2+b^2 < c^2 iii) a^2+b^2 > c^2
##### 1 Answer
Dec 26, 2016
Please see below.
#### Explanation:
(i) As we have ${a}^{2} + {b}^{2} = {c}^{2}$, the sum of the squares of the two sides $a$ and $b$ is equal to the square on the third side $c$. Hence, $\angle C$, opposite side $c$, will be a right angle.
Assume it is not so, and construct a second triangle $A ' B ' C '$ with a right angle at $C '$ and legs $B ' C ' = a$, $A ' C ' = b$. By the Pythagorean theorem, ${\left(A ' B '\right)}^{2} = {a}^{2} + {b}^{2} = {c}^{2}$, so $A ' B ' = c = A B$. The two triangles then have three equal sides, so they are congruent, and $\angle A C B = \angle A ' C ' B '$. Hence, $\angle A C B$ is a right angle and $\Delta A B C$ is a right-angled triangle.
Let us recall the cosine formula for triangles, which states that ${c}^{2} = {a}^{2} + {b}^{2} - 2 a b \cos C$.
(ii) As range of $\angle C$ is ${0}^{\circ} < C < {180}^{\circ}$, if $\angle C$ is obtuse $\cos C$ is negative and hence ${c}^{2} = {a}^{2} + {b}^{2} + 2 a b | \cos C |$. Hence, ${a}^{2} + {b}^{2} < {c}^{2}$ means $\angle C$ is obtuse.
Let us use Pythagoras theorem to check it and draw $\Delta A B C$ with $\angle C > {90}^{\circ}$ and draw $A O$ perpendicular on extended $B C$ as shown. Now according to Pythagoras theorem
${a}^{2} + {b}^{2} = B {C}^{2} + A {C}^{2}$
= ${\left(B O - O C\right)}^{2} + A {C}^{2}$
= $B {O}^{2} + O {C}^{2} - 2 B O \times C O + A {O}^{2} + O {C}^{2}$
= $B {O}^{2} + A {O}^{2} - 2 O C \left(B O - O C\right)$
= $A {B}^{2} - 2 O C \times B C = {c}^{2} - 2 O C \times B C$
Hence ${a}^{2} + {b}^{2} < {c}^{2}$
(iii) and if $\angle C$ is acute $\cos C$ is positive and hence ${c}^{2} = {a}^{2} + {b}^{2} - 2 a b | \cos C |$. Hence, ${a}^{2} + {b}^{2} > {c}^{2}$ means $\angle C$ is acute.
Again using Pythagoras theorem to check this, draw $\Delta A B C$ with $\angle C < {90}^{\circ}$ and draw $A O$ perpendicular on $B C$ as shown. Now according to Pythagoras theorem
${a}^{2} + {b}^{2} = B {C}^{2} + A {C}^{2}$
= ${\left(B O + O C\right)}^{2} + A {O}^{2} + O {C}^{2}$
= $B {O}^{2} + O {C}^{2} + 2 B O \times C O + A {O}^{2} + O {C}^{2}$
= $A {B}^{2} + 2 O C \left(C O + O B\right)$
= ${c}^{2} + 2 a \times O C$
Hence ${a}^{2} + {b}^{2} > {c}^{2}$
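A small numeric illustration of the three cases (my addition, with arbitrarily chosen side lengths), using the law of cosines quoted in the answer:
```python
import math

def angle_C_degrees(a, b, c):
    # law of cosines: c**2 = a**2 + b**2 - 2*a*b*cos(C)
    return math.degrees(math.acos((a*a + b*b - c*c) / (2*a*b)))

for a, b, c in [(3, 4, 5), (3, 4, 6), (3, 4, 4.5)]:
    s = a*a + b*b
    kind = "right" if s == c*c else ("obtuse" if s < c*c else "acute")
    print(f"a={a}, b={b}, c={c}: C = {angle_C_degrees(a, b, c):.1f} deg ({kind})")
```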
|
2022-07-02 02:49:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 46, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8131561875343323, "perplexity": 303.02779406870343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103983398.56/warc/CC-MAIN-20220702010252-20220702040252-00335.warc.gz"}
|
https://learn.careers360.com/ncert/question-find-the-second-order-derivatives-of-the-functions-given-in-exercises-1-to-10-x-raised-to-20/
|
# Find the second order derivatives of the functions given in Exercises 1 to 10. x ^ 20
Q2 Find the second order derivatives of the functions given in Exercises 1 to 10.
$x ^{20}$
Answers (1)
Given function is
$y=x ^{20}$
Now, differentiation w.r.t. x
$\frac{dy}{dx}= 20x^{19}$
Now, the second-order derivative is
$\frac{d^2y}{dx^2}= 20.19x^{18}= 380x^{18}$
Therefore, the second-order derivative is $\frac{d^2y}{dx^2}= 380x^{18}$
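A quick symbolic check of the result (illustrative only; sympy is not part of the original page):
```python
import sympy as sp

x = sp.symbols("x")
print(sp.diff(x**20, x, 2))   # -> 380*x**18
```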
|
2020-02-27 21:48:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.553941011428833, "perplexity": 2968.7243831360092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146809.98/warc/CC-MAIN-20200227191150-20200227221150-00072.warc.gz"}
|
https://www.mrunix.de/forums/showthread.php?75315-tcolorbox-breakable-multicol&s=f1d0ccb136012d433bf3489f54b41dd4&p=353789&mode=threaded
|
## tcolorbox - breakable - multicol
Apparently tcolorbox has a problem with page breaking inside
multicols environments. Is there a workaround?
Code:
\documentclass{minimal}
\usepackage{multicol}
\usepackage{tcolorbox}
\tcbuselibrary{breakable,skins}
\tcbset{breakable,enhanced}
\begin{document}
\vspace*{16cm}
\begin{tcolorbox}[colback=gray,colframe=black]
Here, you see the lower part of the box. Here, you see the lower
part of the box. Here, you see the lower part of the box. Here, you
see the lower part of the box. Here, you see the lower part of the
box. Here, you see the lower part of the box. Here, you see the
lower part of the box. Here, you see the lower part of the box.
Here, you see the lower part of the box. Here, you see the lower
part of the box. Here, you see the lower part of the box. Here, you
see the lower part of the box. Here, you see the lower part of the
box. Here, you see the lower part of the box. Here, you see the
lower part of the box. Here, you see the lower part of the box.
Here, you see the lower part of the box. Here, you see the lower
part of the box. Here, you see the lower part of the box. Here, you
see the lower part of the box. Here, you see the lower part of the
box. Here, you see the lower part of the box. Here, you see the
lower part of the box. Here, you see the lower part of the box.
Here, you see the lower part of the box. Here, you see the lower
part of the box. Here, you see the lower part of the box. Here, you
see the lower part of the box. Here, you see the lower part of the
box. Here, you see the lower part of the box.
\end{tcolorbox}
\newpage
\begin{multicols}{2}
\vspace*{16cm}
\begin{tcolorbox}[colback=gray,colframe=black]
Here, you see the lower part of the box. Here, you see the lower
part of the box. Here, you see the lower part of the box. Here, you
see the lower part of the box. Here, you see the lower part of the
box. Here, you see the lower part of the box. Here, you see the
lower part of the box. Here, you see the lower part of the box.
Here, you see the lower part of the box. Here, you see the lower
part of the box. Here, you see the lower part of the box. Here, you
see the lower part of the box. Here, you see the lower part of the
box. Here, you see the lower part of the box. Here, you see the
lower part of the box. Here, you see the lower part of the box.
Here, you see the lower part of the box. Here, you see the lower
part of the box. Here, you see the lower part of the box. Here, you
see the lower part of the box. Here, you see the lower part of the
box. Here, you see the lower part of the box. Here, you see the
lower part of the box. Here, you see the lower part of the box.
Here, you see the lower part of the box. Here, you see the lower
part of the box. Here, you see the lower part of the box. Here, you
see the lower part of the box. Here, you see the lower part of the
box. Here, you see the lower part of the box.
\end{tcolorbox}
\end{multicols}
\end{document}
|
2020-10-29 01:39:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9741586446762085, "perplexity": 380.0621689146275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107902683.56/warc/CC-MAIN-20201029010437-20201029040437-00080.warc.gz"}
|
http://overanalyst.blogspot.com/2010/03/
|
## Wednesday, March 31, 2010
### (yes, another batman reference.)
somehow i'm prone to making references to the dark knιght in my classes.
today i described the proof of one of l'hôpital's rules as similar to the first scene, where they have the bank robbery:
you can't rob a bank by yourself, so you need a gang -- say, another gunman who watches the crowd, as well as a getaway driver -- but say that you want to keep all the money to yourself ..
here, we are $x$; the getaway driver is $a$, and the other gunman is $c$, where we use a mean-value theοrem of the form
$$\frac{f(x)-f(a)}{g(x)-g(a)} \;=\; \frac{f'(c)}{g'(c)}.$$
let's just say that, to prove the theorem, we want to get rid of $a$ and $c$ .. (-:
the students seemed amused by it, even by the end of the proof.
## Tuesday, March 30, 2010
### life after conference.
in some departments, the life of a postdoc is a constant struggle against obstacles of all sorts, educational and bureaucratic. sometimes one must fight for one's research time.
ever since i returned from a conference, this past weekend, i've felt unproductive. catching up on lectures was troublesome enough.
yesterday, after office hours, i was ready to collapse. somehow i summoned the will to go home, and go on a 3mi road run.
sometimes one goes to great lengths, in order to avoid being called "lazy."
i've been warning my calculu∫ students that what comes next is hard stuff. i told them that vectοr calculu∫ is its own upper-level mathematιcs course, and requires time to learn.
in particular, i told them that we'll see generalisations of the fundameηtal theοrem of calculu∫. namely, when the geometry is not as simple as an interval and when you have different types of derιvative and ιntegral to use, the results are a little more confusing.
heck, even knowing when to use them is tricky. it took me a while, in my own education, to appreciate stοkes' theοrem. [1]
tomorrow i'll tell them about cοntour integrals, and give the fundamental theοrem a whirl .. literally.
i've already written up an example involving the spiral of archimedes! (-:
anyways, there's still work to do, tonight.
[1] oddly enough, i once had a student whose last name was stοkes. in the same class, there was another student named green. too bad it wasn't a calc 3 course. (-:
## Sunday, March 28, 2010
### conferences: the unease of giving talks.
this weekend is a conference in lexington, kentucky. day 1 of 2 is over -- a dozen or more talks, meals and coffee, research discussions -- and strangely enough, i don't feel sleepy.
some observations:
1. i think i've recast myself as someone who studies analysιs of PDE now, albeit on metrιc spaces. do one project, give one talk, and suddenly people think that you know what you're doing and suggest all sorts of project ideas.
2. after my talk was over, nobody had any questions .. which was unnerving, because it could mean plenty of things:
did they understand it all, think it overly obvious?
did no one understand, thereby making any question impossible?
this time was especially worrisome: i spent my usual non-teaching time this week either (a) preparing a midterm (+ extra office hours), (b) helping a graduate student prepare a talk, and (c) entertaining friends/overnight guests from out of town.
subsequently i finished writing my talk on the drive to the conference, and had no real time to practice it. there was a real chance that it could have bombed.
call me needy, but i was relieved when a colleague complimented me on the talk. i don't know whether he suspected that i wanted to hear it, but i was just glad he said so.
there's probably a moral in this, but i can't see it right away.
## Wednesday, March 24, 2010
### on writing talks (thoughts over a few days)
i can't remember exactly when i gave my first talk. it was probably about 8-9 years ago. i do remember that it was awful. so was the second one.
in the few years that i've been a mathematician, i've already lost track of the number of talks i've given. at some point in graduate school, i averaged at least 2 per semester, probably 3, and that doesn't count conferences.
i don't regret the experience. then again, planning a talk is like planning a trip:
the first few times you fly on an airplane, it's exciting and worrisome at the same time; you plan for everything. you're nervous at the airport. you wonder if you'll miss the connecting flight, even though it's a 1.5 hour layover.
then you get used to it, then you procrastinate a bit on a few trips, and one day you nearly miss a flight.
after a panic, eventually you settle down.
there's a necessary amount of planning to do, but now you know how much. the only time you worry is when you're traveling to a completely new destination and subsequently, having no idea how to get there or what to pack.
yesterday i started LaTeχing my talk;
i managed one slide, which sounds bad.
what is good, however, is that it completely determines at least 2/3's of what i want to discuss.
not having been trained in PDE, i am now paranoid about background. tonight i may do some reading.
supposedly it's good to work in several areas of research, but so far it's brought me nothing but trouble.
maybe it really boils down to # of papers, with a minimum # of them in good journals.
## Tuesday, March 23, 2010
### things that i don't know, but would like to know.
despite the fact that my work is related to the analysιs on metrιc spaces, there are plenty of theories that seem, at the moment, beyond my ken.
the first that comes to mind are spaces equipped with dιirichlet forms, especially those that arise from analysis on fractals, a la kιgami and strιchartz. even after having read a little, having some sense of this "resistance metric," it remains a mystery to me.
another concerns randοm walks on graphs and stοchastic games. probability is not my strong point; i'm afraid that i'll have to wait until the next life, for this one.
then there are these abstract wιener spaces. each time i "read" [1] about them, i learn something new:
D. Preιss proved in [P] that the density theorem for gaussιan measures is no longer true, at least if balls for the norm of E are involved; on the other hand, these balls are not natural in the differentιal calculus (Sobοlev and BV functions, integratιon by parts, etc.) in Wιener spaces, that involves only directions in H. For these reasons, we use H−Gateaux differentiability (i.e. Gateaux differentiability, along directions in H) of H−distance functions, in the same spirit of [Bo],[D].
(H denotes the "Camerοn-Martιn" space, which remains a mystery to me.)
from CV6MT: Stepanοv's Theorem in Wιener spaces - Preprint (2010) - Luigι Ambrοsio - Estibalιtz Duraηd Cartageηa
i've written before about how ρreiss's result is .. unnerving. it's intriguing to know that this theory of wιener spaces does address it!
[1] i read very few articles. browsing through the abstract and the introduction of a paper doesn't count as reading it. until you walk through a proof with some indication of interest, you aren't reading.
## Monday, March 22, 2010
### i knew it ..
last night i decided to set my alarm to 6am. i had a strange intuition that if i didn't, then there would be little/no opportunity to accomplish anything resembling research.
as it happens, i was right;
it's 6pm and i still haven't gotten much done .. \-:
(more on this, later.)
on a happier note, congratulations to my friend alan. i just learned that, this year, he won (jointly?) the sumner-myers prize in mathematics, at the U of M.
this is the abstract for his talk. (-:
Abstract: Motivated by geοmetry, we consider a `less dιscrete' way of counting lattιce points in pοlytopes, in which one assigns a certain `weight' to each lattιce point. On the combinatοrial side, this approach reveals some `hidden symmetry' which improves upon and makes transparent some classical results in εhrhart theory. On the geοmetric side, the cοmbinatorial invarιants count orbifοld Bettι numbers of torιc stacks. If time permits, we will discuss a generalization involving mοtivic integratιon, and Michιgan's 2010 foοtball prospects. Go blue!
congrats also to paul, but i happen to know alan a little better. (-:
## Friday, March 19, 2010
### thoughts, over a week or two.
these are little notes i jotted down over the last week or so. they never became outright blog posts, but i figured that i'd share them anyway.
When i think about it, i've always juggled several projects at once. it's just that i've never done it well.
I must be doing something wrong.
why does everything take so long?
i wrote the exam solutions for my analysis class in 10 minutes. on the other hand, it took an hour to write the motivations behind the solutions.
(to explain, i've been stressing how one thinks up a proof vs. how one writes a proof.)
i have no problem correcting students' proofs and indicating errors. however, it's hard to figure out how many points an error is worth.
it's hard to sense progress when writing. all the satisfaction is at the end, when it's done. but then that feeling is too much at once, leaving you tired and unmotivated, because there is no ready accomplishment that can warrant the same (excessively high) satisfaction.
## Wednesday, March 17, 2010
### in gambling, there is always a risk.
i've been writing.
there's one section left of the preprint that's not been fleshed out, but we have the theorem in mind. as for a proof,
1. there's a standard technique that works. it involves a cοvering theorem.
2. another, less standard technique, involves rescalιngs of space, which seems more intuitive to me.
excited by this, i read through a paper or two. for about a week, i thought that i was able to adapt that existing technique to our new-ish setting .. but i can't get the damned constants to work.
i can't see a way around it,
not without an additional, artificial hypothesis ..
.. so i'm letting it go.
it's another one of my attempts at "originality" which has fallen flat and won't ever get up. somehow i thought it could work, that i could do it, and that the time i invested wasn't really a risk.
well, i should have known better;
i should have played it safe.
[sighs]
it could be worse, i guess.
at least i found out only after a week;
at least there is something else to try.
## Tuesday, March 16, 2010
### i mean, it's just a constant ..
9 times out of 10, in analysis all one needs is an inequality, of some kind. usually the multiplicative constants are unimportant.
today is that one day, of all things,
that the actual constants matter ..
[sighs]
## Sunday, March 14, 2010
### blog suggestion: opinionator.
i don't read the new york times regularly, at least not in print form.
for me, the density of its prose takes a few weeks of readjustment; i'm never patient enough to make the commitment. [1]
on the other hand, there are a host of NYT blogs that are tremendous fun. of these, one is particularly geared for us mathmos and techies:
"opinionator" by steνen strοgatz
personal anecdote. in my first year of grad school, i ran supplementary problem sessions for a linear algebra class, and even subbed for one or two of its lectures, during hallowe'en [2]. one economics student was particularly interested in the subject, though she had taken little/no mathematics since her calculus days. we became pleasant acquaintances.
one day there was a distinguished mathematical lecture by strοgatz, who was an unknown to me at the time. we had snuck into the reception tea earlier. when the crowd started towards the lecture room, i suggested that we attend. in retrospect i think she agreed out of guilt -- cookies can do that to you -- but we enjoyed the talk immensely. it mixed well physical intuitions from nature and a little maths from beyond the classroom.
"this is great!" she said, "are all the mathematics colloquia this interesting?"
ummm .. (-:
at any rate, strοgatz is a fine expositor. he has this way of taking something simple but, in a seamless process, pointing out its depths.
for instance, consider arithmetic .. which is also considered in "rock groups." trivial stuff, right?
each of us can imagine arithmetic in the form of making patterns with little stones. however, a neat little pattern, such as
[image borrowed from NYT]
serves as a wonderful intuition: why sums of odd numbers give perfect squares.
sure, it's trivial when you have the picture. it's not research-level maths ..but still, it makes me smile. it makes me feel like a boy again, realising a little depth, learning these facts for the first time.
then there is empathy. some representations don't make sense, initially, because we may encounter matters beyond our intuition. for example, in "division and its discontents," strοgatz writes:
The bafflement began when Ms. Stanton pointed out that if you triple both sides of the simple equation
$\frac{1}{3} \;=\; 0.33333\ldots$
you’re forced to conclude that $1$ must equal $.9999\ldots$
At the time I protested that they couldn’t be equal. No matter how many 9’s she wrote, I could write just as many 0’s in $1.0000\ldots$ and then if we subtracted her number from mine, there would be a teeny bit left over, something like $.0000\ldots01$.
i remember experiencing the same sense of mystery, as well as trying to explain it to my calculu∫ 2 students.
"the reason why it was confusing back then is that, as children, we were ignorant of geοmetric series," i offered.
"put another way, it's probably the first time you were exposed to the notion of a limit, which isn't quite fair. for most of us, we learned decimals in grade school, whereas we learned about limits in our first calculus class."
anyway, i like the exposition in "opinionator." i like (re)discovering the depths in seemingly simple ideas.
[1] for the same reason, i don't often read novels. lately the only ones i've read are (i) suggestions from friends and (ii) those, upon inspection of their spines, i estimated would take at most one sitting to read.
[2] it's not hard to remember that day. somehow i procured some orange chalk for the occasion. (-:
## Friday, March 12, 2010
### on being a "young researcher" ..
it's odd, being one of a few postdocs in a mathematics department .. at least, from my own biased experience.
you see, when i was a student, my department ran rife with postdocs.
before the economic woes of fall 2008, there were at least a dozen hires, every year. [0]
in the research group that i joined, in one year there were 4, all in varying stages of a job. it made for well-attended research seminars .. and lively discussions afterwards, over mediocre pizza and drinks.
here, there are about a half-dozen postdocs. of them, only two are non-applied; i am one and the other is, roughly speaking, a logician/algebraist. we haven't so much in common, except we are few.
one could liken the change in departments to, say, becoming among the few remaining νulcans in a rebooted star trek universe.
luckily, i have plenty of people to talk to. i meet regularly with two faculty, my postdoctoral mentors. their students are promising; occasionally i meet with them, too.
sometimes, though, i miss talking to mathematicians that are my own "age."
• when i speak with tenured faculty, i am the slow, plodding one: i learn a lot, but it's a race to catch up with what they already know well.
• when i speak with graduate students, somehow i am the knowledgeable, patient one. [1] it becomes important to pace matters accordingly, ask them pre-questions before questions, sharpen the discussion ..
so regularly i feel woefully young and inexperienced, and other times, i am surprisingly seasoned and "old."
sometimes, for a change in scene and an exchange of ideas, i travel. i meet colleagues and friends. [2] i try to figure out who i still am, who i can be.
in this aspect, it is good to be young. it is easier to find travel funds, as a young researcher. i know this time will be short, so i may as well enjoy the perks!
[0] the pace of hiring may have remained the same; i don't know. since i left for my postdoc, i haven't returned there.
[1] yeah, i know: me, of all people. can you believe it? (-:
[2] maybe i should call people more often. in fact, i was on skype today, to talk to a collaborator. now i wonder what collaborations were like, before VoIP.
## Wednesday, March 10, 2010
### flight plans | collaborations vs. solo works.
every time i board a plane, i forget something. this time, it included toothpaste, contact lens solution, and a copy of one particular paper that i was planning to read.
frustrating!
i brought a secondary reference, a copy of my preprint in progress, and even two papers that were for "fun" .. that is, in case i felt like thinking about maths, but not about any of my own projects ..
but the one paper that really mattered: it had to be that one that i would forget.
over the first leg of my flight i tried to reconstruct the proof. i remembered to include one particular step, but couldn't remember exactly why it was necessary.
by the second leg of my flight, i had grown curious enough to look up the PDF version in my maths_papers folder of my laptop. with only 30 minutes left of battery power, there was enough time to read the proof and think it over. the LaTeXing would have to wait.
good enough. odds are that i wouldn't be able to write it well at the time, anyway.
on a related note, if all goes well i'll have four more collaborations in the works. that is good.
then again, it's been a while since i wrote a paper on my own.
in my recent work on schοenflies extensions, i can't separate which were my ideas and which were the advisor's.
i'm also loath to count the paper i cut from my thesis [1]; i can say with honesty that though the advisor pointed me to that direction of study, the driving ideas were almost all mine. i remember the advisor being confused at first, probably thinking i was crazy.. and one day, his eyes widened when he realised why exactly it would work.
you see, he was a hard man to surprise. i never thought that i'd be able to. (-:
anyways: been there, done that, what have i done lately, on my own? the question is still relevant: can i be successful, by myself?
i still feel like i don't know anything. i'm still not good at formulating problems. maybe that explains the collaborations. i'm nearsighted with details, and someone has to point me in a direction to start. \-:
maybe i'll try my hand at (geοmetric) measurε theοry again. i don't know much about it, but it's an area in which i can work alone, comfortably. if i prove something interesting enough [2], then maybe i'll finally write that paper about curreηts.
[1] in retrospect, maybe it would have been a good idea to break it up into two papers. it ended up being 35 pages long, and in two rough parts: (1) building some machinery and some euclidean results, and (2) answering a special case of a conjecture that everyone already knows is true, but with a different proof in mind.
it's been a year since i sent it to a journal, and supposedly it's gone to the referee. is it really that taxing to read? admittedly i needed three distinct theories to make it work, but .. come on. if (s)he thinks it's crap, then at least let me know now so that i can re-submit it!
[2] for now, i have a few theorems, but .. they're somewhat obvious. if i were a referee, i would reject a paper with only those contents.
## Monday, March 08, 2010
### holidays, of several sorts.
spring break commences. i'll be working and traveling .. perhaps even both at once, so i mightn't be updating for a few days.
so before i forget, let me repeat my mild distaste for pi day.
i simply cannot explain it rationally. admittedly, if i had to choose between e and π, then i'd choose the latter.
then again, this happens to be a day when all the math groupies gather together and geek out ..
.. and i've never been good in crowds;
they make me uneasy. [1]
some holidays are remembrances of important historic events, that characterise our identity.
i would not object to celebrating hιlbert's birthday, for example. the undergraduate math club actually celebrated cantοr's birthday [2] recently; props to them!
most of them, however, are just excuses to have a party. besides, i have always been a contrarian and a strange one;
i'll gladly throw a toga party on 15 march;
when i was in charge of the geο calendar, i would always try and sneak in festivus under the list of holidays and observances .. to no avail.
someone always took it out. \-:
so this year, let me announce it again: why not celebrate $\sqrt{10}$ day -- march 16th, instead? (-:
[1] this might well explain my erratic behavior at conferences: i don't talk to as many people as i should. i've been trying to improve on that.
[2] oddly enough, this coincides with golden ratio day, which is "january 62nd" (or march 3, in funny arithmetic).
## Friday, March 05, 2010
### blind spots.
i think my analysιs midterm today caused trauma to my students. afterwards, they asked me about part A of one problem:
"can you give us the counterexample?" they asked.
"sure," i said. "take the closed interνal $[0,\infty)$ and then .."
".. wait, that's a closed interval?" one asked, shrilly.
so i said yes. they inquired further, and we looked it up in the textbook.
"so is $\mathbf{R}$ an open interval?" another asked.
"actually, it's both open and closed," i pointed out. "there are no endpoints to test, so it has to be both."
"what?!?"
[sighs]
i thought that was one of the first things that anyone ever teaches students: open vs. closed, neither or both ..
oh well. i'll just grade the exams generously.
## Thursday, March 04, 2010
### a day in the life.
many things happened to me today, which was unusual.
1. i worked for a few hours on a joint preprint [0];
2. i held three office hours and administered 1 makeup exam;
3. i had two separate research meetings with members of my research group;
4. i decided not to go to india for the ICM, and accept the logical consequences.
5. i sat on a panel on a "life after graduate school" workshop, in which the subjects were how to apply for postdocs and write grants [1];
6. i was invited to give a colloquium!!!
[0] technically, a day consists of 24 hours. suffice it to say that i didn't get much sleep, last night.
[1] this time, i avoided sounding like an idiot. on the other hand, i think i ended up sounding paranoid and bitter.
## Wednesday, March 03, 2010
### names.
some of my analysιs students are finally calling me by my first name, which is a relief.
it's who i am,
how i imagine myself.
being called "dr. so-and-so" makes me sound either old or distinguished or a medical doctor, all of which seem .. wrong, to me.
"prof. so-and-so" is worse. i feel like a fraud, having convinced someone that i'm on the tenure track somewhere.
on a related note, being called "mr. so-and-so" feels businesslike.
i imagine some sort of imminent business transaction, like check-in at the airport counter or a credit card offer!
### "You either die a hero or you live long enough to see yourself become the villain."
earlier today one of my TAs stopped by my office to discuss a mishap during recitation.
immediately i thought of the ending to the dark knight.
i should probably now interpolate between those two sentences.
last week one of my quiz questions was numerically painful for the students, due to a typo or two: suddenly they had to take the sum of squares of two 2-digit numbers instead of two 1-digit numbers, and then some. [1]
subsequently a majority of the class was unable to complete the quiz in the allotted time.
i think most of them still haven't forgiven me.
so in some show of fairness, i wrote another quiz that same thursday, and told the students that they'll get another chance to take it again for a better grade.
on friday, i spotted a new typo; most of the steps would remain intact, but they wouldn't be able to reach a final answer. so i fixed it and emailed the new copy to the TAs ...
..
..
yeah, you can guess what happened.
actually, it gets better:
that one TA made the photocopies for another TA as well.
so i told the TA,
"look, my name is mud already.
you remember last week, right?
i'll just tell them that it was my mistake."
sometimes it's not about being a hero. sometimes we make mistakes, sometimes we don't, but someone has to deal with the pieces .. be the object of scorn .. say, a dark knight. \-:
[1] if you must know, it was something like $\sqrt{28^2 + 33^2}$.
## Monday, March 01, 2010
### when a lecture has no worth, say what you want [EPILOGUE ADDED].
i'm hard-pressed to think of what to discuss today, in my analysιs class. we've covered what i wanted out of chapter 5 already. running time backwards,
next week: spring break
friday: midterm / "spring break eve"
wednesday: review session
so .. today?!?
if i start chapter 6, then i'll just have to review half of it after break .. which will over-test my patience.
if i give a pre-review session, then that might help. then again, review sessions are already enough teeth-pulling ..
in that everyone has questions,
so why have two of them?
i don't want to cancel class either;
that sets a tricky precedent for future pre-exam weeks .. \-:
ultimately, today's lecture isn't worth anything, in terms of the syllabus. maybe i'll just talk about what i want to talk about --
a supplementary lecture, like unifοrm convergence,
power series as continuous functions,
and why sine and cosine are continuous, after all
-- and just tell the students that they are not responsible for remembering it, i.e. that it will never come up on any homeworks, quizzes, or exams.
at least that would stop them from worrying. heck, with the stress off, they might even listen more attentively than usual .. (-:
- added: 2 mar '10 -- 12:55am -
well, on the whole that was a foolish idea. i don't think the students got much out of it. i should have held an impromptu review session, instead.
my greatest gaffe was telling the students that they didn't need to take notes. it didn't occur to me how quickly they would lose interest.
on a related but disturbing note, my students had the same looks on their faces as my audiences, when i give conference talks. maybe i have this effect on everyone.. \-:
https://uniskills.library.curtin.edu.au/digital/spss/extras/
# Introduction to SPSS
This part of the module covers a few extra tips and tricks you may find helpful when analysing data in SPSS. In particular, it covers the following (use the drop-down menu above to jump to a different section as required):
• Using subsets of the data
• More data transformations
• Multiple response data
• Using SPSS syntax
The examples covered make use of the Household energy consumption data.sav file, which contains fictitious data for 80 people based on a short ‘Household energy consumption’ questionnaire. If you want to work through the examples provided you can download the data file using the following link:
If you would like to read the sample questionnaire for which the data relates, you can do so using this link:
Before commencing the analysis, note that the default is for dialog boxes in SPSS to display any variable labels, rather than variable names. You may find this helpful, but if you would prefer to view the variable names instead then from the menu choose:
• Edit
• Options…
• Change the Variable Lists option to Display names
## Using subsets of the data
While the default in SPSS is for all of the cases in the data file to be processed every time, this doesn’t mean you need to have separate data files for each little subset of cases in order to process them separately. Instead, you can use a filter to select and process particular subsets of your data file as required.
As an example, suppose that for reporting purposes there is a need to analyse just the female responses - temporarily ignoring the other data. To select this subset of data, choose the following from the SPSS menu (either from the Data Editor or Output window):
• Data
• Select cases…
Then to select cases according to certain criteria (e.g. if they are female):
• Select If condition is satisfied
• Click If…
The expression that defines the required condition in this case is that the ‘q2’ variable (the gender variable) is equal to the value 2 (the code representing female). To define this:
• move the ‘q2’ variable into the box and add in the = 2
Next:
• Click on Continue
• Click on OK
In the Data View of the Data Editor window the cases that do not satisfy the selection criteria (i.e. those of other genders) will now not be visible, as they have been temporarily filtered out (or the row numbers will have a line through them, depending on the version of the software). Any analyses now will only report on the selected cases – the females.
For example, to find out how many females are in each of the different categories for the ‘q8’ variable (which relates to consumption reduction), run the Frequencies procedure (as described in the Descriptive statistics page of this module) on the available data. Note that the number of cases reported in the output should be 69, the number of females, and not the 80 that constitutes the full data file.
When all of the analysis of the female only data has been completed, another subset can be isolated by going through the Select Cases process again if required. Alternatively, to revert back to the whole data file don’t forget to turn the selection/filter off! To do this, choose the following from the SPSS menu (either from the Data Editor or Output window):
• Data
• Select cases…
• select All cases
• click on OK
All 80 cases are available for processing again once the selection has been turned off.
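If you prefer to work with syntax instead of the dialogue boxes (syntax is covered in the 'Using SPSS syntax' section further down this page), the selection can be turned on and off with commands along the following lines. Note this is only a rough sketch of what the dialogue boxes paste, and the filter variable name filter_$ is simply a common convention:

* Select the female cases only (q2 = 2).
COMPUTE filter_$ = (q2 = 2).
FILTER BY filter_$.
EXECUTE.
* Run any analyses on the selected cases here.
* Then turn the selection off again.
FILTER OFF.
USE ALL.
EXECUTE.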
## More data transformations
While the most common types of data transformations are explained in the Transformations page of this module, this section looks at two additional, specific examples.
### Converting a string variable to a numeric variable
Although SPSS does allow alphabetic/string information to be entered as part of the data file, the more in-depth statistical analysis procedures require numeric data only (even if those numbers are simply codes or values representing categories).
At the questionnaire design stage it may be very difficult to anticipate the responses that will be given though, so creating a tick-box type question can be too complicated or restrictive. Hence allowing open-ended responses may be preferable instead, and the choice then is to either numerically code the data before keying it in, or to recode the responses once they have been entered into SPSS. This section details how to do the latter using the Automatic Recode and Recode into Different/Same Variable procedures, and uses the ‘q13’ variable in the sample data file as an example. This variable stores participant responses to the question:
What kind of hot water system do you use at your property?
The variable is defined as String under Variable View, and is a nominal variable. A frequency table of the responses is as follows:
This output shows only five different types of hot water systems, but because of different spelling and terminology and different use of upper and lower case characters, twelve different responses are listed. To reduce this twelve down to the real five, the different categories need to be combined (i.e. recoded).
To complete the first part of this two-step process, from the menus choose:
• Transform
• Automatic Recode…
Now in the dialogue box that opens:
• move the variable (‘q13’) into the Variables box
• enter a name for the new variable in the New Name box (for example ‘q13_autorecode’)
• click on Add New Name
• select Treat blank string variables as user-missing (so that no category is created for these)
• select OK
The resultant output should be as follows:
Note that the original responses have been sorted into alphabetical order and assigned a value from 1 to 12. The original data has been used to create the Value Labels for those values and all this has been put into a new variable at the end of the data file called ‘q13_autorecode’.
The second step of the process is then to reduce these 12 categories to the 5 required ones, using the standard Recode into Different Variables command (or you could use the Recode into Same Variables command in this instance if preferred). Instructions on how to do this are provided in the Transformations page of this module. In this case, the existing and new categories could be as follows:
| Existing category | New category |
| --- | --- |
| 1 (electric) | 1 (Electric) |
| 2 (electric) | 1 (Electric) |
| 3 (gas instant) | 2 (Instantaneous gas) |
| 4 (Gas instant) | 2 (Instantaneous gas) |
| 5 (Gas instantaneous) | 2 (Instantaneous gas) |
| 6 (gas storage) | 3 (Gas storage) |
| 7 (Gas storage) | 3 (Gas storage) |
| 8 (Heat pump) | 4 (Heat pump) |
| 9 (Hot water heat pump) | 4 (Heat pump) |
| 10 (solar) | 5 (Solar) |
| 11 (Solar) | 5 (Solar) |
| 12 (Solar hot water) | 5 (Solar) |
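If you are working in syntax (see the 'Using SPSS syntax' section below), the two steps could also be written roughly as follows; the new variable name 'q13_recoded' and the exact groupings are just the ones assumed for this example:

AUTORECODE VARIABLES=q13 /INTO q13_autorecode /BLANK=MISSING /PRINT.
RECODE q13_autorecode (1,2=1) (3,4,5=2) (6,7=3) (8,9=4) (10,11,12=5) INTO q13_recoded.
VALUE LABELS q13_recoded 1 'Electric' 2 'Instantaneous gas' 3 'Gas storage' 4 'Heat pump' 5 'Solar'.
EXECUTE.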
### Computing a new variable by adding or averaging existing variables
Part of the aim of the energy consumption questionnaire is to determine how satisfied the participants are with their electricity provider. Rather than asking this as a single question though, the information is collected through four questions relating to different aspects of the service. As these questions all use the same rating system (measured on a scale of 1 to 5, with 1 indicating ‘Very unsatisfied’ and 5 indicating ‘Very satisfied’), the four variables representing these questions (‘q9’ through to ‘q12’) can be combined to come up with an overall satisfaction score.
One way of doing this is by adding all the variables together to create a score out of 20, which can be done using the Compute variable procedure as described in the Transformations page of this module. This time, you could enter the new variable name Overall_satisfaction and the numeric expression q9+q10+q11+q12:
After clicking OK , the new variable should appear at the end of the data file.
If you run the Frequencies procedure on this new variable (as described in the Descriptive statistics page of this module) you will see that there are only 78 satisfaction scores, whereas there are 80 cases in the data file. Looking at the actual data reveals why; the data in row 30 is missing for all four of the variables ‘q9’ to ‘q12’, and the data in row 31 is missing for variables ‘q10’ and ‘q12’. Since the numeric expression shown above only calculates new values for those cases that have complete data, the new variable has not been computed for rows 30 and 31.
Sometimes this will be what you want, but other times you will require data for the new variable regardless of whether some of the data is missing or not (note that if there is missing data for all the variables, there will automatically be missing data for the new variable). To do this simply requires that a different numeric expression is used within the Compute variable procedure, which makes use of the sum function.
For example, you could alter the numeric expression for the variable you have just created to sum(q9 to q12) (note that the word ‘to’ can be used between the variables in this case as they occur one after the other in the data file; if this isn’t the case, you would need to list the variable names separated by commas instead):
With this new numeric expression, there is now a value for the ‘Overall_satisfaction’ variable in row 31.
You may also like to experiment with other formulas. For example, if you wanted to calculate an average overall satisfaction score instead you could also try using two different, similar numeric expressions:
• The numeric expression (q9 + q10+ q11 + q12)/4 will again have missing data for both rows 30 and 31.
• The numeric expression mean(q9 to q12) (or mean(q9, q10, q11, q12) if the variables are not in order) will have missing data only for row 30. For row 31, the average will be calculated by dividing by 2 instead of 4, since there are only two variables with data.
Regardless of which formula you choose to use, the new variable can then be analysed in the usual way.
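In syntax terms, the different numeric expressions discussed above correspond to commands along the following lines (the sum version reappears in the 'Using SPSS syntax' section below); the new variable names are only examples:

COMPUTE Overall_satisfaction = q9 + q10 + q11 + q12.
COMPUTE Overall_satisfaction_sum = SUM(q9 TO q12).
COMPUTE Overall_satisfaction_mean = MEAN(q9 TO q12).
EXECUTE.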
## Multiple response data
The sample questionnaire provided contains two questions with multiple parts: question 15, which asks whether the participant owns any heating or cooling products and prompts them to list up to three if so; and question 16, which asks the participant whether or not they own five different items.
While the data for each part of these questions is required to be stored in a separate variable (for example ‘q15’, ‘q15.1’, ‘q15.2’ and ‘q15.3’; and ‘q16.1’, ‘q16.2’, ‘q16.3’, ‘q16.4’ and ‘q16.5’), often the data needs to be analysed together in sets. These are known as multiple response sets in SPSS, and this section explains how to create, analyse and display them.
### Creating and using multiple response sets
There are two ways of creating multiple response sets in SPSS. One of the ways (the Multiple Response option in the Analyze menu) does not retain the sets between SPSS sessions. The other does as long as the data file is saved again once they have been created; it is this latter way that is used in this example. There are two ways to access this method using the menu options in SPSS, the first of which is by selecting:
• Data
• Define Multiple Response Sets…
The second way is by selecting:
• Analyze
• Tables
• Multiple Response Sets…
Either way, you can then create sets in the Define Multiple Response Sets window. For example, to create a set containing the ‘q15.1’, ‘q15.2’ and ‘q15.3’ variables (in order to analyse all of the specified heating and cooling methods together) do the following:
• move the three variables containing the heating and cooling methods (‘q15.1’, ‘q15.2’, and ‘q15.3’) into the Variables in Set panel
• click on Categories (because the responses were coded into different categories)
• give the set a Name (for example ‘q15methods’)
• give the set a Label (for example ‘Heating and cooling methods’)
The set for question 16 can be created at the same time, but this time the variables are dichotomous and the answers of interest are the ‘Yes’ ones (coded 1). You can create this set as follows:
• move the five variables relating to question 16 into the Variables in Set panel
• click on Dichotomies
• enter the counted value as 1
• give the set a Name (for example ‘q16’)
• give the set a Label (for example ‘Items owned’)
Both sets will now be listed in the Multiple Response Sets panel, so now:
• Click on OK
Once you have created the multiple response sets some output will appear in the results window (not shown here) detailing the variables used. The two sets will not be visible in the data file, except as the separate variables making up the sets, but they are set up for use in any of the Tables procedures. The sets will be retained for this use if the data file is saved before ending the SPSS session.
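The equivalent syntax uses the MRSETS command, roughly as follows; note that in syntax the set names are prefixed with a $ sign, and the names and labels shown are simply the ones chosen above:

MRSETS
 /MCGROUP NAME=$q15methods LABEL='Heating and cooling methods'
  VARIABLES=q15.1 q15.2 q15.3
 /MDGROUP NAME=$q16 LABEL='Items owned'
  VARIABLES=q16.1 q16.2 q16.3 q16.4 q16.5 VALUE=1
 /DISPLAY NAME=[$q15methods $q16].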
To use these sets in Custom Tables, from the menus choose:
• Analyze
• Tables
• Custom Tables
• click on OK
The Custom Tables dialogue box is arranged differently to most other procedures in that it has a ‘Canvas’ area where specifications are dragged and dropped to build the required table. The concept is similar to using the Chart Builder for producing graphs.
The multiple response sets that have been defined will be listed after the variables in the panel on the left hand side. The icon with four squares depicts a set with categorical variables while the one with two squares is for a dichotomous set.
To create a frequencies table for the ‘q15methods’ set:
• click on ‘q15methods’
• drag and drop it into the Rows panel
A mock up of the table will appear in the canvas, which should look like the following (note there are no percentages or totals provided automatically, but you can add these as detailed below):
To include percentages on the table:
• click on Summary Statistics…
• click on the plus sign next to Column Percent
• select Column N %
• use the arrow head to move it into the Display box
• click on Apply to selection
• click on Close
To include a total on the table:
• click on Category and Totals…
• select Total in the Show box on the right hand side
• click on Apply
The canvas will now show the table with percentages and a total included.
• click on OK
The resultant table should look like this:
Note that the percentages are automatically based on the number of valid cases, i.e. those people who answered the question by listing at least one heating or cooling method.
The tables for dichotomous multiple response sets are created in exactly the same way.
To create a two-dimensional table, similar to a Crosstabs table, the second variable will need to be dragged and dropped into the Columns panel. Row or column percentages can then be chosen as required.
### Graphing multiple response data
Once multiple response sets have been defined they can be used to create graphs in the Chart Builder, in the same way as for variables.
To create a Bar chart of the multiple response set ‘q15methods’, for example, from the menus choose:
• Graphs
• Chart Builder
• click on OK to confirm that all variables have been defined appropriately (if required)
• drag the simple bar chart image onto the canvas
• drag the ‘q15methods’ set and drop it on the X axis
By default, the Y axis will display the count for each category. To change this to response percentages (i.e. the percentage of respondents who selected each category) make the following change in the Element Properties dialogue box:
• choose Response Percentage from the drop down list in the Statistics area
• click on OK
The resultant chart should look something like the following:
It can be edited using the Chart Editor, as detailed in the Charts page of this module.
## Using SPSS syntax
SPSS syntax is a command language that is unique to SPSS. Rather than using the SPSS menus and dialogue boxes to perform procedures, as per the examples in this module, a syntax file can be used to write and then run commands. While this may seem a daunting prospect if you are not familiar with command languages, note that you do not have to write your own commands from scratch in order to create a syntax file unless you want to. In fact, you can create commands in a syntax file by doing any of the following:
• pasting commands that you have built up using the menus and dialogue boxes
• copying and pasting commands that are displayed in your output file
• creating your own commands from scratch by typing them directly into the syntax file
• editing commands that you have already pasted in order to create new commands
• a combination of any or all of the above.
There are many benefits to using a syntax file, some of which are that it:
• provides a record of all the procedures you have performed
• means you have a set of commands that can be repeated regularly, such as for monthly reports
• allows you to build a set of instructions that can be used to test different portions of data as it arrives
• is generally quicker, in particular when performing a large number of procedures.
This section details a few of the different ways you can create and run commands in a syntax file.
### Rules for syntax
Before we look at some of the ways to create commands in a syntax file, note that there are a few basic rules and guidelines to follow when creating or editing syntax. These are as follows:
• Every new command should start on a new line.
• It is best to start the first word of a new command on the far left hand side of the line. If the command continues over subsequent lines, these should be indented at least one space (a tab can be used to make the first line of each command really stand out).
• The commands are not case sensitive, so you can use upper or lower case or a mixture of both.
• Generally the commands can be abbreviated to the first four letters as long as they are unique to that command. Variable names cannot be abbreviated though.
• Commas or spaces can generally be used to separate names of variables in a list, or the word ‘to’ can be used if the variables occur one after the other in the data file.
• The end of each command must be signaled by a full stop.
• You can leave blank lines between commands to aid readability, but must not embed a blank line within a command.
• You can include comments by starting a line with at least one asterisk (*) and finishing with a full stop.
### Creating a syntax file
You can paste commands into a syntax file from an SPSS dialogue box rather than actually running the procedure. For example, to paste the command for creating a frequency table for the ‘q2’ variable into a syntax file, choose the following from the SPSS menu:
• Analyze
• Descriptive Statistics
• Frequencies…
• Click on the ‘q2’ variable
• Click on the arrow head to move it to the right hand box
• Click on Paste.
The procedure will not be executed, and instead a new syntax file will open which contains the relevant command. It should look like the following:
FREQUENCIES VARIABLES=q2
/ORDER=ANALYSIS.
(Note that the syntax file may also start with a command stating which data set is being used. For example, ‘DATASET ACTIVATE DataSet1’. This is not required if you only have one data set open, but if you have more than one you will need to ensure that this command is included and that it refers to the correct data set.)
The first line of the ‘FREQUENCIES’ command tells SPSS that we want to obtain frequencies for the variable ‘q2’. The second line relates to a default setting of this command, and these are often included when you paste from a dialogue box (typically they relate to handling of missing data or the choices under the ‘options’ button). As a general rule of thumb, if you didn’t have to click on something to get it you don’t need to specify it in the syntax file because it is the default anyway, which is the case here. Hence we can remove this line from the command, as long as we put a full stop at the end of the first line instead. The command now becomes:
FREQUENCIES VARIABLES=q2.
Once this or any other command is in the syntax window it can be edited or copied, pasted and edited as required. For example, you could copy and paste the command then edit it to request frequency tables for variables ‘q14’ and ‘q15’ at the same time, as follows:
FREQUENCIES VARIABLES=q14 q15.
Syntax commands can be included as part of your output file when you perform procedures, in which case you can simply copy and paste them into a syntax file. If the commands are not included in your output file already, you can request this by selecting the following from the SPSS menu:
• Edit
• Options…
• click on the Viewer tab
• select Display commands in the log
• click on OK
All the commands that you run, either from the syntax or through dialogue boxes, will now be listed as part of your output file. This can be an easy way of learning what the syntax commands look like and it can be a great way of trying something, examining the output, and only creating the syntax when you have achieved exactly the desired result.
As an example, run an independent samples (t) test to see if there is a significant difference in the mean summer daily energy consumption between those who do and don’t own a swimming pool. You can do this by choosing the following from the SPSS menu (refer to the Inferential statistics page of this module for more information on this test):
• Analyze
• Compare means
• Independent-Samples T-Test
• move the continuous variable (‘q6’) into the Test Variable(s) box
• move the categorical variable (‘q16.4’) into the Grouping Variable box
• click on Define Groups…
• keep Group 1 as category 1 and Group 2 as category 2 and select Continue
• click on OK
The resultant syntax output in the output file (above the tables) should look something like:
T-TEST GROUPS=q16.4(1 2)
/MISSING=ANALYSIS
/VARIABLES=q6
/ES DISPLAY(TRUE)
/CRITERIA=CI(.95).
You can then copy and paste this command into an existing or new syntax file (to create a new one for this purpose if required, go to the File menu and choose New and then Syntax). Either way, once you have a syntax file open you can transfer the command to it as follows:
• double click on the piece of syntax to select it
• highlight and copy the command (CTRL+C or right click and select Copy)
• navigate to the relevant syntax file
• paste the command on a new line in the syntax file (CTRL+V or right click and select Paste)
The new command can then be copied and edited in the same way as any other command. In particular, note that the command may include default settings (typically relating to handling of missing data or the choices under the ‘options’ button). As a general rule of thumb, if you didn’t have to click on something to get it you don’t need to specify it in the syntax file because it is the default anyway, and these lines can be removed. For example, the following lines can be removed from this particular command:
/MISSING=ANALYSIS
/ES DISPLAY(TRUE)
/CRITERIA=CI(.95).
Just make sure, as always, that you put a full stop at the end of the edited command. In this case the new command should be as follows:
T-TEST GROUPS=q16.4(1 2)
/VARIABLES=q6.
To create a new syntax file from scratch, choose the following from the SPSS menu:
• File
• New
• Syntax
You can now write your own commands in the syntax file according to the rules and suggestions detailed previously. Note that if you know what command to use but are not sure of the exact format required, you can type the command name then click on the Syntax Help icon at the top of the syntax window:
Information about that command will then be provided to you in the online documentation, which will hopefully allow you to proceed with creating the command.
As an example, you could write a syntax command to compute a new variable called ‘Overall_satisfaction’ by using the sum function on the four variables q9 to q12 (as in the More data transformations section of this page). This command would be as follows (note that the word ‘to’ can be used between the variables in this case as they occur one after the other in the data file; if this isn’t the case, you would need to list the variable names separated by commas instead):
compute Overall_satisfaction = sum(q9 to q12).
You could also add an additional command to create a label for this variable, as follows:
variable labels Overall_satisfaction Overall satisfaction with electricity provider.
You could then create a command to display the descriptive statistics for this variable, as follows:
descriptives variables = Overall_satisfaction.
Next, you could recode the ‘Overall_satisfaction’ variable into a new categorical variable called ‘Satisfaction_grouped’. This variable could have two categories based on the mean ‘Overall_satisfaction’ value of 15.04; one category could consist of people with ‘Overall_satisfaction’ values below the mean, and the other could consist of people with ‘Overall_satisfaction’ values above the mean. The required commands to do this, as well as to create labels for the variable and for the categories, are as follows (note the use of the syntax ‘lo’, ‘thru’ and ‘hi’ when creating the categories):
recode Overall_satisfaction (lo thru 15.04 = 1)(15.04 thru hi=2) into Satisfaction_grouped.
variable labels Satisfaction_grouped Overall satisfaction with electricity provider (grouped).
value labels Satisfaction_grouped 1 'Below the mean' 2 'Above the mean'.
Finally, you could create a crosstabulation for the ‘q3’ variable and the new ‘Satisfaction_grouped’ variable, with row and column percentages and the Chi-square statistic, in order to test whether there is any association between having children and the satisfaction grouping. The command to do this is as follows:
crosstabs tables= q3 by Satisfaction_grouped
/cells = count row col
/statistics=chisq.
### Running syntax commands
Once you have added a command or commands to your syntax file you will need to run them in order to have the procedures performed. You can do this using the ‘Run’ menu in the syntax file, or by pressing the Run Selection icon (a green triangle). The options in the ‘Run’ menu are as follows:
• All - this runs all the commands that are in the current open syntax window.
• Selection - this runs only the command(s) that have been highlighted in the syntax window, or the command where the cursor currently is.
• To End - this runs the commands from where the cursor currently is to the end of the syntax file.
• Step Through - this runs the commands one at a time starting from the first command in the syntax window (Step Through From Start) or from where the cursor currently is (Step Through From Current). After a given command has run the cursor advances to the next command and you can continue the step through process by selecting Continue.
Note that pressing the Run Selection icon is equivalent to choosing Selection from the menu.
You might notice that if you only run a command to compute or recode a new variable, SPSS won't actually carry out the transformation in your data file until the result is needed (e.g. until you use the new variable in a statistical procedure). Until this time, the message 'Transformations pending' will appear along the bottom of the various SPSS windows.
To make the transformation actually happen, you can do any of the following:
• go to the Transform menu and click on Run pending transformation
• run a procedure that uses the new variable, such as the descriptives procedure
• include the command execute after the command for computing or recoding the variable, then run both.
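For example, the following pair of commands (re-using the variable from earlier on this page) computes the new variable and then forces the pending transformation to run straight away:

COMPUTE Overall_satisfaction = SUM(q9 TO q12).
EXECUTE.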
http://ifigenia.org/index.php?title=Intuitionistic_fuzzy_sets&oldid=7789
# Intuitionistic fuzzy sets
Intuitionistic fuzzy sets are sets whose elements have degrees of membership and non-membership. Intuitionistic fuzzy sets have been introduced by Krassimir Atanassov (1983) as an extension of Lotfi Zadeh's notion of fuzzy set, which itself extends the classical notion of a set.
• In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition — an element either belongs or does not belong to the set.
• As an extension, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1].
• The theory of intuitionistic fuzzy sets further extends both concepts by allowing the assessment of the elements by two functions: $\mu$ for membership and $\nu$ for non-membership, which belong to the real unit interval [0, 1] and whose sum belongs to the same interval, as well.
Intuitionistic fuzzy sets generalize fuzzy sets, since the indicator functions of fuzzy sets are special cases of the membership and non-membership functions $\mu$ and $\nu$ of intuitionistic fuzzy sets, in the case when the strict equality exists: $\nu = 1 - \mu$, i.e. the non-membership function fully complements the membership function to 1, not leaving room for any uncertainty.
## Formal definition
Let us have a fixed universe E. Let A be a subset of E. Let us construct the set
$A^* = \lbrace \langle x, \mu_A(x), \nu_A(x) \rangle \ | \ x \in E \rbrace$
where $0 \leq \mu_A(x) + \nu_A(x) \leq 1$. We will call the set A* intuitionistic fuzzy set (IFS).
In publications on IFS, authors mainly deal with the concept of the intuitionistic fuzzy set A* rather than with the fixed set A. This is why, for the sake of simplicity, major publications presenting the very definition of the concept often use the notation A instead of A*.[1][2] Mathematically, a more precise definition of the IFS is the following:
$A^* = \lbrace \langle x, \mu_A(x), \nu_A(x) \rangle \ | \ x \in E \ \& \ 0 \leq \mu_A(x) + \nu_A(x) \leq 1 \rbrace$
but it is also a more complex one and, as of 2008, it has never been used.[3]
Functions $\mu_A: E \to [0,1]$ and $\nu_A: E \to [0,1]$ represent the degree of membership (validity, etc.) and non-membership (non-validity, etc.). Also defined is the function $\pi_A: E \to [0,1]$ through $\pi_A(x) = 1 - \mu_A(x) - \nu_A(x)$, corresponding to the degree of uncertainty (indeterminacy, etc.).
Obviously, for every ordinary fuzzy set $A$: $\pi_A(x) = 0$ for each $x \in E$ and these sets have the form $\lbrace \langle x, \mu_{A}(x), 1-\mu_{A}(x)\rangle |x \in E \rbrace.$
## Operations, relations, operators
For every two intuitionistic fuzzy sets $A$ and $B$ various relations and operations have been defined, most important of which are:
• Inclusion: $A \subset B \ \ \ \text{iff} \ \ \ (\forall x \in E)(\mu_A(x) \le \mu_B(x) \ \& \ \nu_A(x) \ge \nu_B(x))$ , $A \supset B \ \ \ \text{iff} \ \ \ B \subset A$
• Equality: $A = B \ \ \ \text{iff}\ \ \ (\forall x \in E)(\mu_A(x) = \mu_B(x) \ \& \ \nu_A(x) = \nu_B(x))$
• Classical negation: $\overline{A} = \lbrace \langle x, \nu_A(x), \mu_A(x) \rangle \ | \ x \in E \rbrace$
• Conjunction: $A \cap B = \lbrace \langle x, \min(\mu_A(x), \mu_B(x)), \max(\nu_A(x), \nu_B(x)) \rangle \ | \ x \in E \rbrace$
• Disjunction: $A \cup B = \lbrace \langle x, \max(\mu_A(x), \mu_B(x)), \min(\nu_A(x), \nu_B(x)) \rangle \ | \ x \in E \rbrace$
These operations and relations are defined similarly to those from fuzzy set theory. More interesting are the modal operators that can be defined over intuitionistic fuzzy sets; these have no analogue in fuzzy set theory.
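For example, suppose that for some $x \in E$ we have $\mu_A(x) = 0.5$, $\nu_A(x) = 0.3$ and $\mu_B(x) = 0.6$, $\nu_B(x) = 0.2$ (these values are purely illustrative). Then $\pi_A(x) = 1 - 0.5 - 0.3 = 0.2$, the negation $\overline{A}$ contains $\langle x, 0.3, 0.5 \rangle$, the conjunction $A \cap B$ contains $\langle x, \min(0.5, 0.6), \max(0.3, 0.2) \rangle = \langle x, 0.5, 0.3 \rangle$, and the disjunction $A \cup B$ contains $\langle x, \max(0.5, 0.6), \min(0.3, 0.2) \rangle = \langle x, 0.6, 0.2 \rangle$; in particular, at this element the condition for $A \subset B$ holds, since $0.5 \le 0.6$ and $0.3 \ge 0.2$.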
http://mathoverflow.net/revisions/13887/list
Here's a very special case for $\mathfrak{gl}_n$ in characteristic 0 (which I have found useful in my work). Let $V$ be the vector representation, and for a partition $\lambda$ with at most $n$ parts, let ${\bf S}_{\lambda}(V)$ denote the corresponding highest weight representation. Then $Sym^n(Sym^2 V) = \bigoplus_{\lambda} {\bf S}_{\lambda}(V)$ where the direct sum is over all partitions $\lambda$ of size $2n$ with at most $n$ parts such that each part of $\lambda$ is even. Similarly, $Sym^n(\bigwedge^2 V) = \bigoplus_{\mu} {\bf S}_{\mu}(V)$ where the direct sum is over all partitions $\mu$ of $2n$ with at most $n$ parts such that each part of the conjugate partition $\mu'$ is even. If you want the corresponding result for $\mathfrak{sl}_n$ we just introduce the equivalence relation $(\lambda_1, \dots, \lambda_n) \equiv (\lambda_1 + r, \dots, \lambda_n + r)$ where $r$ is an arbitrary integer.
One reference for this is Proposition 2.3.8 of Weyman's book Cohomology of Vector Bundles and Syzygies (note that $L_\lambda E$ in that book means a highest weight representation with highest weight $\lambda'$ and not $\lambda$). Another reference is Example I.8.6 of Macdonald's Symmetric Functions and Hall Polynomials, second edition, which proves the corresponding character formulas.
http://nrich.maths.org/public/leg.php?code=72&cl=3&cldcmpid=1274
Search by Topic
Resources tagged with Generalising similar to Nim-interactive:
Nim-interactive
Stage: 3 and 4 Challenge Level:
Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The winner is the player to take the last counter.
Nim
Stage: 4 Challenge Level:
Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The loser is the player who takes the last counter.
Nim-like Games
Stage: 2, 3 and 4 Challenge Level:
A collection of games on the NIM theme
Sliding Puzzle
Stage: 1, 2, 3 and 4 Challenge Level:
The aim of the game is to slide the green square from the top right hand corner to the bottom left hand corner in the least number of moves.
Jam
Stage: 4 Challenge Level:
To avoid losing think of another very well known game where the patterns of play are similar.
Pentanim
Stage: 2, 3 and 4 Challenge Level:
A game for 2 players with similarities to NIM. Place one counter on each spot on the games board. Players take it in turns to remove 1 or 2 adjacent counters. The winner picks up the last counter.
Jam
Stage: 4 Challenge Level:
A game for 2 players
Winning Lines
Stage: 2, 3 and 4 Challenge Level:
An article for teachers and pupils that encourages you to look at the mathematical properties of similar games.
Polycircles
Stage: 4 Challenge Level:
Show that for any triangle it is always possible to construct 3 touching circles with centres at the vertices. Is it possible to construct touching circles centred at the vertices of any polygon?
Games Related to Nim
Stage: 1, 2, 3 and 4
This article for teachers describes several games, found on the site, all of which have a related structure that can be used to develop the skills of strategic planning.
One, Three, Five, Seven
Stage: 3 and 4 Challenge Level:
A game for 2 players. Set out 16 counters in rows of 1,3,5 and 7. Players take turns to remove any number of counters from a row. The player left with the last counter loses.
Building Gnomons
Stage: 4 Challenge Level:
Build gnomons that are related to the Fibonacci sequence and try to explain why this is possible.
Chocolate Maths
Stage: 3 Challenge Level:
Pick the number of times a week that you eat chocolate. This number must be more than one but less than ten. Multiply this number by 2. Add 5 (for Sunday). Multiply by 50... Can you explain why it. . . .
A Tilted Square
Stage: 4 Challenge Level:
The opposite vertices of a square have coordinates (a,b) and (c,d). What are the coordinates of the other vertices?
Hypotenuse Lattice Points
Stage: 4 Challenge Level:
The triangle OMN has vertices on the axes with whole number co-ordinates. How many points with whole number coordinates are there on the hypotenuse MN?
Route to Infinity
Stage: 3 and 4 Challenge Level:
Can you describe this route to infinity? Where will the arrows take you next?
Plus Minus
Stage: 4 Challenge Level:
Can you explain the surprising results Jo found when she calculated the difference between square numbers?
Pentagon
Stage: 4 Challenge Level:
Find the vertices of a pentagon given the midpoints of its sides.
Steel Cables
Stage: 4 Challenge Level:
Some students have been working out the number of strands needed for different sizes of cable. Can you make sense of their solutions?
Partitioning Revisited
Stage: 3 Challenge Level:
We can show that (x + 1)² = x² + 2x + 1 by considering the area of an (x + 1) by (x + 1) square. Show in a similar way that (x + 2)² = x² + 4x + 4
Multiplication Square
Stage: 3 Challenge Level:
Pick a square within a multiplication square and add the numbers on each diagonal. What do you notice?
AMGM
Stage: 4 Challenge Level:
Choose any two numbers. Call them a and b. Work out the arithmetic mean and the geometric mean. Which is bigger? Repeat for other pairs of numbers. What do you notice?
Stage: 3 Challenge Level:
List any 3 numbers. It is always possible to find a subset of adjacent numbers that add up to a multiple of 3. Can you explain why and prove it?
In a Spin
Stage: 4 Challenge Level:
What is the volume of the solid formed by rotating this right angled triangle about the hypotenuse?
What's Possible?
Stage: 4 Challenge Level:
Many numbers can be expressed as the difference of two perfect squares. What do you notice about the numbers you CANNOT make?
Repeaters
Stage: 3 Challenge Level:
Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13.
Konigsberg Plus
Stage: 3 Challenge Level:
Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
Cubes Within Cubes Revisited
Stage: 3 Challenge Level:
Imagine starting with one yellow cube and covering it all over with a single layer of red cubes, and then covering that cube with a layer of blue cubes. How many red and blue cubes would you need?
Consecutive Negative Numbers
Stage: 3 Challenge Level:
Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers?
Christmas Chocolates
Stage: 3 Challenge Level:
How could Penny, Tom and Matthew work out how many chocolates there are in different sized boxes?
Multiplication Arithmagons
Stage: 4 Challenge Level:
Can you find the values at the vertices when you know the values on the edges of these multiplication arithmagons?
More Number Pyramids
Stage: 3 and 4 Challenge Level:
When number pyramids have a sequence on the bottom layer, some interesting patterns emerge...
Loopy
Stage: 4 Challenge Level:
Investigate sequences given by $a_n = \frac{1+a_{n-1}}{a_{n-2}}$ for different choices of the first two terms. Make a conjecture about the behaviour of these sequences. Can you prove your conjecture?
Sum Equals Product
Stage: 3 Challenge Level:
The sum of the numbers 4 and 1 1/3 is the same as the product of 4 and 1 1/3; that is to say 4 + 1 1/3 = 4 × 1 1/3. What other numbers have the sum equal to the product and can this be so for. . . .
Picturing Triangle Numbers
Stage: 3 Challenge Level:
Triangle numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?
Shear Magic
Stage: 3 Challenge Level:
What are the areas of these triangles? What do you notice? Can you generalise to other "families" of triangles?
Number Pyramids
Stage: 3 Challenge Level:
Try entering different sets of numbers in the number pyramids. How does the total at the top change?
Mini-max
Stage: 3 Challenge Level:
Consider all two digit numbers (10, 11, . . . ,99). In writing down all these numbers, which digits occur least often, and which occur most often ? What about three digit numbers, four digit numbers. . . .
Three Times Seven
Stage: 3 Challenge Level:
A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why?
Nim-7
Stage: 1, 2 and 3 Challenge Level:
Can you work out how to win this game of Nim? Does it matter if you go first or second?
Reverse to Order
Stage: 3 Challenge Level:
Take any two digit number, for example 58. What do you have to do to reverse the order of the digits? Can you find a rule for reversing the order of digits for any two digit number?
Stage: 3 Challenge Level:
A little bit of algebra explains this 'magic'. Ask a friend to pick 3 consecutive numbers and to tell you a multiple of 3. Then ask them to add the four numbers and multiply by 67, and to tell you. . . .
Problem Solving, Using and Applying and Functional Mathematics
Stage: 1, 2, 3, 4 and 5 Challenge Level:
Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information.
More Magic Potting Sheds
Stage: 3 Challenge Level:
The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it?
Stage: 4 Challenge Level:
A counter is placed in the bottom right hand corner of a grid. You toss a coin and move the star according to the following rules: ... What is the probability that you end up in the top left-hand. . . .
All Tangled Up
Stage: 3 Challenge Level:
Can you tangle yourself up and reach any fraction?
Tilted Squares
Stage: 3 Challenge Level:
It's easy to work out the areas of most squares that we meet, but what if they were tilted?
Stage: 3 Challenge Level:
Great Granddad is very proud of his telegram from the Queen congratulating him on his hundredth birthday and he has friends who are even older than he is... When was he born?
More Twisting and Turning
Stage: 3 Challenge Level:
It would be nice to have a strategy for disentangling any tangled ropes...
Equilateral Areas
Stage: 4 Challenge Level:
ABC and DEF are equilateral triangles of side 3 and 4 respectively. Construct an equilateral triangle whose area is the sum of the area of ABC and DEF.
|
2014-12-23 03:43:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47406405210494995, "perplexity": 1065.5844901943576}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802778068.138/warc/CC-MAIN-20141217075258-00003-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://www.spinningwing.com/the-helicopter/helicopter-modeling-and-simulation/
|
# Modeling and Simulation
Software is used to model and simulate helicopter behavior for several reasons. Computationally intensive CFD models may be used to study and improve airfoil shapes. Such models may even be coupled with acoustic software to predict rotor noise. Other, less computationally intensive models may be used to estimate high-level performance values like range and hover ceiling. Some models can be used to develop control laws and be incorporated in real-time simulators to train pilots.
In this article we’ll discuss what are called comprehensive flight dynamics models. Such models incorporate enough aerodynamics and dynamics to provide useful predictions of rotor response, performance and handling qualities. They are sometimes used to test control laws in early stages of helicopter design. They incorporate sub-models of the dynamics and aerodynamics of the rotors, fuselage, landing gear and stabilizing surfaces. These subsystems are analyzed to varying degrees of fidelity. These models may be used in real-time flight simulators.
We’ll start with descriptions of the subsystem models: the fuselage, stabilizers, landing gear and rotors. After that, we’ll discuss how the outputs from these subsystems are put together in the overall model. We will then cover some useful processes normally included in a comprehensive model. Finally, we’ll cover the software engineering behind such models.
## Subsystems
In this section, we’ll discuss models for each of the subsystems. Each model outputs at least six values: 3 forces and 3 moments at/about a stationline, buttline and waterline associated with the component. Internally, these models typically use one or more subsystem-specific reference frames. However, the output should be in a high-level, body-attached frame.
You’ll notice a disparity in the complexity of these subsystem models. E.g. rotor models are (necessarily) much more complex than others.
### Fuselage
The structural deformations of a helicopter fuselage are rarely included in these models. Good approximations can be obtained without such details. Fuselage aerodynamics, however, must be included for useful simulations. A simple table lookup method is often used to calculate these aerodynamics and will be described here. Other models use equations based on size and other geometry parameters. Rarely, the Navier-Stokes equations may be solved using a detailed specification of the fuselage geometry. The latter is very computationally intensive and avoided in most cases.
A simple table lookup model requires six input tables: three for forces and three for the moments. A reference point (stationline, buttline and waterline) where these forces and moments are applied is also input. Other inputs may be provided to govern rotor interference effects (e.g. in hover main rotor downwash on the fuselage can increase power requirement by 3-5%). The outputs are three forces and three moments in some body-attached frame, at/about the input reference point.
The input tables are two-dimensional, parameterized by fuselage angle of attack $$\alpha$$ and sideslip $$\beta$$ (Mach number is typically unnecessary). Linear interpolation (or other methods) is used to determine a value $$c(\alpha ,\beta )$$ from the table when the exact $$\alpha ,\beta$$ don’t exist. The entries in these tables are typically normalized by dynamic pressure $$\rho v^2/2$$, but not by size (unlike airfoil tables). Hence the $$i$$th force or moment $$F_i$$ is simply computed as $$F_i = \rho v^2 c_i(\alpha ,\beta )/2$$.
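To make the lookup step concrete, here is a minimal sketch in Python of the interpolation and scaling for one of the six components. The grid values and table contents are placeholders, not data for any real aircraft:

import bisect

# placeholder alpha/beta grid (deg) and one normalized coefficient table c(alpha, beta)
alphas = [-20.0, -10.0, 0.0, 10.0, 20.0]
betas  = [-30.0, -15.0, 0.0, 15.0, 30.0]
c_x    = [[0.0] * len(betas) for _ in alphas]   # would be filled from test data

def interp2(table, xs, ys, x, y):
    # bilinear interpolation of table[i][j] defined on the grid xs x ys
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j] + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1] + tx * ty * table[i + 1][j + 1])

def fuselage_force_x(rho, v, alpha, beta):
    # F_i = rho * v^2 * c_i(alpha, beta) / 2
    return 0.5 * rho * v ** 2 * interp2(c_x, alphas, betas, alpha, beta)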
### Stabilizer / Fin Aerodynamic Surfaces
The same source code is typically used to model a horizontal stabilizer and vertical fin. Dynamic inputs include the free stream velocity at one or more points on the surface along with some information about the rotor(s) and fuselage. The latter inputs are used to model interference effects - interference from other surfaces is typically ignored. One or more net velocity vectors are computed in surface-specific reference frame(s) and then used to lookup (or compute) lift, drag and pitch moment coefficients. Finally, the coefficients are scaled to forces and moments, which are optionally computed in an aircraft body reference frame.
Fuselage and rotor interference models may be based on a handful of empirically-derived equations. Parameters therein are based on the geometry of the specific aircraft. These interference effects can be significant. For example, main rotor downwash on a horizontal stabilizer can substantially affect pitch behavior both statically and dynamically.
### Controls
The controls model outputs the feathering angle at the root of each blade and, optionally, the angle of the horizontal stabilizer (some helicopters have an active stabilizer that pitches). These may be a simple function of the control positions only, or they may be a complex function of the control positions, flapping and flight dynamics (when a SCAS is active).
The simplest rotor control model is that of a linear, uncoupled, single-main-rotor helicopter. In this model, the collective stick controls only the main rotor collective pitch, the F/A cyclic stick only the 1/rev sin pitch variation, the lateral cyclic stick only the 1/rev cos pitch variation, and the pedals only the tail rotor collective pitch.
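A sketch of that simplest mapping is below. The gains are arbitrary placeholders standing in for the aircraft’s rigging data, with pitch angles in degrees and blade azimuth in radians:

import math

def blade_root_pitch(collective, lat_cyclic, fa_cyclic, psi, k0=15.0, k1c=8.0, k1s=8.0):
    # linear, uncoupled model: theta = theta0 + theta1c*cos(psi) + theta1s*sin(psi)
    theta0 = k0 * collective        # collective stick -> collective pitch
    theta1c = k1c * lat_cyclic      # lateral cyclic   -> 1/rev cos variation
    theta1s = k1s * fa_cyclic       # F/A cyclic       -> 1/rev sin variation
    return theta0 + theta1c * math.cos(psi) + theta1s * math.sin(psi)

def tail_rotor_pitch(pedal, k_tr=20.0):
    return k_tr * pedal             # pedals -> tail rotor collective only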
Larger helicopters may include mixers and stability augmentation systems that complicate rotor controls. Control mixing may be implemented in a straightforward way, but SAS, SCAS and general AFCS behavior may be more complex and is often modeled in a separate subsystem. Code for this system may be autogenerated by Simulink. In some cases, this model will output effective “delta controls” that can be added to the primary pilot control inputs to give the proper blade feathering.
Passive control inputs due to pitch-flap coupling are normally implemented in the rotor model. The rotor model may also include pitch change along the length of a blade due to flexing under loads (sometimes called elastic twist).
Unintentional feathering changes due to flexibility in the airframe and mast are typically neglected.
Practical swashplate systems have inherent nonlinearities. These result primarily from the motion of the pivot points or attach points used to convert linear motion to angular motion or vice versa. These effects are typically neglected as well.
### Rotors
A sophisticated model is required to usefully simulate the main rotor for handling qualities and performance. Aspects of how blades flap and flex must be captured along with the impact on aerodynamics. The high frequency oscillatory aerodynamics of a blade in forward flight necessitates some level of unsteady aerodynamics. Reasonable performance predictions require a sophisticated wake model coupled in this analysis. A handful of methods may be used to perform this aerodynamic and structural dynamic analysis. Herein, we’ll discuss methods that yield useful predictions, yet can run in real-time on a workstation.
The model we discuss here is a blade element model, which treats elements of the blade (mostly) independently. A blade is typically subdivided span-wise into 5-20 elements as shown in the diagram below. The structural dynamics model will estimate the deflection and velocity of each blade section, including blade flexibility. The aerodynamics model will compute the airloads on each blade section, typically just lift, drag and pitching moment. The submodules will be further described below.
#### Structural Dynamics
A modal technique may be used to estimate blade dynamics. In this approach, 2 - 20 modes are input for a rotor. Each mode $$m_i$$ includes a displacement of each blade element in the flap-wise direction, but also typically chord-wise and twist (i.e. 3 DOF blade segments). Blades on a rotor are often coupled so that these modes include displacements along all blades on a rotor. The net displacement of all blade elements is a linear combination of these mode shapes $$\Sigma_i p_i m_i$$. The coefficients $$p_i$$ of this linear combination are called mode participation factors. They are time-varying, computed in the model as a function of the loads on all blade elements.
Mode shapes with associated frequencies $$f_i$$ are computed as the eigenvectors and eigenvalues of linear equations of motion: $$Em_i = f_i m_i$$. These depend only on the rotor design and rotor speed, not on the flight condition or maneuver. Therefore, these modes are typically computed once and saved in a file for use in all subsequent simulations. Exceptions to this rule are autorotation or engine starts/shutdowns where the rotor speed deviates more than about 10%.
A reference for the differential equations of motion of a helicopter rotor blade is NACA Report 1346 by Houbolt and Brooks. The second time derivative of the mode participation factors $$p_i$$ are computed as $$\ddot{p_i} = W_i/I_i -2f_i d_i\dot{p_i} -f_i^2p_i$$ and numerically integrated to give elastic velocities and displacements of the blade elements. $$W_i$$ denotes the virtual work on the $$i$$th mode, $$I_i$$ the inertia of this mode, and $$d_i$$ a (sometimes useful) modal damping value.
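In code, advancing the mode participation factors one time step with that equation might look like the sketch below (explicit Euler is used here only for clarity; a production model would use a better integrator, and the virtual work terms come from the load calculation described next):

def step_modes(p, pdot, W, I, f, d, dt):
    # p, pdot: participation factors and their rates; W: virtual work on each mode
    # I, f, d: modal inertia, frequency (rad/s assumed) and damping value per mode
    p_new, pdot_new = [], []
    for pi, pdi, Wi, Ii, fi, di in zip(p, pdot, W, I, f, d):
        pddot = Wi / Ii - 2.0 * fi * di * pdi - fi ** 2 * pi
        pdot_new.append(pdi + pddot * dt)
        p_new.append(pi + pdi * dt)
    return p_new, pdot_new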
In addition to the aerodynamic loads, there are inertial loads acting on the blade elements which must be estimated. This is quite cumbersome and will not be covered here. The net loads including aerodynamics are used to estimate the virtual work on each mode, which then facilitates the calculation of the mode participation factors. For more information see NACA report 1346.
#### Aerodynamics
The approach described here uses tables of CL, CD and CM input for each blade element (or interpolated from surrounding blade elements). These values may be determined by wind tunnel experiments or CFD simulation. Corrections are made for yawed flow and unsteady aerodynamics in forward flight. The net velocity of a blade element is a sum of contributions from rotor rotation, free stream, elastic blade velocity and rotor-induced velocity. Velocity due to rotor rotation and free stream are simply computed from frame transformation matrices. The elastic (blade flapping, etc.) velocity is computed by the structural dynamics model. What’s left to explain here is the rotor-induced velocity, which is difficult both to predict and to measure.
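For a single blade element, the aerodynamic part reduces to composing the local velocity, forming an angle of attack and scaling the tabulated coefficients. A stripped-down 2-D sketch, ignoring the yawed-flow and unsteady corrections, is shown here; cl_table and cd_table stand in for whatever airfoil tables the model uses:

import math

def element_airloads(u_t, u_p, theta, chord, dr, rho, cl_table, cd_table):
    # u_t: tangential velocity; u_p: perpendicular velocity (free stream +
    # rotation + elastic + induced contributions); theta: local pitch angle
    v2 = u_t ** 2 + u_p ** 2
    phi = math.atan2(u_p, u_t)            # inflow angle
    alpha = theta - phi                   # section angle of attack
    q = 0.5 * rho * v2 * chord * dr       # dynamic pressure times element area
    return q * cl_table(alpha), q * cd_table(alpha)   # lift, drag on the element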
Many models neglect the rotor-induced velocity in the plane normal to the rotor shaft. This is reasonable because the velocity due to rotor rotation (in this plane) is much larger for most of the dominant blade elements. The velocity parallel to the shaft, however, has a large effect on a blade element’s angle of attack $$\alpha$$. We’ll denote this value as $$v_i(r,\psi )$$. Careful calculation of this value is necessary for useful performance predictions.
So how does one estimate $$v_i(r,\psi )$$? We’ll discuss three methods: momentum-theory, prescribed wake, and free wake approaches.
Momentum-theory-based approaches first estimate the average value $$\bar{v_i}$$ over the rotor from a momentum theory equation. (This value may be adjusted for ground effect by the simple source method described by Cheeseman.) Momentum theory does not provide a useful estimate of the variation of $$v_i$$ over the rotor. Researchers have derived multiples $$f(\mu ,\lambda ,r,\psi )$$ of $$\bar{v_i}$$ from experiments so that $$v_i(r,\psi )=f(\mu ,\lambda , r,\psi)\bar{v_i}$$. Of course, $$\bar{v_i}$$ depends on rotor loading, but rotor loading depends on induced velocity. Hence an iterative “induced velocity loop” is typically used to converge rotor loads and $$\bar{v_i}$$.
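A bare-bones hover version of that loop, assuming a thrust(v_bar) function supplied by the blade-element calculation, could look like this:

import math

def hover_mean_inflow(thrust, rho, disk_area, v_bar=5.0, relax=0.5, tol=1e-4):
    # converge mean induced velocity with hover momentum theory: v_i = sqrt(T / (2 rho A))
    for _ in range(100):
        T = thrust(v_bar)                     # rotor loads at the current inflow guess
        v_new = math.sqrt(max(T, 0.0) / (2.0 * rho * disk_area))
        if abs(v_new - v_bar) < tol:
            break
        v_bar += relax * (v_new - v_bar)      # under-relax for stability
    return v_bar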
A prescribed wake approach incorporates vortex lines extending downstream from rotor blades, as shown in the diagram below. The “Biot-Savart equation” is used to compute the induced velocity at blade elements due to these lines. A drawback of this method is that one must estimate the strength and geometry of these vortex lines by other means. One way to do this is to store the results from the free wake methods discussed below. (Reasons for using this method in lieu of a free wake are numerical stability and computational efficiency.)
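The contribution of one straight vortex segment follows the standard finite-filament Biot-Savart formula; a sketch with a small core cutoff to avoid the singularity on the line itself is:

import numpy as np

def segment_induced_velocity(p, a, b, gamma, core=1e-6):
    # velocity induced at point p by a straight segment from a to b with circulation gamma
    r1, r2 = p - a, p - b
    r0 = b - a
    cross = np.cross(r1, r2)
    denom = np.dot(cross, cross)
    if denom < core ** 2:                     # on or very near the filament
        return np.zeros(3)
    k = gamma / (4.0 * np.pi * denom) * np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))
    return k * cross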
A free wake approach is much like the prescribed wake described above, but here the location of vortex lines is computed instead of prescribed/input. Vortex lines emanate from blades based on spanwise changes in bound circulation (the details are model-dependent). Once created, the velocity anywhere on a vortex line is computed and integrated in time. The velocity is the sum of the induced velocity from other vortex line segments, free stream and optionally other sources. These methods are sometimes unstable and may require significant experience to get good results.
### Landing gear / skid model
The landing gear model computes the six forces/moments at/about the CG due to contact between the skids (and tail stinger) and the “landing surface.” The surface may be a 6-DOF moving ship deck when used in real-time simulation.
Even if skids are being modeled, a finite number of contact points are typically simulated. E.g. the 2 points at the end of each skid and the stinger (5 points total). A ground normal force and friction are estimated at each point. Before doing that, the relative displacement and velocity vectors between a point and the underlying landing surface must be computed. The latter may be complicated in the case of a moving ship deck with limited information available (sometimes only the location and attitude of the ship CG, updated at a frequency below what’s required by this model).
The ground normal force at each point is typically implemented as a simple function of the displacement and velocity of the contact point normal to the ground. Contact points are typically allowed to sink slightly below ground level while being pushed up by a strong vertical force. E.g. a set of springs and dampers connecting the contact point to points at and/or slightly below ground level. This must be done carefully to (1) avoid a bouncing behavior in the simulation, (2) not “suck down” a contact point when the aircraft is trying to lift off and (3) rest stably on a moving ship deck in rough seas.
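A minimal spring-damper version of that normal-force calculation, including the “no suction” condition, might look like the following; the stiffness and damping values are placeholders:

def contact_normal_force(penetration, penetration_rate, k=2.0e5, c=4.0e3):
    # penetration: how far the point sits below the surface (m, > 0 when below)
    # penetration_rate: rate of change of penetration (m/s, > 0 when sinking)
    if penetration <= 0.0:
        return 0.0                  # not in contact
    f = k * penetration + c * penetration_rate
    return max(f, 0.0)              # never pull the contact point down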
After normal forces are computed, frictional forces can be estimated. Friction forces are proportional to the normal forces and their direction is substantially opposite to the velocity of the contact point in the surface plane. An exception for the force direction is made to prevent the aircraft from “drifting” relative to the landing surface while in static friction. This is required because the numerically integrated velocities relative to the surface are not exactly zero, even when the aircraft should be in static friction. To compensate, static forces are directed to keep the skids at the same point on the landing surface. When the force required to do this exceeds the static friction limit, the state is changed and the helicopter slides over the surface in kinetic friction.
## Putting it all together
In order to compute the overall aircraft acceleration, velocity and location, the model must sum the force and moment outputs from each component. They are summed to equivalent forces and moments at/about the CG. Component forces and moments are first resolved to a common body reference frame (if not already) and then summed. A force $$F$$ applied by a component at a point displaced $$r$$ from the CG also generates a moment $$r\times F$$ about the CG, which must be summed in with the moments. The resulting collection of 3 forces and 3 moments is then multiplied by the inverse of the mass and inertia matrix to give the body accelerations. These accelerations are numerically integrated (e.g. via a Runge-Kutta method) to give the aircraft rates $$u,v,w,p,q,r$$. The translational rates are typically converted into a global reference frame (e.g. north, east and down) while the angular rates $$p,q,r$$ are typically converted to Euler angle rates $$\dot{\phi},\dot{\theta},\dot{\psi}$$. The resulting rates are then integrated into displacements and Euler angles.
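The bookkeeping in that paragraph is compact enough to show directly. A rigid-body sketch using numpy is below; note it omits the Coriolis terms that appear when the translational rates are expressed in rotating body axes:

import numpy as np

def body_accelerations(components, mass, inertia, omega):
    # components: list of (force, moment, r) with r the offset of the application
    # point from the CG, all expressed in the same body frame
    F_tot, M_tot = np.zeros(3), np.zeros(3)
    for force, moment, r in components:
        F_tot += force
        M_tot += moment + np.cross(r, force)        # transfer the moment to the CG
    accel = F_tot / mass                            # translational acceleration
    alpha = np.linalg.solve(inertia, M_tot - np.cross(omega, inertia @ omega))
    return accel, alpha                             # then numerically integrate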
## Trim Process
Comprehensive models typically include a trim process, which searches for a helicopter state that satisfies input criteria. For example, the user may want to trim the helicopter to 100kts airspeed, with 2deg sideslip and a 5’/s climb rate. The trim process will run the model repeatedly until it finds values of independent variables (e.g. pitch, roll and control positions) that would facilitate such a flight condition. This is a multi-dimensional root finding problem - finding the values that, according to the model, result in 0 net aircraft forces and moments (i.e. no aircraft acceleration, steady state).
Let $$F$$ denote the net forces and moments at/about the CG, as computed by the model. $$F$$ is a function of the independent variables $$X$$. The trim process finds an $$X$$ for which $$F(X)=0$$. In practice, models find an $$X$$ for which $$|F(X)|<T$$ where $$T$$ is a vector of allowable error tolerances. $$F$$ is typically treated as a differentiable, black box function and numerical procedures like Newton’s method are applied to find the solution.
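A bare-bones version of that root find, treating the model as a black-box residual and building the Jacobian by finite differences, could look like this sketch (it assumes as many trim variables as residuals, e.g. six and six):

import numpy as np

def trim(residual, x0, tol=1e-6, max_iter=50, eps=1e-5):
    # residual(x) runs the full model and returns the net CG forces and moments
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = residual(x)
        if np.max(np.abs(f)) < tol:
            return x
        J = np.empty((len(f), len(x)))
        for j in range(len(x)):                     # finite-difference Jacobian
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - f) / eps
        x = x - np.linalg.solve(J, f)               # Newton step
    return x                                        # may not have converged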
## Software Engineering
Many comprehensive models were developed before the 1990s, with little concern for software engineering. Such models are often difficult to build on, interface to, or improve in general. The problem would seem to be a good candidate for object-oriented software engineering, but performing such an overhaul of existing software (or writing "from scratch") is a long-term project that is difficult to justify.
Modern comprehensive models typically have interchangeable sub-models. For example, the user may choose a table lookup model for fuselage aerodynamics or a more computationally intensive model. The user may elect to use a momentum theory calculation for rotor-induced velocity or a more sophisticated free wake model. Allowing engineers to select different combinations of submodels has proven very useful in the industry.
|
2020-10-23 07:34:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.6465808749198914, "perplexity": 1778.5922238283392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880878.30/warc/CC-MAIN-20201023073305-20201023103305-00300.warc.gz"}
|
https://www.semanticscholar.org/paper/Search-for-electroweak-production-of-charginos-and-Collaboration/c45710a03523ce184707e30866d53218106ee6b9
|
# Search for electroweak production of charginos and neutralinos in WH events in proton-proton collisions at sqrt(s) = 13 TeV
@inproceedings{Collaboration2017SearchFE,
title={Search for electroweak production of charginos and neutralinos in WH events in proton-proton collisions at sqrt(s) = 13 TeV},
author={Cms Tracker Collaboration},
year={2017}
}
Results are reported from a search for physics beyond the standard model in protonproton collision events with a charged lepton (electron or muon), two jets identified as originating from a bottom quark decay, and significant imbalance in the transverse momentum. The search was performed using a data sample corresponding to 35.9 fb−1, collected by the CMS experiment in 2016 at √ s = 13 TeV. Events with this signature can arise, for example, from the electroweak production of gauginos, which are…
|
2023-02-05 04:22:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7555800676345825, "perplexity": 2952.87946979933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500215.91/warc/CC-MAIN-20230205032040-20230205062040-00190.warc.gz"}
|
https://tex.stackexchange.com/questions/432118/join-list-with-separator
|
# join list with separator
I'm wondering if it's possible to create a command that takes some kind of list and joins it with some separator in between each entry.
For example, given something like:
parameters:
first
second with spaces
third
separator:
·
I'd want it to output:
first · second with spaces · third
With the spacing around the separator.
Perhaps this is more trouble than it's worth. Mainly it'd be to deal with the hassle of having to keep repasting the · character appropriately over any slight modification.
Otherwise, maybe I can leverage an environment, such as points, to join the \items with · on the interior? I don't mind if it ends up being more typing, at least it'd be easier to manage and I wouldn't have to keep repasting the separator over any slight modification, since each entry would correspond to an independent \item.
Based on expl3's clist:
\documentclass[]{article}
\usepackage{xparse}
\ExplSyntaxOn
\clist_new:N \l_jorge_list_clist
\clist_new:N \l_jorge_tmp_clist
\NewExpandableDocumentCommand \usemylist { m }
{
% print the saved list, using #1 as the separator
\clist_use:Nn \l_jorge_list_clist { #1 }
}
\NewDocumentCommand \setmylist { m }
{
% save a comma list for later use
\clist_set:Nn \l_jorge_list_clist { #1 }
}
\NewDocumentCommand \formatlist { m m }
{
% ad hoc variant: #1 is the comma list, #2 is the separator
\clist_set:Nn \l_jorge_tmp_clist { #1 }
\clist_use:Nn \l_jorge_tmp_clist { #2 }
}
\ExplSyntaxOff
\setmylist{first, second with spaces, third}
\begin{document}
\usemylist{.}
\formatlist{foo,bar,baz}{ FOOBAR }
\end{document}
• Thanks! I'll have multiple of these lists, should I just keep doing \setmylist before each \usemylist? Any way to combine the two to make it less tedious? If not, no big deal. – Jorge Israel Peña May 17 '18 at 18:15
• I accidentally hit enter before I finished typing my comment. It's updated. – Jorge Israel Peña May 17 '18 at 18:16
• @JorgeIsraelPeña updated with both a local adhoc variant and a variant to save a list first. – Skillmon May 17 '18 at 18:19
• That works beautifully. Thank you very much! – Jorge Israel Peña May 17 '18 at 18:22
|
2021-07-23 15:27:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4398746192455292, "perplexity": 4190.61267694362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046149929.88/warc/CC-MAIN-20210723143921-20210723173921-00394.warc.gz"}
|
https://socratic.org/questions/circle-a-has-a-center-at-3-7-and-a-radius-of-1-circle-b-has-a-center-at-3-3-and-
|
# Circle A has a center at (3 ,7 ) and a radius of 1 . Circle B has a center at (-3 ,3 ) and a radius of 2 . Do the circles overlap? If not, what is the smallest distance between them?
Distance between the two centres is $\sqrt{{\left(3 + 3\right)}^{2} + {\left(7 - 3\right)}^{2}}$
$= \sqrt{36 + 16} = \sqrt{52} \approx 7.211$, which is more than the sum of their radii, $1 + 2 = 3$. Hence the circles do not overlap or intersect. The smallest distance between the two circles lies along the line joining the two centres and equals $7.211 - (1 + 2) = 4.211$.
|
2022-06-30 08:08:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7164004445075989, "perplexity": 274.9868311748426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103669266.42/warc/CC-MAIN-20220630062154-20220630092154-00371.warc.gz"}
|
https://discuss.avogadro.cc/t/constraints-in-avogadro2/3072
|
It would be cool to have the interface from Avogadro1 (or something similar/better) to implement the addition of constraints.
A new json entry might do the trick for passing the constraints to scripts for fixing certain coordinates (position, distance, angle, torsion):
{"constraints":
  {
    "position": [atom_id1, atom_id2, ...],
    "distance": [[atom_id1, atom_id2, dist1], ...],
    "angle": [[atom_id1, atom_id2, atom_id3, angle1], ...],
    "dihedral": [[atom_id1, atom_id2, atom_id3, atom_id4, torsion1], ...]
  }
}
Together with being able to pass multiple fileformats, it should be straightforward to implement these constraints using the appropriate functions in RDKit before minimization.
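To make the idea concrete, here is a rough sketch of how such constraints could be applied with RDKit before minimization. The molecule, atom indices and constraint values are placeholders, and the MMFF constraint helpers shown reflect my understanding of RDKit's force-field API rather than anything Avogadro currently does:

from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.AddHs(Chem.MolFromSmiles("CCO"))      # hypothetical test molecule
AllChem.EmbedMolecule(mol)                       # generate 3D coordinates

props = AllChem.MMFFGetMoleculeProperties(mol)
ff = AllChem.MMFFGetMoleculeForceField(mol, props)

# placeholder constraints: fix one distance and pin one atom near its position
ff.MMFFAddDistanceConstraint(0, 2, False, 2.4, 2.4, 100.0)
ff.MMFFAddPositionConstraint(1, 0.0, 100.0)

ff.Minimize()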
I would also like to get these functionality using the Parsley forcefield/OpenMM but the implementation seems not as straightforward.
Yeah, I’m not 100% sold on the user interface. I’d like to adopt the Atom / Bond / Angle / Torsion spreadsheets from Avogadro 1 and then make it possible to add a constraint from that dialog.
Ideas welcome.
The JSON seems reasonable. The catch is:
• Implementing the UI
• Adding the constraint code to the new force field engine
• Passing the constraints into CJSON and other sources. (commands, generators, etc.)
I can imagine it would be nice to pass the constraints to QM interfaces to enable constrained optimization in Gaussian / Orca, etc.
OpenMM integration would be very nice - beyond Python are you willing to do some C++? If so, I’m happy to give some pointers.
I admit that I did not think at all about the QM input generators - I agree that it would be nice to be able to apply the constraints there, too. I have only some modest experience with MOPAC, setting the optimization flags for specific coordinates is rather tedious within the input file itself (maybe I miss a trick or two).
What comes to my mind is something like ASE. That could provide a generic interface for implementing constraints but might also bring a loss in flexibility further down the line because of the calculator interfaces implemented there.
Definitely willing to do some C++ - but no experience. I would like to try but that’s something in the longer term. My knowledge of C++ is superficial to point that I was even confused by the expression “give some pointers” (in combination with an insufficient command of English ).
It has been a while, finally I tried to port the Constraints dialog from Avogadro 1.2 to Avogadro2.
I added a constraints attribute to the QtGui::Molecule class definition to store the constraints - maybe there is a more “plug-inic” way of doing that? Basically it is the code from Avogadro 1.2 adapted as a Plugin for Avogadro2. As this is my first attempt at programming in C++ it is probably quite a mess and needs some review.
On top of that it would also be my first contribution to an open-source project - so, what would be the next step (pull request?)?
1 Like
Yes, the next step would be submitting a pull request and we can help you through review / cleanup.
This would be great incentive for me to finish the force field work with constraints.
I am still working on an implementation of the Constraint class itself. In the beginning I had something really simple that was basically just an array of numbers
constraint = (constraint_type, value, atom1, atom2 …)
But I figured that it would be more convenient to somehow link the constraint directly to the atoms, so that changes in the molecule would automatically be reflected in the constraints. But I am having some trouble understanding how the atoms are actually stored - are there any “Atom objects” that persist and that have their characteristics updated upon molecule changes?
Your data structure is essentially what I drafted, using an enum for the constraint type (distance, angle, torsion)
Atoms are stored as indexes. That makes it a bit difficult to track constraints - there probably needs to be some code particularly to update constraints when atoms are removed.
Happy to help work through some of that.
For tracking the constraints I stored the uniqueIDs of the atoms so that the current indices can easily be retrieved.
|
2020-08-15 18:49:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37315574288368225, "perplexity": 1202.69134099216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439741154.98/warc/CC-MAIN-20200815184756-20200815214756-00396.warc.gz"}
|
https://kerodon.net/tag/00XX
|
# Kerodon
Notation 3.3.1.4. Let $X_{\bullet }$ be a simplicial set. For each nonnegative integer $n$, we let $X_{n}^{\mathrm{nd}} \subseteq X_{n}$ denote the collection of nondegenerate $n$-simplices of $X_{\bullet }$. If $X_{\bullet }$ is braced (Definition 3.3.1.1), then the face maps $\{ d_ i: X_{n} \rightarrow X_{n-1} \} _{0 \leq i \leq n}$ carry $X_{n}^{\mathrm{nd}}$ into $X_{n-1}^{\mathrm{nd}}$. In this case, the construction $[n] \mapsto X_{n}^{\mathrm{nd}}$ determines a semisimplicial set, which we will denote by $X_{\bullet }^{\mathrm{nd}}$.
|
2022-07-02 17:22:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9959155321121216, "perplexity": 187.29156770482103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104189587.61/warc/CC-MAIN-20220702162147-20220702192147-00249.warc.gz"}
|
https://blender.stackexchange.com/questions/207042/emissive-shader-does-not-change-much-in-changes-of-orders-of-magnitudes/207053
|
# Emissive shader does not change much in changes of orders of magnitudes [duplicate]
I'm trying to make an LED lamp but the EMISSIVE feature doesn't react as I expect. I read in the Blender manual that the emission power is defined in Watts per square meter. It doesn't really seem to work the right way. From 0 to 1 it works well, but then, raising it to 10 doesn't make the light ten times stronger, and even shooting it to 100000 won't really change much.
You can see in the images how the change from 1000 to 10 millions doesn't change anything in the final image.
For a test, I added an area light on top of the emissive model, and a 1 W light assigned to it makes faaaar more light than the emissive material. You can see the difference in the final image.
What may be happening here?
Now, I may trick and just use the area lamp, but I want to know what is happening so to know how to work in the future.
Real world sizes. Emissive surfaces are exposed, nothing in between.
Actually, Eevee is not really lighting anything with the model, it just adds glow, a lot of it XD
Tried in a brand new scene, same problem: emissive power doesn't do anything past a certain threshold. It looks like the power response is logarithmic.
• if you are in Eevee, the emission strength won't change anything unless you use a Light Probe > Irradiance Volume, so you'll need to add a light or use a light probe Jan 3 '21 at 22:11
• I never experienced such issue. Have you tried in a simple, brand new file? Jan 3 '21 at 22:12
• In the 4th img Use Nodes hasn't been clicked? Jan 3 '21 at 22:23
• @HISEROD in the 4th image the selected object is the AREA LAMP, I don't need nodes to use that. Jan 3 '21 at 22:25
• Could you add a blend file? blend-exchange.giantcowfilms.com Jan 3 '21 at 22:26
Set "Direct Light" to 0 to fix the power issue.
|
2022-01-28 03:55:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20942603051662445, "perplexity": 1339.5652236352837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305341.76/warc/CC-MAIN-20220128013529-20220128043529-00519.warc.gz"}
|
https://mathoverflow.net/questions/158179/extendability-of-contact-structures-foliations-of-s2
|
# Extendability of Contact Structures; Foliations of $S^2$
I am currently reading Eliashberg's paper on the classification of overtwisted contact structures (http://bogomolov-lab.ru/G-sem/eliashberg-tight-overtwisted.pdf). In it, there is the following statement (Lemma 2.1.5.1):
"Let $\xi$ be a simple contact structure near the boundary of $S = \partial B$ of the 3-ball. Then the extendability of $\xi$ as a contact structure to $B$ depends only on the topological type of the foliation on $S$ induced by $\xi$."
I understand that a contact structure defined in the neighborhood of a surface is basically defined by the diffeomorphism class of the foliation it induces. (This is a theorem of Giroux?) Hence the main point is to pass from "topological type" (i.e., homeomorphism class) to diffeomorphism class. To do this, Eliashberg gives some sort of perturbation argument. However, I'm not entirely sure I understand what he has written.
First of all, he starts off by choosing what seems to be a transversal to the foliation, but I am not sure this is possible. (Transversals exist for almost horizontal foliations, but maybe not for simple ones?) He then makes what I think are several typographical errors. I would be very thankful if someone could explain the details of the proof to me - it isn't very long, but has left me quite confused.
Finally, what is an example of two simple foliations on $S^2$ that are homeomorphic but not diffeomorphic (i.e., why is the lemma necessary)? Sorry; I'm not terribly familiar with foliation theory.
|
2019-06-26 01:00:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7852091789245605, "perplexity": 221.53288938555275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999964.8/warc/CC-MAIN-20190625233231-20190626015231-00395.warc.gz"}
|
https://arxiver.moonhats.com/category/cross-listed/
|
# Observers in Kerr spacetimes: the ergoregion on the equatorial plane [CL]
We perform a detailed analysis of the properties of stationary observers located on the equatorial plane of the ergosphere in a Kerr spacetime, including light-surfaces. This study highlights crucial differences between black hole and the super-spinner sources. In the case of Kerr naked singularities, the results allow us to distinguish between “weak” and “strong” singularities, corresponding to spin values close to or distant from the limiting case of extreme black holes, respectively. We derive important limiting angular frequencies for naked singularities. We especially study very weak singularities as resulting from the spin variation of black holes. We also explore the main properties of zero angular momentum observers for different classes of black hole and naked singularity spacetimes.
D. Pugliese and H. Quevedo
Fri, 19 Jan 18
1/68
Comments: 20 pages, 13 multi-panels figures, 2 tables
# Constraints on the radiation temperature before inflation [CL]
We consider the short period of cosmic expansion ranging from the end of the Planck era to the beginning of inflation and set upper and lower limits on the temperature of the radiation at the commencement of the inflationary phase.
R. Herrera, D. Pavon and J. Saavedra
Fri, 19 Jan 18
27/68
Comments: 10 pages, 1 eps figure, key words: cosmology, early Universe, inflation, thermodynamics
# Modulus D-term Inflation [CL]
We propose a new model of single-field D-term inflation in supergravity, where the inflation is driven by a single modulus field which transforms non-linearly under the U(1) gauge symmetry. One of the notable features of our modulus D-term inflation scenario is that the global U(1) remains unbroken in the vacuum and hence our model is not plagued by the cosmic string problem which can exclude most of the conventional D-term inflation models proposed so far due to the CMB observations.
K. Kadota, T. Kobayashi, I. Saga, et. al.
Fri, 19 Jan 18
32/68
# Imprints of the redshift evolution of double neutron star merger rate on the signal to noise ratio distribution [CL]
Proposed third generation gravitational wave (GW) interferometers such as Einstein Telescope will have the sensitivity to observe double neutron star (DNS) mergers up to a redshift of $\sim 2$ with good signal to noise ratios. We argue that the measurement of the redshifted signal to noise ratio defined by $\sigma=\rho (1+z)^{1/6}$, where $\rho$ is the signal to noise ratio (SNR) of a detected GW event and $z$ is its redshift, can be used to study the distribution of DNS mergers. We show that if the DNS binaries are distributed uniformly within the co-moving volume, the distribution of redshifted SNR, $\sigma$, will be inversely proportional to the fourth power of $\sigma$, $p(\sigma)\propto \frac{1}{\sigma^4}$. We argue that the redshift evolution of DNS mergers will leave imprints on the distribution of $\sigma$ and hence this may provide a method to probe their redshift evolution. Using various parametric models for evolution of co-moving merger rate density as a function of redshift and assuming the sensitivity of Einstein Telescope, we discuss the distinguishability of the $\sigma$ distributions of these models with that of constant co-moving number density of the mergers.
S. Kastha and K. Arun
Fri, 19 Jan 18
57/68
# Fluid and gyrofluid modeling of low-$β_e$ plasmas: phenomenology of kinetic Alfvén wave turbulence [CL]
Reduced fluid models including electron inertia and ion finite Larmor radius corrections are derived asymptotically, both from fluid basic equations and from a gyrofluid model. They apply to collisionless plasmas with small ion-to-electron equilibrium temperature ratio and low $\beta_e$, where $\beta_e$ indicates the ratio between the equilibrium electron pressure and the magnetic pressure exerted by a strong, constant and uniform magnetic guide field. The consistency between the fluid and gyrofluid approaches is ensured when choosing ion closure relations prescribed by the underlying ordering. A two-field reduction of the gyrofluid model valid for arbitrary equilibrium temperature ratio is also introduced, and is shown to have a noncanonical Hamiltonian structure. This model provides a convenient framework for studying kinetic Alfvén wave turbulence, from MHD to sub-$d_e$ scales (where $d_e$ denotes the electron skin depth). Magnetic energy spectra are phenomenologically determined within energy and generalized helicity cascades in the perpendicular spectral plane. Arguments based on absolute statistical equilibria are used to predict the direction of the transfers, pointing out that, within the sub-ion range associated with a $k_\perp^{-7/3}$ transverse magnetic spectrum, the generalized helicity could display an inverse cascade if injected at small scales, for example by reconnection processes.
T. Passot, P. Sulem and E. Tassi
Fri, 19 Jan 18
59/68
# Vacuum Polarization and Photon Propagation in an Electromagnetic Plane Wave [CL]
The QED vacuum polarization in external monochromatic plane-wave electromagnetic fields is calculated with spatial and temporal variations of the external fields being taken into account. We develop a perturbation theory to calculate the induced electromagnetic current that appears in the Maxwell equations, based on Schwinger’s proper-time method, and combine it with the so-called gradient expansion to handle the variation of external fields perturbatively. The crossed field, i.e., the long wavelength limit of the electromagnetic wave is first considered. The eigenmodes and the refractive indices as the eigenvalues associated with the eigenmodes are computed numerically for the probe photon propagating in some particular directions. In so doing, no limitation is imposed on the field strength and the photon energy unlike previous studies. It is shown that the real part of the refractive index becomes less than unity for strong fields, the phenomenon that has been known to occur for high-energy probe photons. We then evaluate numerically the lowest-order corrections to the crossed-field resulting from the field variations in space and time. It is demonstrated that the corrections occur mainly in the imaginary part of the refractive index.
Thu, 18 Jan 18
58/58
Comments: 50 pages, 17 figures, accepted for publication in Progress of Theoretical and Experimental Physics
# Initial conditions for Inflation in an FRW Universe [CL]
We examine the class of initial conditions which give rise to inflation. Our analysis is carried out for several popular models including: Higgs inflation, Starobinsky inflation, chaotic inflation, axion monodromy inflation and non-canonical inflation. In each case we determine the set of initial conditions which give rise to sufficient inflation, with at least $60$ e-foldings. A phase-space analysis has been performed for each of these models and the effect of the initial inflationary energy scale on inflation has been studied numerically. This paper discusses two scenarios of Higgs inflation: (i) the Higgs is coupled to the scalar curvature, (ii) the Higgs Lagrangian contains a non-canonical kinetic term. In both cases we find Higgs inflation to be very robust since it can arise for a large class of initial conditions. One of the central results of our analysis is that, for plateau-like potentials associated with the Higgs and Starobinsky models, inflation can be realized even for initial scalar field values which lie close to the minimum of the potential. This dispels a misconception relating to plateau potentials prevailing in the literature. We also find that inflation in all models is more robust for larger values of the initial energy scale.
S. Mishra, V. Sahni and A. Toporensky
Wed, 17 Jan 18
5/51
|
2018-01-19 09:24:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7114986777305603, "perplexity": 867.8029344046834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887849.3/warc/CC-MAIN-20180119085553-20180119105553-00506.warc.gz"}
|
https://gamedev.stackexchange.com/questions/11821/learning-shaders-in-xna
|
I am trying to learn how to use Shaders for a 2D XNA project I am working on. To test them out, I was trying to make a white triangle become colored using a super simple Pixel Shader, and I can't get it to work.
float4 ThePixelShader(float4 texCoord : TEXCOORD0) : COLOR0
{
return(125,15,105,1);
}
technique MandelbrotSet
{
pass Pass0
{
}
}
And this is the code I was using to draw my Plain white triangle onto the screen, and trying to use the shader to change its color.
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin();
//MandelbrotEffect.Parameters[""].SetValue();
MandelbrotEffect.CurrentTechnique = MandelbrotEffect.Techniques["MandelbrotSet"];
foreach (EffectPass pass in MandelbrotEffect.CurrentTechnique.Passes)
{
pass.Apply();
spriteBatch.Draw(Dot, new Rectangle(10, 10, 200, 200), Color.White);
}
spriteBatch.End();
I'm guessing I am doing something dumb, but I can't figure out what. The triangle is drawn on screen as just a plain white triangle, what it would look like without a shader.
EDIT: Changed my drawing code to look like this.
GraphicsDevice.Clear(Color.Black);
//MandelbrotEffect.Parameters[""].SetValue();
MandelbrotEffect.CurrentTechnique = MandelbrotEffect.Techniques["MandelbrotSet"];
spriteBatch.Begin(0, BlendState.Opaque, null, null, null, MandelbrotEffect);
spriteBatch.Draw(blank, Vector2.Zero, Color.White);
spriteBatch.End();
Now it does draw the box fullscreen, and I know the shader is working because changing the Alpha value does change the Alpha value of the full screen image, but still the colors don't change.
Thoughts?
After your edit it is unclear exactly what code you are using.
Replace the code in your question with the code that you are using (for example, we don't know what "the drawing part" is, since all of this code is "the drawing part").
The following code is all you need to draw a quad with the effect as desired:
GraphicsDevice.Clear(Color.Black);
MandelbrotEffect.CurrentTechnique = MandelbrotEffect.Techniques["MandelbrotSet"];
MandelbrotEffect.CurrentTechnique.Passes[0].Apply();
spriteBatch.Begin(0, BlendState.Opaque, null, null, null, MandelbrotEffect);
spriteBatch.Draw(blank, Vector2.Zero, Color.White);
spriteBatch.End();
• You're right, your code works great. It turns out my Shader was messed up, and then I edited my XNA code, then fixed the shader, but didn't know for a while because my XNA code was all over the place. I've got it fixed now. Thanks. – Woody Zantzinger May 3 '11 at 2:09
Direct3D colours are RGBA, which means you're passing 0 for the Alpha component, which is totally transparent.
• I fixed that, and still just get the white square. If that had been the problem, wouldn't I have not seen a square? – Woody Zantzinger May 1 '11 at 20:33
|
2021-05-07 00:55:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22640332579612732, "perplexity": 1544.8508207113966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988774.18/warc/CC-MAIN-20210506235514-20210507025514-00171.warc.gz"}
|
https://math.stackexchange.com/questions/2887488/joint-continuity-of-bilinear-pairing-with-unusual-topology
|
Joint continuity of bilinear pairing (with unusual topology)
Let $V$ be a complex vector space, which we endow with the finest linear topology. Then continuous dual $V'$ coincides with the algebraic dual $V^*$. We choose the weak-star topology $\sigma(V^*,V)$ on $V^*$. Consider the map $(-,-) \ : \ V^* \times V \to \mathbb C$ defined by $(\phi,v)=\phi(v)$. I have been able to show that it is separately continuous and jointly sequentially continuous. I would like to know if it is jointly continuous.
No, whenever $V$ is infinite dimensional the evaluation isn't continuous with respect to the product of the weak$^*$ and the finest locally convex topology. Indeed, continuity at $0$ would give a finite set $E\subseteq V$, $\varepsilon >0$, and a $0$-neighbourhood $U$ in the finest locally convex topology such that $|f(u)|\le 1$ whenever $|f(e)|\le \varepsilon$ for all $e\in E$ and $u\in U$. Given $v\in V$ not contained in the linear span of $E$ you can choose $\delta>0$ such that $\delta v \in U$ and a linear functional $f$ which vanishes on $E$ with $|f(v)|\ge 2/\delta$ which yields a contradiction.
• Dear Jochen, I meant the finest $\mathbb C$-linear topology. I believe it is not locally convex unless dimension of $V$ is countable. Thus it is finer than the finest locally convex topology. I can't answer my question myself because I don't really understand what neighbourhoods of $0$ look like in this topology. It is possible that topology you are referring to would be more useful for me, though. Do you have any reference to some description of its properties? – Blazej Aug 21 '18 at 7:44
• I just realized that confusion might have arisen because of the tag "locally convex spaces". I guess I added it because the dual of $V$ equipped with the weak-star topology is locally convex. – Blazej Aug 21 '18 at 7:49
|
2019-05-23 07:51:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9571526050567627, "perplexity": 104.04706041439968}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257156.50/warc/CC-MAIN-20190523063645-20190523085645-00505.warc.gz"}
|
https://cs.stackexchange.com/questions/128789/speedup-with-multi-head-turing-machine
|
# Speedup with multi-head Turing Machine
What sort of speedup can a Turing machine with more than one head give vs a one-headed machine (I do not mean multiple tapes, I mean multiple heads operating on the same tape making concurrent edits on different parts of the tape)?
i.e. what is the worst-case overhead for a one-head Turing machine to simulate a multi-head Turing machine as the number of heads grows?
This paper says linear time. But the multi-head machines have the additional property of a one-move shift operation (shift a given head to the position of some other given head); is this standard?
Thanks!
• Think about palindrome recognition example: with multiple heads you can do this in linear time, while for one head it requires quadratic time (check references of paper "Palindrome recognition using a multidimensional tape") – user114966 Jul 28 '20 at 22:13
1. k-Band-Simulation von k-Kopf-Turing-Maschinen (1970) - establishes that $$k$$-head and $$k$$-tape can simulate each other without changing computation time.
2. Zwei-Band Simulation von Turingmaschinen (1971) - simulates a machine with $$k$$ heads on an $$m$$-dimensional tape by a machine with 1 normal tape plus 1 stack, establishing a rather precise time bound.
3. Linear-Time Simulation of Multihead Turing Machines (1989) - linearly simulates a machine with $$k$$-heads on a $$d$$-dimensional tape by a machine with $$k$$ separate $$d$$-dimensional tapes plus 1 normal tape.
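To make the palindrome example from the comment above concrete, here is a small Python sketch (an informal illustration of head-movement counts, not a formal Turing-machine simulation): with one head at each end of the tape the comparison finishes in a linear number of moves, while a single head has to shuttle back and forth, giving a quadratic count (and Ω(n²) is in fact forced for one-tape, one-head machines by the crossing-sequence argument).

```python
def palindrome_two_heads(tape):
    """Mimic a 2-head machine: one head at each end moves inward,
    comparing symbols as it goes, so the whole check costs O(n) moves."""
    i, j = 0, len(tape) - 1
    moves = 0
    while i < j:
        if tape[i] != tape[j]:
            return False, moves
        i, j, moves = i + 1, j - 1, moves + 2
    return True, moves


def palindrome_one_head(tape):
    """Mimic a 1-head machine: for every position the single head walks to
    the mirrored cell and back, so the move count grows quadratically."""
    n = len(tape)
    pos, moves = 0, 0
    for k in range(n // 2):
        moves += abs((n - 1 - k) - pos)        # walk to the mirrored cell
        if tape[k] != tape[n - 1 - k]:
            return False, moves
        moves += abs((n - 1 - k) - (k + 1))    # walk back for the next round
        pos = k + 1
    return True, moves


if __name__ == "__main__":
    half = "abcba" * 200
    w = half + half[::-1]                      # a long palindrome
    print(palindrome_two_heads(w))             # linear number of moves
    print(palindrome_one_head(w))              # roughly quadratic number of moves
```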
|
2021-08-03 13:15:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5524569153785706, "perplexity": 3084.896298216775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154459.22/warc/CC-MAIN-20210803124251-20210803154251-00485.warc.gz"}
|
https://proxies-free.com/tag/regular/
|
## finite automata – Using Myhill Nerode to prove a language is not regular
Let $$L$$ be your language and let $$\equiv$$ be the equivalence relation defined by $$x \equiv y$$ iff $$x \in \Sigma^*$$ and $$y \in \Sigma^*$$ have no distinguishing extension (a distinguishing extension is a word $$z \in \Sigma^*$$ such that exactly one of $$xz$$ and $$yz$$ belongs to $$L$$).
Given $$i,j \in \mathbb{N}$$ with $$i < j$$, $$0^i$$ and $$0^j$$ cannot belong to the same equivalence class, since $$1^i$$ is a distinguishing extension for them: $$0^i 1^i \in L$$ but $$0^j 1^i \notin L$$.
This shows that $$L/{\equiv}$$ is not finite, and hence $$L$$ is not regular.
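A quick Python check of the distinguishing-extension argument above, under the usual assumption (not restated in the post) that the language is L = { 0^n 1^n : n ≥ 0 }:

```python
def in_L(w):
    """Membership test for the assumed language L = { 0^n 1^n : n >= 0 }."""
    n = len(w) // 2
    return len(w) % 2 == 0 and w == "0" * n + "1" * n

# For i < j the extension 1^i puts 0^i inside L and 0^j outside it,
# so 0^0, 0^1, 0^2, ... all lie in pairwise distinct equivalence classes.
for i in range(5):
    for j in range(i + 1, 6):
        z = "1" * i
        assert in_L("0" * i + z) and not in_L("0" * j + z)
print("every pair 0^i, 0^j with i < j is separated by the extension 1^i")
```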
## automata – DFA to Regular Expression
I have a DFA
And I don’t know how to turn it into a regular expression. I think the problem lies with every state going back to q0, so I can’t figure out what to do with the symbols.
I would greatly appreciate it if someone could help or at least start me off on the right path.
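Since the DFA itself (the image) is not visible here, the following is only a generic Python sketch of the standard state-elimination (GNFA) construction; the transition table and state names at the bottom are illustrative assumptions, not the asker's machine. Edges back to q0 and self-loops are handled automatically by the R(s,s)* factor.

```python
def dfa_to_regex(states, delta, start, accepting):
    """State elimination (GNFA): repeatedly remove a state s, replacing every
    path p -> s -> q by the regex  R(p,s) R(s,s)* R(s,q).
    Returns a regex string; None means 'no path' and '' means the empty word."""
    START, ACCEPT = "_start_", "_accept_"

    def union(a, b):
        if a is None: return b
        if b is None: return a
        return a if a == b else f"({a}|{b})"

    def concat(a, b):
        if a is None or b is None: return None
        return a + b

    def star(a):
        return "" if a in (None, "") else f"({a})*"

    nodes = list(states) + [START, ACCEPT]
    R = {(p, q): None for p in nodes for q in nodes}
    for (p, sym), q in delta.items():
        R[(p, q)] = union(R[(p, q)], sym)
    R[(START, start)] = ""               # epsilon edge into the old start
    for f in accepting:
        R[(f, ACCEPT)] = ""              # epsilon edges out of the old accepts

    active = [START, ACCEPT] + list(states)
    for s in states:                     # eliminate the original states one by one
        active.remove(s)
        loop = star(R[(s, s)])
        for p in active:
            if R[(p, s)] is None: continue
            for q in active:
                if R[(s, q)] is None: continue
                R[(p, q)] = union(R[(p, q)], concat(concat(R[(p, s)], loop), R[(s, q)]))
    return R[(START, ACCEPT)]

# Toy example (hypothetical DFA over {a, b} accepting strings that end in 'b'):
delta = {("q0", "a"): "q0", ("q0", "b"): "q1",
         ("q1", "a"): "q0", ("q1", "b"): "q1"}
print(dfa_to_regex(["q0", "q1"], delta, "q0", ["q1"]))   # e.g. (a)*b((b|a(a)*b))*
```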
## What's the difference between a Vultr HF server and regular hosting?
Hi,
My question is about Vultr HF Server disk I/O,
According to some company blog, Vultr HF servers use NVME SSD Disks and you can use up to… | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1839258&goto=newpost
## regex – Regular expression to validate a string and an int in PHP
I need to validate this string.
`$scop = "ES-1236";` an access key: the fixed part is `ES-` and the dynamic part is `1236`, where the dynamic part must consist of integers only.
To validate it I am creating a variable with the text to search for
`$lets = "ES-";`
and the expression to validate it
``````if (preg_match("/{$lets}/i", $let)) {
    echo true;
}
``````
However, in this format it accepts the input even when only `ES-` is passed, but I would like it to require an integer after the `ES-`.
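A likely fix (an assumption about the intent, not a snippet from the thread) is to anchor the pattern and require at least one digit after the prefix, e.g. `preg_match('/^ES-\d+$/', $scop)` in PHP. The same pattern checked in Python, purely for illustration:

```python
import re

# Assumed requirement from the question: the fixed prefix "ES-" followed by
# one or more digits, and nothing else.
for candidate in ["ES-1236", "ES-", "ES-12a", "XX-1236"]:
    ok = re.fullmatch(r"ES-\d+", candidate) is not None
    print(candidate, ok)   # only "ES-1236" passes
```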
## plotting – What regular cartesian expression $y=f(x)$ has a graph of the “number 7-like” shape?
It seems to be a simple question but I’m stuck. I was trying to transform many functions that partly resemble `7`, like $$ax^3+bx^2+cx+d$$, a `sawtooth wave`, the `Heaviside step`, $$|x|$$, a lot of `trigonometry function family` expressions, and an `elliptic curve`, but failed to do what I need exactly. Finally I have to get an `upside-down trapezoid-like` graph like this:
with a single regular cartesian expression $$y=f(x)$$. All I’ve come up with is an `upside-down triangle` with poorly drawn discontinuities.
However it was made with an implicit formula `0 == ((UnitStep[-y]*(2 + y*(2/3)))/Abs[x]) - 1` and `ContourPlot` after all, which is not acceptable.
``````ContourPlot[{0 == ((UnitStep[-y]*(2 + y*(2/3)))/Abs[x]) - 1}, {x, -5, 5}, {y, -5, 5}, Exclusions -> None, WorkingPrecision -> 16]
``````
I know there is a way to do something similar with transformation of a circle on the complex plane but it’s too difficult for me and I expect to produce the plot with reals.
## finite automata – prove that if a language is regular then so is the reverse of that language
Below is a problem from Dexter C Kozen’s Automata and Computability followed by my attempt at a solution. Please provide feedback on my proof. Are there any errors/leaps in logic?
Problem Statement
My attempt:
$$T$$ is regular if and only if there exists an NFA accepting $$T$$. Let $$N=(Q,\Sigma,\Delta,S,F)$$ such that $$L(N)=T.$$
I will show that there exists an NFA accepting $$T$$ if and only if there exists an NFA $$N_{R}=(Q,\Sigma,\Delta_R,S_R,F_R)$$ such that $$L(N_R)=rev(T).$$
Define: $$\Delta_R(q,a)=\{p \in Q : q \in \Delta(p,a)\}$$, $$S_R=F$$, $$F_R=S$$
Lemma 1: if $$A\subseteq Q,$$ then $$\hat{\Delta}_R(\hat{\Delta}(A,x),rev(x))=A$$ for all $$x \in \Sigma^{*}$$.
Base Cases:
$$\hat{\Delta}_R(\hat{\Delta}(A,\varepsilon),\varepsilon)=\hat{\Delta}_R(A,\varepsilon)=A$$ by Kozen (6.1). So, equality holds for the empty string.
$$\hat{\Delta}_R(\hat{\Delta}(A,a),rev(a))=\bigcup_{q\in\hat{\Delta}(A,a)} \{p \in Q : q \in \Delta(p,a)\}=$$
$$\{p \in Q : \Delta(p,a) \cap \hat{\Delta}(A,a) \neq \emptyset\}=A$$
by Kozen (6.2), the fact that $$rev(a)=a$$, and the definition of $$\Delta_R$$.
Inductive Step:
Assume $$\hat{\Delta}_R(\hat{\Delta}(A,x),rev(x))=A$$.
$$\hat{\Delta}_R(\hat{\Delta}(A,xa),rev(xa))=\hat{\Delta}_R(\hat{\Delta}(A,xa),a\,rev(x))=$$
by definition of string reversal in problem statement.
$$\hat{\Delta}_R\Big(\bigcup_{q\in\hat{\Delta}(A,x)}\Delta(q,a),\ a\,rev(x)\Big)$$
by Kozen definition (6.2) page 33
$$\bigcup_{q\in\hat{\Delta}(A,x)}\hat{\Delta}_R(\Delta(q,a),\ a\,rev(x))$$
by Kozen Lemma 6.2 page 34
$$\bigcup_{q\in\hat{\Delta}(A,x)}\hat{\Delta}_R(\hat{\Delta}_R(\Delta(q,a),\ a),rev(x))=$$
by Kozen Lemma 6.1
$$\hat{\Delta}_R(\hat{\Delta}_R(\hat{\Delta}(A,xa),\ a),rev(x))=$$
by Kozen Lemma 6.2
$$\hat{\Delta}_R(\hat{\Delta}_R(\hat{\Delta}(\hat{\Delta}(A,x),a),\ rev(a)),rev(x))=$$
by Kozen Lemma 6.1 and by the fact that $$rev(a)=a$$
$$\hat{\Delta}_R(\hat{\Delta}(A,x),rev(x))=$$
by the base case for a single character
$$A$$ by assumption.
Lemma 2: $$\hat{\Delta}(A \cap B,x)= \hat{\Delta}(A,x) \cap \hat{\Delta}(B,x)$$
Base Case:
$$\hat{\Delta}(A \cap B,\varepsilon)=A \cap B= \hat{\Delta}(A,\varepsilon) \cap \hat{\Delta}(B,\varepsilon)$$ by definition (6.1) Kozen
Inductive step:
Assume $$\hat{\Delta}(A \cap B,x)= \hat{\Delta}(A,x) \cap \hat{\Delta}(B,x)$$
$$\hat{\Delta}(A \cap B,xa)=\bigcup_{q\in\hat{\Delta}(A\cap B,x)}\Delta(q,a)=$$
by definition (6.2) Kozen
$$\bigcup_{q\in(\hat{\Delta}(A,x)\cap \hat{\Delta}(B,x))}\Delta(q,a)=$$
by assumption
$$\bigcup_{q\in\hat{\Delta}(A,x)}\Delta(q,a)\cap\bigcup_{q\in\hat{\Delta}(B,x)}\Delta(q,a)=\hat{\Delta}(A,xa)\cap\hat{\Delta}(B,xa)$$
by definition (6.2) Kozen and basic set theory
Now I will use lemma 1 and lemma 2 to show $$x \in L(N)$$ IFF $$rev(x) \in L(N_R)$$
$$x \in L(N)$$ IFF $$\hat{\Delta}(S,x)\cap F \neq \emptyset$$ IFF
$$\hat{\Delta}_R(\hat{\Delta}(S,x)\cap F,rev(x)) \neq \hat{\Delta}_R(\emptyset,rev(x))$$ IFF
$$\hat{\Delta}_R(\hat{\Delta}(S,x),rev(x)) \cap \hat{\Delta}_R(F,rev(x)) \neq \emptyset$$, by lemma 6.2, IFF
$$S \cap \hat{\Delta}_R(F,rev(x)) \neq \emptyset$$, by lemma 6.1, IFF
$$F_R \cap \hat{\Delta}_R(S_R,rev(x)) \neq \emptyset$$, by definition of $$F_R,S_R$$, IFF
$$rev(x) \in L(N_R)$$ $$\blacksquare$$
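As a small sanity check on the construction itself (not part of the requested proof review), here is a Python sketch that builds the reversed NFA by reversing every transition and swapping start and accepting sets, then compares its language with the reversal of the original language on all short words. The example automaton at the bottom is an illustrative assumption.

```python
from itertools import product

def nfa_accepts(states, delta, start, accept, w):
    """delta maps (state, symbol) -> set of states.  Subset simulation."""
    current = set(start)
    for a in w:
        current = {q for p in current for q in delta.get((p, a), set())}
    return bool(current & set(accept))

def reverse_nfa(states, delta, start, accept):
    """Swap start/accept and reverse every transition, as in the proof."""
    delta_r = {}
    for (p, a), qs in delta.items():
        for q in qs:
            delta_r.setdefault((q, a), set()).add(p)
    return states, delta_r, set(accept), set(start)

# Hypothetical example: N accepts words over {a, b} that start with 'a'.
states = {"s", "t"}
delta = {("s", "a"): {"t"}, ("t", "a"): {"t"}, ("t", "b"): {"t"}}
N = (states, delta, {"s"}, {"t"})
NR = reverse_nfa(*N)

for n in range(6):
    for w in map("".join, product("ab", repeat=n)):
        assert nfa_accepts(*N, w) == nfa_accepts(*NR, w[::-1])
print("L(N_R) agrees with rev(L(N)) on all words of length <= 5")
```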
## Python regular expression in JavaScript?
Separating the characters of a string with hyphens works perfectly in Python like this
``````import re
regex = r"B(?=(.{1}))"
test_str = "Pêssego"
subst = "-"
result = re.sub(regex, subst, test_str, 0)
if result:
    print(result)  # P-ê-s-s-e-g-o
``````
In JavaScript
``````const regex = /\B(?=(.{1}))/g;
const str = `Pêssego`;
const subst = `-`;
const result = str.replace(regex, subst);
console.log(result); // Pês-s-e-g-o
``````
Should I add something? Remove something? What is the difference between the two? Is there another way to separate a string with hyphens in real time in an input field?
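The behavioural difference is most likely that JavaScript's `\B` decides word boundaries using ASCII word characters only, so the accented `ê` counts as a non-word character and no hyphen is inserted next to it, whereas Python 3's `re` module is Unicode-aware by default. If the goal is just to put a hyphen between every pair of characters, a boundary assertion is not needed at all; a Python sketch:

```python
word = "Pêssego"

# Regex-free: join the characters with a hyphen (works for any Unicode input).
print("-".join(word))                      # P-ê-s-s-e-g-o

# Or mirror the original lookahead idea without relying on \B:
import re
print(re.sub(r"(?<=.)(?=.)", "-", word))   # P-ê-s-s-e-g-o
```

The same regex-free idea in JavaScript would be `[...str].join("-")`, which sidesteps the `\B` difference entirely.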
## differential geometry – Proof of smoothness of the Exponential map at a point of a complete regular surface in Euclidean space
Suppose S is a regular, connected surface in Euclidean space and $$d_{s}$$ is the intrinsic metric on S. When (S,d) is complete, we know that for each geodesic $$\gamma:J\rightarrow S$$ (where J is any interval) there exists a unique geodesic $$\eta:\mathbb{R}\rightarrow S$$ which is an extension of $$\gamma$$.
If $$p$$ is any point of S, it is shown in do Carmo (page 284) that if a sufficiently small $$w\in T_{p}S$$ is chosen, the geodesic $$\gamma$$ with initial state $$\gamma(0)=p,\ \gamma'(0)=w$$ is well defined at $$t=1$$. The symbol exp$$_{p}(w)$$ is made to denote the point $$\gamma(1)\in S$$. It is then shown (page 285) that exp$$_{p}$$ can be made into a smooth map on a sufficiently small neighbourhood of $$0\in T_{p}S$$. This truth depends on the theorem of solutions of systems of ODE’s which says that the solution depends smoothly on initial conditions.
By what I wrote above, when S is complete, $$p$$ is any point and $$w$$ any tangent vector to S at $$p$$, the symbol exp$$_{p}(w)$$ is well defined, but how can we show smoothness of exp$$_{p}$$ on all of $$T_{p}S$$? Am I missing something elementary here? We know it is smooth around $$0$$, but the proof depended on covering $$p$$ with a chart and applying the theorem from analysis. It seems we cannot argue this way in the general case.
In do Carmo I did not see this technical question discussed, and neither in Tapp.
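One standard way to fill this gap (a sketch of a possible argument, not a quotation from do Carmo or Tapp) is to combine smooth dependence on initial conditions with the homogeneity of geodesics, which reduces smoothness of exp_p at an arbitrary tangent vector to a finite composition of locally defined smooth flow maps:

```latex
% Write \gamma_v for the geodesic with \gamma_v(0) = p, \gamma_v'(0) = v.
% Homogeneity of geodesics:
\[
  \gamma_{sv}(t) \;=\; \gamma_{v}(st) \qquad \text{whenever either side is defined.}
\]
% Hence, for any w \in T_pS and any integer N \ge 1,
\[
  \exp_p(w) \;=\; \gamma_{w}(1) \;=\; \gamma_{w/N}(N)
            \;=\; \pi\bigl(\Phi_1 \circ \cdots \circ \Phi_1\,(p,\,w/N)\bigr)
            \qquad (N \text{ factors}),
\]
% where \Phi_t is the geodesic flow on the tangent bundle and \pi : TS \to S
% the projection.  Completeness makes the flow defined for all time; and after
% subdividing the time interval further if necessary, each short piece of the
% geodesic stays inside a single chart, where the ODE theorem on smooth
% dependence on initial conditions makes the corresponding flow map smooth.
% A finite composition of smooth maps is smooth, so \exp_p is smooth at w.
```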
|
2021-03-04 17:54:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 138, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8698947429656982, "perplexity": 1174.7760250572894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369512.68/warc/CC-MAIN-20210304174506-20210304204506-00167.warc.gz"}
|
https://www.clutchprep.com/physics/practice-problems/141911/a-box-of-weight-w-2-0n-accelerates-down-a-rough-plane-that-is-inclined-at-an-ang-1
|
# Problem: A box of weight w = 2.0 N accelerates down a rough plane that is inclined at an angle Φ = 30° above the horizontal, as shown (Figure 6). The normal force acting on the box has a magnitude n = 1.7 N, the coefficient of kinetic friction between the box and the plane is μk = 0.30, and the displacement d of the box is 1.8 m down the inclined plane. Part a. What is the work Wn done on the box by the normal force? Express your answers in joules to two significant figures. Part b. What is the work Wfk done on the box by the force of kinetic friction? Express your answers in joules to two significant figures.
###### FREE Expert Solution
Work done by a force on an incline:
$W = Fd\cos\theta$
Work done by the force of kinetic friction:
$W = -\mu_k n d$
Part a
The normal force is perpendicular to the displacement of the box.
Therefore, θ = 90°
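Completing the arithmetic from the two formulas above (my own calculation, not part of the quoted solution):

```latex
% Part (a): the normal force is perpendicular to the displacement (\theta = 90^\circ), so
\[
  W_n \;=\; n\,d\cos 90^\circ \;=\; (1.7\ \mathrm{N})(1.8\ \mathrm{m})(0) \;=\; 0\ \mathrm{J}.
\]
% Part (b): kinetic friction acts opposite to the motion along the incline, so
\[
  W_{f_k} \;=\; -\mu_k\,n\,d \;=\; -(0.30)(1.7\ \mathrm{N})(1.8\ \mathrm{m}) \;\approx\; -0.92\ \mathrm{J}.
\]
```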
###### Problem Details
A box of weight w = 2.0 N accelerates down a rough plane that is inclined at an angle Φ = 30° above the horizontal, as shown (Figure 6). The normal force acting on the box has a magnitude n = 1.7 N, the coefficient of kinetic friction between the box and the plane is μk = 0.30, and the displacement d of the box is 1.8 m down the inclined plane.
Part a. What is the work Wn done on the box by the normal force?
Part b. What is the work Wfk done on the box by the force of kinetic friction?
|
2021-09-21 23:14:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5395635366439819, "perplexity": 951.9749878577719}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057274.97/warc/CC-MAIN-20210921221605-20210922011605-00150.warc.gz"}
|
http://math.stackexchange.com/questions/596615/if-1-boy-knows-r-girls-and-1-girl-knows-r-boys-then-number-of-boys-girls/596624
|
# If 1 boy knows r girls and 1 girl knows r boys, then number of boys = girls
Yet another question from BdMO 2013 Nationals:
In a class, every boy knows $r$ girls and every girl knows $r$ boys. Show that there are an equal number of boys and girls [Assume that friendship is symmetric, i.e. if A knows B then B knows A].
This question was set in the Secondary section of the Nationals. In the Junior section, a similar but easier question appeared.
In a class, every boy knows $3$ girls and every girl knows $3$ boys. If there are 13 boys, find the number of girls in the class.
I solved this question by noting that (i) if there are 3 boys in the class, then the number of girls also has to be 3, and (ii) if there are 4 boys, the required number of girls is also 4. This gives us the required tool to solve the problem. We just have to note that
$13$ boys $=(3+3+3)+4$ boys
For every 3 boys, we need 3 girls [from (i)]. Hence, (3+3+3) boys implies that there are $3+3+3=9$ girls. From (ii), we know that if there are 4 boys, we need 4 girls in the class.
Summing up, $(3+3+3)+4$ boys need $(3+3+3)+4$ girls, that is, 13 girls.
However, I can't see how I am going to generalize this problem. Never mind, here's my work.
MY TRY: We once again utilize the following facts:
(i) If there are 3 boys in the class, then the number of girls also has to be 3. (ii) If there are 4 boys, the required number of girls is also 4.
So if we can rewrite $r$ in the form $3k+4z$, we are done.
Lemma: Every integer $r\ge 3$ except 5 can be written in the form $3k+4z$ [k and z are positive integers, not both of them $0$]
PROOF: A number greater than or equal to 3 is of the form $3p$, $3p+1$, or $3p+2$ for some positive integer p. We treat the cases individually [we must note that 5 cannot be expressed as a sum of multiples of 3 and 4. For this, we consider $p\ge 2$. So, we consider the cases r=3,4,5 separately and see that the number of boys and girls are indeed equal].
$3p$: If r=3p for some positive integer p, then plugging k=p and z=0 gives us
3p=3(p)+4(0)
and we are done.
$3p+1$: Plugging k=p-1 and z=1, we get
3(p-1)+4=3p+1
$3p+2$: Plugging k=p-2 and z=2, we get
3(p-2)+4(2)=3p+2
and the proof is complete.
Since r can always be written in the form $3k+4z$,
(1) The number of girls needed for 3k boys is 3k [From (i)].
(2) The number of girls needed for 4z boys is 4z [From (ii)].
Therefore the total number of girls = 3k+4z = r, and we are done. Am I correct?
-
$r=5$ can't be written as $3k+4z$ for positive integers $k,z$. Your proof needs to consider closure of the natural numbers, which subtraction does not satisfy. – Tim Ratigan Dec 7 '13 at 9:04
@TimRatigan,thanks.Is there a way to patch up the hole in my proof? – rah4927 Dec 7 '13 at 9:27
Make $p\geq 2$, then $p-2$ (the least number you use) is $\ge 0$ – Tim Ratigan Dec 7 '13 at 9:29
Well, I'm not a mathematician (wandered in here from Stackoverflow's "hot network questions") but it seems to me that if all the boys and girls are too shy to talk to one another (ie r is 0) then their numbers could vary independently, invalidating the question's conclusion :D – Jan Van Herck Dec 7 '13 at 13:39
@rahul, haha, point conceded, but that is just semantics. Obviously, the question allows for the possibility that a boy and a girl can go to the same class without knowing each other, so the point remains that if this is true for all of them, their number can vary independently. I just tried to word it in a funny way explaining it as shyness :) – Jan Van Herck Dec 7 '13 at 13:50
## 2 Answers
Let $b$ be the number of boys, and $g$ the number of girls.
For any boy $B$ and girl $G$ who know each other, write on a slip of paper "$B$ and $G$ know each other." We count the number of slips of paper in two different ways.
Since every boy knows $r$ girls, there are $br$ slips of paper.
Since every girl knows $r$ boys, there are $gr$ slips of paper.
Thus $br=gr$ and therefore $b=g$ (if $r\ne 0$).
Remark: For a more romantic version, let us suppose that each boy-girl pair who know each other kiss (once). Instead of counting slips of paper, count kisses.
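The same count, written as one identity about the boy-girl acquaintance relation (just a symbolic restatement of the slips-of-paper argument above):

```latex
\[
  b\,r \;=\; \sum_{\text{boys } B}\#\{\text{girls known by } B\}
       \;=\; \#\{\text{acquainted boy-girl pairs}\}
       \;=\; \sum_{\text{girls } G}\#\{\text{boys known by } G\}
       \;=\; g\,r ,
\]
\[
  \text{and } r \neq 0 \implies b = g .
\]
```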
-
How do I know that br=gr?Why must the number of slips of paper be equal? – rah4927 Dec 7 '13 at 9:29
Because friendship is symmetric. – Henry Swanson Dec 7 '13 at 9:38
@HenrySwanson,Ah,thanks. – rah4927 Dec 7 '13 at 9:42
I feel that it is important to note that the result does not hold if r=0,as pointed out by Jan Van Herck in the comments. – rah4927 Dec 7 '13 at 13:58
Yes, it is important to note that we want $r\ne 0$, thank you, I have added it. As to your earlier question, the answer computes the number of kisses in two different ways. Same kisses. – André Nicolas Dec 7 '13 at 16:58
Note that the relationships across gender form a bipartite graph $G$ with subgraphs $A$ (vertices are girls) and $B$ (vertices are boys). The graph is regular, that is all vertices have the same degree $r:=\deg{v}$. However, we also note that for every edge incident on a vertex on $B$, the other end of the edge is incident on $A$, since $G$ is bipartite. It follows that $\deg B=\deg A\Longrightarrow r|B|=r|A|\Longrightarrow |B|=|A|$
-
Can you please tell me the prerequisites to understanding this proof? – rah4927 Dec 8 '13 at 11:46
Just some basic graph theory. I understand it might look intimidating because of the terminology, but really it's a formal way of saying what Andre Nicolas said. This and this may be useful. – Tim Ratigan Dec 8 '13 at 20:50
|
2014-07-28 16:44:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7044784426689148, "perplexity": 586.9468735968434}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510261249.37/warc/CC-MAIN-20140728011741-00477-ip-10-146-231-18.ec2.internal.warc.gz"}
|
https://techwhiff.com/learn/identify-x-chemical-symbol-and-mass-number-in/192266
|
# Identify X (chemical symbol and mass number) in each of the following reactions: (a) X +44...
###### Question:
Identify X (chemical symbol and mass number) in each of the following reactions: (a) X +44 N → H+;? O (b) ]Li + H He + X
|
2022-11-28 21:00:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26620686054229736, "perplexity": 1470.8321711739918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710662.60/warc/CC-MAIN-20221128203656-20221128233656-00726.warc.gz"}
|
http://www.absoluteastronomy.com/topics/Computational_complexity_theory
|
# Computational complexity theory
Computational complexity theory is a branch of the theory of computation in theoretical computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. In this context, a computational problem is understood to be a task that is in principle amenable to being solved by a computer (which basically means that the problem can be stated by a set of mathematical instructions). Informally, a computational problem consists of problem instances and solutions to these problem instances. For example, primality testing is the problem of determining whether a given number is prime or not. The instances of this problem are natural numbers, and the solution to an instance is yes or no based on whether the number is prime or not.
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.
Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, it tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kind of problems can, in principle, be solved algorithmically.
## Computational problems
### Problem instances
A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing. The instance is a number (e.g. 15) and the solution is "yes" if the number is prime and "no" otherwise (in this case "no"). Alternatively, the instance is a particular input to the problem, and the solution is the output corresponding to the given input.
To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the traveling salesman problem: Is there a route of at most 2000 kilometres in length passing through all of Germany's 15 largest cities? The answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
### Representing problem instances
When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary.
Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently.
### Decision problems as formal languages
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no, or alternately either 1 or 0. A decision problem can be viewed as a formal language, where the members of the language are instances whose answer is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input.
An example of a decision problem is the following. The input is an arbitrary graph. The problem consists in deciding whether the given graph is connected, or not. The formal language associated with this decision problem is then the set of all connected graphs—of course, to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings.
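As a concrete illustration of this decision problem (an informal sketch, not part of the original article, and using a convenient Python-level encoding rather than a binary string): deciding membership in the language of connected graphs amounts to running a search from one vertex and checking that every vertex is reached.

```python
from collections import deque

def is_connected(n, edges):
    """Decision problem 'graph connectivity': accept iff the undirected graph
    on vertices 0..n-1 with the given edge list is connected."""
    if n == 0:
        return True
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = {0}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n   # "yes" iff the encoded graph is in the language

print(is_connected(4, [(0, 1), (1, 2), (2, 3)]))   # True  -> accept
print(is_connected(4, [(0, 1), (2, 3)]))           # False -> reject
```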
### Function problems
A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem, that is, it isn't just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem.
It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
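For instance, the recast decision problem from the paragraph above is just a membership test on triples (a minimal sketch):

```python
def in_multiplication_language(a, b, c):
    """Decide whether the triple (a, b, c) belongs to {(a, b, c) : a * b = c}."""
    return a * b == c

print(in_multiplication_language(6, 7, 42))   # True  -- (6, 7, 42) is in the language
print(in_multiplication_language(6, 7, 43))   # False
```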
### Measuring the size of an instance
To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as function of the size of the instance. This is usually taken to be the size of the input in bits. Complexity theory is interested in how algorithms scale with an increase in the input size. For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with 2n vertices compared to the time taken for a graph with n vertices?
If the input size is n, the time taken can be expressed as a function of n. Since the time taken on different inputs of the same size can be different, the worst-case time complexity T(n) is defined to be the maximum time taken over all inputs of size n. If T(n) is a polynomial in n, then the algorithm is said to be a polynomial time algorithm. Cobham's thesis says that a problem can be solved with a feasible amount of resources if it admits a polynomial time algorithm.
### Turing Machine
A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a thought experiment representing a computing machine. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of the Church–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a RAM machine, Conway's Game of Life, cellular automata or any programming language can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory.
Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others.
A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are called randomized algorithms. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model, it is just a theoretically interesting abstract machine that gives rise to particularly interesting complexity classes. For examples, see nondeterministic algorithm.
### Other machine models
Many machine models different from the standard multi-tape Turing machines have been proposed in the literature, for example random access machines. Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. The time and memory consumption of these alternate models may vary. What all these models have in common is that the machines operate deterministically.
However, some computational problems are easier to analyze in terms of more unusual resources. For example, a nondeterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The nondeterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that nondeterministic time is a very important resource in analyzing computational problems.
### Complexity measures
For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as the deterministic Turing machine is used. The time required by a deterministic Turing machine M on input x is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing machine M is said to operate within time f(n), if the time required by M on each input of length n is at most f(n). A decision problem A can be solved in time f(n) if there exists a Turing machine operating in time f(n) that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within time f(n) on a deterministic Turing machine is then denoted by DTIME(f(n)).
Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, any complexity measure can be viewed as a computational resource. Complexity measures are very generally defined by the Blum complexity axioms. Other complexity measures used in complexity theory include communication complexity, circuit complexity, and decision tree complexity.
### Best, worst and average case complexity
The best, worst and average case complexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size n may be faster to solve than others, we define the following complexities:
• Best-case complexity: This is the complexity of solving the problem for the best input of size n.
• Worst-case complexity: This is the complexity of solving the problem for the worst input of size n.
• Average-case complexity: This is the complexity of solving the problem on average. This complexity is only defined with respect to a probability distribution over the inputs. For instance, if all inputs of the same size are assumed to be equally likely to appear, the average case complexity can be defined with respect to the uniform distribution over all inputs of size n.
For example, consider the deterministic sorting algorithm quicksort. This solves the problem of sorting a list of integers that is given as the input. The worst case is when the input is sorted or sorted in reverse order, and the algorithm takes time O(n^2) for this case. If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is O(n log n). The best case occurs when each pivoting divides the list in half, also needing O(n log n) time.
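To make the distinction concrete, here is a small, self-contained C# sketch (an illustration added for this discussion, not a standard reference implementation) of a quicksort that always picks the last element as the pivot. The array size, the pivot rule and the comparison counter are arbitrary choices for the demonstration; an already-sorted input triggers the quadratic worst case, while a shuffled input shows the typical n log n behaviour.

```csharp
using System;
using System.Linq;

public static class QuicksortCases {
    static long comparisons;

    // Quicksort that always uses the last element as the pivot.
    // A sorted input forces maximally unbalanced partitions (the O(n^2) worst case);
    // a shuffled input usually gives roughly balanced partitions (the O(n log n) average case).
    static void Quicksort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++) {
            comparisons++;
            if (a[j] < pivot) { (a[i], a[j]) = (a[j], a[i]); i++; }
        }
        (a[i], a[hi]) = (a[hi], a[i]);
        Quicksort(a, lo, i - 1);
        Quicksort(a, i + 1, hi);
    }

    public static void Main() {
        const int n = 2000;
        var rng = new Random(1);

        int[] sorted = Enumerable.Range(0, n).ToArray();
        int[] shuffled = Enumerable.Range(0, n).OrderBy(_ => rng.Next()).ToArray();

        comparisons = 0; Quicksort(sorted, 0, n - 1);
        Console.WriteLine($"sorted input:   {comparisons} comparisons (for comparison, n^2/2 = {(long)n * n / 2})");

        comparisons = 0; Quicksort(shuffled, 0, n - 1);
        Console.WriteLine($"shuffled input: {comparisons} comparisons (for comparison, n*log2(n) = {(long)(n * Math.Log(n, 2))})");
    }
}
```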
### Upper and lower bounds on the complexity of problems
To classify the computation time (or similar resources, such as space consumption), one is interested in proving upper and lower bounds on the minimum amount of time required by the most efficient algorithm solving a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity, unless specified otherwise. Analyzing a particular algorithm falls under the field of analysis of algorithms. To show an upper bound T(n) on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at most T(n). However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound of T(n) for a problem requires showing that no algorithm can have time complexity lower than T(n).
Upper and lower bounds are usually stated using big O notation, which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, if T(n) = 7n^2 + 15n + 40, in big O notation one would write T(n) = O(n^2).
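To see concretely why the smaller terms and constants can be dropped, note that for every n ≥ 1,

$$7n^2 + 15n + 40 \le 7n^2 + 15n^2 + 40n^2 = 62n^2,$$

so T(n) ≤ 62n^2 for all n ≥ 1, which is exactly what T(n) = O(n^2) asserts (here with hidden constant 62 and threshold n0 = 1).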
### Defining complexity classes
A complexity class is a set of problems of related complexity. Simpler complexity classes are defined by the following factors:
• The type of computational problem: The most commonly used problems are decision problems. However, complexity classes can be defined based on function problems, counting problems, optimization problems, promise problems, etc.
• The model of computation: The most common model of computation is the deterministic Turing machine, but many complexity classes are based on nondeterministic Turing machines, Boolean circuits, quantum Turing machines, monotone circuits, etc.
• The resource (or resources) that are being bounded and the bounds: These two properties are usually stated together, such as "polynomial time", "logarithmic space", "constant depth", etc.
Of course, some complexity classes have complex definitions that do not fit into this framework. Thus, a typical complexity class has a definition like the following:
The set of decision problems solvable by a deterministic Turing machine within time f(n). (This complexity class is known as DTIME(f(n)).)
But bounding the computation time above by some concrete function f(n) often yields complexity classes that depend on the chosen machine model. For instance, the language {xx | x is any binary string} can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time, the Cobham–Edmonds thesis states that "the time complexities in any two reasonable and general models of computation are polynomially related". This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP.
### Important complexity classes
Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following:
| Complexity class | Model of computation | Resource constraint |
| --- | --- | --- |
| DTIME(f(n)) | Deterministic Turing machine | Time f(n) |
| P | Deterministic Turing machine | Time poly(n) |
| EXPTIME | Deterministic Turing machine | Time 2^poly(n) |
| NTIME(f(n)) | Non-deterministic Turing machine | Time f(n) |
| NP | Non-deterministic Turing machine | Time poly(n) |
| NEXPTIME | Non-deterministic Turing machine | Time 2^poly(n) |
| DSPACE(f(n)) | Deterministic Turing machine | Space f(n) |
| L | Deterministic Turing machine | Space O(log n) |
| PSPACE | Deterministic Turing machine | Space poly(n) |
| EXPSPACE | Deterministic Turing machine | Space 2^poly(n) |
| NSPACE(f(n)) | Non-deterministic Turing machine | Space f(n) |
| NL | Non-deterministic Turing machine | Space O(log n) |
| NPSPACE | Non-deterministic Turing machine | Space poly(n) |
| NEXPSPACE | Non-deterministic Turing machine | Space 2^poly(n) |
It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch's theorem.
Other important complexity classes include BPP, ZPP and RP, which are defined using probabilistic Turing machines; AC and NC, which are defined using Boolean circuits; and BQP and QMA, which are defined using quantum Turing machines. #P is an important complexity class of counting problems (not decision problems). Classes like IP and AM are defined using interactive proof systems. ALL is the class of all decision problems.
### Hierarchy theorems
For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME(n) is contained in DTIME(n2), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much more additional time or space is needed in order to increase the number of problems that can be solved.
More precisely, the time hierarchy theorem states that DTIME(o(f(n))) is strictly contained in DTIME(f(n) · log(f(n))).
The space hierarchy theorem states that DSPACE(o(f(n))) is strictly contained in DSPACE(f(n)).
The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE.
### Reduction
Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at least as difficult as another problem. For instance, if a problem X can be solved using an algorithm for Y, X is no more difficult than Y, and we say that X reduces to Y. There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such as polynomial-time reductions or log-space reductions.
The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication.
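As a toy illustration of this particular reduction, the following C# sketch treats multiplication as a black-box routine and implements squaring purely in terms of it; the method names Multiply and Square are simply labels chosen for the example.

```csharp
using System;

public static class SquaringViaMultiplication {
    // Stand-in for an arbitrary algorithm that multiplies two integers.
    static long Multiply(long x, long y) => x * y;

    // Squaring reduces to multiplication: call the multiplier with the same input twice.
    static long Square(long x) => Multiply(x, x);

    public static void Main() => Console.WriteLine(Square(12)); // prints 144
}
```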
This motivates the concept of a problem being hard for a complexity class. A problem X is hard for a class of problems C if every problem in C can be reduced to X. Thus no problem in C is harder than X, since an algorithm for X allows us to solve any problem in C. Of course, the notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems.
If a problem X is in C and hard for C, then X is said to be complete for C. This means that X is the hardest problem in C. (Since many problems could be equally hard, one might say that X is one of the hardest problems in C.) Thus the class of NP-complete problems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem, Π2, to another problem, Π1, would indicate that there is no known polynomial-time solution for Π1. This is because a polynomial-time solution to Π1 would yield a polynomial-time solution to Π2. Similarly, because every problem in NP can be reduced to any NP-complete problem, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP.
## Important open problems
### P versus NP problem
The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called the Cobham–Edmonds thesis. The complexity class NP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. Since deterministic Turing machines are special nondeterministic Turing machines, it is easily observed that each problem in P is also a member of the class NP.
The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution. If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types of integer programming problems in operations research, many problems in logistics, protein structure prediction in biology, and the ability to find formal proofs of pure mathematics theorems. The P versus NP problem is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem.
### Problems in NP not known to be in P or NP-complete
It was shown by Ladner that if P ≠ NP then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai and Eugene Luks, has run time 2^O(√(n log n)) for graphs with n vertices.
The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP will equal co-NP). The best known algorithm for integer factorization is the general number field sieve, which takes time O(e^((64/9)^(1/3) (n log 2)^(1/3) (log(n log 2))^(2/3))) to factor an n-bit integer. However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time. Unfortunately, this fact doesn't say much about where the problem lies with respect to non-quantum complexity classes.
### Separations between other complexity classes
Many known complexity classes are suspected to be unequal, but this has not been proved. For instance P ⊆ NP ⊆ PP ⊆ PSPACE, but it is possible that P = PSPACE. If P is not equal to NP, then P is not equal to PSPACE either. Since there are many known complexity classes between P and PSPACE, such as RP, BPP, PP, BQP, MA, PH, etc., it is possible that all these complexity classes collapse to one class. Proving that any of these classes are unequal would be a major breakthrough in complexity theory.
Along the same lines, co-NP is the class containing the complement problems (i.e. problems with the yes/no answers reversed) of NP problems. It is believed that NP is not equal to co-NP; however, it has not yet been proven. It has been shown that if these two complexity classes are not equal then P is not equal to NP.
Similarly, it is not known if L (the set of all problems that can be solved in logarithmic space) is strictly contained in P or equal to P. Again, there are many complexity classes between the two, such as NL and NC, and it is not known if they are distinct or equal classes.
It is suspected that P and BPP are equal. However, it is currently open if BPP = NEXP.
## Intractability
Problems that can be solved in theory (e.g., given infinite time), but which in practice take too long for their solutions to be useful, are known as intractable problems. In complexity theory, problems that lack polynomial-time solutions are considered to be intractable for more than the smallest inputs. In fact, the Cobham–Edmonds thesis states that only those problems that can be solved in polynomial time can be feasibly computed on some computational device. Problems that are known to be intractable in this sense include those that are EXPTIME-hard. If NP is not the same as P, then the NP-complete problems are also intractable in this sense. To see why exponential-time algorithms might be unusable in practice, consider a program that makes 2^n operations before halting. For small n, say 100, and assuming for the sake of example that the computer does 10^12 operations each second, the program would run for about 4 × 10^10 years, which is longer than the age of the universe. Even with a much faster computer, the program would only be useful for very small instances, and in that sense the intractability of a problem is somewhat independent of technological progress. Nevertheless, a polynomial-time algorithm is not always practical. If its running time is, say, n^15, it is unreasonable to consider it efficient and it is still useless except on small instances.
What intractability means in practice is open to debate. Saying that a problem is not in P does not imply that all large cases of the problem are hard or even that most of them are. For example, the decision problem in Presburger arithmetic has been shown not to be in P, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time, and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem.
## Continuous complexity theory
Continuous complexity theory can refer to complexity theory of problems that involve continuous functions that are approximated by discretizations, as studied in numerical analysis. One approach to complexity theory of numerical analysis is information-based complexity.
Continuous complexity theory can also refer to complexity theory of the use of analog computation, which uses continuous dynamical systems and differential equations. Control theory can be considered a form of computation, and differential equations are used in the modelling of continuous-time and hybrid discrete-continuous-time systems.
## History
Before the actual research explicitly devoted to the complexity of algorithmic problems started off, numerous foundations were laid out by various researchers. Most influential among these was the definition of Turing machines by Alan Turing in 1936, which turned out to be a very robust and flexible notion of a computer.
The beginning of systematic studies in computational complexity is usually dated to the seminal paper "On the Computational Complexity of Algorithms" by Juris Hartmanis and Richard Stearns (1965), which laid out the definitions of time and space complexity and proved the hierarchy theorems.
Earlier papers studying problems solvable by Turing machines with specific bounded resources include John Myhill's definition of linear bounded automata (Myhill 1960), Raymond Smullyan's study of rudimentary sets (1961), and Hisao Yamada's paper on real-time computations (1962). Somewhat earlier, Boris Trakhtenbrot (1956), a pioneer in the field from the USSR, studied another specific complexity measure.
In 1967, Manuel Blum developed an axiomatic complexity theory based on his axioms and proved an important result, the so-called speed-up theorem. The field really began to flourish when the US researcher Stephen Cook and, working independently, Leonid Levin in the USSR, proved that there exist practically relevant problems that are NP-complete. In 1972, Richard Karp took this idea a leap forward with his landmark paper, "Reducibility Among Combinatorial Problems", in which he showed that 21 diverse combinatorial and graph theoretical problems, each infamous for its computational intractability, are NP-complete.
## See also
• List of computability and complexity topics
• List of important publications in theoretical computer science
• Unsolved problems in computer science
• Category: Computational problems
• List of complexity classes
• Structural complexity theory
• Descriptive complexity theory
• Quantum complexity theory
• Context of computational complexity
• Parameterized complexity
• Game complexity
• Proof complexity
• Transcomputational problem
https://www.byteflying.com/archives/3832
# C# LeetCode Problem #237: Delete Node in a Linked List
Write a function to delete a node (except the tail) in a singly linked list, given only access to that node.
4 -> 5 -> 1 -> 9
Input: head = [4,5,1,9], node = 5
Output: [4,1,9]
Explanation: You are given the second node with value 5, the linked list should become 4 -> 1 -> 9 after calling your function.
Input: head = [4,5,1,9], node = 1
Output: [4,5,9]
Explanation: You are given the third node with value 1, the linked list should become 4 -> 5 -> 9 after calling your function.
Note:
The linked list will have at least two elements.
All of the nodes’ values will be unique.
The given node will not be the tail and it will always be a valid node of the linked list.
Do not return anything from your function.
```csharp
using System;

public class Program {

    public static void Main(string[] args) {
        var head = new ListNode(1) {
            next = new ListNode(2) {
                next = new ListNode(3) {
                    next = new ListNode(4) {
                        next = new ListNode(5)
                    }
                }
            }
        };

        // Walk to the node holding the value 3, delete it in place,
        // then print the remaining list (1 2 4 5).
        var node = head;
        while(node.val != 3) node = node.next;
        DeleteNode(node);

        ShowArray(head);
    }

    private static void ShowArray(ListNode list) {
        var node = list;
        while(node != null) {
            Console.Write($"{node.val} ");
            node = node.next;
        }
        Console.WriteLine();
    }

    private static void DeleteNode(ListNode node) {
        // Copy the value of the next node into the given node...
        node.val = node.next.val;
        // ...then point the given node past its next node, unlinking it.
        node.next = node.next.next;
    }

    public class ListNode {
        public int val;
        public ListNode next;
        public ListNode(int x) { val = x; }
    }
}
```
`1 2 4 5`
https://gphl.gitlab.io/grade2_docs/planes.html
# Treatment of Planar Groups
## Grade2: Mogul + custom ring analysis
In terms of obtaining good results when fitting/refining ligands in moderate-to-low resolution structures, probably the most critical restraint term is that for planes. Imposing plane restraints when there should be none will often prevent realistic fitting. Conversely, correctly identifying missing planes can reveal misfit ligands; for a good example of this see Smart, O. S. and G. Bricogne (2015) and PDB entry 1PMQ/4Z9L.
For each ring, Grade2 analyses the CSD hits from Mogul to assess whether the ring is flat or puckered. This custom ring analysis is an advance over the original Grade, where ring restraints were based on quantum chemical results and heuristics. An advantage of the custom analysis is that, for a flat ring, it allows the plane $${\sigma}$$ (_chem_comp_plane_atom.dist_esd) to be set based on the flatness distribution of the CSD hits.
For example, for the PDB component DZ3 (https://www.rcsb.org/ligand/DZ3), Grade2 produces planes with $${\sigma}$$ values obtained from Mogul + custom CSD analysis:
Note that the two phenyl rings have their plane $${\sigma}$$ set to 0.007 Angstroms. This is tighter than the 0.020 Angstroms used for all planes in Grade.
Also notice the weak planes across the bonds (with a $${\sigma}$$ of 0.085 Angstrom) joining the phenyl rings to the amide (marked in green). These are set because the CSD distributions for the torsion angles show a preference for planarity but are broad. The restraints act to weakly encourage planarity but can easily be overcome if the electron density fit warrants it.
## The --big_planes Option
The Grade2 --big_planes option produces large fused planes that overemphasize ring planarity. Using --big_planes is generally a poor idea for this reason, but the option is provided because some users like planes to be kept very flat.
Historically protein crystallographers have tended to favour large single planar groups in refinement. Indeed, for example BUSTER currently uses a single plane restraint for the indole ring in tryptophan TRP.
Although aromatic rings are normally planar, under certain conditions they can be induced to adopt bent structures (see, for example, [2.2]paracyclophane in CSD structure DXYLEN13, and the FMN isoalloxazine ring in the high-resolution PDB entry 2wqf). The default plane restraints produced by Grade2 are designed to allow rings to bend in refinement.
If the option --big_planes is specified, Grade2 will merge together all planes with three atoms in common if they are "strong" planes. "Strong" planes have a sigma (_chem_comp_plane_atom.dist_esd) of 0.02 Angstroms or less. The $${\sigma}$$ of each fused plane is set to the lowest $${\sigma}$$ of any contributing plane. "Weak" planes (such as those in amide bonds) are not incorporated into --big_planes; the sketch below illustrates the merging rule.
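The following C# fragment is only an illustration of the merging rule as stated above (strong planes are fused when they share at least three atoms, the fused plane keeps the lowest contributing $${\sigma}$$, and weak planes are left alone). The atom names, $${\sigma}$$ values and data layout are invented for the example; this is not Grade2 source code.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustration only: atom names and data layout are invented for this example.
public class Plane {
    public HashSet<string> Atoms = new HashSet<string>();
    public double Sigma;                      // corresponds to _chem_comp_plane_atom.dist_esd
    public bool IsStrong => Sigma <= 0.02;    // the 0.02 Angstrom "strong" cutoff described above
}

public static class BigPlanesSketch {
    // Repeatedly fuse strong planes that share at least three atoms;
    // the fused plane keeps the lowest contributing sigma. Weak planes are left alone.
    public static List<Plane> Merge(List<Plane> planes) {
        var merged = planes.Select(p => new Plane {
            Atoms = new HashSet<string>(p.Atoms), Sigma = p.Sigma }).ToList();
        bool changed = true;
        while (changed) {
            changed = false;
            for (int i = 0; i < merged.Count && !changed; i++)
                for (int j = i + 1; j < merged.Count && !changed; j++) {
                    Plane a = merged[i], b = merged[j];
                    if (a.IsStrong && b.IsStrong && a.Atoms.Intersect(b.Atoms).Count() >= 3) {
                        a.Atoms.UnionWith(b.Atoms);
                        a.Sigma = Math.Min(a.Sigma, b.Sigma);
                        merged.RemoveAt(j);
                        changed = true;
                    }
                }
        }
        return merged;
    }

    public static void Main() {
        var planes = new List<Plane> {
            new Plane { Atoms = new HashSet<string> { "C1","C2","C3","C4","C5","C6" }, Sigma = 0.007 },
            new Plane { Atoms = new HashSet<string> { "C1","C2","C6","H2","H6","O1" }, Sigma = 0.012 },
            new Plane { Atoms = new HashSet<string> { "N1","C7","O2" },                Sigma = 0.085 }
        };
        foreach (var p in Merge(planes))
            Console.WriteLine($"plane ({p.Atoms.Count} atoms) sigma = {p.Sigma}");
    }
}
```

Running the sketch fuses the two strong planes that share C1, C2 and C6 into a single nine-atom plane with $${\sigma}$$ = 0.007, while the weak 0.085 Angstrom plane is left untouched.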
For example, taking the PDB component DZ3 (https://www.rcsb.org/ligand/DZ3), by default Grade2 produces planes:
But if the option --big_planes is specified, the phenyl ring and atom planes are merged:
Notice that merged big planes are created for each of the phenyl rings and the atoms around the rings. Each big plane also includes the hydroxyl hydrogen atom that should be coplanar. Each big plane $${\sigma}$$ is set to 0.007 Angstrom, resulting in a tightening in the overall planarity compared to default Grade2 restraints.
It is also noteworthy that the weak planes that more loosely encourage planarity for the amide bond and its neighbours are now not incorporated when --big_planes is specified. In the original Grade2 release 1.0.0, the --big_planes option had a bug where weak planes were incorporated in the plane merging process, resulting in unrealistic conformational restriction. The bug has been fixed from Grade2 release 1.1.5 (bug fix #342).
For fused aromatic rings the effect of the --big_planes option is normally to create a single large stiff plane. For example, for Flavin mononucleotide (FMN https://www.rcsb.org/ligand/FMN), by default Grade2 will produce separate plane restraints for each ring and planar atom in the isoalloxazine:
Using the --big_planes option for FMN results in the isoalloxazine ring being held planar with a single plane involving 22 atoms:
Although isoalloxazine rings are normally flat, some enzymes bend the cofactor, as clearly demonstrated in the 1.35 Angstrom resolution nitroreductase structure PDB entry 2wqf. BUSTER re-refinement of 2wqf with a default Grade2 dictionary allows the isoalloxazine ring to bend:
2wqf re-refinement with BUSTER using FMN Grade2 default dictionary shows the restraints allow the isoalloxazine ring to bend and fit the electron density. The 2Fo-Fc map contoured at 1.5 rmsd is shown in a light blue mesh, whereas the Fo-Fc difference map contoured at 3.0 rms is shown in red/green.
In contrast, re-refinement with a Grade2 FMN dictionary produced using the --big_planes option forces the isoalloxazine ring to be flat. The flat ring is clearly incompatible with the electron density.
2wqf re-refinement with BUSTER using FMN Grade2 --big_planes dictionary shows the single plane restraint forces the isoalloxazine ring to be flat. The 2Fo-Fc map contoured at 1.5 rmsd is shown in a light blue mesh, whereas the Fo-Fc difference map contoured at 3.0 rms is shown in red/green.
BUSTER re-refinement of a low-resolution homologue of 2wqf shows that the default Grade2 FMN dictionary allows a similar bend (to be published).
In conclusion, the --big_planes option can be used if you want ligand planar systems to be held strictly planar in refinement, even if the electron density indicates otherwise.
https://www.evanott.com/data-analysis/Analysis/gaussian.html
# The Gaussian Distribution
The Gaussian distribution, otherwise known as the “Normal distribution” or “bell curve”, is a powerful tool for data analysis, as it is ubiquitous, following from physical principles.
Freeman, Matthew. “A visual comparison of normal and paranormal distributions.” Journal of Epidemiology and Community Health. 2006 January; 60(1): 6.
## The Function
The Gaussian distribution is based on two parameters: the mean of the distribution, and the standard deviation of the distribution. The arithmetic mean (simple average) is denoted by $$\mu$$, and the standard deviation by $$\sigma$$, which can be calculated by:
$\begin{split}\mu=\frac{\sum_{i=1}^Nx_i}{N}\\ \\ \sigma^2=\frac{\sum_{i=1}^N(x_i-\mu)^2}{N}\end{split}$
where $$N$$ is the total number of elements in our dataset, and $$x_1,~x_2,~...,~x_N$$ are each of the values in our dataset.
Once we have $$\mu$$ and $$\sigma$$, we can define the Gaussian distribution for those parameters:
Gaussian Distribution Function

$G(x)={\mathcal{N}}_{\mu,~\sigma}(x)= \frac{1}{\sigma\sqrt{2\pi}}\exp\left({-\frac{(x-\mu)^2}{2\sigma^2}}\right)$
(Here, I’m using $$G(x)$$ as shorthand for a fully parameterized normal distribution).
As we saw in the Manipulate section, we plot $$G(x)$$ for various values of $$\mu$$ and $$\sigma$$ to get a feel for how the function behaves:
But this still doesn’t tell us what this distribution means. In the form given above (we could easily provide $$f(x)=\int{G(x)dx}$$ instead, or some other variant), we have a probability density function. Furthermore, the probability of having a result $$x^*\in[a,b]$$ is exactly $$P(a<x^*<b)=\int_a^bG(x)dx$$. So, if we integrate over all possible values of $$x$$, we should arrive at probability $$\int_{-\infty}^\infty G(x)dx=1$$, which is indeed the case (the integral over the entire sample space of a probability density function needs to be $$1$$ - how could we have a notion of a “probability” if the odds of getting any result added up to anything other than $$100\%$$?).
It may not be immediately obvious, but some properties that are convenient for the normal distribution are that the peak is at $$x=\mu$$, and that the points of inflection (where the second derivative $$\frac{d^2G(X)}{dx^2}$$ changes sign) are at $$x=\mu\pm\sigma$$. As it is symmetric about $$x=\mu$$, the probability $$x<\mu$$ is $$P(x<\mu)=\int_{-\infty}^\mu G(x)dx=.5=\int_\mu^\infty G(x)dx=P(x>\mu)$$.
If we take the limit as the standard deviation goes to zero, we arrive at the Dirac delta function $$\delta(x-\mu)$$ that is zero everywhere except being infinite at $$x=\mu$$. It satisfies the condition $$\int_{-\infty}^x\delta(x'-\mu)dx'=\left\{\begin{array}{lc}0 & x<\mu \\ 1 & x\geq\mu\end{array}\right.$$.
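As a quick numerical companion to the formulas above, here is a small C# sketch (not part of the original derivation) that computes $$\mu$$ and $$\sigma$$ for a toy dataset and evaluates $$G(x)$$ at the mean; the data values are arbitrary.

```csharp
using System;
using System.Linq;

public static class GaussianDemo {
    // Population mean and standard deviation, exactly as defined above.
    static (double mu, double sigma) Fit(double[] xs) {
        double mu = xs.Average();
        double sigma = Math.Sqrt(xs.Select(x => (x - mu) * (x - mu)).Average());
        return (mu, sigma);
    }

    // The probability density G(x) for the fitted parameters.
    static double G(double x, double mu, double sigma) =>
        Math.Exp(-(x - mu) * (x - mu) / (2 * sigma * sigma)) / (sigma * Math.Sqrt(2 * Math.PI));

    public static void Main() {
        double[] data = { 4.9, 5.1, 5.0, 4.8, 5.2, 5.0 };   // arbitrary toy measurements
        var (mu, sigma) = Fit(data);
        Console.WriteLine($"mu = {mu:F3}, sigma = {sigma:F3}");
        Console.WriteLine($"G(mu) = {G(mu, mu, sigma):F3} (height of the peak at x = mu)");
    }
}
```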
## Combining Distributions
Rarely, if ever, do we care about measuring just one thing. More realistically, we have some collection of independent variables and want to measure some quantity that depends on all of them. IFF the variables are independent, we can adopt the following model.
For now, let’s assume we have two independent random variables $$x$$ and $$y$$ (see the Probability and Statistics section; we’ll generalize in a moment), and let’s say we have some linear function $$f(x,y)$$. If we want to know the mean of this function $$\mu_f$$, we simply plug in the mean values for $$x$$ and $$y$$:
$\mu_f=f(\mu_x,~\mu_y)$
That’s all well and good, but we know that we should express some spread in our values for $$f$$, based on the spread in $$x$$ and $$y$$:
$\sigma_f^2=\left(\frac{\partial f}{\partial x}\middle|_{x=\mu_x,~y=\mu_y}\right)^2\sigma_x^2 +\left(\frac{\partial f}{\partial y}\middle|_{x=\mu_x,~y=\mu_y}\right)^2\sigma_y^2$
Examples of Combining Distributions
As a simple example, let’s look at “average”: $$f(x,~y)=\frac{x+y}{2}$$. Intuitively, the mean of adding these distributions and dividing by two is the average of the means: $$\mu_f=\frac{\mu_x+\mu_y}{2}$$. However, the variance is a little more interesting. We have $$\frac{\partial f}{\partial x}=1/2$$ and $$\frac{\partial f}{\partial y}=1/2$$. Using the equation above, this means that $$\sigma_f^2=(1/2)^2\sigma_x^2+(1/2)^2\sigma_y^2$$ or $$\sigma_f^2=\frac{\sigma_x^2+\sigma_y^2}{4}$$. This says something interesting: the variance in our combined distribution may sometimes be smaller (the “/4” term indicates this might be possible). In fact, this is totally reasonable – we just have to remember that the mean will scale as well.
To demonstrate this more simply, we can look at $$f(x)=x/10$$. The mean is $$\mu_f=\mu_x/10$$, and the variance is $$\sigma_f^2=\sigma_x^2/100$$ or $$\sigma_f=\sigma_x/10$$. Thus, we have indeed reduced our variance, but it has scaled with the mean. If we could reduce it arbitrarily, that means our measurements get more precise when we do mathematical operations on them. That would be strange for a number of reasons, so it’s good the math checks out.
In general, if we have variables $$x_1,~x_2,~\cdots,~x_N$$, then the variance in $$f(x_1,~x_2,~\cdots,~x_N)$$ is:
$\sigma_f^2=\sum_{i=1}^N \sigma_{x_i}^2\left(\frac{\partial f}{\partial x_i}\middle|_{ \mu_\overline{x}}\right)^2$
where $$|_{\mu_{\overline{x}}}$$ is shorthand for “evaluate at the average value for each variable”.
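As a quick numerical sanity check of this propagation rule (my own sketch, not from the original page; the parameter values are arbitrary), the following Python snippet compares the linearized formula against a Monte Carlo estimate for the averaging example above:

```python
import math
import random

def propagate_variance(partials, sigmas):
    """Linearized propagation: sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2 (independent variables)."""
    return sum((d * s) ** 2 for d, s in zip(partials, sigmas))

# f(x, y) = (x + y) / 2 with independent x and y
mu_x, sigma_x = 10.0, 2.0
mu_y, sigma_y = 4.0, 1.0
sigma_f = math.sqrt(propagate_variance((0.5, 0.5), (sigma_x, sigma_y)))
print("analytic sigma_f:", sigma_f)          # sqrt((sigma_x^2 + sigma_y^2) / 4)

# Monte Carlo cross-check with independent Gaussian samples
samples = [(random.gauss(mu_x, sigma_x) + random.gauss(mu_y, sigma_y)) / 2
           for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print("Monte Carlo sigma_f:", math.sqrt(var))
```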
Practice Problem: Combining Distributions
Assuming variables are independent, work out what the variance is for:
$$f(x_1,~x_2,~\cdots,~x_N)=\Sigma_{i=1}^Nx_i/N$$ (assuming $$\forall{i}:\mu_i=\mu\wedge\sigma_i=\sigma$$) This one is critical – it tells us about what we can expect about the mean of taking many samples from one parent distribution.
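One way to sanity-check your answer to the practice problem is numerical: repeatedly draw $$N$$ samples from one parent distribution and look at how the sample means spread. A hedged Python sketch (my own; the parameter choices are arbitrary):

```python
import math
import random

mu, sigma, N, trials = 5.0, 2.0, 25, 20_000

# For each trial, draw N values from the same parent distribution and record the sample mean
means = [sum(random.gauss(mu, sigma) for _ in range(N)) / N for _ in range(trials)]

m = sum(means) / trials
spread = math.sqrt(sum((x - m) ** 2 for x in means) / trials)
print("spread of the sample means:", spread)
print("sigma / sqrt(N):           ", sigma / math.sqrt(N))   # compare with your derivation
```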
With linear functions, it’s actually quite easy to see how we get the equation above. Suppose we have $$f(x,y)=ax+by$$ with $$x$$ coming from $$\mathcal{N}_{\mu_x,\sigma_x}$$ and $$y$$ coming from $$\mathcal{N}_{\mu_y,\sigma_y}$$. We know that $$\langle{x}\rangle=\mu_x$$ and $$\langle{y}\rangle=\mu_y$$. Then, we have:
$\langle{f}\rangle=\langle{ax+by}\rangle=a\langle x \rangle+b\langle{y}\rangle=a\mu_x+b\mu_y$
which holds by linearity of expectation (independence is not even needed for the mean). This is indeed the same as if we had taken $$\mu_f=f(\mu_x,\mu_y)$$. The variance is not so easy, but we can muster it. First, we need the second moment of $$f$$.
$\begin{split}\langle{f^2}\rangle&=\langle{(ax+by)^2}\rangle\\ &=\langle{a^2x^2+b^2y^2+2abxy}\rangle\\ &=a^2\langle{x^2}\rangle+b^2\langle{y^2}\rangle+2ab\langle{xy}\rangle\end{split}$
Then we subtract the squared first moment from the second:
$\begin{split}\sigma_f^2&=\langle{f^2}\rangle-\langle{f}\rangle^2\\ &=a^2\langle{x^2}\rangle+b^2\langle{y^2}\rangle+2ab\langle{xy}\rangle-\left(a\mu_x+b\mu_y\right)^2\\ &=a^2\langle{x^2}\rangle+b^2\langle{y^2}\rangle+2ab\langle{xy}\rangle-a^2\langle{x}\rangle^2-b^2\langle{y}\rangle^2-2ab\langle{x}\rangle\langle{y}\rangle\\ &=a^2\langle{x^2}\rangle-a^2\langle{x}\rangle^2+b^2\langle{y^2}\rangle-b^2\langle{y}\rangle^2+2ab\left(\langle{xy}\rangle-\langle{x}\rangle\langle{y}\rangle\right)\\ &=a^2\sigma_x^2+b^2\sigma_y^2+2ab\left(\langle{xy}\rangle-\langle{x}\rangle\langle{y}\rangle\right)\end{split}$
The first two terms are familiar. The last is the covariance. If the variables are independent, then the covariance is 0 (if the probability distributions are independent, then the integrals are separable), so we recover $$\sigma_f^2=a^2\sigma_x^2+b^2\sigma_y^2$$.
Non-linear Functions
Just a reminder, if variables are not independent, then all this logic goes out the window. Furthermore, we’re assuming a Gaussian-like distribution here. We can relax the restriction to linear transformations, but then must note that the mean and variance will not necessarily be as written above.
For example, let’s take $$f(x)=\cos(x)$$ with $$x$$ coming from the distribution $$\mathcal{N}_{0,1}$$. Using the equations above, we get
$\begin{split}\mu_f&=\cos(\mu_x)=\cos(0)=1\\ \sigma_f^2&=\left(-\sin(x)|_{x=\mu_x}\right)^2\sigma_x^2\\ &=(-\sin(0))^21^2=0\end{split}$
We know not to expect a variance of 0! Why? If we go back to the original definition of variance,
$\sigma_f^2=\langle f^2 \rangle-\left(\langle f \rangle\right)^2$
then since the function is real-valued, the only way for the variance to be 0 is for all data points to have the same value! But we know that cosine is not a constant, so this invalidates our method.
To actually calculate the mean and standard deviation, we can use the probability density function associated with the distribution for $$x$$, and calculate the first and second moment of cosine:
$\begin{split}\mu_f=\langle f \rangle&=\int_{-\infty}^\infty\cos(x)\frac{1}{\sqrt{2\pi}}e^{-x^2/2}dx=\frac{1}{\sqrt{e}}\approx0.6065\\ \langle f^2 \rangle&=\int_{-\infty}^\infty\cos^2(x)\frac{1}{\sqrt{2\pi}}e^{-x^2/2}dx=\frac{1}{2}\left(1+\frac{1}{e^2}\right)\approx0.5677\\ \sigma_f^2&=\langle f^2 \rangle-\left(\langle f \rangle\right)^2=\frac{(e-1)^2}{2e^2}\approx0.1998\end{split}$
Thus, $$f$$ (which is not necessarily Normal anymore) has a mean of 0.6065 and standard deviation of 0.4470. This is obviously different from a mean of 1 and standard deviation of 0. Often, we use the linear approximation anyway. If our data covers a small enough region, we may be able to linearize it.
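A Monte Carlo check (my own sketch) makes the same point without any calculus: sample $$x$$ from $$\mathcal{N}_{0,1}$$, apply the cosine, and look at the empirical mean and standard deviation.

```python
import math
import random

samples = [math.cos(random.gauss(0.0, 1.0)) for _ in range(500_000)]

mean_f = sum(samples) / len(samples)
std_f = math.sqrt(sum((s - mean_f) ** 2 for s in samples) / len(samples))

print("Monte Carlo mean:", mean_f)                       # ~0.6065
print("Monte Carlo std: ", std_f)                        # ~0.447, clearly not 0
print("exact mean:      ", 1 / math.sqrt(math.e))
print("exact std:       ", (math.e - 1) / (math.e * math.sqrt(2)))
```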
## Z-Scores, P-Values, and Confidence Intervals, Oh My!¶
There are many useful things we can report about samples taken from a Gaussian distribution. First, let’s take a look at what we claim to be the “normalizing transformation”:
$z(x)=\frac{x-\mu_x}{\sigma_x}$
This yields a Normal distribution with $$\mu_z=0$$ and $$\sigma_z=1$$.
Practice Problem: Normalization
Use the math above to prove $$\mu_z=0$$ and $$\sigma_z=1$$.
### Z-Score¶
We can then define a “z-score” for a point in our data (or for the mean of our sample, etc.): this is just the value we get if we apply the normalizing transformation. The z-score is then the number of standard deviations away from the mean the value is (including sign). Results for experiments often get quoted as the z-score (or equivalent metric with a non-Gaussian distribution), saying “$$5.9\sigma$$”, as in, the data were 5.9 standard deviations away from the mean or that the z-score was 5.9 (glossing over some technicalities here, but for our intuition, this is “close enough”).
This normalizing transformation lets us talk about results in a general way. A good statistician knows that if we look at the distribution for $$z(x)$$ (assuming $$x$$ is Gaussian), the following are true:
$\begin{split}P(-1<z<1)&\approx0.68\\ P(-2<z<2)&\approx0.95\\ P(-3<z<3)&\approx0.997\\ P(-5<z<5)&\approx0.999999\end{split}$
This means that 68% of our data should be within one standard deviation, 95% be within 2 standard deviations, etc. These numbers are taken by the same integral as before:
$\begin{split}P(|z|<z^*)=\int_{-z^*}^{z^*}\mathcal{N}_{0,1}(z)dz\end{split}$
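These coverage numbers are easy to reproduce; here is a small Python sketch (my own) using the identity $$P(|z|<z^*)=\mathrm{erf}(z^*/\sqrt{2})$$:

```python
import math

def prob_within(z_star):
    """P(|z| < z*) for a standard Normal variable."""
    return math.erf(z_star / math.sqrt(2))

for k in (1, 2, 3, 5):
    print(f"P(|z| < {k}) = {prob_within(k):.9f}")
# 1 -> ~0.683, 2 -> ~0.954, 3 -> ~0.9973, 5 -> ~0.9999994
```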
### P-Value¶
A related quantity is the “p-value”. Depending on the problem, it is defined as:
$\begin{split}p=&1-P(|z|<z^*)\\ &\mathrm{or}\\ p=&\frac{1-P(|z|<z^*)}{2}\end{split}$
It’s clearly some sort of probability, but its meaning is a little subtle. NOTE: THE FOLLOWING SENTENCE HAS THE WRONG INTERPRETATION IN IT: the p-value is often quoted as the “chance our result is a statistical fluke.” That is absolutely false.
HERE BE CORRECT INTERPRETATIONS: the p-value is the probability, under the assumption of a “null hypothesis”, of obtaining a result at least as strange as we did. What does that mean? That says that we assume the population has some mean and standard deviation $$\mu_0$$ and $$\sigma_0$$. If that assumption is correct, then a result at least this strange should be observed with probability $$p$$. As such, we often make cutoff points for the p-value observed in an experiment to decide if an effect is present. In social sciences, this is often $$\alpha=0.05$$ – the final check to see if our result is statistically significant is then $$p<\alpha$$. This says that out of every 20 experiments, we should expect (on average) 1 to be as strange as our result if the null hypothesis is valid (meaning that $$\mu_0$$ and $$\sigma_0$$ accurately describe the situation). So, if we performed these 20 experiments and all 20 had a result this strange, this gives us a pretty good indication that the null hypothesis is wrong. In that case, we may use our data to create a new hypothesis (maybe the actual mean and standard deviation are the ones in our experiment). Future tests would determine whether this hypothesis is correct, and so on.
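For intuition about the magnitudes involved, here is a small Python sketch (my own) that turns a z-score into a two-sided p-value; note how quickly p shrinks as the z-score grows:

```python
import math

def two_sided_p(z_star):
    """Probability, under the null hypothesis, of a |z| at least this large."""
    return 1.0 - math.erf(abs(z_star) / math.sqrt(2))

for z in (1.96, 2.58, 5.0):
    print(f"z = {z}: p = {two_sided_p(z):.2e}")
# 1.96 -> ~0.05, 2.58 -> ~0.01, 5.0 -> ~5.7e-07
```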
In physics, we often demand a stronger value for $$\alpha$$. Particularly for major discoveries (Higgs boson, gravitational waves), this is $$\alpha=10^{-6}$$, corresponding to a $$5\sigma$$ difference from the null hypothesis. Why? We often have 1 or maybe 2 experiments to test our hypotheses. CERN’s Large Hadron Collider is the only place we have the capacity to create 14 TeV proton beams. So their ATLAS and CMS detectors are our two experiments. We simply can’t run 20 different experiments, hoping to see the effect in all of them. So, we impose a higher standard so that we are pretty sure the null hypothesis is wrong before making any conclusions.
If this confuses you, or if you’d like to read more, check out former UT Physics student Alex Reinhart’s book Statistics Done Wrong, particularly the section “The p value and the base rate fallacy” (available here). The following XKCD cartoon (referenced in Statistics Done Wrong as well), may help give an idea about how the p-value can be misinterpreted by non-statisticians:
### Confidence Interval¶
When reporting results, there are mainly 2 ways of representing the spread/error in our data: By reporting “$$\mu\pm\sigma$$”, we let the reader do the math to figure out possible values. The other way is with a confidence interval. For this, we say “we have 99% confidence that the value is between $$\mu-z^*\sigma$$ and $$\mu+z^*\sigma$$.” Again, each journal/field will have a prescribed confidence level, but we find the associated z-score for that certainty (inverting the calculation of $$P(|z|<z^*)$$) and report the values that far from the mean in each direction. This can be nice if we are trying to show that a value is non-zero. If 0 is included in the interval, then we can’t say with C% confidence that our result is different. If 0 is not included, we can say we have C% confidence our result is statistically significantly different.
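A minimal Python sketch of this calculation (my own; it assumes the Gaussian model and uses the standard library's inverse normal CDF):

```python
from statistics import NormalDist

def confidence_interval(mu, sigma, confidence=0.95):
    """Symmetric interval mu +/- z* sigma with the requested coverage (Gaussian model)."""
    z_star = NormalDist().inv_cdf(0.5 + confidence / 2)   # inverts P(|z| < z*) = confidence
    return mu - z_star * sigma, mu + z_star * sigma, z_star

lo, hi, z_star = confidence_interval(3.3, 0.2, confidence=0.99)
print(f"z* = {z_star:.3f}; 99% interval: ({lo:.2f}, {hi:.2f})")
# 0 is not inside the interval, so at 99% confidence the value is
# statistically significantly different from zero
```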
### Connection to Mathematica¶
In various cases, we can have Mathematica run statistical tests on the models it fits for us. The Fitting Functions section has more on these actual functions. But if you manipulate the models correctly, you’ll find that it quotes p-values for fitted parameters. But you know that has to come from taking the integral of some kind of Gaussian distribution. Which one? It’s a little complicated, but nothing we can’t handle.
Mathematica uses some algorithm to pin down a value for a parameter. Then, it uses its best guess and the data to determine the standard error (essentially the same as the standard deviation in formulation). That error then says how far off the parameter’s estimation is likely to be from reality. In the simplest case, Mathematica assumes a null hypothesis that says the distribution has a mean of 0 and a standard deviation equal to the standard error. It then computes the z-score of the estimate (actually a t-statistic – formulated the same way, but for a distribution that isn’t quite a Gaussian – see below), and the associated p-value. As such, the p-value quoted here is the probability we expect to get a parameter fit this strange if the value for the parameter is actually 0 and the standard deviation (from error in experiment, simulation, etc.) is equal to the standard error.
If you have ingrained in you the correct interpretation of a p-value, seeing an extremely small value in the model fit should not surprise you. The value of the parameter may be very large compared to 0 as scaled by the standard error. If our data has little spread, this value will be a strong predictor if our model is correct. If our data is highly variable, the bounds on the parameter will not be as tight, and it may be more difficult to show that the effect is actually present.
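As a rough illustration of the test being described (my own sketch, using a normal approximation rather than the t-distribution Mathematica actually uses, so the numbers will differ slightly for small datasets):

```python
import math

def parameter_p_value(estimate, std_error):
    """Normal-approximation version of the parameter test described above:
    null hypothesis 'the true value is 0', spread set by the standard error."""
    z = estimate / std_error                       # the (approximate) t-statistic
    return 1.0 - math.erf(abs(z) / math.sqrt(2))   # two-sided tail probability

print(parameter_p_value(4.8, 0.3))   # |z| = 16: so small it underflows to 0.0 in double precision
print(parameter_p_value(0.1, 0.3))   # |z| ~ 0.33: p ~ 0.74, no evidence the parameter differs from 0
```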
## Significant Digits¶
When we need to write down our final answer, we have to go beyond the standard rules of significant digits. The standard rules certainly help, and inform our ability to report values.
### Standard Rules¶
The number of significant digits in a value is all the digits, subject to:

* leading zeros don't count
* trailing zeros after a decimal point do count

So, 0.004 has one significant figure while 0.40000 has 5.
With this, if we multiply two numbers (N and M significant digits respectively), then the result should have $$\min(N,M)$$ significant figures. This means $$0.4\times\pi=1$$. This may seem surprising. But that’s how we maintain integrity of how good our measurements are. For example, if we are doing normal rounding, the actual value could be anywhere from .35 to .4499.... In the latter case, we get 1.4, in the former, we get 1.1.
If we add two numbers, we only get to keep up to the smallest power of ten in either addend. So $$4+\pi=7$$ while $$0.04+\pi=3.18$$. This is radically different than in multiplication. In the latter example, we ended up with 3 significant figures. However, $$4+0.4=4$$.
In any case, we keep as many digits as we’d like throughout our calculations, but we must consider these rules when reporting final results. If you round in between steps, you’re likely to get off-track quickly.
### Reporting Uncertainty¶
We apply these ideas when reporting values obtained from some sort of investigation. It makes no sense to say that “we achieved a result of $$0.1105\pm5.6$$.” If we were to calculate the values one standard deviation away from the reported mean, we will only keep the digit in front of the decimal point. A more realistic representation would then be $$0.1\pm5.6$$ or possibly even $$0\pm6$$. If we really can report the standard deviation to 2 decimal places, the former is clearly preferred.
If you (unwisely) choose to not determine the number of significant digits in your mean and standard deviation calculations (it is tedious), a rule of thumb enters the fray. In that case, you might choose to report just the leading digit in standard deviation (so if $$\sigma=0.000415$$, you’ll report it as $$4\times10^{-4}$$). In that case, you’ll keep the digits in the mean from the left up to and including the first one affected by adding/subtracting the standard deviation. So $$3.315\pm0.224$$ becomes $$3.315\pm0.2$$ becomes $$3.3\pm0.2$$ (sometimes written as $$3.3(2)$$).
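A small helper in Python (my own sketch of the rule of thumb above; it keeps one significant digit in the standard deviation and cuts the mean at the same decimal place, and it does not handle the corner case where the leading digit of sigma rounds up to 10):

```python
import math

def report(mean, sigma):
    """Format 'mean +/- sigma' with one significant digit kept in sigma."""
    exponent = math.floor(math.log10(abs(sigma)))   # decimal position of sigma's leading digit
    digits = max(0, -exponent)
    return f"{round(mean, -exponent):.{digits}f} +/- {round(sigma, -exponent):.{digits}f}"

print(report(3.315, 0.224))   # 3.3 +/- 0.2
print(report(0.1105, 5.6))    # 0 +/- 6
```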
There are ethical considerations here. You want to report the results of your calculations to as precise an extent as you can calculate, but are limited by the precision of your experiment. Just because the signal appears to be there, if the noise is too great, you can’t be sure your perceived signal (as the mean) isn’t just part of the noise. Reporting too many digits is misleading, suggesting you did a better experiment than reality.
## Applications¶
As we can see in the Error Analysis section, the concepts of a mean and standard deviation are critical to our understanding of how to model uncertainty when reporting results. Furthermore, when we report a result, we often use the mean and standard deviation to determine if our experiment has shown something new (though we might use a t-distribution instead of a Normal distribution, which is similar in shape but not formulation), based on how far the result is from the mean relative to the standard deviation. This sort of reasoning is needed for problems where we are trying to discover something - determining whether some thing exists. When looking at a known system but trying to determine values (calculating the speed of light, for example), we can use a Gaussian to model error by propagating the standard deviation appropriately.
There are some systems that follow a Normal distribution naturally, such as the ground state of the quantum harmonic oscillator (a topic in PHY 373), velocities in an ideal gas, and other cases. More often than not, we may approximate a curve as a Normal curve for ease in calculation.
As an interactive way to get an intuitive feel for how samples drawn from a Normal distribution vary with respect to the number of elements sampled and the size of the standard deviation, see the following Mathematica CDF (hover over to generate new samples).
|
2021-03-03 00:10:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8002533316612244, "perplexity": 795.1565941402583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364932.30/warc/CC-MAIN-20210302221633-20210303011633-00493.warc.gz"}
|
http://electronics.stackexchange.com/tags/pic/new
|
# Tag Info
1
You don't initialize the point register. That means it can hold any value when you start looping the nextdigit loop! Try adding a clrf point under the start label. For debugging: in MPLAB -> View -> File Registers, you can see the value of point during runtime.
0
I checked whether there were any loose connections, as suggested by @pjc50 - found none. I even ordered a new header, and to my surprise the issue was still there. After much trial and error I figured out that it works if I set the voltage level to 4.875 V (instead of 5 V):
1
I have a similar issue with firmware 01.28.72. When I went back to 1.12.01 it worked again.
0
The fastest way to do this is with an Arduino based board. Get a board and a clock module like this one https://www.sparkfun.com/products/99 and connect them together. You will probably need a Mega to have enough IO pins to drive the displays. Ask any coding questions on StackOverflow (the programming Q&A site). Or - here is a project with an Arduino type ...
0
I highly recommend using an LCD instead of a bunch of 7-segment displays. You can configure the digits easily, the PCB will be much easier to build, there is much less soldering, and it is easier to control in software. An LCD can be driven with direct wiring of 8 pins. I recommend a 2x16 LCD with an HD44780 controller in it. You can find a lot of HD44780 examples on the net. As for ...
4
In order to expand the display to displaying DDD MM HH, you need five more I/O lines for the DDD and SS. The PIC16F84A microcontroller shown in the linked circuit has only 13 I/O lines and they are all used. (If you don't need the buzzer and relay outputs then potentially you have two spare outputs, but as I mentioned above, you need five.) So if you want ...
1
One option would be a lower power microcontroller. This board can be used to develop for one (TI MSP430) capable of about 0.8 ua in low power mode; on a button press it can be active in 1 us.
2
If you need a solution that doesn't require a voltage regulator i.e. operates directly on the Vcc lines to the PIC then I've modified it slightly (and for a good reason): - The reason it's modified is that the BJT pass transistor might drop a little too much voltage under load if you are just switching an already regulated Vcc to the PIC. I've replaced ...
2
I searched around a bit of course and found this: Source: http://www.circuitsonline.net/schakelingen/145/computer-en-microcontroller/one-button-onoff.html (Dutch) According to the site, this circuit draws less than 1uA when off. I didn't test it myself. I don't know how much it draws when the device is on. When SW1 is pressed, Q1 starts to conduct and ...
4
The purpose of the 100 nF capacitors placed physically close to the microcontroller, is to decouple the varying load any microcontroller represents, from the power supply rails. Thus, those capacitors do need to exist, as close as possible to the supply pins of the device. Similarly, typical voltage regulators sense their output and use a feedback loop to ...
8
Smoothing caps need to be as close to the power pins as possible of the target ICs. Trace parasitics add a whole bunch of invisible components in series with the power and return nets. It's a difficult concept to visualize from a schematic standpoint, since a schematic shows logical relationships (nets) but not physical relationships (how far apart parts ...
1
RECEIVING The normal approach for implementing a software asynchronous receiver is to have a timer tick that runs continuously at 3x or 5x the baud rate (note: odd numbers are better than even numbers). Watch the input to be low on two consecutive ticks. Once that is observed, start sampling the input on every third tick, until you have sampled it nine ...
2
Consecutive direct accesses to variables in the same bank will be faster than direct accesses to variables in different banks. Consequently, the compiler is probably trying to consolidate variables which are accessed near each other so they'll be placed in the same bank. Unfortunately, it may be attempting to consolidate more variables than will actually ...
2
No promises, but I have personally experienced that exact same behavior more than one hundred times. Every single one of them was fixed by throwing away my connecting cable, and wiring a new one. This means The plug on the PICkit3 The physical connector on the PIC side itself The physical wires between those two plugs Your current observations might be ...
0
When I use XC16 (I dont think XC8 is much different), I generally create a project with the project wizard because it adds a bunch of useful c and h files. One of them being an interrupts.c. /******************************************************************************/ /* Interrupt Vector Options */ ...
0
I'm not sure how XC8 handles interrupt functions, but in C18 you must use a #pragma compiler directive to specify for the compiler that a particular function is an interrupt handler. This places a jump instruction to the interrupt handler at the proper interrupt vector location on the part. You should check the assembly listing to see if the interrupt vector ...
1
If I'm interpreting your question correctly it sounds like you want to sense the presence of AC voltage at 2 points. One point is the main supply, the second point is after a switch. You could use a resistor voltage divider to reduce the voltage to a level you could measure with a PIC, then a series diode to remove the negative half wave. The computer I'm ...
14
When doing down-hole stuff for oil and gas - my guess is the cost of a chip is not going to make any difference on the economics of your whole project. You may be spending more money on time wasted looking to save even $100 in parts. Say you cost$100 per hour (salaries + overhead). Not unreasonable if you are a good engineer. Say you save $100 by spending ... 2 A specific type of servo motor, a latching servo, is required for holding position after the control signal is removed. Depending on the specific servo in use (see caveats below), an alternative "poor man's latching servo" can be implemented thus: Control the power supply line for the servo with a high side switch, either a P-MOSFET or for high power ... 1 Here's an example of how I would create a lookup table for some precomputed values. I will use an example of swapping bits front-to-back within a byte. This is sometimes useful for FFT algorithms or SPI peripherals that want the wrong order. First I create a program that creates the table. Maintaining the table by hand is drudgery and error-prone, so ... 11 I'll give a general answer since the question lacks information: Suppose you have an uint8_t as input and a uint8_t as output and you want to create a full lookup table (i.e. every input has an output). You'd need 256 values, as the input can have 256 different values. You can now create a table with: const uint8_t the_table[256] = { ... } The const ... 7 "ONLY" 384 bytes? Way back in the day, I had the job of writing an entire operating system (by myself) for a specialized computer that served the ship, pipeline, and refinery management industry. The company's first such product was 6800 based and was being upgraded to 6809, and they wanted a new OS to go along with the 6809 so they could eliminate the ... 7 When I was in high school, I had a teacher that insisted that light dimming was too difficult a task for a student such as I to tackle. Thus challenged I spent quite a bit of time learning and understanding phase based light dimming using triacs, and programming the 16C84 from microchip to perform this feat. I ended up with this assembly code: 'Timing ... 11 One thing that I haven't seen mentioned: The microcontroller you mentioned is only$0.34 each in quantities of 100. So for cheap, mass-produced products, it can make sense to go to the extra coding trouble imposed by such a limited unit. The same might apply to size or power consumption.
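The "write a program that writes the table" idea from the lookup-table answer above can be done in any host language; here is a hypothetical Python sketch (names and formatting are my own, not from the answer) that emits a 256-entry bit-reversal table as a C array:

```python
def reverse_bits(value, width=8):
    """Reverse the bit order of an integer of the given width."""
    result = 0
    for _ in range(width):
        result = (result << 1) | (value & 1)
        value >>= 1
    return result

# Emit the table as C source, eight entries per line, ready to paste into the project
entries = [f"0x{reverse_bits(i):02X}" for i in range(256)]
print("const uint8_t bit_reverse_table[256] = {")
for row in range(0, 256, 8):
    print("    " + ", ".join(entries[row:row + 8]) + ",")
print("};")
```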
9
I designed a humidity sensor for plants that tracks the amount of water the plant has and blinks an LED if the plant needs water. You can make the sensor learn the type of plant and thus change its settings while running. It detects low voltage on the battery. I ran out of flash and ram but was able to write everything in C code to make this product work ...
8
Well, years ago I wrote a temperature controller with serial I/O (bit-banging the serial I/O because the MCU didn't have a UART) and a simple command interpreter to talk to the controller. MCU was a Motorola (now Freescale) MC68HC705K1 which had a whopping 504 bytes of program memory (OTPROM) and about 32 bytes of RAM. Not as little as the PIC you reference, ...
81
You kids, get off my lawn! 384b is plenty of space to create something quite complex in assembler. If you dig back through history to when computers were the size of a room, you'll find some truly amazing feats of artistry executed in <1k. The classic "Story of Mel - a real programmer" is your starter for 10: ...
1
Page 208 shows the block diagram of a typical I/O port. The PWM peripheral communicates with the output drivers via the output multiplexers. Therefore the sink/source capabilities in PWM mode are the same as when they're in GPIO mode, which for this device is either 15mA or 20mA. You may need to use an external driver fed from the PWM signal if you need ...
42
Microcontrollers are sufficiently cheap that they are often used to do really simple things that in years past would more likely have been done with discrete logic. Really simple things. For example, one might want a device to turn on an output for one second every five seconds, more precisely than a 555 timer would be able to do. movwf OSCCON mainLp: ...
4
You can write a blink a LED with 384 bytes program memory, and even more. As far as I know, it is not possible to extend the program memory with an external chip (unless you're building a full ASM interpreter in the 384 bytes, which would be slow). It is possible to extend data memory with an external chip (EEPROM, SRAM) though.
13
You can use this for very small applications (e.g. delayed PSU start, 555 timer replacement, triac-based control, LED blinking etc...) with smaller footprint than you'd need with logic gates or a 555 timer.
0
AVDD and AVSS must be connected.
2
Some possibilities: Present the input signal with different scales to different A/D inputs. Range switching is then done in software. Roughly you want to use the signal with the highest reading that isn't clipped. In reality it is good to blend over at least part of the range with the next lower signal so that you have smooth overlap between the ranges. ...
0
So I had to re-read through the ADC document and ask for some help, but I think I've found the way. My previous iteration used a method of defining certain parameters, but I ended up just doing direct assignment to the registers eventually. For anyone else with the same problem, the following is my code: Configuration for analog: DDPCONbits.JTAGEN = 0; ...
2
As with all these kinds of low level issues, you have to read the datasheet. It's OK to call library routines, but on these small resource-limited systems where you're always close to the hardware, you have to know what's going on at the low level whether that is done in a library routine or your own code. Read the SPI part of the chapter for the MSSP ...
1
If you really want to learn the UART, take a look at my canned UART code for PIC and dsPIC. This should be included in the PIC Development Tools release at http://www.embedinc.com/pic/dload.htm. Look for files with "uart" in their names in the SOURCE > PIC and SOURCE > DSPIC directories within the software installation directory. For example, ...
2
You have unfortunately not learned much from your previous UART experience if that is all the code you have. Here it seems like you have used a couple of libraries (i.e header files) which has implemented all the "tricky" stuff for you. I would suggest that you take a look in your code and open up the UART1_Init() function. This is most likely included as ...
1
I suggest looking at the Microchip XC16 documentation that comes in the \docs folder of the installation, specifically 16-Bit_Language_Tools_Libraries_51456.pdf (for XC16 version 1.11). This has lots of UART examples and explains how to use the XC16 library functions to control the UART. I also suggest Microchip's Embedded Code Source site, as there are ...
1
If you attempt to erase a device and then perform a blank check on it and the blank check fails there’s a chance that the flash memory on the IC is bad. Flash memory based devices only have so many erase / write cycles that can be performed on them before they burn out.
-1
I think generating a known frequency on IR permits the designer to eliminate the effects of background IR (from the Sun, CFLs, etc.). So, suppose we use 40 kHz for a '1' and 25 kHz for a '0'; the IR receiver needs to figure out the received frequency. We need pauses between frequencies to separate the received bits. How fast would the interrupts come from IR, ...
4
Just get a programmable LED display much like this one, and an audio amplifier and speaker. Hook these up to a PC, and do it all in software. You can play samples through the PC's audio line out to the amplifier to make whatever sound you want, and talk to the display over RS-232 to make it display numbers. Forget Arduino, custom electronics and so on. This ...
3
I find that a convenient way of getting a lot of LEDs for a little money is flexible LED strips. You can put soft diffusing plastic on top of the strips to make the light more even. You only need the Arduino for driving the score display. Build the LEDs into common-anode assemblies, one per digit, and switch the anode with a P-channel MOSFET and the ...
1
You don't "initialize" IIC protocol. The bus simply starts out idle, which is each line passively pulled up. Individual message do have a special start and end sequence. A start is the data line going low with the clock high (normally the data line is not allowed to change when the clock is high), and a stop is the data line going high with the clock ...
0
Yet another possible feature the MAC instruction can have is auto-incrementing the registers that point to multiplicands. I programmed a Zilog DSP that used the (16-bit fixed-point) Clarkspur core. It was a variation on Harvard architecture with three busses, letting it access three areas of memory simultaneously: Instruction memory, data ram bank 1, and ...
1
For Vr, the formula is based on the information in the datasheet. In the datasheet, it says the differential range will be from -0.5 * Vref to 0.5 * Vref. Vref in your schematic is 2.5V, and both channels - pin is at 2.5V, so the range is from 1.25V to 3.75V, which will be interpreted as -1.25V to +1.25V in the readings (it outputs a signed 16-bit integer ...
1
For the $V_r$ calculation: $V_{ref}=\frac{V_{cc}}{2}$, therefore $V_{ref}=2.5V$ the differential input range is from $-0.5*Vref$ up to $0.5*Vref$, therefore from -1.25...1.25V the result is in two's complement, with bit 15 as MSB, so the input ranges from -32768 to 32767 (bit 16 acts as additional sign bit, and together with bit 15 acts as ...
Top 50 recent answers are included
|
2013-05-25 11:28:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32171952724456787, "perplexity": 1774.4510439230562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705939136/warc/CC-MAIN-20130516120539-00046-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/pre-calculus/49207-precal-help-print.html
|
# precal help
• September 15th 2008, 03:30 PM
Kingreaper
precal help
simplify: (x-(1/x^2))/(x-(1/x^3))
simplify: $4+\frac{d}{4+\frac{4}{4+d}}$
• September 15th 2008, 03:35 PM
icemanfan
I'm not quite sure what the second question says, but for the first question you'll be needing common denominators. Working on the numerator first, we have $x - \frac{1}{x^2}$. To get a common denominator, multiply $x \cdot \frac{x^2}{x^2}$ so that you get $\frac{x^3}{x^2} - \frac{1}{x^2}$. Subtracting yields $\frac{x^3 - 1}{x^2}$. Can you simplify the denominator?
• September 15th 2008, 03:46 PM
Kingreaper
the bottom one, with many parentheses, says 4+(d/(4+(4/(4+d))))
I simplified the bottom part, but what do I do next?
• September 15th 2008, 03:52 PM
icemanfan
For the first question, you should have gotten $\frac{x^4 - 1}{x^3}$ as the denominator, which leaves you with $\frac{\frac{x^3 - 1}{x^2}}{\frac{x^4 - 1}{x^3}}$. Now, you can rearrange this (using the reciprocal rule) to $\frac{x^3(x^3 - 1)}{x^2(x^4 - 1)}$. You can cancel out x^2 in the top and bottom to yield $\frac{x(x^3 - 1)}{x^4 - 1}$ and then remove a factor of x-1 from both top and bottom to yield $\frac{x(x^2 + x + 1)}{x^3 + x^2 + x + 1}$ and that's as simplified as it gets.
For the second question, using the parentheses, work from the inside out, using common denominators to combine fractions.
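For reference, one possible worked version of that inside-out approach for the second expression (my own steps, following the hint above):
$4+\cfrac{d}{4+\cfrac{4}{4+d}} \;=\; 4+\frac{d}{\frac{4(4+d)+4}{4+d}} \;=\; 4+\frac{d(4+d)}{4d+20} \;=\; \frac{4(4d+20)+d(4+d)}{4d+20} \;=\; \frac{d^2+20d+80}{4d+20}$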
|
2015-06-04 01:01:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8973036408424377, "perplexity": 691.8896696208499}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195037030.16/warc/CC-MAIN-20150601214357-00069-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://www.groundai.com/project/symmetries-of-codeword-stabilized-quantum-codes/
|
# Symmetries of Codeword Stabilized Quantum Codes

This work was partially supported by NSERC, CIFAR, and IARPA.
Salman Beigi School of Mathematics, Institute for Research in Fundamental Sciences (IPM)
Niavaran Square, Tehran, Iran
[email protected]
Jianxin Chen Markus Grassl Centre for Quantum Technologies, National University of Singapore
3 Science Drive 2, Singapore 117543
[email protected]
Zhengfeng Ji Institute for Quantum Computing
200 University Avenue West, Waterloo, Ontario, Canada
[email protected]
Qiang Wang School of Mathematics and Statistics, Carleton University
1125 Colonel By Drive, Ottawa, Ontario, Canada
[email protected]
Bei Zeng
###### Abstract
Symmetry is at the heart of coding theory. Codes with symmetry, especially cyclic codes, play an essential role in both theory and practical applications of classical error-correcting codes. Here we examine symmetry properties for codeword stabilized (CWS) quantum codes, which is the most general framework for constructing quantum error-correcting codes known to date. A CWS code Q can be represented by a self-dual additive code S and a classical code C, i. e., Q = (S, C), however this representation is in general not unique. We show that for any CWS code Q with certain permutation symmetry, one can always find a self-dual additive code S with the same permutation symmetry as Q such that Q = (S, C). As many good CWS codes have been found by starting from a chosen S, this ensures that when trying to find CWS codes with certain permutation symmetry, the choice of S with the same symmetry will suffice. A key step for this result is a new canonical representation for CWS codes, which is given in terms of a unique decomposition as union stabilizer codes. For CWS codes, so far mainly the standard form has been considered, where the stabilizer state is a graph state. We analyze the symmetry of the corresponding graph, which in general cannot possess the same permutation symmetry as Q. We show that this is indeed the case for the toric code on a square lattice with translational symmetry, even if its encoding graph can be chosen to be translational invariant.
CWS Codes, Union Stabilizer Codes, Permutation Symmetry, Toric Code
By S. Beigi, J. Chen, M. Grassl, Z. Ji, Q. Wang & B. Zeng. Subject classification: E.4 Coding and Information Theory. TQC 2013. DOI: 10.4230/LIPIcs.xxx.yyy.p
## 1 Introduction
Coding theory is an important component of information theory having a long history dating back to Shannon’s seminal 1948 paper that laid the ground for information theory [21]. Coding theory is at the heart of reliable communication, where codes with symmetry, especially cyclic codes, such as the Reed-Solomon codes, are among the most widely used codes in practice [19].
In recent years, it has become evident that quantum communication and computation offer the possibility of secure and high rate information transmission, fast computational solution of certain important problems, and efficient physical simulation of quantum phenomena. However, quantum information processing depends on the identification of suitable quantum error-correcting codes (QECC) to make such processes and machines robust against faults due to decoherence, ubiquitous in quantum systems. Quantum coding theory has hence been extensively developed during the past 15 years [3, 9, 20].
Codeword stabilized (CWS) quantum codes are by far the most general construction of QECC [6]. A CWS code can be represented by a stabilizer state (i. e. a self-dual additive code) and a classical code , i. e. . When is a linear code, the corresponding CWS code is actually a stabilizer code. Also, any CWS code is local Clifford equivalent to a standard form , where is a graph state [6].
The CWS construction encompasses stabilizer (additive) codes and all the known non-additive codes with good parameters. It also leads to many new codes with good parameters, or good algebraic/combinatorial properties, through both analytical and numerical methods. Alternative perspectives of CWS codes have also been analyzed, including the union stabilizer codes (USt) method [11, 12], and the codes based on graphs [18, 23]. Concatenated codes and their generalizations using CWS codes have been developed [1], and decoding methods for CWS codes have been studied as well [17].
Given all the evidence that the CWS framework is a powerful method to construct and analyze QECC, it remains unclear to what extent the stabilizer state and the classical code can represent the symmetry of the CWS code in general. Given the vital importance that the code symmetry plays in coding theory, this understanding becomes crucial since if such a correspondence exists, it can provide practical methods for constructing CWS codes with desired symmetry from and/or with corresponding symmetry.
Unfortunately, there is no immediate clue what answer one can hope for. First of all, the representation is not unique. So for a given CWS code , there might be some stabilizer states and/or classical codes which are more symmetric than others. Perhaps the best known example is the CWS representation for the five-qubit code , where in the ideal case can be chosen as a graph state corresponding to the pentagon graph, and the is chosen as the repetition code . In this case, both and nicely represent the cyclic symmetry of the five-qubit code.
However, there are known ‘bad cases’, too. One example is the seven-qubit Steane code , where although the code itself is cyclic, one cannot find any corresponding to a cyclic graph, even if local Clifford operations are allowed [10]. Nonetheless, we know that the stabilizer group for this code is invariant under cyclic shifts, and the logical operator can be chosen as , therefore the logical can be chosen as a cyclic stabilizer code. This is to say, there exists a representation for such that is cyclic. In general it remains unclear under which conditions a representation for cyclic CWS code with a cyclic stabilizer state exists.
In this work, we address the symmetry properties of CWS codes. We are interested in the permutation symmetry of CWS codes, which includes the important category of cyclic codes. Our main question is, to which extent can the representation and the standard form reflect the symmetry of the corresponding CWS code . We show that for any CWS code with permutation symmetry, one can always find a stabilizer state with the same permutation symmetry as such that . As many good CWS codes are found by starting from a chosen , this ensures that when trying to find CWS codes with certain permutation symmetry, the choice of with the same symmetry will suffice. A key step to reach this main result is to obtain a canonical representation for CWS codes, which is in terms of a unique decomposition as union stabilizer codes.
We know that for the standard form of CWS codes using graph states, it is not always possible to find a graph with the same permutation symmetry. This is partially due to the fact that the local Clifford operation transforming the CWS code into the standard form may break the permutation symmetry of the original code. Also, the graphs usually can only represent the symmetry of the stabilizer generators of the stabilizer state, but not the symmetry of the stabilizer state in general. We show that this is indeed the case for the toric code on a two-dimensional square lattice with translational symmetry, even if its encoding graph can be chosen to be translational invariant.
However, we show that the converse always holds, i. e., any graph and classical code with certain permutation symmetry yields a CWS code with the same symmetry.
## 2 Preliminaries
The single-qudit (generalized) Pauli group is generated by the operators and acting on the qudit Hilbert space , satisfying , where . For simplicity, throughout the paper, we assume that is a prime, although our results naturally extend to prime powers. Denote the computational basis of by . Then, without loss of generality, we can fix the operators and such that and , respectively. Let be the identity operator. The set of operators forms a so-called nice unitary error basis which is a particular basis for the vector space of matrices [15, 16].
The -qudit Pauli group consists of all local operators of the form , where for some integer is an overall phase factor, and for some , is an element of the single-qudit Pauli group of qudit . We can write as or when it is clear what the qudit labels are. The weight of an operator is the number of tensor factors that differ from identity.
The -qudit Clifford group is the group of unitary matrices that map to itself under conjugation. The -qudit local Clifford group is a subgroup in containing elements of the form , where each is a single qudit Clifford operation, i. e., .
A stabilizer group in the Pauli group is defined as an abelian subgroup of which does not contain . A stabilizer consists of Pauli operators for some . As the operators in a stabilizer commute with each other, they can be simultaneously diagonalized. The common eigenspace of eigenvalue is a stabilizer quantum code with length , dimension , and minimum distance . The projection onto the code can be expressed as
$P_Q=\frac{1}{|S|}\sum_{\boldsymbol{M}\in S}\boldsymbol{M}.$ (1)
The centralizer of the stabilizer is given by the elements in which commute with all elements in . For , the minimum distance of the code is the minimum weight of all elements in .
If , then there exists a unique -qudit state such that for every . Such a state is called a stabilizer state, and the group is called the stabilizer of . A stabilizer state can also be viewed as a self-dual code over the finite field under the trace inner product [7]. For a stabilizer state, the minimum distance is defined as the minimum weight of the non-trivial elements in [7].
A union stabilizer (USt) code of length is characterized by a stabilizer code with stabilizer , where are independent generators, and a classical code over of length . Note that for a given , the choice of the generators is not unique. Now for a classical code of length with codewords, for each codeword , the corresponding quantum code is given by the subspace stabilized by , , …, . Note that for , the subspaces and are mutually orthogonal. The corresponding USt code is then given by the subspace .
Therefore, the combination of (more precisely, the generators of ) and gives an USt quantum code . Hence we denote a USt code by . The projection onto can be expressed as
$P_Q=\sum_{\boldsymbol{c}\in C}\frac{1}{p^m}\sum_{\boldsymbol{y}\in\mathbb{F}_p^m}\omega^{\boldsymbol{c}\cdot\boldsymbol{y}}\,\boldsymbol{g}_1^{y_1}\cdots\boldsymbol{g}_m^{y_m},$ (2)
where we identify the elements of the finite field with integers modulo .
A CWS code of length is a USt code with . That is, it is characterized by a stabilizer state with stabilizer and a classical code of length . For a CWS code given by , the stabilizer always corresponds to a unique stabilizer state. We will then refer to as the stabilizer state when no confusion arises.
For a CWS code, the projection onto the code space is given by
$P_Q=\sum_{\boldsymbol{t}\in C}\frac{1}{p^n}\sum_{\boldsymbol{x}\in\mathbb{F}_p^n}\omega^{\boldsymbol{t}\cdot\boldsymbol{x}}\,\boldsymbol{g}_1^{x_1}\cdots\boldsymbol{g}_n^{x_n},$ (3)
where we again identify the elements of the finite field with integers modulo .
A CWS code has a permutation symmetry if
$P_Q^{\sigma}=P_Q,$ (4)
where is the projection onto the space obtained by permuting the qudits of the code according to .
## 3 Canonical form of CWS codes
For a given a CWS code , there might exist another stabilizer state and another classical code such that . In other words, the representation of a CWS code by the stabilizer state and the classical code is non-unique.
In order to discuss the relationship between the symmetry of the CWS code and that of the stabilizer state , we first need to explore the relationship between the different representations of (i. e., the relationship between and , as well as the relationship between and ).
Let us start by recalling that a stabilizer code can be viewed as a CWS code where the classical code is a linear code [6]. A simple way to see this is that for a given stabilizer code with stabilizer generated by , which is a code of dimension , we can choose the larger stabilizer , where mutually commute. Now choose the classical code of length with codewords, where the first coordinates of each codeword are zero. Then we have , i. e., the stabilizer code can then be viewed as a CWS code with stabilizer state and classical code . However, note that the choice of (and hence ) is non-unique, as in particular the choice of is non-unique.
###### Example.
As an example, consider the five-qubit code with stabilizer
$\boldsymbol{g}_1=XZZXI,\quad \boldsymbol{g}_2=IXZZX,\quad \boldsymbol{g}_3=XIXZZ,\quad \boldsymbol{g}_4=ZXIXZ.$ (5)
In the CWS picture, the stabilizer state can be chosen as
$S=\langle\boldsymbol{g}_1,\boldsymbol{g}_2,\boldsymbol{g}_3,\boldsymbol{g}_4,\boldsymbol{Z}_L\rangle,$ (6)
where is the logical operator. Alternatively, one can choose the stabilizer state
$S'=\langle\boldsymbol{g}_1,\boldsymbol{g}_2,\boldsymbol{g}_3,\boldsymbol{g}_4,\boldsymbol{X}_L\rangle,$ (7)
where is the logical operator. For both and , the classical code can be chosen as .
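As a numerical illustration of this example (my own sketch, not part of the paper), the projector onto the five-qubit code can be built directly from the generators in Eq. (5) and checked for the cyclic permutation symmetry in the sense of Eq. (4); the qubit-permutation helper below is one possible way to implement the permuted projector, and the direction convention of the permutation does not matter for this check:

```python
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
PAULI = {'I': I, 'X': X, 'Z': Z}

def pauli_string(s):
    """Tensor product of single-qubit Paulis, e.g. 'XZZXI'."""
    return reduce(np.kron, [PAULI[c] for c in s])

# Generators of the five-qubit code, Eq. (5)
gens = ['XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ']

# P_Q = prod_i (I + g_i)/2 projects onto the joint +1 eigenspace of the generators
P = np.eye(2 ** 5)
for g in gens:
    P = P @ (np.eye(2 ** 5) + pauli_string(g)) / 2

print(np.trace(P).real)   # 2.0: the code space has dimension 2 (one logical qubit)

def permute_qubits(M, perm, n=5):
    """Conjugate an n-qubit operator by a permutation of the tensor factors."""
    T = M.reshape([2] * (2 * n))
    axes = list(perm) + [n + p for p in perm]
    return T.transpose(axes).reshape(2 ** n, 2 ** n)

cyclic = [1, 2, 3, 4, 0]                           # cyclic shift of the five qubits
print(np.allclose(permute_qubits(P, cyclic), P))   # True: the projector is cyclic-invariant, cf. Eq. (4)
```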
Similarly, a USt code can be viewed as a CWS code with the classical code of length possessing some coset structure, i. e., , where is a linear code. This linear code of length can be readily chosen as the classical code for the CWS representation of the stabilizer code . The code of length can be derived from of length by appending zero coordinates. However, again, the choices of and are non-unique.
In the general situation, we have some freedom in choosing the stabilizer state when representing a stabilizer code or a USt code in the CWS framework. Consequently, for a given CWS code , there are also many different ways to write it in terms of a USt code in general. We will show, however, that we can always obtain a unique stabilizer , when expressing a given CWS code as a USt code. The following theorem gives a canonical form for any CWS code.
###### Theorem.
Every CWS code has a unique representation as a union stabilizer code.
###### Proof.
To prove this theorem, we will need some lemmas.
###### Lemma (translational invariant codes).
Let $C$ be a code over $\mathbb{F}_p^n$ with $|C|=M$ and assume that for some non-zero $\boldsymbol{s}\in\mathbb{F}_p^n$ we have $C+\boldsymbol{s}=C$, i. e., the code is invariant with respect to translation by $\boldsymbol{s}$. Then $C$ can be written as a disjoint union of cosets of the one-dimensional space generated by $\boldsymbol{s}$, i. e.,
$C=\bigcup_{\boldsymbol{t}_i\in C'}C_0+\boldsymbol{t}_i,$
where $C_0=\langle\boldsymbol{s}\rangle$ with $C'=\{\boldsymbol{t}_1,\ldots,\boldsymbol{t}_{M/p}\}$.
###### Proof.
By assumption, for every $\boldsymbol{t}\in C$, the vector $\boldsymbol{t}+\boldsymbol{s}$ is in the code as well. Hence we can arrange the elements of $C$ as follows:
$\begin{array}{l|llll}
C' & \boldsymbol{t}_1 & \boldsymbol{t}_2 & \ldots & \boldsymbol{t}_{M/p}\\
C'+\boldsymbol{s} & \boldsymbol{t}_1+\boldsymbol{s} & \boldsymbol{t}_2+\boldsymbol{s} & \ldots & \boldsymbol{t}_{M/p}+\boldsymbol{s}\\
C'+2\boldsymbol{s} & \boldsymbol{t}_1+2\boldsymbol{s} & \boldsymbol{t}_2+2\boldsymbol{s} & \ldots & \boldsymbol{t}_{M/p}+2\boldsymbol{s}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
C'+(p-1)\boldsymbol{s} & \boldsymbol{t}_1+(p-1)\boldsymbol{s} & \boldsymbol{t}_2+(p-1)\boldsymbol{s} & \ldots & \boldsymbol{t}_{M/p}+(p-1)\boldsymbol{s}
\end{array}$
Every column in this arrangement is a coset of $C_0=\langle\boldsymbol{s}\rangle$. ∎
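A tiny numerical illustration of this lemma (my own toy example over $\mathbb{F}_2^4$; the seed vectors and translation vector are arbitrary choices): a code closed under translation by $\boldsymbol{s}$ splits into cosets of $\langle\boldsymbol{s}\rangle$.

```python
p, n = 2, 4
s = (1, 0, 1, 1)

def add(u, v):
    return tuple((a + b) % p for a, b in zip(u, v))

# Build a code that is invariant under translation by s: close a few seed vectors
# under repeated translation (each orbit has p elements)
seeds = [(0, 0, 0, 0), (0, 1, 0, 0), (1, 1, 1, 0)]
C = set()
for t in seeds:
    v = t
    for _ in range(p):
        C.add(v)
        v = add(v, s)

assert all(add(c, s) in C for c in C)               # the invariance assumed in the lemma

C0 = {(0,) * n, s}                                  # the one-dimensional space generated by s
cosets = {frozenset(add(c, v) for v in C0) for c in C}
print(len(C), "codewords split into", len(cosets), "disjoint cosets of C0")   # 6 and 3
```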
###### Lemma (vanishing character sum).
Let $C\subseteq\mathbb{F}_p^n$ be an arbitrary code of length $n$. Assume that the function
$f:\mathbb{F}_p^n\to\mathbb{C};\quad f(\boldsymbol{y})=\sum_{\boldsymbol{c}\in C}\omega^{\boldsymbol{c}\cdot\boldsymbol{y}},$
where $\omega=e^{2\pi i/p}$, vanishes outside a proper subspace $V_0$. Then there exists a non-zero vector $\boldsymbol{s}$ such that $C+\boldsymbol{s}=C$. What is more, the code $C$ can be written as a union of cosets of the linear code $C_0=\langle\boldsymbol{s}\rangle$, i. e.,
$C=\bigcup_{\boldsymbol{t}\in C'}C_0+\boldsymbol{t}.$ (8)
###### Proof.
Let $\chi_C$ denote the characteristic function of the code $C$, i. e., $\chi_C(\boldsymbol{x})\in\{0,1\}$, and $\chi_C(\boldsymbol{x})=1$ if and only if $\boldsymbol{x}\in C$. Define $g(\boldsymbol{x})=\omega^{\chi_C(\boldsymbol{x})}$. Then $g(\boldsymbol{x})=1-(1-\omega)\chi_C(\boldsymbol{x})$.
The Fourier transform of $g$ over $\mathbb{F}_p^n$ reads
$\begin{split}
\hat{g}(\boldsymbol{y}) &=\frac{1}{\sqrt{p^n}}\sum_{\boldsymbol{x}\in\mathbb{F}_p^n}\omega^{\boldsymbol{x}\cdot\boldsymbol{y}}g(\boldsymbol{x}) =\frac{1}{\sqrt{p^n}}\sum_{\boldsymbol{x}\in\mathbb{F}_p^n}\omega^{\boldsymbol{x}\cdot\boldsymbol{y}}\bigl(1-(1-\omega)\chi_C(\boldsymbol{x})\bigr)\\
&=\sqrt{p^n}\,\delta_{\boldsymbol{y},\boldsymbol{0}}-\frac{1-\omega}{\sqrt{p^n}}\sum_{\boldsymbol{x}\in\mathbb{F}_p^n}\omega^{\boldsymbol{x}\cdot\boldsymbol{y}}\chi_C(\boldsymbol{x})\\
&=\sqrt{p^n}\,\delta_{\boldsymbol{y},\boldsymbol{0}}-\frac{1-\omega}{\sqrt{p^n}}\sum_{\boldsymbol{c}\in C}\omega^{\boldsymbol{c}\cdot\boldsymbol{y}} =\sqrt{p^n}\,\delta_{\boldsymbol{y},\boldsymbol{0}}-\frac{1-\omega}{\sqrt{p^n}}f(\boldsymbol{y}),
\end{split}$
where $\delta_{\boldsymbol{y},\boldsymbol{0}}=1$ if $\boldsymbol{y}=\boldsymbol{0}$, and $\delta_{\boldsymbol{y},\boldsymbol{0}}=0$ otherwise.
This shows that for $\boldsymbol{y}\neq\boldsymbol{0}$, the Fourier transform $\hat{g}$ is proportional to the function $f$, and hence vanishes outside of $V_0$ as well. Recall that $V_0\neq\mathbb{F}_p^n$, as $V_0$ is a proper subspace by assumption. Let $\boldsymbol{s}$ be a non-zero vector that is orthogonal to all vectors in $V_0$. Furthermore, let $V_0^{c}$ denote the set-complement of $V_0$ in the full vector space.
We want to show that the code $C$ is invariant with respect to translations by $\boldsymbol{s}$, i. e., $C+\boldsymbol{s}=C$, or equivalently, $\chi_C(\boldsymbol{y}+\boldsymbol{s})=\chi_C(\boldsymbol{y})$ for all $\boldsymbol{y}$. This is in turn equivalent to showing that $g(\boldsymbol{y}+\boldsymbol{s})=g(\boldsymbol{y})$. In the following, $F^{-1}$ denotes the inverse Fourier transform:
$\begin{split}
g(\boldsymbol{y}+\boldsymbol{s})=(F^{-1}\hat{g})(\boldsymbol{y}+\boldsymbol{s}) &=\frac{1}{\sqrt{p^n}}\sum_{\boldsymbol{x}\in\mathbb{F}_p^n}\omega^{-\boldsymbol{x}\cdot(\boldsymbol{y}+\boldsymbol{s})}\hat{g}(\boldsymbol{x})\\
&=\frac{1}{\sqrt{p^n}}\sum_{\boldsymbol{x}\in V_0}\omega^{-\boldsymbol{x}\cdot(\boldsymbol{y}+\boldsymbol{s})}\hat{g}(\boldsymbol{x})+\frac{1}{\sqrt{p^n}}\sum_{\boldsymbol{x}\in V_0^{c}}\omega^{-\boldsymbol{x}\cdot(\boldsymbol{y}+\boldsymbol{s})}\hat{g}(\boldsymbol{x})\\
&=\frac{1}{\sqrt{p^n}}\sum_{\boldsymbol{x}\in V_0}\omega^{-\boldsymbol{x}\cdot\boldsymbol{s}}\omega^{-\boldsymbol{x}\cdot\boldsymbol{y}}\hat{g}(\boldsymbol{x}) =\frac{1}{\sqrt{p^n}}\sum_{\boldsymbol{x}\in V_0}\omega^{-\boldsymbol{x}\cdot\boldsymbol{y}}\hat{g}(\boldsymbol{x})\\
&=\frac{1}{\sqrt{p^n}}\sum_{\boldsymbol{x}\in V_0}\omega^{-\boldsymbol{x}\cdot\boldsymbol{y}}\hat{g}(\boldsymbol{x})+\frac{1}{\sqrt{p^n}}\sum_{\boldsymbol{x}\in V_0^{c}}\omega^{-\boldsymbol{x}\cdot\boldsymbol{y}}\hat{g}(\boldsymbol{x})\\
&=\frac{1}{\sqrt{p^n}}\sum_{\boldsymbol{x}\in\mathbb{F}_p^n}\omega^{-\boldsymbol{x}\cdot\boldsymbol{y}}\hat{g}(\boldsymbol{x}) =(F^{-1}\hat{g})(\boldsymbol{y})=g(\boldsymbol{y}).
\end{split}$
Here we have used the fact that $\hat{g}$ vanishes outside of $V_0$ and that $\boldsymbol{s}$ is orthogonal to all vectors in $V_0$.
From Lemma 3, it follows that the code $C$ can be written as a union of cosets of the code generated by all vectors that are orthogonal to $V_0$. ∎
Now we are ready to prove Theorem 3. Let $P_Q$ denote the projection operator onto a CWS code $Q=(S,C)$, i. e.
$\begin{split}
P_Q &=\sum_{\boldsymbol{t}\in C}\frac{1}{p^n}\sum_{\boldsymbol{x}\in\mathbb{F}_p^n}\omega^{\boldsymbol{t}\cdot\boldsymbol{x}}\,\boldsymbol{g}_1^{x_1}\cdots\boldsymbol{g}_n^{x_n}\\
&=\frac{1}{p^n}\sum_{\boldsymbol{x}\in\mathbb{F}_p^n}\Bigl(\sum_{\boldsymbol{t}\in C}\omega^{\boldsymbol{t}\cdot\boldsymbol{x}}\Bigr)\boldsymbol{g}_1^{x_1}\cdots\boldsymbol{g}_n^{x_n}\\
&=\frac{1}{p^n}\sum_{\boldsymbol{x}\in\mathbb{F}_p^n}\alpha_{\boldsymbol{x}}\,\boldsymbol{g}_1^{x_1}\cdots\boldsymbol{g}_n^{x_n}
\end{split}$ (9)
where $\boldsymbol{g}_1,\ldots,\boldsymbol{g}_n$ are the generators of the stabilizer, and $C$ is a classical code.
First note that the coefficients $\alpha_{\boldsymbol{x}}$ in (3) are uniquely determined since the operators $\boldsymbol{g}_1^{x_1}\cdots\boldsymbol{g}_n^{x_n}$ are a subset of the error basis of linear operators on the space. The coefficient $\alpha_{\boldsymbol{x}}$ is proportional to the Hilbert–Schmidt inner product of $P_Q$ with $\boldsymbol{g}_1^{x_1}\cdots\boldsymbol{g}_n^{x_n}$. On the other hand, $\alpha_{\boldsymbol{x}}=\sum_{\boldsymbol{t}\in C}\omega^{\boldsymbol{t}\cdot\boldsymbol{x}}=f(\boldsymbol{x})$, where $f$ is the function appearing in Lemma 3. So if the coefficients $\alpha_{\boldsymbol{x}}$ vanish outside of a proper subspace $V_0$, the classical code $C$ can be decomposed as a union of cosets of $C_0$. Then (3) can be re-written as follows:
$\begin{split}
P_Q &=\frac{1}{p^n}\sum_{\boldsymbol{x}\in V_0}\Bigl(\sum_{\boldsymbol{t}'\in C'}\sum_{\boldsymbol{c}\in C_0}\omega^{(\boldsymbol{t}'+\boldsymbol{c})\cdot\boldsymbol{x}}\Bigr)\boldsymbol{g}_1^{x_1}\cdots\boldsymbol{g}_n^{x_n}\\
&=\frac{1}{p^n}\sum_{\boldsymbol{x}\in V_0}\Bigl(\sum_{\boldsymbol{c}\in C_0}\omega^{\boldsymbol{c}\cdot\boldsymbol{x}}\sum_{\boldsymbol{t}'\in C'}\omega^{\boldsymbol{t}'\cdot\boldsymbol{x}}\Bigr)\boldsymbol{g}_1^{x_1}\cdots\boldsymbol{g}_n^{x_n}\\
&=\frac{|C_0|}{p^n}\sum_{\boldsymbol{x}\in V_0}\Bigl(\sum_{\boldsymbol{t}'\in C'}\omega^{\boldsymbol{t}'\cdot\boldsymbol{x}}\Bigr)\boldsymbol{g}_1^{x_1}\cdots\boldsymbol{g}_n^{x_n}
\end{split}$ (10)
In the last step we have used the fact that the spaces and are orthogonal to each other, i. e., the inner product vanishes. Now assume that the space has dimension and that is a basis of . Then every vector can be expressed as . For every we define the vectors with , forming another classical code . Further, we define the operators . This allows us to express (3) as
$P_Q=\frac{1}{p^m}\sum_{\boldsymbol{y}\in\mathbb{F}_p^m}\Bigl(\sum_{\boldsymbol{s}\in D}\omega^{\boldsymbol{s}\cdot\boldsymbol{y}}\Bigr)\tilde{\boldsymbol{g}}_1^{y_1}\cdots\tilde{\boldsymbol{g}}_m^{y_m}.$ (11)
Hence, whenever the classical code associated to a CWS code has some non-trivial shift invariance, the projection onto a CWS code can be expressed as a projection onto a USt code (cf. (2)), thereby increasing the dimension of the underlying stabilizer code and reducing the size of the classical code. In order to obtain a unique representation, we may assume that the stabilizer code is of maximal dimension, and hence the classical code is “without any linear structure.”
In order to show uniqueness, consider the coefficients of the expansion of the projection in terms of the operator basis formed by the $n$-qudit Pauli matrices. Clearly, we have . If the group were a proper subgroup of , the coefficients would vanish for $\mathbf{x}$ outside a proper subspace, contradicting the assumption that the classical code has no linear structure.
Note that the stabilizer is only unique up to the choice of some phase factors of the error basis. For example, replacing by will introduce some phase factor which has to be compensated by changing the first coordinate of the codewords of the classical code . To finally fix the degree of freedom, we can enforce , with for and . ∎
## 4 Symmetries of the stabilizer state of a CWS code
We are now ready to discuss the relationship between the symmetries of the CWS code and that of the corresponding stabilizer state .
{theorem}
For any CWS code with permutation symmetry , there exists a stabilizer state with the same permutation symmetry such that .
###### Proof.
To prove this theorem, we will need some lemmas. {lemma} If the projection operator given in Eq. (3) is invariant under a permutation of the qudits, then the stabilizer code related to expressing in terms of a USt code as in Eq. (11) is invariant with respect to the permutation as well.
###### Proof.
The statement follows directly from the uniqueness of the stabilizer group generated by the operators in Eq. (11). ∎
We now prove a lemma for a special case of Theorem 4, when the CWS code is a Calderbank–Shor–Steane (CSS) code [4, 22].
{lemma}
For a CSS code with permutation symmetry , there exists a stabilizer state such that has the same permutation symmetry as .
###### Proof.
For a CSS code, the stabilizer generators can always be chosen such that every generator is either a tensor product of powers of $X$ (denoted by $S_X$) or a tensor product of powers of $Z$ (denoted by $S_Z$). We can use the following matrix form:
$$\begin{bmatrix} S_X & 0 \\ 0 & S_Z \end{bmatrix}$$
As the permutation symmetry of does not change the type of an operator, both $S_X$ and $S_Z$ necessarily have the same symmetry. Furthermore, the logical operators can also be chosen as either tensor products of powers of $X$ or tensor products of powers of $Z$, which correspond to the dual of the classical codes associated to either the stabilizers or the stabilizers, respectively. Without loss of generality let us choose a set of commuting logical operators which are all of type. Then the group generated by the set of mutually commuting operators is again invariant under the permutation . As the stabilizer group is maximal, it stabilizes a unique state . Hence is the stabilizer state with the desired symmetry, and the CSS code can be expressed as a CWS code in terms of and some classical code . ∎
We now prove a lemma for the stabilizer code case of Theorem 4, which improves the result of Lemma 4.
{lemma}
For a stabilizer code with permutation symmetry , there exists a stabilizer state such that has the same permutation symmetry as .
###### Proof.
To prove this lemma, we shall use a standard form for stabilizers (see [20, Section 10.5.7]):
$$\left[\begin{array}{ccc|ccc} I & A_1 & A_2 & B & 0 & C \\ 0 & 0 & 0 & D & I & E \end{array}\right] \;=\; \begin{bmatrix} S_X & S_Z \\ 0 & S'_Z \end{bmatrix} \;=\; \begin{bmatrix} S \\ S' \end{bmatrix}$$
where is an matrix, is an matrix, is an matrix, is an matrix, is an matrix, and is an matrix. Similar as in the CSS case, we can choose a set of commuting logical operators which are all of type. In matrix form, they are given by . Then the group generated by the mutually commuting operators in stabilizes a unique state which is invariant with respect to the permutation . Hence is the stabilizer state with the desired symmetry that can be used to express as CWS code with some classical code . ∎
To prove Theorem 4, given a CWS code , we first find its unique decomposition as a USt code , based on Theorem 3. Here is in general a stabilizer code with generators. If has a permutation symmetry , then according to Lemma 4, the stabilizer code must also have the symmetry . Now according to Lemma 4, there exists a quantum state in the stabilizer code which also has the symmetry . Hence is the stabilizer state with the desired symmetry. Note that the stabilizer of the state contains the original stabilizer . Therefore, common eigenspaces of are further decomposed into one-dimensional joint eigenspaces of , and we can rewrite the projection onto the USt code in the form corresponding to a CWS code. ∎
## 5 Symmetries of the Classical Code
Theorem 4 does not make any statement about the symmetry of the classical code. In general, if we insist on using the canonical form of the CWS code as given by Theorem 3, we cannot expect that the (non-linear) classical code associated with the CWS code has the same symmetry as . That is, in this case, even if the stabilizer has the same permutation symmetry as the quantum code , one might not be able to find a classical code with the same symmetry. Let us look at an example.
{example}
Consider the stabilizer state (hence a CWS code, denoted by ), which is invariant under all permutations. Using the canonical form of as given by Theorem 3, the group is generated by and all pairs of , which is permutation invariant. However, the classical code consists of the vector which is one in the first coordinate and zero elsewhere, i. e., is a code with a single codeword , which has a smaller symmetry group than that of .
On the other hand, if we choose the group generated by and all pairs of , the corresponding classical code consists just of the zero vector. So in the representation , both and have the same permutation symmetries as .
This example indicates that, by exploiting the phase-factor freedom in the USt decomposition of a CWS code and thereby deviating slightly from the canonical form, there is some chance of finding both a stabilizer and a classical code with the same permutation symmetry as the CWS code.
To study the properties of the classical code associated with a CWS code , consider the case where the stabilizer state has some permutation symmetry . Then for given generators of the stabilizer , the permuted operators generate the same stabilizer . The transformation can be characterized by a -valued, invertible matrix given by
$$\mathbf{g}_i^{\sigma} \;=\; \prod_{j=1}^{n} \mathbf{g}_j^{R_{ji}}. \qquad (12)$$
Let us write the
|
2020-02-21 04:02:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9096865653991699, "perplexity": 364.65751941537997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145438.12/warc/CC-MAIN-20200221014826-20200221044826-00428.warc.gz"}
|
https://www.edegan.com/mediawiki/index.php?title=BPP_Field_Exam_2006&oldid=28098
|
# BPP Field Exam 2006
The 2006 field exam was on June 23rd 2006. Reference material was permitted, communication was not. It was stated that grading would be based on the assigned times for each question.
## Format and Originators
The 2006 BPP field exam had the following format:
• Morning (3 hrs): Question A.1 or A.2 (2 hrs), Question B.1 or B.2 (1 hr)
• Afternoon (3 hrs): Question C.1 or C.2 (1 hr), Question D (2 hrs)
The best guess as to question originators is:
• A.1 - de Figueiredo
• B.1 - Mowery
• B.2 - Dal Bo
• C.1 - Teece (written by Chesbrough)
• C.2 - Morgan
• D - Spiller
## Questions
### A.1: The Theory of Partnerships
A partnership group has a surplus it needs to allocate to the partners at the end of the year. The procedure it uses (as enshrined in its Operating Agreement) is as follows. Decisions are made by a committee-of-the-whole (i.e. the entire partnership). At the annual meeting for the partnership, one of the partners is chosen randomly (with each having an equal likelihood of being selected) to propose an allocation to each of the members, including herself. If her proposal is accepted by a majority of the partnership, then that proposal is implemented. If it is not passed by a majority, then another partner is chosen randomly to make a proposal and the procedure repeats. All of the partners prefer higher allocations to lower allocations, and faster decisions to slower decisions.
a.) Outline a model of the above situation. Assuming the game is a single-shot and there are no punishments by the partnership (no reputation effects), discuss the equilibrium for your model. Note: you do not have to solve the model; simply discuss the proposition(s) you expect that could be derived and the intuition(s) behind it (them).
b.) Suppose that the partnership (membership) is stable, infinitely-lived, and makes a surplus allocation decision every year. How would you account for this in your model? Discuss equilibrium behavior and strategies using these assumptions (again you do not need to explicitly solve the model, simply explain your reasoning).
c.) Returning to the one-shot/no-reputation case, consider what would happen if partnership shares are not distributed evenly, and members have probabilities of being recognized which are proportional to their shares. What would be the equilibrium strategies and outcomes you would expect in this case?
d.) Finally, consider what would happen if both voting rights and recognition probabilities were proportional to the shares held, what would you expect in this case?
### A.2: Managerial Productivity & Incentives
Consider Holmstrom's 1982 managerial model, except that the manager knows her productivity parameter from the start. The manager lives for two periods $(t = 1, 2)\,$. Once she is employed by a firm in period $t\,$, the firm's production cost is $C_t = B - e_t\,$, where $B\,$ is her productivity parameter and $e_t \gt 0\,$ is the effort she exerts at a cost of $\phi(e_t)\,$ (with $\phi' \gt 0\,$ and $\phi'' \gt 0\,$). $C_t\,$ is observable but not verifiable, but $B\,$ and $e_t\,$ are not observed by the firms. The manager's utility is $\sum_{t=1}^2 \delta^{t-1}[I_t -\phi(e_t)]\,$, where $I_t\,$ is her income at time $t\,$ and $\delta\,$ is her discount factor. Firms are competitive (they derive the same benefit from the manager's activity) and the manager cannot commit to staying with the same firm. It is common knowledge that $B \in \{\underline{B}, \overline{B}\}\,$, where $\overline{B} \gt \underline{B} \gt 0\,$, and $Pr(B = \overline{B})=p\,$. Let $\Delta B \equiv \overline{B} - \underline{B}\,$, and assume that $\phi(\Delta B) \lt \delta\Delta B\,$.
a.) Derive the best separating equilibrium for the manager (the manager offers the contract). In your answer, comment on the "intuitive criterion".
Depart now from part (a) above by assuming that the cost is verifiable, and that there is only one firm which chooses incentive schemes in both periods. Assume further that the firm cannot commit to a second period incentive scheme in the first period.
b.) Show that if the firm wants to separate the two types, then in the first period it must offer cost targets $\underline{C}\,$ and $\overline{C}\,$ such that $(\underline{C} - \overline{C})\,$ does not converge to zero as $\Delta B\,$ goes to zero. (Using a quadratic $\phi(\cdot)\,$ may simplify the derivations. Hint: look at manager $\underline{B}\,$'s second period rent when she pretends to be $\overline{B}\,$. Write the two intertemporal incentive constraints).
c.) Use an intuitive argument to conclude that the optimal scheme for the firm is to have the two types pool when $\Delta B\,$ is small.
### B.1: Understanding Innovation
a.) Explore the role of complementarities in the innovation process. To what extent has the modern literature augmented our understanding on this issue as compared to Schumpeter.
b.) To what extent are new frameworks needed to explain commercial successes and failures in the innovative process.
### B.2: The Alleged Inefficiencies of Democracy
Discuss the following paragraph:
The democratic system as we know it is bound to be inefficient. Multiple factions will get organized in order to compete for the spoils of government, in what will effectively be a legitimized rent-seeking contest that will dissipate enormous amounts of wealth. It would be far more efficient to either
a.) Have a dictatorship, or
b.) To auction off the presidency every 4 years. In this way, instead of spending effort, competing parties would bid a money transfer to the state which only has to [be] paid by the winner, and which could be returned to citizens.
### C.1: Open Innovation
Professor Henry Chesbrough of the Haas School of Business (a BPP PhD) recently coined the term, "open innovation," which he describes as a new model for managing corporate innovation in which "firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as the firms look to advance their technology."
a.) Discuss the use of "open innovation" by the firms and entrepreneurs discussed by Lamoreaux and Sokoloff and the Joseph Schumpeter of The Theory of Economic Development?
b.) What factors underpin the (asserted) revival of the "open innovation" strategy?
c.) What are the characteristics of knowledge that support or undercut the effectiveness of the "open innovation" strategy?
d.) What type of intellectual property "regime" favors the development of an open innovation strategy for the firm seeking to tap the supply of "external ideas" environment?
e.) Under what circumstances can prospective entrants benefit from the open innovation strategies of incumbent firms?
### C.2: The Organizational Implications of the Modigliani-Miller Theorem
The Modigliani-Miller theorem provides conditions where a firm's capital structure is irrelevant to its investment decisions. Yet, in practice, a firm's capital structure does seem to profoundly affect investment decision making as well as the valuation placed on the company in capital markets. Using tools and concepts from the field of contract theory, offer some explanations for the following questions.
a.) What determines the optimal stake in a company for outside equity to take in an owner-operated firm? How does the structure of the stake (debt versus equity) affect this decision?
b.) What determines when a firm should make a seasoned equity offering (i.e. sell additional equity in the capital market after previously issuing equity at some point in the past)? How should markets respond to seasoned equity offerings in determining the valuation of a firm?
c.) A firm is competing with several other firms in trying to acquire some other company that is on the market. What determines the optimal bidding strategy? How should the market respond to a successful acquisition in terms of the share price for the acquirer and the target?
d.) Suppose that a firm is selling some assets and faces bidders offering equity stakes. When would it prefer to sell for equity versus selling in exchange for cash or debt?
In formulating your answers, please describe one or more of the key economic tradeoffs associated with each of the questions and describe how these tradeoffs are affected by some of the details of the situation. In particular, your answers should focus on how the organization will react to changes in its balance sheet in terms of effort undertaken in the organization, projects pursued or not pursued, and acquisitions pursued or not pursued.
In doing so, you may, at your discretion, provide a "toy model" to illustrate your answer. You may also, at your discretion, highlight key existing work illuminating the tradeoffs you have identified.
### D: Public Sector Reform in New Zealand
Labor relations in the New Zealand public service sector have been changed significantly since the State Sector Act (1988). For example, the wage-fixing system in the public service before 1988 was based on a centrally negotiated "annual general adjustment" to pay rates, and a system of service-wide pay rates and scales applied to some 200 employee occupational classifications.
The 1988 Act made chief executives the employer, and set out requirements in terms of the merit principle and other "good employer" provisions. The State Services Commission, the public employer, delegated collective bargaining authority to chief executives, replacing the previous system of centrally determined pay and conditions. Chief executives are accorded "all the rights, duties, and powers of an employer" with respect to the employees of their departments. Thus, chief executives may appoint "such employees [as they] think necessary for the efficient [operation of their department]"; and, subject to agreed conditions of employment, they may also remove employees "at any time".
Chief executives, in consultation with the State Services Commission, also make appointments to Senior Executive Service positions on the basis of merit, for terms not exceeding five years (with eligibility for reappointment), and set conditions of service. In turn, chief executive appointments are approved by Cabinet, based on the Commission's recommendation. The Cabinet may decline the recommendation, in which case it may direct the Commission to appoint a named person to the position. Chief executives are the equivalent of deputy heads of departments, one level below that of a secretary of a ministry in the US.
Occupational classes were rationalized: for example, the Ministry of External Relations and Trade has reduced the number of classifications from 16 to two, and in 1993 the Ministry of Agriculture and Fisheries had only four. The location of an individual in a salary range is dependent on performance, and chief executives and managers have discretion in where they place individuals within the range. Departments may also pay bonuses to staff if they wish.
Discuss:
a.) Develop a theoretical framework to analyze, from a positive perspective, the introduction of bureaucratic restraints on chief public officials
b.) Using this framework, analyze the implications of New Zealand's reforms removing such bureaucratic restraints
c.) Discuss the institutional conditions that impact the effectiveness of these reforms
d.) Develop an empirical way by which you could test your analysis in (a) through (c).
|
2022-09-27 02:09:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2729414999485016, "perplexity": 2509.119483888132}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334974.57/warc/CC-MAIN-20220927002241-20220927032241-00320.warc.gz"}
|
http://math.stackexchange.com/questions/235969/show-that-x41-is-reducible-in-p-adic-numbers-mathbbq-p-for-p2-prime
|
# Show that $x^4+1$ is reducible in p-adic numbers $\mathbb{Q}_p$ for p>2 prime.
This is a homework problem for algebraic number theory but I'm having trouble getting started. Do I use induction in general, or show this holds for $p \equiv 1,3$ (mod 4)?
Any help would be appreciated!
Hint: do you have a square root of 2 available? [Clearly you are done if you have a fourth root of -1 to hand] – Mark Bennet Nov 12 '12 at 21:57
Mark's hint is on exactly the right path, but it's worth asking: what would make you even think to use induction on this problem? What would you induct over? – Steven Stadnicki Nov 12 '12 at 22:00
@StevenStadnicki I think he wants to use Newton approximation / Hensel lemma, i.e. stepping from $\mod p^n$ to $\mod p^{n+1}$. – Hagen von Eitzen Nov 12 '12 at 22:06
@HagenvonEitzen Ahhh, okay, that makes a lot more sense. I'd been thinking about stepping from one prime to the next, which seemed crazy on the face of it. – Steven Stadnicki Nov 12 '12 at 22:12
(1) Use the fact that $$X^4+1 = (X^2+\sqrt{-1})(X^2-\sqrt{-1}) = (X^2+\sqrt{2}X+1)(X^2-\sqrt{2}X+1) = (X^2+\sqrt{-2}X-1)(X^2-\sqrt{-2}X-1),$$ to show that $X^4+1$ is reducible in $\mathbb{F}_p[X]$ (even for $p=2$). You may need the law of quadratic reciprocity.
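Not part of the original answer, but a quick computational sanity check of the hint: for every odd prime $p$ tested, a brute-force search finds a monic quadratic factor of $X^4+1$ over $\mathbb{F}_p$ (Hensel's lemma then lifts such a factorization to $\mathbb{Q}_p$ when it is separable mod $p$). The function name and the search strategy below are my own illustration.

```python
# Brute-force check: x^4 + 1 has a monic quadratic factor x^2 + b x + c over F_p.
# If (x^2+bx+c)(x^2+dx+e) = x^4 + 1 over F_p, comparing coefficients forces
# d = -b and c*e = 1, which leaves a small search over (b, c).
def quadratic_factor_mod_p(p):
    for b in range(p):
        for c in range(1, p):
            e = pow(c, p - 2, p)                 # c^{-1} mod p via Fermat
            if (c + e - b * b) % p == 0 and (b * (e - c)) % p == 0:
                return (b, c), (-b % p, e)
    return None

for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59]:
    (b1, c1), (b2, c2) = quadratic_factor_mod_p(p)
    print(f"p={p}: (x^2+{b1}x+{c1})(x^2+{b2}x+{c2}) = x^4+1 mod {p}")
```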
|
2013-12-19 16:57:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8606633543968201, "perplexity": 334.58696945782634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345765796/warc/CC-MAIN-20131218054925-00013-ip-10-33-133-15.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/algebra/143599-inequalities-print.html
|
# Inequalities
• May 7th 2010, 04:46 PM
s3a
Inequalities
Can someone remind me of the rules please? When do I switch the inequality? When don't I?
Any input would be GREATLY appreciated!
• May 7th 2010, 04:57 PM
skeeter
Quote:
Originally Posted by s3a
Can someone remind me of the rules please? When do I switch the inequality? When don't I?
Any input would be GREATLY appreciated!
Two common situations where the inequality sign must be reversed:
(i) multiplying or dividing both sides by a negative number, e.g. $2<3$ but $-2>-3$;
(ii) taking reciprocals when both sides have the same sign, e.g. $\frac{3}{2}>\frac{1}{3}$ but $\frac{2}{3} <\frac{3}{1}$.
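A tiny numeric illustration of both cases (my addition, not from the original thread):

```python
# When do we flip the inequality sign? Two common cases, checked numerically.
a, b = 3 / 2, 1 / 3        # 3/2 > 1/3
assert a > b
assert -2 * a < -2 * b     # multiplying both sides by a negative number flips it
assert 1 / a < 1 / b       # reciprocals of two positive numbers flip it
assert a + 5 > b + 5       # adding the same number does NOT flip it
print("all checks passed")
```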
|
2016-07-24 09:07:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8383511304855347, "perplexity": 5385.429066600281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823989.0/warc/CC-MAIN-20160723071023-00099-ip-10-185-27-174.ec2.internal.warc.gz"}
|
http://www.monachos.gr/forum/showthread.php/4691-%CE%A0%CF%81%CF%8C%CE%B2%CE%BB%CE%B7%CE%BC%CE%B1-%CE%BC%CE%B5-%CF%83%CF%8E%CE%BC%CE%B1%CF%84%CE%B1-panel?s=5606a7605a1701eb3a4aa6c8e8ca67a2&p=56531
|
# : panel
1. ## panel
|
2019-04-25 07:43:45
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9956413507461548, "perplexity": 4456.071362881986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578711882.85/warc/CC-MAIN-20190425074144-20190425100144-00343.warc.gz"}
|
http://clay6.com/qa/50517/find-the-coordinates-of-the-foci-the-vertices-the-length-of-major-axis-the-
|
Find the coordinates of the foci, the vertices, the length of the major axis, the minor axis, the eccentricity and the length of the latus rectum of the ellipse: $\frac{x^2}{16}+\frac{y^2}{9}=1$
$(A)\;$ Axis: major axis along the $x$-axis; Foci: $(\pm\sqrt{7}, 0)$; Vertices: $(\pm 4, 0)$; Length of the major axis $= 8$; Length of the minor axis $= 6$; eccentricity $= \frac{\sqrt 7}{4}$
$(B)\;$ Axis: major axis along the $y$-axis; Foci: $(0, \pm 7)$; Vertices: $(0, \pm 4)$; Length of the major axis $= 8$; Length of the minor axis $= 6$; eccentricity $= \frac{\sqrt 7}{4}$
$(C)\;$ Axis: major axis along the $x$-axis; Foci: $(\pm 2\sqrt 7, 0)$; Vertices: $(\pm 4, 0)$; Length of the major axis $= 8$; Length of the minor axis $= 6$; eccentricity $= \frac{\sqrt 7}{2}$
$(D)\;$ Axis: major axis along the $y$-axis; Foci: $(0, \pm 2\sqrt 7)$; Vertices: $(0, \pm 4)$; Length of the major axis $= 8$; Length of the minor axis $= 6$; eccentricity $= \frac{\sqrt 7}{2}$
Toolbox:
• Equation of an ellipse along major axis is $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$
• $c = \sqrt{a^2-b^2}$, where $c$ is the distance from the centre of the ellipse to each focus.
• Coordinates of vertices are $( \pm a, 0)$
• Length of the latus rectum is $\large\frac{2b^2}{a}$
• Eccentricity $e=\large\frac{c}{a}$
• Length of the major axis is 2a ; Length of the minor axis is 2b.
Step 1 :
The given equation is $\frac{x^2}{16}+\frac{y^2}{9}=1$
Here $a^2 > b^2$
$\Rightarrow a > b$
$\therefore$ The major axis is along x - axis and the minor axis is along y - axis.
On comparing this with the equation of the ellipse
$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$
$a=4$ and $b=3$
$\therefore c = \sqrt{a^2-b^2} = \sqrt{16-9} = \sqrt{7}$
Hence the coordinates of the foci are
$( \pm \sqrt{7}, 0)$
Step 2 :
Coordinates of vertices are $(\pm 4, 0)$
Length of the major axis is $2a=2 \times 4 = 8$
Length of the minor axis is $2b=2 \times 3 = 6$
Step 3 :
Eccentricity $e = \frac{c}{a} = \frac{\sqrt 7}{4}$
Length of the latus rectum $= \frac{2b^2}{a} = \frac{2 \times 9}{4} = \frac{9}{2}$
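As a quick numerical cross-check of the steps above (added here for illustration; not part of the original solution):

```python
import math

a, b = 4.0, 3.0                       # from x^2/16 + y^2/9 = 1
c = math.sqrt(a**2 - b**2)            # distance from centre to each focus
print("foci:         (±%.4f, 0)" % c)           # ±sqrt(7) ≈ ±2.6458
print("vertices:     (±%.0f, 0)" % a)
print("major axis:   %.0f" % (2 * a))
print("minor axis:   %.0f" % (2 * b))
print("eccentricity: %.4f" % (c / a))           # sqrt(7)/4 ≈ 0.6614
print("latus rectum: %.1f" % (2 * b**2 / a))    # 9/2
```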
edited Jul 15, 2014
|
2018-05-21 16:52:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.935278058052063, "perplexity": 696.8998431742916}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864461.53/warc/CC-MAIN-20180521161639-20180521181639-00361.warc.gz"}
|
https://opus4.kobv.de/opus4-zib/frontdoor/index/index/docId/1168
|
## A simple mathematical model of the bovine estrous cycle: follicle development and endocrine interactions
Please always quote using this URN: urn:nbn:de:0297-zib-11682
• Bovine fertility is the subject of extensive research in animal sciences, especially because fertility of dairy cows has declined during the last decades. The regulation of estrus is controlled by the complex interplay of various organs and hormones. Mathematical modeling of the bovine estrous cycle could help in understanding the dynamics of this complex biological system. In this paper we present a mathematical model of the bovine estrous cycle that includes the processes of follicle and corpus luteum development and the key hormones that interact to control these processes. Focus in this paper is on development of the model, but also some simulation results are presented, showing that a set of equations and parameters is obtained that describes the system consistent with empirical knowledge. Even though the majority of the mechanisms that are included are only known qualitatively as stimulatory or inhibitory effects, the model surprisingly well features quantitative observations made in reality. This model of the bovine estrous cycle could be used as a basis for more elaborate models with the ability to study effects of external manipulations and genetic differences.
|
2017-10-19 10:38:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41636234521865845, "perplexity": 1419.2032189685233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823282.42/warc/CC-MAIN-20171019103222-20171019123222-00461.warc.gz"}
|
https://phenomenalworld.org/tag:staff+picks
|
# Staff picks
## Student Debt & Racial Wealth Inequality
The effect of cancelling student debt on various measures of individual and group-level inequality has been a matter of controversy, especially given presidential candidates’ recent and high-profile proposals to eliminate outstanding student debt. In this work, I attempt to shed light on the policy counterfactual by analyzing the Survey of Consumer Finances for 2016, the most recent nationally-representative dataset that gives a picture of the demographics of student debt.
When we test the effects of cancelling student debt on the racial wealth gap, we conclude that across all samples, across all quantiles, the racial wealth gap narrows when student debt is cancelled, and it narrows more the more student debt is cancelled.
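The kind of comparison described can be sketched roughly as follows (my illustrative sketch, not the author's code; the column names `networth`, `student_debt`, `race`, and the cancellation cap are assumptions, and survey weights are ignored for brevity):

```python
import pandas as pd

def racial_wealth_gap(df: pd.DataFrame, cancel_cap: float,
                      quantiles=(0.25, 0.5, 0.75)) -> pd.DataFrame:
    """White-Black wealth gap at several quantiles, before and after
    cancelling up to `cancel_cap` of student debt per household."""
    out = df.copy()
    forgiven = out["student_debt"].clip(upper=cancel_cap)
    out["networth_after"] = out["networth"] + forgiven

    rows = {}
    for q in quantiles:
        by_race = out.groupby("race")[["networth", "networth_after"]].quantile(q)
        rows[q] = {
            "gap_before": by_race.loc["white", "networth"] - by_race.loc["black", "networth"],
            "gap_after": by_race.loc["white", "networth_after"] - by_race.loc["black", "networth_after"],
        }
    return pd.DataFrame(rows).T

# e.g. compare racial_wealth_gap(scf2016, cancel_cap=10_000)
#      with racial_wealth_gap(scf2016, cancel_cap=50_000)
```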
## The "Next Big Thing" is a Room
If you don’t look up, Dynamicland seems like a normal room on the second floor of an ordinary building in downtown Oakland. There are tables and chairs, couches and carpets, scattered office supplies, and pictures taped up on the walls. It’s a homey space that feels more like a lower school classroom than a coworking environment. But Dynamicland is not a normal room. Dynamicland was designed to be anything but normal.
Led by the famous interface designer Bret Victor, Dynamicland is the offshoot of HARC (Human Advancement Research Community), most recently part of YCombinator Research. Dynamicland seems like the unlikeliest vision for the future of computers anyone could have expected.
Let’s take a look. Grab one of the scattered pieces of paper in the space. Any will do as long as it has those big colorful dots in the corners. Don’t pay too much attention to those dots. You may recognize the writing on the paper as computer code. It’s a strange juxtaposition: virtual computer code on physical paper. But there it is, in your hands. Go ahead and put the paper down on one of the tables. Any surface will do.
|
2019-08-20 13:22:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23227785527706146, "perplexity": 2955.7959393213837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315329.55/warc/CC-MAIN-20190820113425-20190820135425-00205.warc.gz"}
|
https://hal-lirmm.ccsd.cnrs.fr/lirmm-00324572
|
# A Simple Linear Time LexBFS Cograph Recognition Algorithm
3 ALGCO - Algorithmes, Graphes et Combinatoire
LIRMM - Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier
Abstract : Recently Lexicographic Breadth First Search (LexBFS) has been shown to be a very powerful tool for the development of linear time, easily implementable recognition algorithms for various families of graphs. In this paper, we add to this work by producing a simple two LexBFS sweep algorithm to recognize the family of cographs. This algorithm extends to other related graph families such as $P_4$-reducible, $P_4$-sparse and distance hereditary. It is an open question whether our cograph recognition algorithm can be extended to a similarly easy algorithm for modular decomposition.
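For readers unfamiliar with LexBFS itself, the sketch below shows a generic partition-refinement implementation (my own illustration, not the authors' code; the paper's algorithm runs two LexBFS sweeps and then compares the resulting orders, which is not reproduced here):

```python
def lex_bfs(adj, start=None):
    """Generic Lexicographic Breadth First Search by partition refinement.

    adj: dict mapping each vertex to the set of its neighbours.
    Returns a LexBFS ordering. A linked-list implementation achieves
    O(n + m); this list-based version is simpler but O(n * m).
    """
    vertices = list(adj)
    if start is not None:
        vertices.remove(start)
        vertices.insert(0, start)
    slices = [vertices]          # ordered partition of the unvisited vertices
    order = []
    while slices:
        v = slices[0].pop(0)     # first vertex of the first slice
        if not slices[0]:
            slices.pop(0)
        order.append(v)
        refined = []
        for s in slices:         # split each slice: neighbours of v go first
            inside = [u for u in s if u in adj[v]]
            outside = [u for u in s if u not in adj[v]]
            if inside:
                refined.append(inside)
            if outside:
                refined.append(outside)
        slices = refined
    return order

# Example on the path a-b-c-d (itself not a cograph, since P4 is exactly the
# forbidden induced subgraph of cographs):
# lex_bfs({"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}, "a")
```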
Document type :
Journal articles
https://hal-lirmm.ccsd.cnrs.fr/lirmm-00324572
Contributor : Christophe Paul
Submitted on : Thursday, September 25, 2008 - 2:23:55 PM
Last modification on : Thursday, December 17, 2020 - 12:00:24 PM
### Citation
Anna Bretscher, Derek Corneil, Michel Habib, Christophe Paul. A Simple Linear Time LexBFS Cograph Recognition Algorithm. SIAM Journal on Discrete Mathematics, Society for Industrial and Applied Mathematics, 2008, 22 (4), pp.1277-1296. ⟨10.1137/060664690⟩. ⟨lirmm-00324572⟩
|
2021-06-14 12:22:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26676812767982483, "perplexity": 3483.701579753735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487612154.24/warc/CC-MAIN-20210614105241-20210614135241-00078.warc.gz"}
|
http://mhze.artofhamburg.de/parabolic-dish-calculator.html
|
These brilliant templates are a fantastic way of helping teach your children about parabolic designs!. No, You must put the feed antenna in focus of the parabola, otherwise it will be defocused and efficiency will drop. The parabola is defined as the locus of a point which moves so that it is always the same distance from a fixed point (called the focus) and a given line (called the directrix). The curved surface with the cross-sectional shape of the antenna receives the waves. λ = wavelength. For example, flashlights use parabolic reflectors to reflect light from the bulb forward in a concentrated beam, and some solar collectors use a parabolic reflector to concentrate the sun's rays to heat water or generate electricity (see Figure 3, below). Alter incoming plane waves traveling along the same axis as the parabola into a wave that is spherical and they all meet at the focus of the reflector. Focus & directrix of a parabola from equation. The most common form is shaped like a dish and is popularly called a dish antenna or parabolic dish. Cable TV vs Satellite TV comparison. 7 dB @ 2320 MHz respectively. Parabolic Dish Calculator This program will calculate the gain and focal piont for a parabola WiFi Video Tutorials Here. This can help a lot on a test, as most teachers probably won’t let you use Wolfram Alpha in class. Saved By the Sun || student handout If you decide to build a parabolic cooker, you will need to find the focal point of your cooker in order to place your water container where the sun's rays will be strongest. type is needed; a parabolic dish is often a good choice. DIRECTV is the oldest satellite TV company. Calculating half of a parabolic curve involves calculating the whole parabola and then taking points on only one side of the vertex. Parabolic antenna. Solid Signal > Satellite Equipment > DISH Network Dishes and LNBs DISH Network Dishes and LNBs 23 Products Found. [ 32 ] explored the ultrasound-enhanced silica gel regeneration mechanism with the field synergy analysis of convective heat and mass transfer. Th ey require minimum skill for construction and can easily produce gains in excess of 25 dB, depending on size and frequency. The missing value will be calculated. Recycling TV Antennas for 2-Meter Use “Green” that aluminum into something useful right in your own backyard! By Ronald Lumachi, WB2CQM Just as quickly as cable TV companies wire up neighborhoods for CATV service, home owners remove TV antennas, masts. If you apply vacuum to one side of a flat circular elastic sheet, will it make a parabolic or spherical shape? Based on intuition, I would expect something more complicated than those two simple shapes, but I did not calculate it. The modern EXCEL spreadsheet just cries out to. I chose a pot that was 9. IPH linear direct steam. The parabola is defined as the locus of a point which moves so that it is always the same distance from a fixed point (called the focus) and a given line (called the directrix). For over 10 years the 24dBi 2. Antennas Directivity, classification of antennas, Directivity and Gain, return loss, Input Impedance, bandwidth of an antenna, Polarization Mismatch Loss, antenna's beamwidth, Radiation Pattern, 1/4 Wavelength Ground Plane, Parabolic Dish, horn antenna derives, Yagi antenna, Panel or Patch antennas, BiQuad antenna, pdf file. Dish and feed are unpainted aluminium. Hey all, I am a 5th year Engineer student and I choose "reducing side lobes of parabolic dish antenna" as my final year project. 
" I agree with this, but I am looking for some reference that will allow me to calculate what will happen to the dish efficiency if I use only a 1/4. There are hundereds of calculators listed on the website that help students and engineers across Electrical Engineering, Mechanical Engineering, Civil Engineering, Physics, Math and many other sectors. The collected radio signals are reflected to one collection point, called the "focal" point, being the focus of the parabola. k = The Efficiency fo the Antenna. Finding the arc width and height. Calculating half of a parabolic curve involves calculating the whole parabola and then taking points on only one side of the vertex. Discover (and save!) your own Pins on Pinterest. Parabola Calculator for Parabolic Satellite Dish Antenna Design. This translation results in the standard form of the equation we saw previously with x. f/D = D / (16d) where D is the Diameter and d is the depth. Joshua FOLARANMI. For beam-steering reasons, the Arecibo telescope is actually a spherical, rather than parabolic reflector. Kashika and Reddy [8] used satellite dish of 2. Solid aluminium centre fed dish We can find several types of dish antennas taking into consideration the equation which define its shape. Parabolic geometry is the basis for such concentrating solar power (CSP) technologies as troughs or dishes. Point to point (yagi vs. I seem to feel that it is somewhat a reverse of a locus that I've stumbled upon before. Alter incoming plane waves traveling along the same axis as the parabola into a wave that is spherical and they all meet at the focus of the reflector. Calculates the area and circular arc of a parabolic arch given the height and To improve this 'Area of a parabolic arch Calculator', please fill in questionnaire. 0°E dan Satelit Intelsat 19 at 166. Antenna Gain vs Antenna Diameter vs Antenna Efficiency vs Frequency. How deep is the dish? SOLUTION a. Manual making of a parabolic solar collector Gang Xiao Laboratoire J. 0: Students graph quadratic functions and determine the maxima, minima, and zeros of the function. And we want "a" to be 200, so the equation becomes: x 2 = 4ay = 4 × 200 × y = 800y. The parabolic dish is the most common type of antennas for satellite communications. Lucky for the not so math savvy some people smatter. Use our free online app Parabolic Dish Antenna Calculator to determine all important calculations with parameters and constants. The aperture A of a dish antenna is the area of the reflector as seen by a Get Document. Focusing properties of spherical and parabolic mirrors 1. February 2017: Jose Antonio Gutierrez Guerra has provided an interactive parabolic curve calculator for those interested in designing their own reflectors. Measurements for a Parabolic Dish. o Choose an umbrella which isn't too deep. For problems of this nature Pappus’s Centroid Theorems are handy. The fields across the aperture of the parabolic reflector is responsible for this antenna's radiation. A 1m dish is 1m in diameter (D), thus it's area is pi*D/4 But the lecturer threw you a curve ball with "there is a 2. Antenna Gain vs Antenna Diameter vs Antenna Efficiency vs Frequency. The focal point is placed in the central axis of the dish. It is often referred to as a dish antenna. of segments 10 •Petal dimensions; base – 27. I need to find my Parabolic Dish's Focal Point , ive had a go with a few tables but I'm not sure if I'm doing it right , can some one here tell me the focal point distance please,. This design is strong structurally. 
Such a reflector can be molded or stamped in quantities, much like a salad bowl or a baby wading pool. It consists of a feed antenna pointed towards a parabolic reflector. It is the most practical and compact than competitors and you do not need a fast internet connection to use it. The receiver tube is suspended along the focal line of the trough. Parabolic dish antennas are similar to Yagi antennas with a metal grid that deflects signals in the same direction. an omni or a cardioid microphone, mounted facing inwards, at the central focal point of a portable parabolic dish. Ubiquiti Networks RD-2G24 is a 24dBi parabolic dish antenna operating on 2. 4 GHz Grid antennas from L-com can be used in applications involving directional 802. Focus & directrix of a parabola from equation. The dovetail bar is the same as used by many telescopes, so you can use it just like a normal optical tube. Ubiquiti Networks RD5G-31-AC is a parabolic dish antenna operating between 5. (1,250; microphone not included. Parabolic Dish Concentrator No. Parabolic wifi antennas normally are made up of a grid with spaces between rather than a solid dish. I tested a winston w dish in software and that indicates MUCH longer cook time with good concentration amd a soft focus. expect an offset-feed dish to have higher efficiency than a conventional dish of the same aperture. The parabolic microphone solves both issues since it combines a very narrow polar angle and high forward gain. 4 GHz Grid Antenna Products. Focus of a Parabola We first write the equations of the parabola so that the focal distance (distance from vertex to focus) appears in the equation. Calculating correct parabolic curve for microphone. 4 dB @ 1296 MHz and 15. A Parabolic dish is much like the analogy of a telephoto camera lens, at greater distances, more magnification is offered, but with a narrower field of view. The parabolic reflector, the boom and the feed. The gain of a parabolic antenna is modeled by the following equations:. It works for arcs that are up to a semicircle, so the height you enter must be less than half the width. A parabola antenna is an antenna that uses a parabolic reflector, a curved surface with cross sectional shape of a parabola to direct the radio waves. Parabolic Dish f/D Parabolic Dish Efficiency % Dish Antenna Efficiency 23 cm 13 cm Dish Diameter 1. If these specifications are not available, they may be. Wind load is the amount of stress on an object at a given wind speed. Please visit the 3M0 6-Petal Mesh DISH page RF HAMDESIGN 3 Meter PRIME FOCUS MESH DISH KIT (F/D=0. If these specifications are not available, they may be. Parabolic antennas or called as parabolic dish are very useful in high-gain antennas for point to point communication that carries data or information from one point to another. This program calculates the focal length and (x, y) coordinates for a parabola of any diameter and depth. Parkes has a parabolic dish antenna, 64 m in diameter with a collecting area of 3,216 m 2. Dorrill Dining Hall is located right in the heart of campus across from the Lankford Student Union on Brock Commons. Impedance and isolation have also. Learn how to build a DIY parabolic solar cooker, a shiny satellite dish that focuses sunlight, cooking food without need for the grid. Plastic or aluminum,it does'nt matter too much. If a parabola is translated h units horizontally and k units vertically, the vertex will be (h, k). diameter: 159 cm; 62 1/2in. 
The dish turns in two axes to track the sun from morning until evening, which provides the highest efficiency among the three systems. A bit more: The dish 500s' have a 'skew' adjustment so the apparent (at least to us on the ground located North or South of the equator,) geosync arc can be compensated for. I want to calculate how far along the arms of the parabola would be covered with a thin reflective sheet of a given width bent to conform to the given parabolic shape. The 12" aluminum parabolic dish increases the amount and sensitivity of collected sound (increases the reception of fainter sounds, or those from a longer distance), and provides more of a directional target location. The most common form is shaped like a dish and is popularly called a dish antenna or parabolic dish. Parabola Wikipedia. With the antenna horn offset out of the beam path, it also did not shadow the dish. This can be accomplished by deliberately under-illuminating the dish. Free Space Path Loss. In this paper we provide a novel approach to antenna selection for extraterrestrial RF communication based on reception parameters of a parabolic dish antenna configured with a dual-mode prime focus feed. Some tips: Align the dipole with the wires in the big dish. The full answer can be found at Parabolic reflector - Wikipedia. 8°W - Latest Sat Freq Update. I need to find my Parabolic Dish's Focal Point , ive had a go with a few tables but I'm not sure if I'm doing it right , can some one here tell me the focal point distance please,. After watching this video lesson, you will be able to write the equation of a parabola in standard form when given just two important points from the parabola. Now, we know that a parabolic shape must have a quadratic function, therefore an equation in standard form of f(x)=ax2+bx+c. The focal point will not change with frequency. A parabolic (or paraboloid or paraboloidal) reflector (or dish or mirror) is a reflective surface used to collect or project energy such as light, sound, or radio waves. Best wishes. Baking in a parabolic solar cooker; Parabolic solar reflectors - Dr. Ashok Kundapur; Calculating the parabolic curve. You feel the heat almost instantly without first heating the room. Collapsible parabolic dish microphone: explanation of the calculations Base dish f = D2 16d focal distance: normalizing factor n: We normalize the measurements so that we can calculate using the simple parabolic function (y = x2). ) the 2d by utilising-product is unfavorable, so the. Parabolic Dish Antenna pictures: Find Parabolic Dish Antenna photos in our online stock photo gallery. The missing value will be calculated. Parabolic Antenna. Useful converters and calculators. Build in lightweight aluminum, our parabolic antennas offer high shape precision and can be installed on different mounts, both is equatorial and altazimuthal. This translation results in the standard form of the equation we saw previously with x. - Calculate the arc-length of the parabola for each rn (call it "l") In our particular example, the formula for our parabola is y = (r/7)^2, where y is the height of the parabola if it is lying like a bowl on the ground -- so the dish turns out to be 1. My Homemade Parabolic Dish Solar Cooker built from scratch. You feel the heat almost instantly without first heating the room. 
E-mail: [email protected] The bending stiffness of the band as a function of distance along its length is selected so that the band and the flexible material in contact therewith assume a parabolic shape when ends of the band are moved toward one another. Dish diameter [m] Effectivity of illumination (typ. Parabola Calculator for Parabolic Satellite Dish Antenna Design. This guide shows you in an easy-to-follow approach, how to select your dish, choose the best location, install, and eventually fine tune your satellite antenna for the best reception. Parabola Standard Equation. Performance investigation of a new solar desalination unit based on sequential flat plate and parabolic dish collector Mehmet Emin Arguna and Ahmet Afşin Kulaksızb aDepartment of Environmental Engineering, Engineering Faculty, Selcuk University, Konya, Turkey; bDepartment of Electrical-Electronics. The gain of a parabolic antenna is modeled by the following equations:. Cable TV is less likely to be affected by the weather, but is typically more expensive than satellite TV service. High directivity is the main advantage of the parabolic. These figures of gain are not easy to achieve using other forms of antenna. The most common form is shaped like a dish and is popularly called a dish antenna or parabolic dish. calculate the area of the arch culvert 2018/01/26 00:53 Male/20 years old level/An office worker / A public employee/Very/ Purpose of use calculated the length of a hanging strand of lights Comment/Request Extremely useful calculator 2018/01/16 23:56 Male/30 years old level/A teacher / A researcher/Very/ Purpose of use. The microphone itself is placed on the focus of the parabola. (of course they're not exactly like that because that's a cone and we have a dish, but they're similar in how they're both not rectangles for almost the same reason, if that makes sense) And I have no idea how to express that area mathematically. The HeatDish® Plus warms you directly. Figure 1 Cylindrical Parabolic Collector 2 Figure 2 Parabolic Dish Collector 2 Figure 3 Compound Parabolic Concentrator 2 Figure 4 CPC Construction 6 Figure 5 Absorber Shapes 7 Figure 6 Two Stage Concentrators 8 Figure 7 CPC for Solar Still 9 Figure 8 Solar. , Argus Station FN31ng. A dish is a parabola of rotation, a parabolic curve rotated around an axis which passes through the focus and the center of the curve. Figure 2: Frequency-compensating filter characteristic. 5 inches (24 cm). 405 m in diameter with aluminium frame as a reflector to reduce the weight of the structure and cost of the solar system. calculate the focal length on parabolic antenna from your software download and now then what you reading this crab you wth eroor the text is to short. How can I calculate the surface area of a parabolic dish? I've got a satellite-dish shaped object, the curvature of which is a true parabola determined by the equation Y=4aX 2, where 'a' is the focal distance. Its mathematical equation is the parabolic reflector. To calculate the parabola, a mathematical analysis was performed to find the values that satisfy the design criteria, like: diameter, aperture angle, and concentration ratio. Take a look at your 2 metre antenna. This can help a lot on a test, as most teachers probably won’t let you use Wolfram Alpha in class. Examples of applications of the parabolic shape as Parabolic Reflectors and Antennas and a tutorial on how to Find The Focus of Parabolic Dish Antennas and on How Parabolic Dish Antennas work? are included in this site. 
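To make the focal-length and gain formulas that recur throughout this page concrete, here is a small calculator-style sketch (my own illustration, not the program referenced above; the 55% aperture efficiency is a typical assumed value, not a measured one):

```python
import math

def focal_length(D, d):
    """Prime-focus dish: f = D^2 / (16 d), with diameter D and depth d in the same units."""
    return D**2 / (16.0 * d)

def dish_gain_dbi(D, freq_hz, efficiency=0.55):
    """Approximate dish gain in dBi from the aperture formula G = eta * (pi*D/lambda)^2."""
    wavelength = 299_792_458.0 / freq_hz
    return 10.0 * math.log10(efficiency * (math.pi * D / wavelength) ** 2)

# Example: a 1.0 m dish that is 0.10 m deep, used at 2.4 GHz
D, d = 1.0, 0.10
f = focal_length(D, d)
print(f"focal length = {f:.3f} m, f/D = {f / D:.3f}")   # 0.625 m, f/D = 0.625
print(f"gain ~ {dish_gain_dbi(D, 2.4e9):.1f} dBi")      # roughly 25 dBi at 55% efficiency
```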
And he should have said "steradians", see. There are hundereds of calculators listed on the website that help students and engineers across Electrical Engineering, Mechanical Engineering, Civil Engineering, Physics, Math and many other sectors. Parabolic Dish Concentrator No. And calculate the feed dimensions using subtended angle between focus point and dish edges. Handy calculator that helps compute the shape of a parabola, given a focal length - using Microsoft Excel and VBA Вот крупноформатная таблица, которую я пост. click here for parabola equation solver. In the Yagi case, it would be very long, in the parabolic case, very wide. The above parabolic area property calculator is based on the provided equations and does not account for all mathematical limitations. Antenna beamwidth calculator. Explanations. The petal properties are assumed to vary through the thickness of the petal. The concentrating types of CSP plant often have four contributions: solar power tower, solar parabolic dish, parabolic tough collector, and linear Fresnel reflector. Cable and satellite TV are different in more ways than just how they deliver television programming. Use incremental changes in x and y and pythagorus to calculate incremental lengths along parabola arc. It can be used for both parabolic dish and trough style designs. parabolic dishes with F/D ratios as low as 0. It is often referred to as a dish antenna. Antenna Gain vs Antenna Diameter vs Antenna Efficiency vs Frequency. Here, coefficient of y is positive, hence the parabola opens upwards. A bit more: The dish 500s' have a 'skew' adjustment so the apparent (at least to us on the ground located North or South of the equator,) geosync arc can be compensated for. of Electromagnetic Field, Technicka 2, 166 27, Prague, Czech Republic E-mail: [email protected] Use option 'O' of W1GHZ's HDL_ANT program to calculate it yours. If you want to build a parabolic dish where the focus is 200 mm above the surface, what measurements do you need? To make it easy to build, let's have it pointing upwards, and so we choose the x 2 = 4ay equation. If the antennas were separated by 5 ft and were in the far field, the antenna gain could be used with space loss formulas to calculate (at 5 GHz): 10 dBm - 3 dB + 6 dB - 50 dB (space loss) + 6 dB -3 dB = -34 dBm (a much smaller signal). The "Dish" area will list your elevation, azimuth, and skew. 28m •Focal Point 0. Finding the focal length of a parabolic dish f = \frac {D^2}{16d} where D is the diameter of the dish and d is the depth of the dish Offset-fed dish antenna. The basic formula for calculating wind load is as follows: area. It feels three times warmer than a 1,500 watt heater, yet uses a third less energy. It's just a load of mechanical bits and pieces juxtaposed to behave electrically in a special way. Following is the list of useful converters and calculators. They require minimum skill for construction and can easily produce gains in excess of 25 dB, depending on size and frequency. Measurements for a Parabolic Dish. Consider using a “grid” antenna for areas with high-wind and freezing weather. A parabolic antenna is an antenna that uses a parabolic reflector, a curved surface with the cross-sectional shape of a parabola, to direct the radio waves. airMAX ® 2x2 PtP Bridge Dish Antenna. Using the point, we can solve for. It is the most practical and compact than competitors and you do not need a fast internet connection to use it. The parabolic shape has an important property of. Processing. 
A parabola is the set of all points in a plane that are the same distance from a fixed line, called the directrix, and a fixed point (the focus) not on the directrix. The practical consequence is the reflecting property: vertical rays entering a dish whose cross-section is a parabola, travelling parallel to its axis, are all redirected to the focus. Satellite dishes are this shape because radio waves are reflected from the surface of the dish and received into the focus; the condensed version is that a parabola works somewhat like a lens, altering incoming plane waves travelling along the axis into a spherical wave that converges on the focal point, where the feed is placed. Parabolic geometry is also the basis for concentrating solar power technologies such as troughs and dishes: if the parabola is extended along an axis to form a trough, the solar rays are concentrated along a line through the focal point of the trough, while a dish/engine system concentrates onto a single receiver and typically produces 3 to 25 kilowatts, which makes it well suited to modular use. Parabolic solar cookers work the same way; when improvising one from an umbrella, choose an umbrella that is not too deep, so that the focus remains accessible.

For antenna work the key quantities are the focal length and the f/D ratio. Given an offset satellite dish or antenna without an LNB bracket or documentation, it is useful to be able to determine the focal point in order to establish where the feed or LNB should be located. Measure the diameter D of the dish and the depth d from the plane of the rim to the centre; the focal length of a prime-focus dish is then f = D²/(16d), and the f/D ratio follows directly. The focal point does not change with frequency. Because a large parabolic reflector produces a narrow beam — a 24 dBi grid dish focuses its wireless beam to just 7 degrees — alignment matters: the process of tracking the satellite arc is very similar for a small Ku-band dish and a larger C-band dish, and many mounts include a skew adjustment to compensate for the apparent tilt of the geosynchronous arc as seen from the ground north or south of the equator. Consider a grid-style antenna for areas with high wind and freezing weather, since wind load scales with the exposed area. On the construction side, a parabolic curve can be drawn in a CAD tool such as SketchUp, mirror stock for solar reflectors is cut much like ordinary glass, and home-built collectors have been made from a dish covered with aluminium flyscreen.
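A minimal sketch of the focal-length calculation described above (f = D²/(16d)); the 1.2 m diameter and 0.13 m depth are assumed example measurements, not values from the text.

```python
def dish_focal_length(diameter, depth):
    """Focal length of a prime-focus parabolic dish from rim diameter and centre depth."""
    return diameter ** 2 / (16 * depth)

# Example (assumed measurements): a 1.2 m dish that is 0.13 m deep at the centre
D, d = 1.2, 0.13
f = dish_focal_length(D, d)
print(f"focal length = {f:.3f} m, f/D = {f / D:.2f}")
```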
The same reflector geometry appears across very different applications. In concentrating solar power, the SunCatcher™ is a 25-kilowatt-electrical (kWe) solar dish Stirling system: a radial concentrator dish structure supports an array of curved glass mirror facets, automatically tracks the sun, and concentrates the collected solar energy onto a patented Power Conversion Unit (PCU); coarse synthetic tracking can be handled by a microcomputer-based control system that calculates the sun's position through transient periods of cloud cover and around sunrise and sunset. The JPL parabolic dish concentrator has a parabolic reflector surface 12 m in diameter, and Kashika and Reddy [8] built a concentrator from a satellite dish (reported as 2.405 m in diameter) with an aluminium frame as the reflector, to reduce the weight of the structure and the cost of the solar system. A spinning liquid settles into the same shape: a resin dish cast while rotating at a constant angular velocity takes on a paraboloid matched to that rotation rate. At the domestic end of the scale, the Presto® HeatDish® Plus electric heater uses a computer-designed parabolic reflector to focus heat the way a satellite dish concentrates TV signals; it feels roughly three times warmer than a conventional 1,500-watt heater while using about a third less energy.

For radio work, an unneeded satellite dish can be repurposed: James Sanders (AG6IF) describes converting a surplus dish into a well-disguised VHF/UHF antenna without incurring the ire of noisy neighbours. While a large dish can provide gains upward of 30 dB, a small dish can easily provide the 20 to 25 dB gain needed for many satellite applications; a simple dipole, by comparison, offers a gain of only about 2 dBi. In addition to higher efficiency, an offset-feed dish has another advantage for satellite reception: the horn and preamplifier sit clear of the aperture, so adjustments are much more convenient and accessible. The RF feed horn of a VSAT dish must be placed at the correct distance from the reflector, and loop feeds are very often used as primary feeds in electrically small parabolic dish antennas. Mesh or grid reflectors behave like solid ones as long as the gaps appear electrically small at the signal's frequency. As the gain of an antenna increases, its radiation pattern narrows until there is only a very small window within which the dish must be aimed, so look-angle calculators are useful: given your location they list the elevation, azimuth, and skew for a chosen satellite, although they are designed for orienting TV dishes, which require far less precision than a satellite broadband Internet connection. Dish design calculators give the complete specifications — vertex, focus, directrix, and profile — needed to build a parabolic dish antenna for Wi-Fi, and the construction sketch below shows how the profile and the length along the curve can be generated numerically.
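As a construction aid, here is a minimal sketch that generates the cross-section profile y = x²/(4f) for a chosen focal length and sums incremental segment lengths (the Pythagoras approach mentioned earlier) to estimate the length along the curve from centre to rim. The f = 200 mm focal length echoes the design example above; the 0.6 m diameter is an assumed value.

```python
import math

def parabola_profile(focal_length, diameter, steps=100):
    """(x, y) points of the dish cross-section y = x**2 / (4*f), from centre to rim."""
    xs = [i * (diameter / 2) / steps for i in range(steps + 1)]
    return [(x, x * x / (4 * focal_length)) for x in xs]

def arc_length(points):
    """Sum the incremental segment lengths along the profile (Pythagoras on each step)."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

profile = parabola_profile(focal_length=0.2, diameter=0.6)  # f = 200 mm, D = 0.6 m (assumed)
rim_depth = profile[-1][1]
print(f"rim depth = {rim_depth * 1000:.1f} mm, "
      f"curve length centre-to-rim = {arc_length(profile) * 1000:.1f} mm")
```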
|
2019-11-15 01:01:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49743449687957764, "perplexity": 1394.1503483524305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668544.32/warc/CC-MAIN-20191114232502-20191115020502-00017.warc.gz"}
|
https://questions.examside.com/past-years/jee/question/pchoose-the-correct-length-l-versus-square-of-time-perio-jee-main-physics-motion-3tjx17uvjrogamnj
|
1
JEE Main 2023 (Online) 1st February Evening Shift
+4
-1
Choose the correct length (L) versus square of the time period ($$\mathrm{T}^{2}$$) graph for a simple pendulum executing simple harmonic motion.
Options A–D are candidate graphs of L against T² (the plots themselves are not reproduced in this extraction).
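For reference, the standard small-oscillation pendulum relation (not shown in the extracted options) is linear in $$\mathrm{T}^{2}$$:

$$T = 2\pi\sqrt{\frac{L}{g}} \;\;\Rightarrow\;\; L = \frac{g}{4\pi^{2}}\,T^{2},$$

so the correct graph is a straight line through the origin with slope $$g/4\pi^{2}$$.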
2
JEE Main 2023 (Online) 31st January Morning Shift
+4
-1
The maximum potential energy of a block executing simple harmonic motion is $$25 \mathrm{~J}$$. A is amplitude of oscillation. At $$\mathrm{A / 2}$$, the kinetic energy of the block is
A
9.75 J
B
37.5 J
C
18.75 J
D
12.5 J
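A quick arithmetic check, assuming the potential energy is measured from the mean position so that the total energy equals the maximum potential energy:

$$K = E\left(1 - \frac{x^{2}}{A^{2}}\right)\Bigg|_{x = A/2} = 25\left(1 - \tfrac{1}{4}\right) = 18.75~\text{J}.$$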
3
JEE Main 2023 (Online) 30th January Evening Shift
+4
-1
For a simple harmonic motion in a mass spring system shown, the surface is frictionless. When the mass of the block is $1 \mathrm{~kg}$, the angular frequency is $\omega_{1}$. When the mass block is $2 \mathrm{~kg}$ the angular frequency is $\omega_{2}$. The ratio $\omega_{2} / \omega_{1}$ is
A
$1 / \sqrt{2}$
B
$1 / 2$
C
2
D
$\sqrt{2}$
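Since the block's angular frequency is $\omega = \sqrt{k/m}$ with the same spring constant in both cases,

$$\frac{\omega_{2}}{\omega_{1}} = \sqrt{\frac{m_{1}}{m_{2}}} = \sqrt{\frac{1~\mathrm{kg}}{2~\mathrm{kg}}} = \frac{1}{\sqrt{2}}.$$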
4
JEE Main 2023 (Online) 25th January Evening Shift
+4
-1
A particle executes simple harmonic motion between $$x=-A$$ and $$x=+A$$. If time taken by particle to go from $$x=0$$ to $$\frac{A}{2}$$ is 2 s; then time taken by particle in going from $$x=\frac{A}{2}$$ to A is
A
4 s
B
1.5 s
C
3 s
D
2 s
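A quick check, taking the particle to start from the mean position so that $x = A\sin(2\pi t/T)$:

$$\frac{A}{2} = A\sin\frac{2\pi t_{1}}{T} \Rightarrow t_{1} = \frac{T}{12} = 2~\mathrm{s} \Rightarrow T = 24~\mathrm{s}, \qquad t_{A/2 \to A} = \frac{T}{4} - \frac{T}{12} = \frac{T}{6} = 4~\mathrm{s}.$$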
|
2023-03-29 00:33:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8119884133338928, "perplexity": 2559.7955466848916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948900.50/warc/CC-MAIN-20230328232645-20230329022645-00161.warc.gz"}
|
https://itprospt.com/num/17807372/e-x27-sinly-jysindxcosjcosdy
|
# E' sinly)JysindxCOSJcosdy...
## Question
###### E' sinly)JysindxCOSJcosdy
e' sinly) Jysin dx COS Jcos dy
#### Similar Solved Questions
##### 0 40 0 Mw'HOEN Heat
##### 2. Related RatesDuring a balloon angioplasty; surgeon positions balloon inside a constricted artery; then pumps air into the balloon. The inflating balloon clears path for blood to flow through the artery:Suppose the balloon has the shape of a cylinder capped with two cones. The length of the cylinder is L; its radius is the the cones each have height h (all measured in mm) As air fills the balloon; increases; while h and stay the same:2a. Give an equation for the volume of the balloon in terms
##### Points) In a random sample of 12 people; the average number of minutes spent sleeping each day is 396 minutes with a sample standard deviation of 91 minutes Find the 90% confidence interval for the mean: Show your work to receive credit:
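The interval asked for here is a routine one-sample t-interval. Below is a minimal sketch for the stated numbers; SciPy is an assumed dependency, not something the original problem specifies.

```python
import math
from scipy import stats

n, xbar, s = 12, 396, 91                      # sample size, mean, sample std. dev.
t_crit = stats.t.ppf(0.95, df=n - 1)          # 90% two-sided CI leaves 5% in each tail
margin = t_crit * s / math.sqrt(n)
print(f"90% CI for the mean: ({xbar - margin:.1f}, {xbar + margin:.1f}) minutes")
```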
##### Test: POST UNIT 8 TEST Ch.Submit TestTilo Qrcaiion:6 of 12 /0 complete)Thix Tet: 12 pts possibleQuestion HelpMneinDn conelation bokunonhro variabns negaliv= chalcansaid about Iho Teopo nltho regrossion Eno?Choose Ine correct answrer below"Posilivo Noqulivo Wcen ortniating Felaleeet
##### The curvature for the curve 7() = ViTcosti ViTsimtj - 13k isSelect one: snl+costj~inti-cotjE None of these answers
##### 4. Let X.be iid exponential with mean 0_(a) Find an unbiased estimator of based only on Y = min(X; :n)_(b) Find the UMVUE of 0. Prove that it is better than your estimator in (a) by explicitly comparing the variances:
##### OludIS (GituM Lnseyto.pointa) Find pan cular scuicnIno gygtemequations251* 238y ~ 8e 'Y =-224* _ 257y.Both ol vour fuaIion:ccmectieceiye Ciedilx()=St) =
##### Mutually beneficial associations between certain soil fungi and the roots of most plant species are called (a) mycorrhizae (b) pneumatophores (c) nodules (d) rhizobia (e) humus.
##### Find the total differential for $z=\sin e^{x^{2}} \cos y$.
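For the record, the total differential follows directly from the chain rule:

$$dz = 2x\,e^{x^{2}}\cos\!\left(e^{x^{2}}\right)\cos y\;dx \;-\; \sin\!\left(e^{x^{2}}\right)\sin y\;dy.$$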
##### Question 20 ptsThe Open-Top Box ProblemAn open-top box Is made Irom single square sheet of metal that Is 40 Inches in length Squares are cut trom the corners with an unknown dimension and the Ilaps are folded Up to create box with no tOp. Determine the volume Iunction terms of one varlable 2. Determine the fejilble domaln of the volume function 3. Apply Fermnat s Theorem t0 find the dimenstonal cnt Injt Wl maxdinlzs the volume Determine the maxlmum volume and dlmensionsClick t0 procccd whet yo
##### Another dilierential €quarion that ipopularion Erowth called the Gotnpertz equation:cl ( P)eSclve J rhfs diflerenrixl equutio for ULK 3I0O ,d inirial population R 400.b; Compure rhe limirig value ofrhe size ofthe population:Jim P(4)Ar whar value of Pdars grow' fstese; Thar what Occurwhen hrsr derivariveJkso whiere An indlection pointFor determinc tlc t-values wheta che graph COnciv WP And concave down EAe Wnnt (licte mucre than one interval; us eapital U for ataEinc: You found thc P whuch
##### Problems 65–68 are based on material learned earlier in the course. The purpose of these problems is to keep the material fresh in your mind so that you are better prepared for the final exam. Find the area and circumference of a circle of radius $13 \mathrm{cm}$
##### Use the accompanying data set to complete the following actions_ Find the quartiles_ b. Find the interquartile range. Identify any outliers41 51 35 43 40 37 41 48 43 37 35 54 43 35 15 52 39 50 29 29a. Find the quartilesThe first quartile, Q1 , is The second quartile 02' The third quartile, Q3, is (Type integers or decimals ) b. Find the interquartile range. The interquartile range (IQR) is (Type an integer or a decimal.) c: Identify any outliers Choose the correct choice below:There exists
##### Find the total length of the cardioid r=8(1+costheta)
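A numerical cross-check of the polar arc-length integral $L=\int_0^{2\pi}\sqrt{r^{2}+(dr/d\theta)^{2}}\,d\theta$ for $r = 8(1+\cos\theta)$; SciPy is an assumed dependency here.

```python
import numpy as np
from scipy.integrate import quad

r  = lambda t: 8 * (1 + np.cos(t))   # the cardioid r(theta)
dr = lambda t: -8 * np.sin(t)        # dr/dtheta
length, _ = quad(lambda t: np.sqrt(r(t) ** 2 + dr(t) ** 2), 0, 2 * np.pi)
print(length)  # closed form for r = a(1 + cos(theta)) is 8a, i.e. 64 here
```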
##### 21. What is the product ofthe following reaction? Hsot ~CHLCH; Bra
|
2022-08-19 23:06:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6609278321266174, "perplexity": 8017.964445724231}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573849.97/warc/CC-MAIN-20220819222115-20220820012115-00726.warc.gz"}
|
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&page=toc&handle=euclid.dmj/1077243034
|
Published by Duke University Press since its inception in 1935, the Duke Mathematical Journal is one of the world's leading mathematical journals. Without specializing in a small number of subject areas, it emphasizes the most active and influential areas of current mathematics.
### Volume 85, Number 1
#### Publication Date: October 1996
Stacks of stable maps and Gromov-Witten invariants
K. Behrend and Yu. Manin; 1-60
Combinatorics of Fulton’s essential set
Kimmo Eriksson and Svante Linusson; 61-76
Global bifurcation results for a semilinear elliptic equation on all of $\mathbb{R}^N$
K. J. Brown and N. Stavrakakis; 77-94
Correlation for surfaces of general type
Brendan Hassett; 95-107
On stable minimal surfaces in manifolds of positive bi-Ricci curvatures
Ying Shen and Rugang Ye; 109-116
Local statistics for random domino tilings of the Aztec diamond
Henry Cohn, Noam Elkies and James Propp; 117-166
On the multiplicities of the discrete series of semisimple Lie groups
Kaoru Hiraga; 167-181
Higher Chow groups and the Hodge-$\mathcal{D}$-conjecture
James D. Lewis; 183-207
Lagrangian intersection under Legendrian deformations
Kaoru Ono; 209-225
Restriction theorems and semilinear Klein-Gordon equations in $(1+3)$-dimensions
Hans Lindblad and Christopher D. Sogge; 227-252
Sobolev inequalities and Myers’s diameter theorem for an abstract Markov generator
D. Bakry and M. Ledoux; 253-270
Correction to “Congruences between cusp forms: The $(p,p)$ case”
Chandrashekhar Khare; 271
|
2013-05-24 14:38:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36334171891212463, "perplexity": 9205.94018991027}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704664826/warc/CC-MAIN-20130516114424-00068-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://mathshistory.st-andrews.ac.uk/Biographies/Lions/
|
# Pierre-Louis Lions
### Quick Info
Born
11 August 1956
Grasse, Alpes-Maritimes, France
Summary
Pierre-Louis Lions is a French mathematician best known for his work on partial differential equations and the calculus of variations.
### Biography
Pierre-Louis Lions is the son of the famous mathematician Jacques-Louis Lions and Andrée Olivier. He was born in Grasse, Alpes-Maritimes in the Provence-Alpes-Côte d'Azur region of France, northwest of Cannes. It was the birthplace of his father and the town the family considered home, although at the time of his birth his father was a professor at the University of Nancy. Let us note that Grasse is not very far from Draguignan where Alain Connes, who won a Fields Medal 12 years before Lions won his Fields Medal, was born.
When Pierre-Louis was six years old his father became a professor in Paris and the family lived there. He attended the Lycée Pasteur and then the Lycée Louis-le-Grand before entering the École Normale Supérieure in 1975. Lions studied at the École Normale Supérieure from 1975 to 1979. His thesis, supervised by H Brézis, was presented to the University of Pierre and Marie Curie (formally Paris VI when the University of Paris was split into thirteen separate universities in 1970) and in 1979 he received his Doctorat d'Etat es sciences. On 1 December 1979, after he received his doctorate, Lions married Lila Laurenti. They have one child Dorian.
From 1979 to 1981 Lions held a research post at the Centre National de la Recherche Scientifique in Paris. Then, in 1981, he was appointed professor at the University of Paris-Dauphine. While still holding this post he was attached to the Centre National de la Recherche Scientifique as Director of Research in 1995. He has also held the position of Professor of Applied Mathematics at the École Polytechnique from 1992.
Lions has made some of the most important contributions to the theory of nonlinear partial differential equations through the 1980s and 1990s. Evans, in [2], writes:-
He has made truly fundamental discoveries cutting across many disciplines, pure and applied, and his publications are so numerous and varied as to defy easy classification. Keep in mind that there is in truth no central core theory of nonlinear partial differential equations, nor can there be. The sources of partial differential equations are so many - physical, probabilistic, geometric etc. - that the subject is a confederation of diverse subareas, each studying different phenomena for different nonlinear partial differential equations by utterly different methods. Pierre-Louis Lions is unique in his unbelievable ability to transcend these boundaries and to solve pressing problems throughout the field.

The references quoted [2], [3] and [4] describe some important aspects of Lions' work which led to the award of a Fields Medal at the International Congress of Mathematicians in Zürich in 1994. The first area of Lions' work that is highlighted by both [1] and [3] is his work on "viscosity solutions" for nonlinear partial differential equations. The method was first introduced by Lions in joint work with M G Crandall in 1983 in which they studied Hamilton-Jacobi equations. Lions and others have since applied the method to a wide class of partial differential equations, the so-called "fully nonlinear second order degenerate elliptic partial differential equations." The problem that arises is described in [2]:-

... such nonlinear partial differential equations simply do not have smooth or even $C^{1}$ solutions existing after short times. ... The only option is therefore to search for some kind of "weak" solution. This undertaking is in effect to figure out how to allow for certain kinds of "physically correct" singularities and how to forbid others. ... Lions and Crandall at last broke open the problem by focusing attention on viscosity solutions, which are defined in terms of certain inequalities holding wherever the graph of the solution is touched on one side or the other by a smooth test function.

Another equally innovative piece of work by Lions was his work on the Boltzmann equation and other kinetic equations. The Boltzmann equation keeps track of interactions between colliding particles, not individually but in terms of a density. In 1989 Lions, in joint work with DiPerna, was the first to give a rigorous solution with arbitrary initial data.
Another major contribution by Lions, in a long series of important papers, is to variational problems. Varadhan, speaking at the Congress of Mathematicians in Zürich in 1994 about Lions' work [4], said:-
There are many nonlinear PDEs that are Euler equations for variational problems. The first step in solving such equations by the variational method is to show that the extremum is attained. This requires some coercivity or compactness. If the quantity to be minimised has an "energy"-like term involving derivatives, then one has control on local regularity along a minimising sequence.
Lions's clever idea was to introduce "concentration compactness" techniques which look at energy concentrations and so avoid problems which occur when examining the minimising sequences without compactness. He introduced certain measures to handle the concentrations.
Lions has received many awards for his outstanding contributions to mathematics. He is a member of the French Academy of Sciences and he was awarded prizes by the Academy, the Doistau-Blutet Foundation Prize in 1986 and the Ampère Prize in 1992. He also received the IBM Prize in 1987 and the Philip Morris Prize in 1991.
In addition to the Paris Academy, Lions has been elected a member of the Naples Academy and the European Academy. He is also Chevalier of the Légion d'Honneur. He has been awarded an honorary doctorate from Heriot-Watt University in Edinburgh, Scotland. He is on the editorial board of around 25 journals world-wide.
Finally let us note that Lions was awarded the Prix Thomson (2004) and, on behalf of his team, the Prix Institut de Finance Europlace (2003).
Lions lists his hobbies as cinema and reading, and his favourite sports as rugby and swimming.
### References (show)
1. Biography in Encyclopaedia Britannica. http://www.britannica.com/biography/Pierre-Louis-Lions
2. J Lindenstrauss, L C Evans, A Douady, A Shalev and N Pippenger, Fields Medals and Nevanlinna Prize presented at ICM-94 in Zürich, Notices Amer. Math. Soc. 41 (9) (1994), 1103-1111.
3. M Vanninathan, On the work of P -L Lions, Current Sci. 70 (2) (1996), 125-135.
4. S R S Varadhan, The work of Pierre-Louis Lions, Proceedings of the International Congress of Mathematicians, Zurich, 1994 1 (Basel, 1995), 6-10.
|
2023-04-02 02:16:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46055006980895996, "perplexity": 1176.0418794519753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950373.88/warc/CC-MAIN-20230402012805-20230402042805-00706.warc.gz"}
|
https://docs.chef.io/windows.html
|
# Chef for Microsoft Windows¶
The chef-client has specific components that are designed to support unique aspects of the Microsoft Windows platform, including Windows PowerShell, Internet Information Services (IIS), and SQL Server.
• The chef-client is installed on a machine running Microsoft Windows by using a Microsoft Installer Package (MSI)
• Over a dozen resources dedicated to the Microsoft Windows platform are built into the chef-client.
• Use the dsc_resource to use PowerShell DSC resources in Chef!
• Two knife plugins dedicated to the Microsoft Windows platform are available: knife azure is used to manage virtual instances in Microsoft Azure; knife windows is used to interact with and manage physical nodes that are running Microsoft Windows, such as desktops and servers
• Many community cookbooks on Supermarket provide Windows specific support. Chef maintains cookbooks for PowerShell, IIS, SQL Server, and Windows.
• Two community provisioners for Kitchen: kitchen-dsc and kitchen-pester
The most popular core resources in the chef-client—cookbook_file, directory, env, execute, file, group, http_request, link, mount, package, remote_directory, remote_file, ruby_block, service, template, and user—work the same way in Microsoft Windows as they do on any UNIX- or Linux-based platform.
The file-based resources—cookbook_file, file, remote_file, and template—have attributes that support unique requirements within the Microsoft Windows platform, including inherits (for file inheritance), mode (for octal modes), and rights (for access control lists, or ACLs).
Note
The Microsoft Windows platform does not support running as an alternate user unless full credentials (a username and password or equivalent) are specified.
The following sections are pulled in from the larger https://docs.chef.io/ site and represents the documentation that is specific to the Microsoft Windows platform, compiled here into a single-page reference.
## Install the chef-client on Windows¶
The chef-client can be installed on machines running Microsoft Windows in the following ways:
• By using knife windows to bootstrap the chef-client; this process requires the target node be available via the WinRM port (typically port 5985)
• By downloading the chef-client to the target node, and then running the Microsoft Installer Package (MSI) locally
• By using an existing process already in place for managing Microsoft Windows machines, such as System Center
To run the chef-client at periodic intervals (so that it can check in with the Chef server automatically), configure the chef-client to run as a service or as a scheduled task. (The chef-client can be configured to run as a service during the setup process.)
The chef-client can be used to manage machines that run on the following versions of Microsoft Windows:
| Operating System | Version | Architecture |
| --- | --- | --- |
| Windows | 2008 R2, 2012, 2012 R2, 2016 | x86_64 |
(The recommended amount of RAM available to the chef-client during a chef-client run is 512MB. Each node and workstation must have access to the Chef server via HTTPS.)
The Microsoft Installer Package (MSI) for Microsoft Windows is available at https://downloads.chef.io/chef.
After the chef-client is installed, it is located at C:\chef. The main configuration file for the chef-client is located at C:\chef\client.rb.
### Set the System Ruby¶
To set the system Ruby for the Microsoft Windows platform the steps described for all platforms are true, but then require the following manual edits to the chef shell-init bash output for the Microsoft Windows platform:
1. Add quotes around the variable assignment strings.
2. Convert C:/ to /c/.
3. Save those changes.
### Spaces and Directories¶
Directories that are used by Chef on Windows cannot have spaces. For example, C:\Users\User Name will not work, but C:\Users\UserName will. Chef commands may fail if used against a directory with a space in its name.
### Top-level Directory Names¶
Windows will throw errors when path name lengths are too long. For this reason, it’s often helpful to use a very short top-level directory, much like what is done in UNIX and Linux. For example, Chef uses /opt/ to install the Chef development kit on macOS. A similar approach can be done on Microsoft Windows, by creating a top-level directory with a short name. For example: C:\chef.
### Use knife-windows¶
The knife windows subcommand is used to configure and interact with nodes that exist on server and/or desktop machines that are running Microsoft Windows. Nodes are configured using WinRM, which allows native objects—batch scripts, Windows PowerShell scripts, or scripting library variables—to be called by external applications. The knife windows subcommand supports NTLM and Kerberos methods of authentication.
For more information about the knife windows plugin, see windows.
#### Ports¶
WinRM requires that a target node be accessible via the ports configured to support access via HTTP or HTTPS.
#### Msiexec.exe¶
Msiexec.exe is used to install the chef-client on a node as part of a bootstrap operation. The actual command that is run by the default bootstrap script is:
### Use MSI Installer¶
A Microsoft Installer Package (MSI) is available for installing the chef-client on a Microsoft Windows machine from Chef Downloads.
#### Enable as a Scheduled Task¶
To run the chef-client at periodic intervals (so that it can check in with the Chef server automatically), configure the chef-client to run as a scheduled task. This can be done via the MSI, by selecting the Chef Unattended Execution Options –> Chef Client Scheduled Task option on the Custom Setup page or by running the following command after the chef-client is installed:
For example:
$SCHTASKS.EXE /CREATE /TN ChefClientSchTask /SC MINUTE /MO 30 /F /RU "System" /RP /RL HIGHEST /TR "cmd /c \"C:\opscode\chef\embedded\bin\ruby.exe C:\opscode\chef\bin\chef-client -L C:\chef\chef-client.log -c C:\chef\client.rb\"" Refer Schedule a Task for more details. After the chef-client is configured to run as a scheduled task, the default file path is: c:\chef\chef-client.log. ### Use an Existing Process¶ Many organizations already have processes in place for managing the applications and settings on various Microsoft Windows machines. For example, System Center. The chef-client can be installed using this method. ### PATH System Variable¶ On Microsoft Windows, the chef-client must have two entries added to the PATH environment variable: • C:\opscode\chef\bin • C:\opscode\chef\embedded\bin This is typically done during the installation of the chef-client automatically. If these values (for any reason) are not in the PATH environment variable, the chef-client will not run properly. This value can be set from a recipe. For example, from the php cookbook: # the following code sample comes from the package recipe in the php cookbook: https://github.com/chef-cookbooks/php if platform?('windows') include_recipe 'iis::mod_cgi' install_dir = File.expand_path(node['php']['conf_dir']).gsub('/', '\\') windows_package node['php']['windows']['msi_name'] do source node['php']['windows']['msi_source'] installer_type :msi options %W[ /quiet INSTALLDIR="#{install_dir}" ADDLOCAL=#{node['php']['packages'].join(',')} ].join(' ') end ... ENV['PATH'] += ";#{install_dir}" windows_path install_dir ... ## Proxy Settings¶ To determine the current proxy server on the Microsoft Windows platform: 1. Open Internet Properties. 2. Open Connections. 3. Open LAN settings. 4. View the Proxy server setting. If this setting is blank, then a proxy server may not be available. To configure proxy settings in Microsoft Windows: 1. Open System Properties. 2. Open Environment Variables. 3. Open System variables. 4. Set http_proxy and https_proxy to the location of your proxy server. This value MUST be lowercase. ## Resources¶ A resource is a statement of configuration policy that: • Describes the desired state for a configuration item • Declares the steps needed to bring that item to the desired state • Specifies a resource type—such as package, template, or service • Lists additional details (also known as resource properties), as necessary • Are grouped into recipes, which describe working configurations ### Common Functionality¶ The following sections describe Microsoft Windows-specific functionality that applies generally to all resources: #### Relative Paths¶ The following relative paths can be used with any resource: #{ENV['HOME']} Use to return the ~ path in Linux and macOS or the %HOMEPATH% in Microsoft Windows. ##### Examples¶ template "#{ENV['HOME']}/chef-getting-started.txt" do source 'chef-getting-started.txt.erb' mode '0755' end #### Windows File Security¶ To support Microsoft Windows security, the template, file, remote_file, cookbook_file, directory, and remote_directory resources support the use of inheritance and access control lists (ACLs) within recipes. Note Windows File Security applies to the cookbook_file, directory, file, remote_directory, remote_file, and template resources. ##### ACLs¶ The rights property can be used in a recipe to manage access control lists (ACLs), which allow permissions to be given to multiple users and groups. 
Use the rights property can be used as many times as necessary; the chef-client will apply them to the file or directory as required. The syntax for the rights property is as follows: rights permission, principal, option_type => value where permission Use to specify which rights are granted to the principal. The possible values are: :read, :write, read_execute, :modify, and :full_control. These permissions are cumulative. If :write is specified, then it includes :read. If :full_control is specified, then it includes both :write and :read. (For those who know the Microsoft Windows API: :read corresponds to GENERIC_READ; :write corresponds to GENERIC_WRITE; :read_execute corresponds to GENERIC_READ and GENERIC_EXECUTE; :modify corresponds to GENERIC_WRITE, GENERIC_READ, GENERIC_EXECUTE, and DELETE; :full_control corresponds to GENERIC_ALL, which allows a user to change the owner and other metadata about a file.) principal Use to specify a group or user name. This is identical to what is entered in the login box for Microsoft Windows, such as user_name, domain\user_name, or user_name@fully_qualified_domain_name. The chef-client does not need to know if a principal is a user or a group. option_type A hash that contains advanced rights options. For example, the rights to a directory that only applies to the first level of children might look something like: rights :write, 'domain\group_name', :one_level_deep => true. Possible option types: Option Type Description :applies_to_children Specify how permissions are applied to children. Possible values: true to inherit both child directories and files; false to not inherit any child directories or files; :containers_only to inherit only child directories (and not files); :objects_only to recursively inherit files (and not child directories). :applies_to_self Indicates whether a permission is applied to the parent directory. Possible values: true to apply to the parent directory or file and its children; false to not apply only to child directories and files. :one_level_deep Indicates the depth to which permissions will be applied. Possible values: true to apply only to the first level of children; false to apply to all children. For example: resource 'x.txt' do rights :read, 'Everyone' rights :write, 'domain\group' rights :full_control, 'group_name_or_user_name' rights :full_control, 'user_name', :applies_to_children => true end or: rights :read, ['Administrators','Everyone'] rights :full_control, 'Users', :applies_to_children => true rights :write, 'Sally', :applies_to_children => :containers_only, :applies_to_self => false, :one_level_deep => true Some other important things to know when using the rights attribute: • Only inherited rights remain. All existing explicit rights on the object are removed and replaced. • If rights are not specified, nothing will be changed. The chef-client does not clear out the rights on a file or directory if rights are not specified. • Changing inherited rights can be expensive. Microsoft Windows will propagate rights to all children recursively due to inheritance. This is a normal aspect of Microsoft Windows, so consider the frequency with which this type of action is necessary and take steps to control this type of action if performance is the primary consideration. Use the deny_rights property to deny specific rights to specific users. The ordering is independent of using the rights property. 
For example, it doesn’t matter if rights are granted to everyone is placed before or after deny_rights :read, ['Julian', 'Lewis'], both Julian and Lewis will be unable to read the document. For example: resource 'x.txt' do rights :read, 'Everyone' rights :write, 'domain\group' rights :full_control, 'group_name_or_user_name' rights :full_control, 'user_name', :applies_to_children => true deny_rights :read, ['Julian', 'Lewis'] end or: deny_rights :full_control, ['Sally'] ##### Inheritance¶ By default, a file or directory inherits rights from its parent directory. Most of the time this is the preferred behavior, but sometimes it may be necessary to take steps to more specifically control rights. The inherits property can be used to specifically tell the chef-client to apply (or not apply) inherited rights from its parent directory. For example, the following example specifies the rights for a directory: directory 'C:\mordor' do rights :read, 'MORDOR\Minions' rights :full_control, 'MORDOR\Sauron' end and then the following example specifies how to use inheritance to deny access to the child directory: directory 'C:\mordor\mount_doom' do rights :full_control, 'MORDOR\Sauron' inherits false # Sauron is the only person who should have any sort of access end If the deny_rights permission were to be used instead, something could slip through unless all users and groups were denied. Another example also shows how to specify rights for a directory: directory 'C:\mordor' do rights :read, 'MORDOR\Minions' rights :full_control, 'MORDOR\Sauron' rights :write, 'SHIRE\Frodo' # Who put that there I didn't put that there end but then not use the inherits property to deny those rights on a child directory: directory 'C:\mordor\mount_doom' do deny_rights :read, 'MORDOR\Minions' # Oops, not specific enough end Because the inherits property is not specified, the chef-client will default it to true, which will ensure that security settings for existing files remain unchanged. #### Properties for File-based Resources¶ This resource has the following attributes: Attribute Description group A string or ID that identifies the group owner by group name, including fully qualified group names such as domain\group or group@domain. If this value is not specified, existing groups remain unchanged and new group assignments use the default POSIX group (if available). inherits Microsoft Windows only. Whether a file inherits rights from its parent directory. Default value: true. mode If mode is not specified and if the file already exists, the existing mode on the file is used. If mode is not specified, the file does not exist, and the :create action is specified, the chef-client assumes a mask value of '0777' and then applies the umask for the system on which the file is to be created to the mask value. For example, if the umask on a system is '022', the chef-client uses the default value of '0755'. Microsoft Windows: A quoted 3-5 character string that defines the octal mode that is translated into rights for Microsoft Windows security. For example: '755', '0755', or 00755. Values up to '0777' are allowed (no sticky bits) and mean the same in Microsoft Windows as they do in UNIX, where 4 equals GENERIC_READ, 2 equals GENERIC_WRITE, and 1 equals GENERIC_EXECUTE. This property cannot be used to set :full_control. This property has no effect if not specified, but when it and rights are both specified, the effects are cumulative. 
owner A string or ID that identifies the group owner by user name, including fully qualified user names such as domain\user or user@domain. If this value is not specified, existing owners remain unchanged and new owner assignments use the current user (when necessary). path The full path to the file, including the file name and its extension. Microsoft Windows: A path that begins with a forward slash (/) will point to the root of the current working directory of the chef-client process. This path can vary from system to system. Therefore, using a path that begins with a forward slash (/) is not recommended. rights Microsoft Windows only. The permissions for users and groups in a Microsoft Windows environment. For example: rights <permissions>, <principal>, <options> where <permissions> specifies the rights granted to the principal, <principal> is the group or user name, and <options> is a Hash with one (or more) advanced rights options. Note Use the owner and right attributes and avoid the group and mode attributes whenever possible. The group and mode attributes are not true Microsoft Windows concepts and are provided more for backward compatibility than for best practice. #### Atomic File Updates¶ Atomic updates are used with file-based resources to help ensure that file updates can be made when updating a binary or if disk space runs out. Atomic updates are enabled by default. They can be managed globally using the file_atomic_update setting in the client.rb file. They can be managed on a per-resource basis using the atomic_update property that is available with the cookbook_file, file, remote_file, and template resources. Note On certain platforms, and after a file has been moved into place, the chef-client may modify file permissions to support features specific to those platforms. On platforms with SELinux enabled, the chef-client will fix up the security contexts after a file has been moved into the correct location by running the restorecon command. On the Microsoft Windows platform, the chef-client will create files so that ACL inheritance works as expected. Note Atomic File Updates applies to the template resource. ### batch¶ Use the batch resource to execute a batch script using the cmd.exe interpreter on Windows. The batch resource creates and executes a temporary file (similar to how the script resource behaves), rather than running the command inline. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence. #### Syntax¶ A batch resource block executes a batch script using the cmd.exe interpreter: batch 'echo some env vars' do code <<-EOH echo %TEMP% echo %SYSTEMDRIVE% echo %PATH% echo %WINDIR% EOH end The full syntax for all of the properties that are available to the batch resource is: batch 'name' do architecture Symbol code String command String, Array creates String cwd String flags String group String, Integer guard_interpreter Symbol interpreter String returns Integer, Array timeout Integer, Float user String password String domain String action Symbol # defaults to :run if not specified end where: • batch is the resource. • name is the name given to the resource block. • action identifies which steps the chef-client will take to bring the node into the desired state. 
• architecture, code, command, creates, cwd, flags, group, guard_interpreter, interpreter, returns, timeout, user, password and domain are properties of this resource, with the Ruby type shown. See “Properties” section below for more information about all of the properties that may be used with this resource. #### Actions¶ The batch resource has the following actions: :nothing This resource block does not act unless notified by another resource to take action. Once notified, this resource block either runs immediately or is queued up to run at the end of the Chef Client run. :run Run a batch file. #### Properties¶ The batch resource has the following properties: architecture Ruby Type: Symbol The architecture of the process under which a script is executed. If a value is not provided, the chef-client defaults to the correct value for the architecture, as determined by Ohai. An exception is raised when anything other than :i386 is specified for a 32-bit process. Possible values: :i386 (for 32-bit processes) and :x86_64 (for 64-bit processes). code Ruby Type: String | REQUIRED A quoted (” “) string of code to be executed. command Ruby Type: String, Array The name of the command to be executed. creates Ruby Type: String Prevent a command from creating a file when that file already exists. cwd Ruby Type: String The current working directory from which the command will be run. flags Ruby Type: String One or more command line flags that are passed to the interpreter when a command is invoked. group Ruby Type: String, Integer The group name or group ID that must be changed before running a command. guard_interpreter Ruby Type: Symbol | Default Value: :batch When this property is set to :batch, the 64-bit version of the cmd.exe shell will be used to evaluate strings values for the not_if and only_if properties. Set this value to :default to use the 32-bit version of the cmd.exe shell. interpreter Ruby Type: String The script interpreter to use during code execution. Changing the default value of this property is not supported. returns Ruby Type: Integer, Array | Default Value: 0 The return value for a command. This may be an array of accepted values. An exception is raised when the return value(s) do not match. timeout Ruby Type: Integer, Float | Default Value: 3600 The amount of time (in seconds) a command is to wait before timing out. user Ruby Type: String The user name of the user identity with which to launch the new process. The user name may optionally be specified with a domain, i.e. domainuser or [email protected] via Universal Principal Name (UPN)format. It can also be specified without a domain simply as user if the domain is instead specified using the domain attribute. On Windows only, if this property is specified, the password property must be specified. password Ruby Type: String Windows only: The password of the user specified by the user property. This property is mandatory if user is specified on Windows and may only be specified if user is specified. The sensitive property for this resource will automatically be set to true if password is specified. domain Ruby Type: String Windows only: The domain of the user user specified by the user property. If not specified, the user name and password specified by the user and password properties will be used to resolve that user against the domain in which the system running Chef client is joined, or if that system is not joined to a domain it will resolve the user as a local account on that system. 
An alternative way to specify the domain is to leave this property unspecified and specify the domain as part of the user property. Note See https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/cmd for more information about the cmd.exe interpreter. #### Examples¶ The following examples demonstrate various approaches for using resources in recipes: Unzip a file, and then move it To run a batch file that unzips and then moves Ruby, do something like: batch 'unzip_and_move_ruby' do code <<-EOH 7z.exe x #{Chef::Config[:file_cache_path]}/ruby-1.8.7-p352-i386-mingw32.7z -oC:\\source -r -y xcopy C:\\source\\ruby-1.8.7-p352-i386-mingw32 C:\\ruby /e /y EOH end batch 'echo some env vars' do code <<-EOH echo %TEMP% echo %SYSTEMDRIVE% echo %PATH% echo %WINDIR% EOH end or: batch 'unzip_and_move_ruby' do code <<-EOH 7z.exe x #{Chef::Config[:file_cache_path]}/ruby-1.8.7-p352-i386-mingw32.7z -oC:\\source -r -y xcopy C:\\source\\ruby-1.8.7-p352-i386-mingw32 C:\\ruby /e /y EOH end batch 'echo some env vars' do code 'echo %TEMP%\\necho %SYSTEMDRIVE%\\necho %PATH%\\necho %WINDIR%' end ### dsc_resource¶ A resource defines the desired state for a single configuration item present on a node that is under management by Chef. A resource collection—one (or more) individual resources—defines the desired state for the entire node. During a chef-client run, the current state of each resource is tested, after which the chef-client will take any steps that are necessary to repair the node and bring it back into the desired state. Desired State Configuration (DSC) is a feature of Windows PowerShell that provides a set of language extensions, cmdlets, and resources that can be used to declaratively configure software. DSC is similar to Chef, in that both tools are idempotent, take similar approaches to the concept of resources, describe the configuration of a system, and then take the steps required to do that configuration. The most important difference between Chef and DSC is that Chef uses Ruby and DSC is exposed as configuration data from within Windows PowerShell. The dsc_resource resource allows any DSC resource to be used in a Chef recipe, as well as any custom resources that have been added to your Windows PowerShell environment. Microsoft frequently adds new resources to the DSC resource collection. Warning Using the dsc_resource has the following requirements: • Windows Management Framework (WMF) 5.0 February Preview (or higher), which includes Windows PowerShell 5.0.10018.0 (or higher). • The RefreshMode configuration setting in the Local Configuration Manager must be set to Disabled. NOTE: Starting with the chef-client 12.6 release, this requirement applies only for versions of Windows PowerShell earlier than 5.0.10586.0. The latest version of Windows Management Framework (WMF) 5 has relaxed the limitation that prevented the chef-client from running in non-disabled refresh mode. • The dsc_script resource may not be used in the same run-list with the dsc_resource. This is because the dsc_script resource requires that RefreshMode in the Local Configuration Manager be set to Push, whereas the dsc_resource resource requires it to be set to Disabled. NOTE: Starting with the chef-client 12.6 release, this requirement applies only for versions of Windows PowerShell earlier than 5.0.10586.0. 
The latest version of Windows Management Framework (WMF) 5 has relaxed the limitation that prevented the chef-client from running in non-disabled refresh mode, which allows the Local Configuration Manager to be set to Push. • The dsc_resource resource can only use binary- or script-based resources. Composite DSC resources may not be used. This is because composite resources aren’t “real” resources from the perspective of the Local Configuration Manager (LCM). Composite resources are used by the “configuration” keyword from the PSDesiredStateConfiguration module, and then evaluated in that context. When using DSC to create the configuration document (the Managed Object Framework (MOF) file) from the configuration command, the composite resource is evaluated. Any individual resources from that composite resource are written into the Managed Object Framework (MOF) document. As far as the Local Configuration Manager (LCM) is concerned, there is no such thing as a composite resource. Unless that changes, the dsc_resource resource and/or Invoke-DscResource command cannot directly use them. #### Syntax¶ A dsc_resource resource block allows DSC resources to be used in a Chef recipe. For example, the DSC Archive resource: Archive ExampleArchive { Ensure = "Present" Path = "C:\Users\Public\Documents\example.zip" Destination = "C:\Users\Public\Documents\ExtractionPath" } and then the same dsc_resource with Chef: dsc_resource 'example' do resource :archive property :ensure, 'Present' property :path, "C:\Users\Public\Documents\example.zip" property :destination, "C:\Users\Public\Documents\ExtractionPath" end The full syntax for all of the properties that are available to the dsc_resource resource is: dsc_resource 'name' do module_name String module_version String property Symbol reboot_action Symbol # default value: :nothing resource Symbol timeout Integer action Symbol # defaults to :run if not specified end where: • dsc_resource is the resource. • name is the name given to the resource block. • action identifies which steps the chef-client will take to bring the node into the desired state. • property is zero (or more) properties in the DSC resource, where each property is entered on a separate line, :dsc_property_name is the case-insensitive name of that property, and "property_value" is a Ruby value to be applied by the chef-client • module_name, module_version, property, reboot_action, resource, and timeout are properties of this resource, with the Ruby type shown. See “Properties” section below for more information about all of the properties that may be used with this resource. #### Properties¶ The dsc_resource resource has the following properties: module_name Ruby Type: String The name of the module from which a DSC resource originates. If this property is not specified, it will be inferred. module_version Ruby Type: String The version number of the module to use. PowerShell 5.0.10018.0 (or higher) supports having multiple versions of a module installed. This should be specified along with the module_name. property Ruby Type: Symbol A property from a Desired State Configuration (DSC) resource. Use this property multiple times, one for each property in the Desired State Configuration (DSC) resource. The format for this property must follow property :dsc_property_name, "property_value" for each DSC property added to the resource block. The :dsc_property_name must be a symbol. 
Use the following Ruby types to define property_value:
Ruby                                   Windows PowerShell
Array                                  Object[]
Chef::Util::Powershell::PSCredential   PSCredential
False                                  bool($false)
Fixnum                                 Integer
Float                                  Double
Hash                                   Hashtable
True                                   bool($true)
These are converted into the corresponding Windows PowerShell type during the chef-client run.
reboot_action
Ruby Type: Symbol | Default Value: :nothing
Use to request an immediate reboot or to queue a reboot using the :reboot_now (immediate reboot) or :request_reboot (queued reboot) actions built into the reboot resource.
resource
Ruby Type: Symbol
The name of the DSC resource. This value is case-insensitive and must be a symbol that matches the name of the DSC resource. For built-in DSC resources, use the following values:
:archive Use to unpack archive (.zip) files.
:environment Use to manage system environment variables.
:file Use to manage files and directories.
:group Use to manage local groups.
:log Use to log configuration messages.
:package Use to install and manage packages.
:registry Use to manage registry keys and registry key values.
:script Use to run PowerShell script blocks.
:service Use to manage services.
:user Use to manage local user accounts.
:windowsfeature Use to add or remove Windows features and roles.
:windowsoptionalfeature Use to configure Microsoft Windows optional features.
:windowsprocess Use to configure Windows processes.
Any DSC resource may be used in a Chef recipe. For example, the DSC Resource Kit contains resources for configuring Active Directory components, such as xADDomain, xADDomainController, and xADUser. Assuming that these resources are available to the chef-client, the corresponding values for the resource attribute would be: :xADDomain, :xADDomainController, and :xADUser.
timeout
Ruby Type: Integer
The amount of time (in seconds) a command is to wait before timing out.
#### Examples¶
Open a Zip file
dsc_resource 'example' do
resource :archive
property :ensure, 'Present'
property :path, 'C:\Users\Public\Documents\example.zip'
property :destination, 'C:\Users\Public\Documents\ExtractionPath'
end
Manage users and groups
dsc_resource 'demogroupadd' do
resource :group
property :groupname, 'demo1'
property :ensure, 'present'
end
dsc_resource 'useradd' do
resource :user
property :username, 'Foobar1'
property :fullname, 'Foobar1'
property :password, ps_credential('P@assword!')
property :ensure, 'present'
end
dsc_resource 'AddFoobar1ToUsers' do
resource :Group
property :GroupName, 'demo1'
property :MembersToInclude, ['Foobar1']
end
Create a test message queue
The following example creates a file on a node (based on one that is located in a cookbook), unpacks the MessageQueue.zip Windows PowerShell module, and then uses the dsc_resource to ensure that Message Queuing (MSMQ) sub-features are installed, a test queue is created, and that permissions are set on the test queue:
cookbook_file 'cMessageQueue.zip' do
path "#{Chef::Config[:file_cache_path]}\\MessageQueue.zip"
action :create_if_missing
end
windows_zipfile "#{ENV['PROGRAMW6432']}\\WindowsPowerShell\\Modules" do
source "#{Chef::Config[:file_cache_path]}\\MessageQueue.zip"
action :unzip
end
dsc_resource 'install-sub-features' do
resource :windowsfeature
property :ensure, 'Present'
property :name, 'msmq'
property :IncludeAllSubFeature, true
end
dsc_resource 'create-test-queue' do
resource :cPrivateMsmqQueue
property :ensure, 'Present'
property :name, 'Test_Queue'
end
dsc_resource 'set-permissions' do
resource :cPrivateMsmqQueuePermissions
property :ensure, 'Present'
property :name, 'Test_Queue_Permissions'
property :QueueNames, 'Test_Queue'
property :ReadUsers, node['msmq']['read_user']
end
Example to show usage of module properties
dsc_resource 'test-cluster' do
resource :xCluster module_name 'xFailOverCluster' module_version '1.6.0.0' property :name, 'TestCluster' property :staticipaddress, '10.0.0.3' property :domainadministratorcredential, ps_credential('abcd') end ### dsc_script¶ A resource defines the desired state for a single configuration item present on a node that is under management by Chef. A resource collection—one (or more) individual resources—defines the desired state for the entire node. During a chef-client run, the current state of each resource is tested, after which the chef-client will take any steps that are necessary to repair the node and bring it back into the desired state. Windows PowerShell is a task-based command-line shell and scripting language developed by Microsoft. Windows PowerShell uses a document-oriented approach for managing Microsoft Windows-based machines, similar to the approach that is used for managing Unix and Linux-based machines. Windows PowerShell is a tool-agnostic platform that supports using Chef for configuration management. Desired State Configuration (DSC) is a feature of Windows PowerShell that provides a set of language extensions, cmdlets, and resources that can be used to declaratively configure software. DSC is similar to Chef, in that both tools are idempotent, take similar approaches to the concept of resources, describe the configuration of a system, and then take the steps required to do that configuration. The most important difference between Chef and DSC is that Chef uses Ruby and DSC is exposed as configuration data from within Windows PowerShell. Many DSC resources are comparable to built-in Chef resources. For example, both DSC and Chef have file, package, and service resources. The dsc_script resource is most useful for those DSC resources that do not have a direct comparison to a resource in Chef, such as the Archive resource, a custom DSC resource, an existing DSC script that performs an important task, and so on. Use the dsc_script resource to embed the code that defines a DSC configuration directly within a Chef recipe. Note Windows PowerShell 4.0 is required for using the dsc_script resource with Chef. Note The WinRM service must be enabled. (Use winrm quickconfig to enable the service.) Warning The dsc_script resource may not be used in the same run-list with the dsc_resource. This is because the dsc_script resource requires that RefreshMode in the Local Configuration Manager be set to Push, whereas the dsc_resource resource requires it to be set to Disabled. #### Syntax¶ A dsc_script resource block embeds the code that defines a DSC configuration directly within a Chef recipe: dsc_script 'get-dsc-resource-kit' do code <<-EOH Archive reskit { ensure = 'Present' path = "#{Chef::Config[:file_cache_path]}\\DSCResourceKit620082014.zip" destination = "#{ENV['PROGRAMW6432']}\\WindowsPowerShell\\Modules" } EOH end where the remote_file resource is first used to download the DSCResourceKit620082014.zip file. The full syntax for all of the properties that are available to the dsc_script resource is: dsc_script 'name' do code String command String configuration_data String configuration_data_script String configuration_name String cwd String environment Hash flags Hash imports Array timeout Integer action Symbol # defaults to :run if not specified end where: • dsc_script is the resource. • name is the name given to the resource block. • action identifies which steps the chef-client will take to bring the node into the desired state. 
• code, command, configuration_data, configuration_data_script, configuration_name, cwd, environment, flags, imports, and timeout are properties of this resource, with the Ruby type shown. See “Properties” section below for more information about all of the properties that may be used with this resource. #### Actions¶ The dsc_script resource has the following actions: :nothing This resource block does not act unless notified by another resource to take action. Once notified, this resource block either runs immediately or is queued up to run at the end of the Chef Client run. :run Default. Use to run the DSC configuration defined as defined in this resource. #### Properties¶ The dsc_script resource has the following properties: code Ruby Type: String The code for the DSC configuration script. This property may not be used in conjunction with the command property. command Ruby Type: String The path to a valid Windows PowerShell data file that contains the DSC configuration script. This data file must be capable of running independently of Chef and must generate a valid DSC configuration. This property may not be used in conjunction with the code property. configuration_data Ruby Type: String The configuration data for the DSC script. The configuration data must be a valid Windows PowerShell data file. This property may not be used in conjunction with the configuration_data_script property. configuration_data_script Ruby Type: String The path to a valid Windows PowerShell data file that also contains a node called localhost. This property may not be used in conjunction with the configuration_data property. configuration_name Ruby Type: String The name of a valid Windows PowerShell cmdlet. The name may only contain letter (a-z, A-Z), number (0-9), and underscore (_) characters and should start with a letter. The name may not be null or empty. This property may not be used in conjunction with the code property. cwd Ruby Type: String The current working directory. environment Ruby Type: Hash A Hash of environment variables in the form of ({'ENV_VARIABLE' => 'VALUE'}). (These variables must exist for a command to be run successfully.) flags Ruby Type: Hash Pass parameters to the DSC script that is specified by the command property. Parameters are defined as key-value pairs, where the value of each key is the parameter to pass. This property may not be used in the same recipe as the code property. For example: flags ({ :EditorChoice => 'emacs', :EditorFlags => '--maximized' }). imports Ruby Type: Array Warning This property MUST be used with the code attribute. Use to import DSC resources from a module. To import all resources from a module, specify only the module name: imports 'module_name' To import specific resources, specify the module name, and then specify the name for each resource in that module to import: imports 'module_name', 'resource_name_a', 'resource_name_b', ... For example, to import all resources from a module named cRDPEnabled: imports 'cRDPEnabled' To import only the PSHOrg_cRDPEnabled resource: imports 'cRDPEnabled', 'PSHOrg_cRDPEnabled' timeout Ruby Type: Integer The amount of time (in seconds) a command is to wait before timing out. 
#### Examples¶ The following examples demonstrate various approaches for using resources in recipes: Specify DSC code directly DSC data can be specified directly in a recipe: dsc_script 'emacs' do code <<-EOH Environment 'texteditor' { Name = 'EDITOR' Value = 'c:\\emacs\\bin\\emacs.exe' } EOH end Specify DSC code using a Windows PowerShell data file Use the command property to specify the path to a Windows PowerShell data file. For example, the following Windows PowerShell script defines the DefaultEditor: Configuration 'DefaultEditor' { Environment 'texteditor' { Name = 'EDITOR' Value = 'c:\emacs\bin\emacs.exe' } } Use the following recipe to specify the location of that data file: dsc_script 'DefaultEditor' do command 'c:\dsc_scripts\emacs.ps1' end Pass parameters to DSC configurations If a DSC script contains configuration data that takes parameters, those parameters may be passed using the flags property. For example, the following Windows PowerShell script takes parameters for the EditorChoice and EditorFlags settings: $choices = @{'emacs' = 'c:\emacs\bin\emacs';'vi' = 'c:\vim\vim.exe';'powershell' = 'powershell_ise.exe'}
Configuration 'DefaultEditor'
{
[CmdletBinding()]
param
(
$EditorChoice,
$EditorFlags = ''
)
Environment 'TextEditor'
{
Name = 'EDITOR'
Value = "$($choices[$EditorChoice])$EditorFlags"
}
}
Use the following recipe to set those parameters:
dsc_script 'DefaultEditor' do
flags ({ :EditorChoice => 'emacs', :EditorFlags => '--maximized' })
command 'c:\dsc_scripts\editors.ps1'
end
Use custom configuration data
Configuration data in DSC scripts may be customized from a recipe. For example, scripts are typically customized to set the behavior for Windows PowerShell credential data types. Configuration data may be specified in one of three ways:
• By using the configuration_data attribute
• By using the configuration_data_script attribute
• By specifying the path to a valid Windows PowerShell data file
The following example shows how to specify custom configuration data using the configuration_data property:
dsc_script 'BackupUser' do
configuration_data <<-EOH
@{
AllNodes = @(
@{
NodeName = "localhost";
PSDscAllowPlainTextPassword = $true
})
}
EOH
code <<-EOH
$user = 'backup'
$password = ConvertTo-SecureString -String "YourPass$(random)" -AsPlainText -Force
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $user, $password
User $user
{
UserName = $user
Password = $cred
Description = 'Backup operator'
Ensure = "Present"
Disabled = $false
PasswordNeverExpires = $true
PasswordChangeRequired = $false } EOH end The following example shows how to specify custom configuration data using the configuration_name property. For example, the following Windows PowerShell script defines the vi configuration: Configuration 'emacs' { Environment 'TextEditor' { Name = 'EDITOR' Value = 'c:\emacs\bin\emacs.exe' } } Configuration 'vi' { Environment 'TextEditor' { Name = 'EDITOR' Value = 'c:\vim\bin\vim.exe' } } Use the following recipe to specify that configuration: dsc_script 'EDITOR' do configuration_name 'vi' command 'C:\dsc_scripts\editors.ps1' end Using DSC with other Chef resources The dsc_script resource can be used with other resources. The following example shows how to download a file using the remote_file resource, and then uncompress it using the DSC Archive resource: remote_file "#{Chef::Config[:file_cache_path]}\\DSCResourceKit620082014.zip" do source 'http://gallery.technet.microsoft.com/DSC-Resource-Kit-All-c449312d/file/124481/1/DSC%20Resource%20Kit%20Wave%206%2008282014.zip' end dsc_script 'get-dsc-resource-kit' do code <<-EOH Archive reskit { ensure = 'Present' path = "#{Chef::Config[:file_cache_path]}\\DSCResourceKit620082014.zip" destination = "#{ENV['PROGRAMW6432']}\\WindowsPowerShell\\Modules" } EOH end ### env¶ Use the windows_env resource to manage environment keys in Microsoft Windows. After an environment key is set, Microsoft Windows must be restarted before the environment key will be available to the Task Scheduler. This resource was previously called the env resource; its name was updated in Chef Client 14.0 to reflect the fact that only Windows is supported. Existing cookbooks using env will continue to function, but should be updated to use the new name. #### Syntax¶ A windows_env resource block manages environment keys in Microsoft Windows: windows_env 'ComSpec' do value 'C:\\Windows\\system32\\cmd.exe' end The full syntax for all of the properties that are available to the env resource is: windows_env 'name' do delim String key_name String # defaults to 'name' if not specified value String action Symbol # defaults to :create if not specified end where • windows_env is the resource • name is the name of the resource block • action identifies the steps the chef-client will take to bring the node into the desired state • delim, key_name, and value are properties of this resource, with the Ruby type shown. See “Properties” section below for more information about all of the properties that may be used with this resource. #### Actions¶ The windows_env resource has the following actions: :create Default. Create an environment variable. If an environment variable already exists (but does not match), update that environment variable to match. :delete Delete an environment variable. :modify Modify an existing environment variable. This prepends the new value to the existing value, using the delimiter specified by the delim property. :nothing This resource block does not act unless notified by another resource to take action. Once notified, this resource block either runs immediately or is queued up to run at the end of the Chef Client run. #### Properties¶ The windows_env resource has the following properties: delim Ruby Type: String, false The delimiter that is used to separate multiple values for a single key. key_name Ruby Type: String | Default Value: The resource block's name An optional property to set the name of the key that is to be created, deleted, or modified if it differs from the resource block’s name. 
user Ruby Type: String | Default Value: "<System>" value Ruby Type: String | REQUIRED The value of the environmental variable to set. #### Examples¶ The following examples demonstrate various approaches for using resources in recipes: Set an environment variable windows_env 'ComSpec' do value "C:\\Windows\\system32\\cmd.exe" end ### powershell_script¶ Use the powershell_script resource to execute a script using the Windows PowerShell interpreter, much like how the script and script-based resources—bash, csh, perl, python, and ruby—are used. The powershell_script is specific to the Microsoft Windows platform and the Windows PowerShell interpreter. The powershell_script resource creates and executes a temporary file (similar to how the script resource behaves), rather than running the command inline. Commands that are executed with this resource are (by their nature) not idempotent, as they are typically unique to the environment in which they are run. Use not_if and only_if to guard this resource for idempotence. #### Syntax¶ A powershell_script resource block executes a batch script using the Windows PowerShell interpreter. For example, writing to an interpolated path: powershell_script 'write-to-interpolated-path' do code <<-EOH$stream = [System.IO.StreamWriter] "#{Chef::Config[:file_cache_path]}/powershell-test.txt"
$stream.WriteLine("In #{Chef::Config[:file_cache_path]}...word.")$stream.close()
EOH
end
The full syntax for all of the properties that are available to the powershell_script resource is:
powershell_script 'name' do
architecture Symbol
code String
command String, Array
convert_boolean_return true, false
creates String
cwd String
environment Hash
flags String
group String, Integer
guard_interpreter Symbol
interpreter String
returns Integer, Array
timeout Integer, Float
user String
domain String
action Symbol # defaults to :run if not specified
elevated true, false
end
where:
• powershell_script is the resource.
• name is the name given to the resource block.
• command is the command to be run and cwd is the location from which the command is run.
• action identifies the steps the chef-client will take to bring the node into the desired state.
• architecture, code, command, convert_boolean_return, creates, cwd, environment, flags, group, guard_interpreter, interpreter, returns, sensitive, timeout, user, password, domain and elevated are properties of this resource, with the Ruby type shown. See “Properties” section below for more information about all of the properties that may be used with this resource.
#### Actions¶
The powershell_script resource has the following actions:
:nothing
Inherited from execute resource. Prevent a command from running. This action is used to specify that a command is run only when another resource notifies it.
:run
Default. Run the script.
#### Properties¶
The powershell_script resource has the following properties:
architecture
Ruby Type: Symbol
The architecture of the process under which a script is executed. If a value is not provided, the chef-client defaults to the correct value for the architecture, as determined by Ohai. An exception is raised when anything other than :i386 is specified for a 32-bit process. Possible values: :i386 (for 32-bit processes) and :x86_64 (for 64-bit processes).
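For instance, a minimal sketch (the output path is arbitrary and hypothetical) that forces the 32-bit interpreter; [IntPtr]::Size reports 4 in a 32-bit process and 8 in a 64-bit one, which makes the effect easy to verify:
powershell_script 'log-pointer-size' do
architecture :i386
# [IntPtr]::Size is 4 under the 32-bit interpreter selected above
code '[IntPtr]::Size | Out-File C:\chef-pointer-size.txt'
end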
code
Ruby Type: String | REQUIRED
A quoted (” “) string of code to be executed.
command
Ruby Type: String, Array
The name of the command to be executed. Default value: the name of the resource block. See “Syntax” section above for more information.
convert_boolean_return
Ruby Type: true, false | Default Value: false
Return 0 if the last line of a command is evaluated to be true or to return 1 if the last line is evaluated to be false.
When the guard_interpreter common attribute is set to :powershell_script, a string command will be evaluated as if this value were set to true. This is because the behavior of this attribute is similar to the value of the "$?" expression common in UNIX interpreters. For example, this:
powershell_script 'make_safe_backup' do
guard_interpreter :powershell_script
code 'cp ~/data/nodes.json ~/data/nodes.bak'
not_if 'test-path ~/data/nodes.bak'
end
is similar to:
bash 'make_safe_backup' do
code 'cp ~/data/nodes.json ~/data/nodes.bak'
not_if 'test -e ~/data/nodes.bak'
end
creates
Ruby Type: String
Inherited from execute resource. Prevent a command from creating a file when that file already exists.
cwd
Ruby Type: String
Inherited from execute resource. The current working directory from which a command is run.
environment
Ruby Type: Hash
Inherited from execute resource. A Hash of environment variables in the form of ({"ENV_VARIABLE" => "VALUE"}). (These variables must exist for a command to be run successfully.)
flags
Ruby Type: String
A string that is passed to the Windows PowerShell command. Default value (Windows PowerShell 3.0+): -NoLogo, -NonInteractive, -NoProfile, -ExecutionPolicy Bypass, -InputFormat None.
group
Ruby Type: String, Integer
Inherited from execute resource. The group name or group ID that must be changed before running a command.
guard_interpreter
Ruby Type: Symbol | Default Value: :powershell_script
When this property is set to :powershell_script, the 64-bit version of the Windows PowerShell shell will be used to evaluate string values for the not_if and only_if properties. Set this value to :default to use the 32-bit version of the cmd.exe shell.
interpreter
Ruby Type: String
The script interpreter to use during code execution. Changing the default value of this property is not supported.
returns
Ruby Type: Integer, Array | Default Value: 0
Inherited from execute resource. The return value for a command. This may be an array of accepted values. An exception is raised when the return value(s) do not match.
timeout
Ruby Type: Integer, Float
Inherited from execute resource. The amount of time (in seconds) a command is to wait before timing out. Default value: 3600.
user
Ruby Type: String | Default Value: nil
The user name of the user identity with which to launch the new process. The user name may optionally be specified with a domain, i.e. domain\user or user@domain via Universal Principal Name (UPN) format. It can also be specified without a domain simply as user if the domain is instead specified using the domain attribute. On Windows only, if this property is specified, the password property must be specified.
password
Ruby Type: String
Windows only: The password of the user specified by the user property. Default value: nil. This property is mandatory if user is specified on Windows and may only be specified if user is specified. The sensitive property for this resource will automatically be set to true if password is specified.
domain
Ruby Type: String
Windows only: The domain of the user specified by the user property. Default value: nil. If not specified, the user name and password specified by the user and password properties will be used to resolve that user against the domain in which the system running Chef client is joined, or if that system is not joined to a domain it will resolve the user as a local account on that system. An alternative way to specify the domain is to leave this property unspecified and specify the domain as part of the user property.
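As a point of reference, a minimal sketch combining the user, password, and domain properties; the account name, password, domain, and output path below are placeholders, not values from this documentation:
powershell_script 'run-as-domain-user' do
code 'whoami | Out-File C:\chef-whoami.txt'
user 'svc_chef'                 # hypothetical account name
password 'placeholder_password' # hypothetical; store real secrets outside the recipe
domain 'EXAMPLE'                # hypothetical domain
end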
elevated Ruby Type: true, false Determines whether the script will run with elevated permissions to circumvent User Access Control (UAC) interactively blocking the process. This will cause the process to be run under a batch login instead of an interactive login. The user running Chef needs the “Replace a process level token” and “Adjust Memory Quotas for a process” permissions. The user that is running the command needs the “Log on as a batch job” permission. Because this requires a login, the user and password properties are required. #### Examples¶ The following examples demonstrate various approaches for using resources in recipes: Write to an interpolated path powershell_script 'write-to-interpolated-path' do code <<-EOH$stream = [System.IO.StreamWriter] "#{Chef::Config[:file_cache_path]}/powershell-test.txt"
$stream.WriteLine("In #{Chef::Config[:file_cache_path]}...word.")$stream.close()
EOH
end
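Run a script with elevated rights
The elevated property described above is not covered by the examples that follow; this minimal sketch (the account name, password, and output path are placeholders) shows the combination it requires, since an elevated run uses a batch logon and therefore needs user and password:
powershell_script 'write-machine-info-elevated' do
code 'systeminfo | Out-File C:\chef-systeminfo.txt'
user 'Administrator'            # hypothetical local account
password 'placeholder_password' # hypothetical
elevated true
end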
Change the working directory
powershell_script 'cwd-then-write' do
cwd Chef::Config[:file_cache_path]
code <<-EOH
$stream = [System.IO.StreamWriter] "C:/powershell-test2.txt"$pwd = pwd
$stream.WriteLine("This is the contents of:$pwd")
$dirs = dir foreach ($dir in $dirs) {$stream.WriteLine($dir.fullname) }$stream.close()
EOH
end
Change the working directory in Microsoft Windows
powershell_script 'cwd-to-win-env-var' do
cwd '%TEMP%'
code <<-EOH
$stream = [System.IO.StreamWriter] "./temp-write-from-chef.txt"$stream.WriteLine("chef on windows rox yo!")
$stream.close()
EOH
end
Pass an environment variable to a script
powershell_script 'read-env-var' do
cwd Chef::Config[:file_cache_path]
environment ({'foo' => 'BAZ'})
code <<-EOH
$stream = [System.IO.StreamWriter] "./test-read-env-var.txt"
$stream.WriteLine("FOO is$env:foo")
$stream.close() EOH end ### registry_key¶ Use the registry_key resource to create and delete registry keys in Microsoft Windows. #### Syntax¶ A registry_key resource block creates and deletes registry keys in Microsoft Windows: registry_key "HKEY_LOCAL_MACHINE\\...\\System" do values [{ name: "NewRegistryKeyValue", type: :multi_string, data: ['foo\0bar\0\0'] }] action :create end Use multiple registry key entries with key values that are based on node attributes: registry_key 'HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\name_of_registry_key' do values [{name: 'key_name', type: :string, data: 'C:\Windows\System32\file_name.bmp'}, {name: 'key_name', type: :string, data: node['node_name']['attribute']['value']}, {name: 'key_name', type: :string, data: node['node_name']['attribute']['value']} ] action :create end The full syntax for all of the properties that are available to the registry_key resource is: registry_key 'name' do architecture Symbol # default value: :machine key String # default value: 'name' unless specified recursive true, false # default value: false values action Symbol # defaults to :create if not specified end where • registry_key is the resource • name is the name of the resource block • values is a hash that contains at least one registry key to be created or deleted. Each registry key in the hash is grouped by brackets in which the name:, type:, and data: values for that registry key are specified. • type: represents the values available for registry keys in Microsoft Windows. Use :binary for REG_BINARY, :string for REG_SZ, :multi_string for REG_MULTI_SZ, :expand_string for REG_EXPAND_SZ, :dword for REG_DWORD, :dword_big_endian for REG_DWORD_BIG_ENDIAN, or :qword for REG_QWORD. Warning :multi_string must be an array, even if there is only a single string. • action identifies the steps the chef-client will take to bring the node into the desired state • architecture, key, recursive and values are properties of this resource, with the Ruby type shown. See “Properties” section below for more information about all of the properties that may be used with this resource. ##### Path Separators¶ A Microsoft Windows registry key can be used as a string in Ruby code, such as when a registry key is used as the name of a recipe. In Ruby, when a registry key is enclosed in a double-quoted string (" "), the same backslash character (\) that is used to define the registry key path separator is also used in Ruby to define an escape character. Therefore, the registry key path separators must be escaped when they are enclosed in a double-quoted string. For example, the following registry key: HKCU\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Themes may be enclosed in a single-quoted string with a single backslash: 'HKCU\SOFTWARE\path\to\key\Themes' or may be enclosed in a double-quoted string with an extra backslash as an escape character: "HKCU\\SOFTWARE\\path\\to\\key\\Themes" #### Recipe DSL Methods¶ Six methods are present in the Recipe DSL to help verify the registry during a chef-client run on the Microsoft Windows platform—registry_data_exists?, registry_get_subkeys, registry_get_values, registry_has_subkeys?, registry_key_exists?, and registry_value_exists?—these helpers ensure the powershell_script resource is idempotent. Note The recommended order in which registry key-specific methods should be used within a recipe is: key_exists?, value_exists?, data_exists?, get_values, has_subkeys?, and then get_subkeys. 
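Before the individual methods are described, here is a minimal sketch of the typical pattern, where one of these helpers guards a powershell_script so that it only runs when the value is missing; the registry path and value name shown are hypothetical:
powershell_script 'set example flag' do
# creates a DWORD value under a hypothetical vendor key
code 'New-ItemProperty -Path HKLM:\SOFTWARE\ExampleVendor -Name ExampleFlag -Value 1 -PropertyType DWord -Force'
# skip the script entirely once the value exists
not_if { registry_value_exists?('HKEY_LOCAL_MACHINE\SOFTWARE\ExampleVendor', { name: 'ExampleFlag' }, :machine) }
end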
##### registry_data_exists?¶ Use the registry_data_exists? method to find out if a Microsoft Windows registry key contains the specified data of the specified type under the value. Note This method can be used in recipes and from within the not_if and only_if blocks in resources. This method is not designed to create or modify a registry key. If a registry key needs to be modified, use the registry_key resource. The syntax for the registry_data_exists? method is as follows: registry_data_exists?( KEY_PATH, { name: 'NAME', type: TYPE, data: DATA }, ARCHITECTURE ) where: • KEY_PATH is the path to the registry key value. The path must include the registry hive, which can be specified either as its full name or as the 3- or 4-letter abbreviation. For example, both HKLM\SECURITY and HKEY_LOCAL_MACHINE\SECURITY are both valid and equivalent. The following hives are valid: HKEY_LOCAL_MACHINE, HKLM, HKEY_CURRENT_CONFIG, HKCC, HKEY_CLASSES_ROOT, HKCR, HKEY_USERS, HKU, HKEY_CURRENT_USER, and HKCU. • { name: 'NAME', type: TYPE, data: DATA } is a hash that contains the expected name, type, and data of the registry key value • type: represents the values available for registry keys in Microsoft Windows. Use :binary for REG_BINARY, :string for REG_SZ, :multi_string for REG_MULTI_SZ, :expand_string for REG_EXPAND_SZ, :dword for REG_DWORD, :dword_big_endian for REG_DWORD_BIG_ENDIAN, or :qword for REG_QWORD. • ARCHITECTURE is one of the following values: :x86_64, :i386, or :machine. In order to read or write 32-bit registry keys on 64-bit machines running Microsoft Windows, the architecture property must be set to :i386. The :x86_64 value can be used to force writing to a 64-bit registry location, but this value is less useful than the default (:machine) because the chef-client returns an exception if :x86_64 is used and the machine turns out to be a 32-bit machine (whereas with :machine, the chef-client is able to access the registry key on the 32-bit machine). This method will return true or false. Note The ARCHITECTURE attribute should only specify :x86_64 or :i386 when it is necessary to write 32-bit (:i386) or 64-bit (:x86_64) values on a 64-bit machine. ARCHITECTURE will default to :machine unless a specific value is given. ##### registry_get_subkeys¶ Use the registry_get_subkeys method to get a list of registry key values that are present for a Microsoft Windows registry key. Note This method can be used in recipes and from within the not_if and only_if blocks in resources. This method is not designed to create or modify a registry key. If a registry key needs to be modified, use the registry_key resource. The syntax for the registry_get_subkeys method is as follows: subkey_array = registry_get_subkeys(KEY_PATH, ARCHITECTURE) where: • KEY_PATH is the path to the registry key. The path must include the registry hive, which can be specified either as its full name or as the 3- or 4-letter abbreviation. For example, both HKLM\SECURITY and HKEY_LOCAL_MACHINE\SECURITY are both valid and equivalent. The following hives are valid: HKEY_LOCAL_MACHINE, HKLM, HKEY_CURRENT_CONFIG, HKCC, HKEY_CLASSES_ROOT, HKCR, HKEY_USERS, HKU, HKEY_CURRENT_USER, and HKCU. • ARCHITECTURE is one of the following values: :x86_64, :i386, or :machine. In order to read or write 32-bit registry keys on 64-bit machines running Microsoft Windows, the architecture property must be set to :i386. 
The :x86_64 value can be used to force writing to a 64-bit registry location, but this value is less useful than the default (:machine) because the chef-client returns an exception if :x86_64 is used and the machine turns out to be a 32-bit machine (whereas with :machine, the chef-client is able to access the registry key on the 32-bit machine). This returns an array of registry key values. Note The ARCHITECTURE attribute should only specify :x86_64 or :i386 when it is necessary to write 32-bit (:i386) or 64-bit (:x86_64) values on a 64-bit machine. ARCHITECTURE will default to :machine unless a specific value is given. ##### registry_get_values¶ Use the registry_get_values method to get the registry key values (name, type, and data) for a Microsoft Windows registry key. Note This method can be used in recipes and from within the not_if and only_if blocks in resources. This method is not designed to create or modify a registry key. If a registry key needs to be modified, use the registry_key resource. The syntax for the registry_get_values method is as follows: subkey_array = registry_get_values(KEY_PATH, ARCHITECTURE) where: • KEY_PATH is the path to the registry key. The path must include the registry hive, which can be specified either as its full name or as the 3- or 4-letter abbreviation. For example, both HKLM\SECURITY and HKEY_LOCAL_MACHINE\SECURITY are both valid and equivalent. The following hives are valid: HKEY_LOCAL_MACHINE, HKLM, HKEY_CURRENT_CONFIG, HKCC, HKEY_CLASSES_ROOT, HKCR, HKEY_USERS, HKU, HKEY_CURRENT_USER, and HKCU. • ARCHITECTURE is one of the following values: :x86_64, :i386, or :machine. In order to read or write 32-bit registry keys on 64-bit machines running Microsoft Windows, the architecture property must be set to :i386. The :x86_64 value can be used to force writing to a 64-bit registry location, but this value is less useful than the default (:machine) because the chef-client returns an exception if :x86_64 is used and the machine turns out to be a 32-bit machine (whereas with :machine, the chef-client is able to access the registry key on the 32-bit machine). This returns an array of registry key values. Note The ARCHITECTURE attribute should only specify :x86_64 or :i386 when it is necessary to write 32-bit (:i386) or 64-bit (:x86_64) values on a 64-bit machine. ARCHITECTURE will default to :machine unless a specific value is given. ##### registry_has_subkeys?¶ Use the registry_has_subkeys? method to find out if a Microsoft Windows registry key has one (or more) values. Note This method can be used in recipes and from within the not_if and only_if blocks in resources. This method is not designed to create or modify a registry key. If a registry key needs to be modified, use the registry_key resource. The syntax for the registry_has_subkeys? method is as follows: registry_has_subkeys?(KEY_PATH, ARCHITECTURE) where: • KEY_PATH is the path to the registry key. The path must include the registry hive, which can be specified either as its full name or as the 3- or 4-letter abbreviation. For example, both HKLM\SECURITY and HKEY_LOCAL_MACHINE\SECURITY are both valid and equivalent. The following hives are valid: HKEY_LOCAL_MACHINE, HKLM, HKEY_CURRENT_CONFIG, HKCC, HKEY_CLASSES_ROOT, HKCR, HKEY_USERS, HKU, HKEY_CURRENT_USER, and HKCU. • ARCHITECTURE is one of the following values: :x86_64, :i386, or :machine. In order to read or write 32-bit registry keys on 64-bit machines running Microsoft Windows, the architecture property must be set to :i386. 
The :x86_64 value can be used to force writing to a 64-bit registry location, but this value is less useful than the default (:machine) because the chef-client returns an exception if :x86_64 is used and the machine turns out to be a 32-bit machine (whereas with :machine, the chef-client is able to access the registry key on the 32-bit machine). This method will return true or false. Note The ARCHITECTURE attribute should only specify :x86_64 or :i386 when it is necessary to write 32-bit (:i386) or 64-bit (:x86_64) values on a 64-bit machine. ARCHITECTURE will default to :machine unless a specific value is given. ##### registry_key_exists?¶ Use the registry_key_exists? method to find out if a Microsoft Windows registry key exists at the specified path. Note This method can be used in recipes and from within the not_if and only_if blocks in resources. This method is not designed to create or modify a registry key. If a registry key needs to be modified, use the registry_key resource. The syntax for the registry_key_exists? method is as follows: registry_key_exists?(KEY_PATH, ARCHITECTURE) where: • KEY_PATH is the path to the registry key. The path must include the registry hive, which can be specified either as its full name or as the 3- or 4-letter abbreviation. For example, both HKLM\SECURITY and HKEY_LOCAL_MACHINE\SECURITY are both valid and equivalent. The following hives are valid: HKEY_LOCAL_MACHINE, HKLM, HKEY_CURRENT_CONFIG, HKCC, HKEY_CLASSES_ROOT, HKCR, HKEY_USERS, HKU, HKEY_CURRENT_USER, and HKCU. • ARCHITECTURE is one of the following values: :x86_64, :i386, or :machine. In order to read or write 32-bit registry keys on 64-bit machines running Microsoft Windows, the architecture property must be set to :i386. The :x86_64 value can be used to force writing to a 64-bit registry location, but this value is less useful than the default (:machine) because the chef-client returns an exception if :x86_64 is used and the machine turns out to be a 32-bit machine (whereas with :machine, the chef-client is able to access the registry key on the 32-bit machine). This method will return true or false. (Any registry key values that are associated with this registry key are ignored.) Note The ARCHITECTURE attribute should only specify :x86_64 or :i386 when it is necessary to write 32-bit (:i386) or 64-bit (:x86_64) values on a 64-bit machine. ARCHITECTURE will default to :machine unless a specific value is given. ##### registry_value_exists?¶ Use the registry_value_exists? method to find out if a registry key value exists. Use registry_data_exists? to test for the type and data of a registry key value. Note This method can be used in recipes and from within the not_if and only_if blocks in resources. This method is not designed to create or modify a registry key. If a registry key needs to be modified, use the registry_key resource. The syntax for the registry_value_exists? method is as follows: registry_value_exists?( KEY_PATH, { name: 'NAME' }, ARCHITECTURE ) where: • KEY_PATH is the path to the registry key. The path must include the registry hive, which can be specified either as its full name or as the 3- or 4-letter abbreviation. For example, both HKLM\SECURITY and HKEY_LOCAL_MACHINE\SECURITY are both valid and equivalent. The following hives are valid: HKEY_LOCAL_MACHINE, HKLM, HKEY_CURRENT_CONFIG, HKCC, HKEY_CLASSES_ROOT, HKCR, HKEY_USERS, HKU, HKEY_CURRENT_USER, and HKCU. 
• { name: 'NAME' } is a hash that contains the name of the registry key value; if either type: or :value are specified in the hash, they are ignored • type: represents the values available for registry keys in Microsoft Windows. Use :binary for REG_BINARY, :string for REG_SZ, :multi_string for REG_MULTI_SZ, :expand_string for REG_EXPAND_SZ, :dword for REG_DWORD, :dword_big_endian for REG_DWORD_BIG_ENDIAN, or :qword for REG_QWORD. • ARCHITECTURE is one of the following values: :x86_64, :i386, or :machine. In order to read or write 32-bit registry keys on 64-bit machines running Microsoft Windows, the architecture property must be set to :i386. The :x86_64 value can be used to force writing to a 64-bit registry location, but this value is less useful than the default (:machine) because the chef-client returns an exception if :x86_64 is used and the machine turns out to be a 32-bit machine (whereas with :machine, the chef-client is able to access the registry key on the 32-bit machine). This method will return true or false. Note The ARCHITECTURE attribute should only specify :x86_64 or :i386 when it is necessary to write 32-bit (:i386) or 64-bit (:x86_64) values on a 64-bit machine. ARCHITECTURE will default to :machine unless a specific value is given. #### Actions¶ The registry_key resource has the following actions: :create Default. Create a registry key. If a registry key already exists (but does not match), update that registry key to match. :create_if_missing Create a registry key if it does not exist. Also, create a registry key value if it does not exist. :delete Delete the specified values for a registry key. :delete_key Delete the specified registry key and all of its subkeys. :nothing This resource block does not act unless notified by another resource to take action. Once notified, this resource block either runs immediately or is queued up to run at the end of the Chef Client run. Note Be careful when using the :delete_key action with the recursive attribute. This will delete the registry key, all of its values and all of the names, types, and data associated with them. This cannot be undone by the chef-client. #### Properties¶ The registry_key resource has the following properties: architecture Ruby Type: Symbol | Default Value: :machine The architecture of the node for which keys are to be created or deleted. Possible values: :i386 (for nodes with a 32-bit registry), :x86_64 (for nodes with a 64-bit registry), and :machine (to have the chef-client determine the architecture during the chef-client run). In order to read or write 32-bit registry keys on 64-bit machines running Microsoft Windows, the architecture property must be set to :i386. The :x86_64 value can be used to force writing to a 64-bit registry location, but this value is less useful than the default (:machine) because the chef-client returns an exception if :x86_64 is used and the machine turns out to be a 32-bit machine (whereas with :machine, the chef-client is able to access the registry key on the 32-bit machine). Note The ARCHITECTURE attribute should only specify :x86_64 or :i386 when it is necessary to write 32-bit (:i386) or 64-bit (:x86_64) values on a 64-bit machine. ARCHITECTURE will default to :machine unless a specific value is given. key Ruby Type: String | Default Value: The resource block's name The path to the location in which a registry key is to be created or from which a registry key is to be deleted. Default value: the name of the resource block. 
See “Syntax” section above for more information. The path must include the registry hive, which can be specified either as its full name or as the 3- or 4-letter abbreviation. For example, both HKLM\SECURITY and HKEY_LOCAL_MACHINE\SECURITY are both valid and equivalent. The following hives are valid: HKEY_LOCAL_MACHINE, HKLM, HKEY_CURRENT_CONFIG, HKCC, HKEY_CLASSES_ROOT, HKCR, HKEY_USERS, HKU, HKEY_CURRENT_USER, and HKCU. recursive Ruby Type: true, false | Default Value: false When creating a key, this value specifies that the required keys for the specified path are to be created. When using the :delete_key action in a recipe, and if the registry key has subkeys, then set the value for this property to true. Note Be careful when using the :delete_key action with the recursive attribute. This will delete the registry key, all of its values and all of the names, types, and data associated with them. This cannot be undone by the chef-client. values Ruby Type: Hash, Array An array of hashes, where each Hash contains the values that are to be set under a registry key. Each Hash must contain name:, type:, and data: (and must contain no other key values). type: represents the values available for registry keys in Microsoft Windows. Use :binary for REG_BINARY, :string for REG_SZ, :multi_string for REG_MULTI_SZ, :expand_string for REG_EXPAND_SZ, :dword for REG_DWORD, :dword_big_endian for REG_DWORD_BIG_ENDIAN, or :qword for REG_QWORD. Warning :multi_string must be an array, even if there is only a single string. #### Examples¶ The following examples demonstrate various approaches for using resources in recipes: Create a registry key Use a double-quoted string: registry_key "HKEY_LOCAL_MACHINE\\path-to-key\\Policies\\System" do values [{ name: 'EnableLUA', type: :dword, data: 0 }] action :create end or a single-quoted string: registry_key 'HKEY_LOCAL_MACHINE\path-to-key\Policies\System' do values [{ name: 'EnableLUA', type: :dword, data: 0 }] action :create end Delete a registry key value Use a double-quoted string: registry_key "HKEY_LOCAL_MACHINE\\SOFTWARE\\path\\to\\key\\AU" do values [{ name: 'NoAutoRebootWithLoggedOnUsers', type: :dword, data: '' }] action :delete end or a single-quoted string: registry_key 'HKEY_LOCAL_MACHINE\SOFTWARE\path\to\key\AU' do values [{ name: 'NoAutoRebootWithLoggedOnUsers', type: :dword, data: '' }] action :delete end Note If data: is not specified, you get an error: Missing data key in RegistryKey values hash Delete a registry key and its subkeys, recursively ### remote_file¶ Specify local Windows file path as a valid URI When specifying a local Microsoft Windows file path as a valid file URI, an additional forward slash (/) is required. For example: remote_file 'file:///c:/path/to/file' do ... # other attributes end ### windows_package¶ Use the windows_package resource to manage Microsoft Installer Package (MSI) packages for the Microsoft Windows platform. #### Syntax¶ A windows_package resource block manages a package on a node, typically by installing it. The simplest use of the windows_package resource is: windows_package 'package_name' which will install the named package using all of the default options and the default action (:install). 
The full syntax for all of the properties that are available to the windows_package resource is: windows_package 'name' do checksum String installer_type Symbol options String package_name String, Array remote_file_attributes Hash response_file String response_file_variables Hash returns String, Integer, Array # default value: [0] source String timeout String, Integer # default value: 600 version String, Array action Symbol # defaults to :install if not specified end where: • windows_package is the resource. • name is the name given to the resource block. • action identifies which steps the chef-client will take to bring the node into the desired state. • checksum, installer_type, options, package_name, remote_file_attributes, response_file, response_file_variables, returns, source, timeout, and version are the properties available to this resource. #### Actions¶ The windows_package resource has the following actions: :install Default. Install a package. If a version is specified, install the specified version of the package. :nothing This resource block does not act unless notified by another resource to take action. Once notified, this resource block either runs immediately or is queued up to run at the end of the Chef Client run. :remove Remove a package. #### Properties¶ The windows_package resource has the following properties: checksum Ruby Type: String The SHA-256 checksum of the file. Use to prevent a file from being re-downloaded. When the local file matches the checksum, the chef-client does not download it. Use when a URL is specified by the source property. installer_type Ruby Type: Symbol A symbol that specifies the type of package. Possible values: :custom (such as installing a non-.msi file that embeds an .msi-based installer), :inno (Inno Setup), :installshield (InstallShield), :msi (Microsoft Installer Package (MSI)), :nsis (Nullsoft Scriptable Install System (NSIS)), :wise (Wise). options Ruby Type: String One (or more) additional options that are passed to the command. package_name Ruby Type: String, Array The name of the package. Defaults to the name of the resource block unless specified. remote_file_attributes Ruby Type: Hash A package at a remote location define as a Hash of properties that modifies the properties of the remote_file resource. returns Ruby Type: Integer, Array of integers | Default Value: 0 A comma-delimited list of return codes that indicate the success or failure of the command that was run remotely. This code signals a successful :install action. source Ruby Type: String Optional. The path to a package in the local file system. The location of the package may be at a URL. Default value: the name of the resource block. See the “Syntax” section above for more information. If the source property is not specified, the package name MUST be exactly the same as the display name found in Add/Remove programs or exactly the same as the DisplayName property in the appropriate registry key, which may be one of the following: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Uninstall HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall Note If there are multiple versions of a package installed with the same display name, all of those packages will be removed unless a version is provided in the version property or unless it can be discovered in the installer file specified by the source property. 
timeout Ruby Type: String, Integer | Default Value: 600 (seconds) The amount of time (in seconds) to wait before timing out. version Ruby Type: String, Array The version of a package to be installed or upgraded. #### Providers¶ This resource has the following providers: Chef::Provider::Package, package When this short name is used, the chef-client will attempt to determine the correct provider during the chef-client run. Chef::Provider::Package::Windows, windows_package The provider for the Microsoft Windows platform. #### Examples¶ The following examples demonstrate various approaches for using resources in recipes: Install a package windows_package '7zip' do action :install source 'C:\7z920.msi' end Specify a URL for the source attribute windows_package '7zip' do source 'http://www.7-zip.org/a/7z938-x64.msi' end Specify path and checksum windows_package '7zip' do source 'http://www.7-zip.org/a/7z938-x64.msi' checksum '7c8e873991c82ad9cfc123415254ea6101e9a645e12977dcd518979e50fdedf3' end Modify remote_file resource attributes The windows_package resource may specify a package at a remote location using the remote_file_attributes property. This uses the remote_file resource to download the contents at the specified URL and passes in a Hash that modifies the properties of the remote_file resource. For example: windows_package '7zip' do source 'http://www.7-zip.org/a/7z938-x64.msi' remote_file_attributes ({ :path => 'C:\\7zip.msi', :checksum => '7c8e873991c82ad9cfc123415254ea6101e9a645e12977dcd518979e50fdedf3' }) end Download a nsis (Nullsoft) package resource windows_package 'Mercurial 3.6.1 (64-bit)' do source 'http://mercurial.selenic.com/release/windows/Mercurial-3.6.1-x64.exe' checksum 'febd29578cb6736163d232708b834a2ddd119aa40abc536b2c313fc5e1b5831d' end Download a custom package windows_package 'Microsoft Visual C++ 2005 Redistributable' do source 'https://download.microsoft.com/download/6/B/B/6BB661D6-A8AE-4819-B79F-236472F6070C/vcredist_x86.exe' installer_type :custom options '/Q' end ### windows_service¶ Use the windows_service resource to create, delete, and manage a service on the Microsoft Windows platform. #### Syntax¶ A windows_service resource block manages the state of a service on a machine that is running Microsoft Windows. For example: windows_service 'BITS' do action :configure_startup startup_type :manual end The full syntax for all of the properties that are available to the windows_service resource is: windows_service 'name' do binary_path_name String delayed_start true, false # default value: false dependencies String, Array description String desired_access Integer # default value: 983551 display_name String error_control Integer # default value: 1 init_command String load_order_group String pattern String reload_command String, false restart_command String, false run_as_password String run_as_user String # default value: "LocalSystem" service_name String # default value: 'name' unless specified service_type Integer # default value: "SERVICE_WIN32_OWN_PROCESS" start_command String, false startup_type Symbol # default value: :automatic status_command String, false stop_command String, false supports Hash # default value: {"restart"=>nil, "reload"=>nil, "status"=>nil} timeout Integer action Symbol # defaults to :nothing if not specified end where: • windows_service is the resource. • name is the name given to the resource block. • action identifies which steps the chef-client will take to bring the node into the desired state. 
• binary_path_name, display_name, desired_access, delayed_start, dependencies, description, error_control, init_command, load_order_group, pattern, reload_command, restart_command, run_as_password, run_as_user, service_name, service_type, start_command, startup_type, status_command, stop_command, supports, and timeout are properties of this resource, with the Ruby type shown. See “Properties” section below for more information about all of the properties that may be used with this resource. #### Actions¶ The windows_service resource has the following actions: :configure Configure a pre-existing service. New in Chef Client 14.0. :configure_startup Configure a service based on the value of the startup_type property. :create Create the service based on the value of the binary_path_name, service_name and/or display_name property. New in Chef Client 14.0. :delete Delete the service based on the value of the service_name property. New in Chef Client 14.0. :disable Disable a service. This action is equivalent to a Disabled startup type on the Microsoft Windows platform. :enable Enable a service at boot. This action is equivalent to an Automatic startup type on the Microsoft Windows platform. :nothing Default. Do nothing with a service. :reload Reload the configuration for this service. This action is not supported on the Windows platform and will raise an error if used. :restart Restart a service. :start Start a service, and keep it running until stopped or disabled. :stop Stop a service. :nothing This resource block does not act unless notified by another resource to take action. Once notified, this resource block either runs immediately or is queued up to run at the end of the Chef Client run. #### Properties¶ The windows_service resource has the following properties: binary_path_name Ruby Type: String The fully qualified path to the service binary file. The path can also include arguments for an auto-start service. This is required for ‘:create’ and ‘:configure’ actions New in Chef Client 14.0. delayed_start Ruby Type: true, false | Default Value: false Set the startup type to delayed start. This only applies if startup_type is :automatic. New in Chef Client 14.0. dependencies Ruby Type: String, Array A pointer to a double null-terminated array of null-separated names of services or load ordering groups that the system must start before this service. Specify nil or an empty string if the service has no dependencies. Dependency on a group means that this service can run if at least one member of the group is running after an attempt to start all members of the group. New in Chef Client 14.0. description Ruby Type: String Description of the service. New in Chef Client 14.0. desired_access Ruby Type: Integer | Default Value: 983551 display_name Ruby Type: String The display name to be used by user interface programs to identify the service. This string has a maximum length of 256 characters. New in Chef Client 14.0. init_command Ruby Type: String The path to the init script that is associated with the service. This is typically /etc/init.d/SERVICE_NAME. The init_command property can be used to prevent the need to specify overrides for the start_command, stop_command, and restart_command attributes. load_order_group Ruby Type: String The name of the service’s load ordering group(s). Specify nil or an empty string if the service does not belong to a group. New in Chef Client 14.0. pattern Ruby Type: String | Default Value: service_name The pattern to look for in the process table. 
reload_command Ruby Type: String, false The command used to tell a service to reload its configuration. restart_command Ruby Type: String, false The command used to restart a service. run_as_password Ruby Type: String The password for the user specified by run_as_user. run_as_user Ruby Type: String | Default Value: "LocalSystem" The user under which a Microsoft Windows service runs. service_name Ruby Type: String | Default Value: The resource block's name An optional property to set the service name if it differs from the resource block’s name. start_command Ruby Type: String The command used to start a service. startup_type Ruby Type: Symbol | Default Value: :automatic Use to specify the startup type for a Microsoft Windows service. Possible values: :automatic, :disabled, or :manual. status_command Ruby Type: String, false The command used to check the run status for a service. stop_command Ruby Type: String, false The command used to stop a service. supports Ruby Type: Hash A list of properties that controls how the chef-client is to attempt to manage a service: :restart, :reload, :status. For :restart, the init script or other service provider can use a restart command; if :restart is not specified, the chef-client attempts to stop and then start a service. For :reload, the init script or other service provider can use a reload command. For :status, the init script or other service provider can use a status command to determine if the service is running; if :status is not specified, the chef-client attempts to match the service_name against the process table as a regular expression, unless a pattern is specified as a parameter property. Default value: { restart: false, reload: false, status: false } for all platforms (except for the Red Hat platform family, which defaults to { restart: false, reload: false, status: true }.) timeout Ruby Type: Integer | Default Value: 60 The amount of time (in seconds) to wait before timing out. #### Examples¶ Start a service manually windows_service 'BITS' do action :configure_startup startup_type :manual end ## Cookbook Resources¶ Some of the most popular Chef-maintained cookbooks that contain custom resources useful when configuring machines running Microsoft Windows are listed below: Cookbook Description iis The iis cookbook is used to install and configure Internet Information Services (IIS). webpi The webpi cookbook is used to run the Microsoft Web Platform Installer (WebPI). windows The windows cookbook is used to configure auto run, batch, reboot, enable built-in operating system packages, configure Microsoft Windows packages, reboot machines, and more. ## Recipe DSL Methods¶ Six methods are present in the Recipe DSL to help verify the registry during a chef-client run on the Microsoft Windows platform—registry_data_exists?, registry_get_subkeys, registry_get_values, registry_has_subkeys?, registry_key_exists?, and registry_value_exists?—these helpers ensure the powershell_script resource is idempotent. Note The recommended order in which registry key-specific methods should be used within a recipe is: key_exists?, value_exists?, data_exists?, get_values, has_subkeys?, and then get_subkeys. ### registry_data_exists?¶ Use the registry_data_exists? method to find out if a Microsoft Windows registry key contains the specified data of the specified type under the value. Note This method can be used in recipes and from within the not_if and only_if blocks in resources. This method is not designed to create or modify a registry key. 
If a registry key needs to be modified, use the registry_key resource. The syntax for the registry_data_exists? method is as follows: registry_data_exists?( KEY_PATH, { name: 'NAME', type: TYPE, data: DATA }, ARCHITECTURE ) where: • KEY_PATH is the path to the registry key value. The path must include the registry hive, which can be specified either as its full name or as the 3- or 4-letter abbreviation. For example, both HKLM\SECURITY and HKEY_LOCAL_MACHINE\SECURITY are both valid and equivalent. The following hives are valid: HKEY_LOCAL_MACHINE, HKLM, HKEY_CURRENT_CONFIG, HKCC, HKEY_CLASSES_ROOT, HKCR, HKEY_USERS, HKU, HKEY_CURRENT_USER, and HKCU. • { name: 'NAME', type: TYPE, data: DATA } is a hash that contains the expected name, type, and data of the registry key value • type: represents the values available for registry keys in Microsoft Windows. Use :binary for REG_BINARY, :string for REG_SZ, :multi_string for REG_MULTI_SZ, :expand_string for REG_EXPAND_SZ, :dword for REG_DWORD, :dword_big_endian for REG_DWORD_BIG_ENDIAN, or :qword for REG_QWORD. • ARCHITECTURE is one of the following values: :x86_64, :i386, or :machine. In order to read or write 32-bit registry keys on 64-bit machines running Microsoft Windows, the architecture property must be set to :i386. The :x86_64 value can be used to force writing to a 64-bit registry location, but this value is less useful than the default (:machine) because the chef-client returns an exception if :x86_64 is used and the machine turns out to be a 32-bit machine (whereas with :machine, the chef-client is able to access the registry key on the 32-bit machine). This method will return true or false. Note The ARCHITECTURE attribute should only specify :x86_64 or :i386 when it is necessary to write 32-bit (:i386) or 64-bit (:x86_64) values on a 64-bit machine. ARCHITECTURE will default to :machine unless a specific value is given. ### registry_get_subkeys¶ Use the registry_get_subkeys method to get a list of registry key values that are present for a Microsoft Windows registry key. Note This method can be used in recipes and from within the not_if and only_if blocks in resources. This method is not designed to create or modify a registry key. If a registry key needs to be modified, use the registry_key resource. The syntax for the registry_get_subkeys method is as follows: subkey_array = registry_get_subkeys(KEY_PATH, ARCHITECTURE) where: • KEY_PATH is the path to the registry key. The path must include the registry hive, which can be specified either as its full name or as the 3- or 4-letter abbreviation. For example, both HKLM\SECURITY and HKEY_LOCAL_MACHINE\SECURITY are both valid and equivalent. The following hives are valid: HKEY_LOCAL_MACHINE, HKLM, HKEY_CURRENT_CONFIG, HKCC, HKEY_CLASSES_ROOT, HKCR, HKEY_USERS, HKU, HKEY_CURRENT_USER, and HKCU. • ARCHITECTURE is one of the following values: :x86_64, :i386, or :machine. In order to read or write 32-bit registry keys on 64-bit machines running Microsoft Windows, the architecture property must be set to :i386. The :x86_64 value can be used to force writing to a 64-bit registry location, but this value is less useful than the default (:machine) because the chef-client returns an exception if :x86_64 is used and the machine turns out to be a 32-bit machine (whereas with :machine, the chef-client is able to access the registry key on the 32-bit machine). This returns an array of registry key values. 
Note The ARCHITECTURE attribute should only specify :x86_64 or :i386 when it is necessary to write 32-bit (:i386) or 64-bit (:x86_64) values on a 64-bit machine. ARCHITECTURE will default to :machine unless a specific value is given. ### registry_get_values¶ Use the registry_get_values method to get the registry key values (name, type, and data) for a Microsoft Windows registry key. Note This method can be used in recipes and from within the not_if and only_if blocks in resources. This method is not designed to create or modify a registry key. If a registry key needs to be modified, use the registry_key resource. The syntax for the registry_get_values method is as follows: subkey_array = registry_get_values(KEY_PATH, ARCHITECTURE) where: • KEY_PATH is the path to the registry key. The path must include the registry hive, which can be specified either as its full name or as the 3- or 4-letter abbreviation. For example, both HKLM\SECURITY and HKEY_LOCAL_MACHINE\SECURITY are both valid and equivalent. The following hives are valid: HKEY_LOCAL_MACHINE, HKLM, HKEY_CURRENT_CONFIG, HKCC, HKEY_CLASSES_ROOT, HKCR, HKEY_USERS, HKU, HKEY_CURRENT_USER, and HKCU. • ARCHITECTURE is one of the following values: :x86_64, :i386, or :machine. In order to read or write 32-bit registry keys on 64-bit machines running Microsoft Windows, the architecture property must be set to :i386. The :x86_64 value can be used to force writing to a 64-bit registry location, but this value is less useful than the default (:machine) because the chef-client returns an exception if :x86_64 is used and the machine turns out to be a 32-bit machine (whereas with :machine, the chef-client is able to access the registry key on the 32-bit machine). This returns an array of registry key values. Note The ARCHITECTURE attribute should only specify :x86_64 or :i386 when it is necessary to write 32-bit (:i386) or 64-bit (:x86_64) values on a 64-bit machine. ARCHITECTURE will default to :machine unless a specific value is given. ### registry_has_subkeys?¶ Use the registry_has_subkeys? method to find out if a Microsoft Windows registry key has one (or more) values. Note This method can be used in recipes and from within the not_if and only_if blocks in resources. This method is not designed to create or modify a registry key. If a registry key needs to be modified, use the registry_key resource. The syntax for the registry_has_subkeys? method is as follows: registry_has_subkeys?(KEY_PATH, ARCHITECTURE) where: • KEY_PATH is the path to the registry key. The path must include the registry hive, which can be specified either as its full name or as the 3- or 4-letter abbreviation. For example, both HKLM\SECURITY and HKEY_LOCAL_MACHINE\SECURITY are both valid and equivalent. The following hives are valid: HKEY_LOCAL_MACHINE, HKLM, HKEY_CURRENT_CONFIG, HKCC, HKEY_CLASSES_ROOT, HKCR, HKEY_USERS, HKU, HKEY_CURRENT_USER, and HKCU. • ARCHITECTURE is one of the following values: :x86_64, :i386, or :machine. In order to read or write 32-bit registry keys on 64-bit machines running Microsoft Windows, the architecture property must be set to :i386. The :x86_64 value can be used to force writing to a 64-bit registry location, but this value is less useful than the default (:machine) because the chef-client returns an exception if :x86_64 is used and the machine turns out to be a 32-bit machine (whereas with :machine, the chef-client is able to access the registry key on the 32-bit machine). This method will return true or false. 
Note The ARCHITECTURE attribute should only specify :x86_64 or :i386 when it is necessary to write 32-bit (:i386) or 64-bit (:x86_64) values on a 64-bit machine. ARCHITECTURE will default to :machine unless a specific value is given. ### registry_key_exists?¶ Use the registry_key_exists? method to find out if a Microsoft Windows registry key exists at the specified path. Note This method can be used in recipes and from within the not_if and only_if blocks in resources. This method is not designed to create or modify a registry key. If a registry key needs to be modified, use the registry_key resource. The syntax for the registry_key_exists? method is as follows: registry_key_exists?(KEY_PATH, ARCHITECTURE) where: • KEY_PATH is the path to the registry key. The path must include the registry hive, which can be specified either as its full name or as the 3- or 4-letter abbreviation. For example, both HKLM\SECURITY and HKEY_LOCAL_MACHINE\SECURITY are both valid and equivalent. The following hives are valid: HKEY_LOCAL_MACHINE, HKLM, HKEY_CURRENT_CONFIG, HKCC, HKEY_CLASSES_ROOT, HKCR, HKEY_USERS, HKU, HKEY_CURRENT_USER, and HKCU. • ARCHITECTURE is one of the following values: :x86_64, :i386, or :machine. In order to read or write 32-bit registry keys on 64-bit machines running Microsoft Windows, the architecture property must be set to :i386. The :x86_64 value can be used to force writing to a 64-bit registry location, but this value is less useful than the default (:machine) because the chef-client returns an exception if :x86_64 is used and the machine turns out to be a 32-bit machine (whereas with :machine, the chef-client is able to access the registry key on the 32-bit machine). This method will return true or false. (Any registry key values that are associated with this registry key are ignored.) Note The ARCHITECTURE attribute should only specify :x86_64 or :i386 when it is necessary to write 32-bit (:i386) or 64-bit (:x86_64) values on a 64-bit machine. ARCHITECTURE will default to :machine unless a specific value is given. ### registry_value_exists?¶ Use the registry_value_exists? method to find out if a registry key value exists. Use registry_data_exists? to test for the type and data of a registry key value. Note This method can be used in recipes and from within the not_if and only_if blocks in resources. This method is not designed to create or modify a registry key. If a registry key needs to be modified, use the registry_key resource. The syntax for the registry_value_exists? method is as follows: registry_value_exists?( KEY_PATH, { name: 'NAME' }, ARCHITECTURE ) where: • KEY_PATH is the path to the registry key. The path must include the registry hive, which can be specified either as its full name or as the 3- or 4-letter abbreviation. For example, both HKLM\SECURITY and HKEY_LOCAL_MACHINE\SECURITY are both valid and equivalent. The following hives are valid: HKEY_LOCAL_MACHINE, HKLM, HKEY_CURRENT_CONFIG, HKCC, HKEY_CLASSES_ROOT, HKCR, HKEY_USERS, HKU, HKEY_CURRENT_USER, and HKCU. • { name: 'NAME' } is a hash that contains the name of the registry key value; if either type: or :value are specified in the hash, they are ignored • type: represents the values available for registry keys in Microsoft Windows. Use :binary for REG_BINARY, :string for REG_SZ, :multi_string for REG_MULTI_SZ, :expand_string for REG_EXPAND_SZ, :dword for REG_DWORD, :dword_big_endian for REG_DWORD_BIG_ENDIAN, or :qword for REG_QWORD. 
• ARCHITECTURE is one of the following values: :x86_64, :i386, or :machine. In order to read or write 32-bit registry keys on 64-bit machines running Microsoft Windows, the architecture property must be set to :i386. The :x86_64 value can be used to force writing to a 64-bit registry location, but this value is less useful than the default (:machine) because the chef-client returns an exception if :x86_64 is used and the machine turns out to be a 32-bit machine (whereas with :machine, the chef-client is able to access the registry key on the 32-bit machine). This method will return true or false. Note The ARCHITECTURE attribute should only specify :x86_64 or :i386 when it is necessary to write 32-bit (:i386) or 64-bit (:x86_64) values on a 64-bit machine. ARCHITECTURE will default to :machine unless a specific value is given. ### Helpers¶ A recipe can define specific behaviors for specific Microsoft Windows platform versions by using a series of helper methods. To enable these helper methods, add the following to a recipe: require 'chef/win32/version' Then declare a variable using the Chef::ReservedNames::Win32::Version class: variable_name = Chef::ReservedNames::Win32::Version.new And then use this variable to define specific behaviors for specific Microsoft Windows platform versions. For example: if variable_name.helper_name? # Ruby code goes here, such as resource_name do # resource block end elsif variable_name.helper_name? # Ruby code goes here resource_name do # resource block for something else end else variable_name.helper_name? # Ruby code goes here, such as log 'log entry' do level :level end end The following Microsoft Windows platform-specific helpers can be used in recipes: Helper Description cluster? Use to test for a Cluster SKU (Windows Server 2003 and later). core? Use to test for a Core SKU (Windows Server 2003 and later). datacenter? Use to test for a Datacenter SKU. marketing_name Use to display the marketing name for a Microsoft Windows platform. windows_7? Use to test for Windows 7. windows_8? Use to test for Windows 8. windows_8_1? Use to test for Windows 8.1. windows_10? Use to test for Windows 10. windows_2000? Use to test for Windows 2000. windows_home_server? Use to test for Windows Home Server. windows_server_2003? Use to test for Windows Server 2003. windows_server_2003_r2? Use to test for Windows Server 2003 R2. windows_server_2008? Use to test for Windows Server 2008. windows_server_2008_r2? Use to test for Windows Server 2008 R2. windows_server_2012? Use to test for Windows Server 2012. windows_server_2012_r2? Use to test for Windows Server 2012 R2. windows_server_2016? Use to test for Windows Server 2016. windows_server_2019? Use to test for Windows Server 2019. windows_vista? Use to test for Windows Vista. windows_xp? Use to test for Windows XP. The following example installs Windows PowerShell 2.0 on systems that do not already have it installed. Microsoft Windows platform helper methods are used to define specific behaviors for specific platform versions: case node['platform'] when 'windows' require 'chef/win32/version' windows_version = Chef::ReservedNames::Win32::Version.new if (windows_version.windows_server_2008_r2? || windows_version.windows_7?) && windows_version.core? windows_feature 'NetFx2-ServerCore' do action :install end windows_feature 'NetFx2-ServerCore-WOW64' do action :install only_if { node['kernel']['machine'] == 'x86_64' } end elsif windows_version.windows_server_2008? || windows_version.windows_server_2003_r2? 
|| windows_version.windows_server_2003? || windows_version.windows_xp? if windows_version.windows_server_2008? windows_feature 'NET-Framework-Core' do action :install end else windows_package 'Microsoft .NET Framework 2.0 Service Pack 2' do source node['ms_dotnet2']['url'] checksum node['ms_dotnet2']['checksum'] installer_type :custom options '/quiet /norestart' action :install end end else log '.NET Framework 2.0 is already enabled on this version of Windows' do level :warn end end else log '.NET Framework 2.0 cannot be installed on platforms other than Windows' do level :warn end end The previous example is from the ms_dotnet2 cookbook, created by community member juliandunn. ## chef-client¶ A chef-client is an agent that runs locally on every node that is under management by Chef. When a chef-client is run, it will perform all of the steps that are required to bring the node into the expected state, including: • Registering and authenticating the node with the Chef server • Building the node object • Synchronizing cookbooks • Compiling the resource collection by loading each of the required cookbooks, including recipes, attributes, and all other dependencies • Taking the appropriate and required actions to configure the node • Looking for exceptions and notifications, handling each as required This command has the following syntax: $ chef-client OPTION VALUE OPTION VALUE ...
This command has the following options specific to Microsoft Windows:
-A, --fatal-windows-admin-check
Cause a chef-client run to fail when the chef-client does not have administrator privileges in Microsoft Windows.
-d, --daemonize
Run the executable as a daemon.
This option is only available on machines that run in UNIX or Linux environments. For machines that are running Microsoft Windows that require similar functionality, use the chef-client::service recipe in the chef-client cookbook: https://supermarket.chef.io/cookbooks/chef-client. This will install a chef-client service under Microsoft Windows using the Windows Service Wrapper.
Note
chef-solo also uses the --daemonize setting for Microsoft Windows.
### Run w/Elevated Privileges¶
The chef-client may need to be run with elevated privileges in order to get a recipe to converge correctly. On UNIX and UNIX-like operating systems this can be done by running the command as root. On Microsoft Windows this can be done by running the command prompt as an administrator.
On Microsoft Windows, running without elevated privileges (when they are necessary) is an issue that fails silently. It will appear that the chef-client completed its run successfully, but the changes will not have been made. When this occurs, run the chef-client as the administrator explicitly, for example:
\$ runas /user:Administrator "cmd /C chef-client"
When running Microsoft Windows, the config.rb file is located at %HOMEDRIVE%:%HOMEPATH%\.chef (e.g. c:\Users\<username>\.chef). If this path needs to be scripted, use %USERPROFILE%\.chef.
|
2019-04-20 22:58:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20373065769672394, "perplexity": 9007.801384932302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530060.34/warc/CC-MAIN-20190420220657-20190421002657-00298.warc.gz"}
|
https://www.geeksforgeeks.org/logarithmic-differentiation/?ref=rp
|
# Logarithmic Differentiation
• Last Updated : 13 Jan, 2021
The method of finding the derivative of a function by first taking the logarithm and then differentiating is called logarithmic differentiation. This method is especially useful when the function is of the type y = f(x)^g(x). In this type of problem, where y is a composite exponential function, we first take a logarithm, turning the function into log(y) = g(x) · log(f(x)). Differentiating the exponential form directly would be quite difficult, but after taking the log on both sides of the equation we can easily differentiate it using logarithm properties and the chain rule. This method is also known as composite exponential function differentiation. This approach allows us to calculate the derivative of complex exponential functions in an efficient manner.
## Logarithmic Differentiation Formula
Given: y = f(x)^g(x)
Taking log on both sides:
log(y) = log(f(x)^g(x))
Using log properties:
log(y) = g(x) · log(f(x))
Differentiating both sides:
(1/y) · (dy/dx) = d/dx [ g(x) · log(f(x)) ]
Using the uv (product) rule:
(1/y) · (dy/dx) = g'(x) · log(f(x)) + g(x) · d/dx[ log(f(x)) ]
Using the Chain rule:
(1/y) · (dy/dx) = g'(x) · log(f(x)) + g(x) · f'(x)/f(x)
so that
dy/dx = f(x)^g(x) · [ g'(x) · log(f(x)) + g(x) · f'(x)/f(x) ]
The only constraint for using logarithmic differentiation is that f(x) (the base) must be positive, as logarithmic functions are only defined for positive values.
## Steps to Solve Logarithmic Differentiation Problems
These are the steps to find the derivative of such functions using logarithmic differentiation:
1. Taking log on both sides.
2. Use log property to remove exponent.
3. Now differentiate the equation.
4. Simplify the obtained equation.
5. Substitute back the value of y.
Sometimes it may get a bit messy but keep your calm and just differentiate. Following are some examples of Logarithmic Differentiation.
### Example 1: Find the derivative of x^x.
Solution:
Let y = x^x
Step 1: Taking log on both sides
log(y) = log(x^x)
Step 2: Use log property to remove exponent
log(y) = x · log(x)
Step 3: Now differentiate the equation
(1/y) · (dy/dx) = log(x) + x · (1/x)
Step 4: Simplify the obtained equation
(1/y) · (dy/dx) = log(x) + 1
Step 5: Substitute back the value of y
dy/dx = x^x · (log(x) + 1)
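As a quick cross-check (not part of the original article), the same result follows by rewriting the power as an exponential:

\[
x^{x} = e^{x\log x}
\quad\Longrightarrow\quad
\frac{d}{dx}\,x^{x} = e^{x\log x}\cdot\frac{d}{dx}\bigl(x\log x\bigr) = x^{x}\,(\log x + 1)
\]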
### Example 2: Find the derivative of x^(x^x).
Given,
y = x^(x^x)
Taking log on both sides,
log(y) = log(x^(x^x))
log(y) = x^x · log(x)          {log(a^b) = b · log(a)}
Differentiating both sides,
(1/y) · (dy/dx) = (d(x^x)/dx) · log(x) + x^x · (1/x)
Since we already know the derivative of x^x from Example 1, we can substitute it here directly:
(1/y) · (dy/dx) = x^x · (log(x) + 1) · log(x) + x^(x-1)
dy/dx = x^(x^x) · [ x^x · (log(x) + 1) · log(x) + x^(x-1) ]
### Example 3: Find the derivative of y = (log x)^x.
Given,
y = (log x)^x
Taking log on both sides,
log(y) = log((log x)^x)
log(y) = x · log(log x)          {log(a^b) = b · log(a)}
Differentiating both sides,
(1/y) · (dy/dx) = log(log x) + x · (1/log x) · (1/x) = log(log x) + 1/log(x)
dy/dx = (log x)^x · [ log(log x) + 1/log(x) ]
### Example 4: Find the derivative of y = x^√x.
Given,
y = x^√x
Taking log on both sides,
log(y) = log(x^√x)
log(y) = √x · log(x)          {log(a^b) = b · log(a)}
Differentiating both sides,
(1/y) · (dy/dx) = (1/(2√x)) · log(x) + √x · (1/x) = (log(x) + 2)/(2√x)
dy/dx = x^√x · (log(x) + 2)/(2√x)
|
2021-09-23 13:06:21
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8370290994644165, "perplexity": 867.8236771633484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057421.82/warc/CC-MAIN-20210923104706-20210923134706-00244.warc.gz"}
|
http://georgehernandez.com/h/xComputers/VB/HandyVB/Tips.asp
|
Here are some handy VB tips I've collected.
• Coding Conventions
• A long statement can be broken into several lines by including a space and underscore ( _) at the end of each line.
• One statement per line is standard. Multiple statements can be put on the same line if a colon (:) is placed between statements.
• Use ampersand (&) to link strings and use plus sign (+) to link numerical values.
• Visual Basic prefixes octal numbers with &O and hexadecimal numbers with &H. EG: 15 = &O17 = &HF.
• Using the Option Explicit keywords at the start of each code module should almost be mandatory. It enables VB to check for errors on your identifiers.
• vbCrLf is a Standard Visual Basic Constant signifying the ANSI characters 13 and 10, i.e. Carriage Return and Line Feed. It is useful in entering a linebreak in string.
• DoEvents, aka "Pixie Dust", is a Visual Basic Statement that temporarily returns processor control to the operating system. It is useful to execute this statement in the middle of code that takes a long time to process. It is also good for whenever you want stuff redrawn immediately.
• Use the Container property of an object to specify which container contains the control. A container, eg a Form, Frame, PictureBox, or Toolbar, can contain several controls of different types.
• To free memory and system resources used by objects, be sure to assign the Nothing keyword to the object variable with the Set statement when finished with the object variable and to use the Unload keyword for forms.
• When establishing your tab order, start with the last control by setting its TabIndex property to 0 and then repeat with the second to the last, and so on.
• Use Str() to convert a number to a string. Use Val() to convert a string to a number. Use Format() to format a string.
• Right clicking on a variable or procedure can yield lots of info. EG: Definition will take you to where the variable was declared or to where the procedure was written.
• Help for VB is available through the MSDN Library. If you have VB and the Library open and then close VB, it will also close the Library but your favorites in the Library will not be saved. If you have VB and the Library open and then close the Library, then your favorites will be saved but you'll have to then close VB.
• The graphic formats that VB recognize include: BMP (bitmap), ICO (icon), CUR (cursor), RLE (run-length encoded), WMF (Windows metafile), GIF (Graphical Interchange Format), and JPEG (Joint Photographic Experts Group).
• An enumeration is like a Long Integer data type that is limited to the values of enumerated constants.
• When using a variable whose data type is enumerated, VB can utilize the Auto List Members feature.
• Enumerations start with 0 by default and go up by 1.
• You can explicitly set values of enumerated constants.
• More than one enumerated constant may have the same value.
• You may qualify the enumerated constant with the enumeration name.
• Create public or private enumerations with the Enum statement in the declarations section of a standard module or public class module.
• A UDT (User Defined Type) is like an object with different objects, each of which may be a different data type or object type.
• Create public or private UDTs with the Type statement in the declarations section of a standard module or class module.
• Create private UDTs with the Type statement in the declarations section of a form module.
• UDTs may be nested.
• Debug a VB project for a dll without compiling the dll:
1. Make sure any objects called by the dll are registered on the developer's machine.
2. Set break points in the VB project for a dll.
3. Run that VB project for a dll.
4. Run a VB project that uses the dll.
• Here are 3 methods to place a quotation mark (") in your strings. All of the following examples say: He said, "hi"..
• Use 2 quotation marks in a row. EG: str1 = "He said, ""hi"".". (c or JS would use \".)
• Use a character code. EG: str1 = "He said, " & Chr(34) & "hi" & Chr(34) & ".". (c would use \u0022.)
• Use a constant. EG:
Const q As String = """"   ' a single quotation-mark character; Chr(34) is not a constant expression, so it cannot be used in a Const declaration
str1 = "He said, " & q & "hi" & q & "."
• VBScript has 2 undocumented functions of note (a short example of both follows this list):
• Escape(string). Returns string with the non-alphanumeric characters URL-encoded (i.e. in the %hexadecimal form). The following characters are left unencoded: * + - . / @ _.
• Unescape(string). Returns string with the URL encoding converted back to the corresponding ASCII characters.
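A small VBScript sketch of the two functions (run it with cscript; the encoded output shown in the comments assumes the behavior described above):

s = Escape("He said, ""hi"".")
WScript.Echo s             ' He%20said%2C%20%22hi%22.
WScript.Echo Unescape(s)   ' He said, "hi".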
|
2017-07-23 22:54:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17419904470443726, "perplexity": 4195.795096071567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424623.68/warc/CC-MAIN-20170723222438-20170724002438-00453.warc.gz"}
|
https://deeptools.readthedocs.io/en/2.1.0/content/tools/bamCoverage.html
|
# bamCoverage¶
If you are not familiar with BAM, bedGraph and bigWig formats, you can read up on that in our Glossary of NGS terms
This tool takes an alignment of reads or fragments as input (BAM file) and generates a coverage track (bigWig or bedGraph) as output. The coverage is calculated as the number of reads per bin, where bins are short consecutive counting windows of a defined size. It is possible to extend the length of the reads to better reflect the actual fragment length. bamCoverage offers normalization by scaling factor, Reads Per Kilobase per Million mapped reads (RPKM), and 1x depth (reads per genome coverage, RPGC).
usage: An example usage is:
$ bamCoverage -b reads.bam -o coverage.bw

Required arguments

--bam, -b
BAM file to process

Output

--outFileName, -o
Output file name.

--outFileFormat=bigwig, -of=bigwig
Output file type. Either "bigwig" or "bedgraph". Possible choices: bigwig, bedgraph

Optional arguments

--scaleFactor=1.0
Indicate a number that you would like to use. When used in combination with --normalizeTo1x or --normalizeUsingRPKM, the computed scaling factor will be multiplied by the given scale factor.

--MNase=False
Determine nucleosome positions from MNase-seq data. Only 3 nucleotides at the center of each fragment are counted. The fragment ends are defined by the two mate reads. Only fragment lengths between 130 - 200 bp are considered to avoid dinucleosomes or other artifacts. *NOTE*: Requires paired-end data. A bin size of 1 is recommended.

--version
show program's version number and exit

--binSize=50, -bs=50
Size of the bins, in bases, for the output of the bigwig/bedgraph file.

--region, -r
Region of the genome to limit the operation to - this is useful when testing parameters to reduce the computing time. The format is chr:start:end, for example --region chr10 or --region chr10:456700:891000.

--numberOfProcessors=max/2, -p=max/2
Number of processors to use. Type "max/2" to use half the maximum number of processors or "max" to use all available processors.

--verbose=False, -v=False
Set to see processing messages.

Read coverage normalization options

--normalizeTo1x
Report read coverage normalized to 1x sequencing depth (also known as Reads Per Genomic Content (RPGC)). Sequencing depth is defined as: (total number of mapped reads * fragment length) / effective genome size. The scaling factor used is the inverse of the sequencing depth computed for the sample to match the 1x coverage. To use this option, the effective genome size has to be indicated after the option. The effective genome size is the portion of the genome that is mappable. Large fractions of the genome are stretches of NNNN that should be discarded. Also, if repetitive regions were not included in the mapping of reads, the effective genome size needs to be adjusted accordingly. Common values are: mm9: 2,150,570,000; hg19: 2,451,960,000; dm3: 121,400,000 and ce10: 93,260,000. See Table 2 of http://www.plosone.org/article/info:doi/10.1371/journal.pone.0030377 or http://www.nature.com/nbt/journal/v27/n1/fig_tab/nbt.1518_T1.html for several effective genome sizes.

--normalizeUsingRPKM=False
Use Reads Per Kilobase per Million reads to normalize the number of reads per bin. The formula is: RPKM (per bin) = number of reads per bin / ( number of mapped reads (in millions) * bin length (kb) ). Each read is considered independently; if you want to only count either of the mate pairs in paired-end data, use the --samFlag option.

--ignoreForNormalization, -ignore
A list of space-delimited chromosome names containing those chromosomes that should be excluded for computing the normalization. This is useful when considering samples with unequal coverage across chromosomes, like male samples. An example usage is --ignoreForNormalization chrX chrM.

--skipNonCoveredRegions=False, --skipNAs=False
This parameter determines if non-covered regions (regions without overlapping reads) in a BAM file should be skipped. The default is to treat those regions as having a value of zero. The decision to skip non-covered regions depends on the interpretation of the data. Non-covered regions may represent, for example, repetitive regions that should be skipped.

--smoothLength
The smooth length defines a window, larger than the binSize, to average the number of reads. For example, if the --binSize is set to 20 and the --smoothLength is set to 60, then, for each bin, the average of the bin and its left and right neighbors is considered. Any value smaller than --binSize will be ignored and no smoothing will be applied.

Read processing options

--extendReads=False, -e=False
This parameter allows the extension of reads to fragment size. If set, each read is extended, without exception. *NOTE*: This feature is generally NOT recommended for spliced-read data, such as RNA-seq, as it would extend reads over skipped regions. *Single-end*: Requires a user specified value for the final fragment length. Reads that already exceed this fragment length will not be extended. *Paired-end*: Reads with mates are always extended to match the fragment size defined by the two read mates. Unmated reads, mate reads that map too far apart (>4x fragment length) or even map to different chromosomes are treated like single-end reads. The input of a fragment length value is optional. If no value is specified, it is estimated from the data (mean of the fragment size of all mate reads).

--ignoreDuplicates=False
If set, reads that have the same orientation and start position will be considered only once. If reads are paired, the mate's position also has to coincide to ignore a read.

--minMappingQuality
If set, only reads that have a mapping quality score of at least this are considered.

--centerReads=False
By adding this option, reads are centered with respect to the fragment length. For paired-end data, the read is centered at the fragment length defined by the two ends of the fragment. For single-end data, the given fragment length is used. This option is useful to get a sharper signal around enriched regions.

--samFlagInclude
Include reads based on the SAM flag. For example, to get only reads that are the first mate, use a flag of 64. This is useful to count properly paired reads only once, as otherwise the second mate will be also considered for the coverage.

--samFlagExclude
Exclude reads based on the SAM flag. For example, to get only reads that map to the forward strand, use --samFlagExclude 16, where 16 is the SAM flag for reads that map to the reverse strand.

## Usage hints¶

• A smaller bin size value will result in a higher resolution of the coverage track but also in a larger file size.
• The 1x normalization (RPGC) requires the input of a value for the effective genome size, which is the mappable part of the reference genome. Of course, this value is species-specific. The command line help of this tool offers suggestions for a number of model species.
• It might be useful for some studies to exclude certain chromosomes in order to avoid biases, e.g. chromosome X, as male mice contain a pair of each autosome, but usually only a single X chromosome.
• By default, the read length is NOT extended! This is the preferred setting for spliced-read data like RNA-seq, where one usually wants to rely on the detected read locations only. A read extension would neglect potential splice sites in the unmapped part of the fragment. Other data, e.g. ChIP-seq, where fragments are known to map contiguously, should be processed with read extension (--extendReads [INTEGER]).
• For paired-end data, the fragment length is generally defined by the two read mates. The user provided fragment length is only used as a fallback for singletons or mate reads that map too far apart (with a distance greater than four times the fragment length or are located on different chromosomes).

Warning
If you already normalized for GC bias using correctGCbias, you should absolutely NOT set the parameter --ignoreDuplicates!

Warning
If you know that your files will be strongly affected by the kind of filtering you would like to apply (e.g., removal of duplicates with --ignoreDuplicates or ignoring reads of low quality) then consider removing those reads beforehand.

Note
Like BAM files, bigWig files are compressed, binary files. If you would like to see the coverage values, choose the bedGraph output via --outFileFormat.

## Usage example for ChIP-seq¶

This is an example for ChIP-seq data using additional options (smaller bin size for higher resolution, normalizing coverage to 1x mouse genome size, excluding chromosome X during the normalization step, and extending reads):

bamCoverage --bam a.bam -o a.SeqDepthNorm.bw \
    --binSize 10 --normalizeTo1x 2150570000 --ignoreForNormalization chrX --extendReads

If you had run the command with --outFileFormat bedgraph, you could easily peek into the resulting file.

$ head SeqDepthNorm_chr19.bedgraph
19 60150 60250 9.32
19 60250 60450 18.65
19 60450 60650 27.97
19 60650 60950 37.29
19 60950 61000 27.97
19 61000 61050 18.65
19 61050 61150 27.97
19 61150 61200 18.65
19 61200 61300 9.32
19 61300 61350 18.65
As you can see, each row corresponds to one region. If consecutive bins have the same number of reads overlapping, they will be merged.
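If you prefer RPKM normalization over 1x depth for the same kind of data, an analogous command can be used (a sketch only; the file names are placeholders and the options are the ones documented above):

bamCoverage --bam a.bam -o a.RPKM.bw \
    --binSize 10 --normalizeUsingRPKM --ignoreForNormalization chrX --extendReads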
## Usage examples for RNA-seq¶
Note that some BAM files are filtered based on SAM flags (Explain SAM flags).
### Regular bigWig track¶
bamCoverage -b a.bam -o a.bw
### Separate tracks for each strand¶
Sometimes it makes sense to generate two independent bigWig files for all reads on the forward and reverse strand, respectively.
To follow the examples, you need to know that -f will tell samtools view to include reads with the indicated flag, while -F will lead to the exclusion of reads with the respective flag.
For a stranded single-end library
# Forward strand
bamCoverage -b a.bam -o a.fwd.bw --samFlagExclude 16
# Reverse strand
bamCoverage -b a.bam -o a.rev.bw --samFlagInclude 16
For a stranded paired-end library
Now, this gets a bit cumbersome, but future releases of deepTools will make this more straight-forward. For now, bear with us and perhaps read up on SAM flags, e.g. here.
For paired-end samples, we assume that a proper pair should have the mates on opposing strands where the Illumina strand-specific protocol produces reads in a R2-R1 orientation. We basically follow the recipe given in this biostars tutorial.
To get the file for transcripts that originated from the forward strand:
# include reads that are 2nd in a pair (128);
# exclude reads that are mapped to the reverse strand (16)
$ samtools view -b -f 128 -F 16 a.bam > a.fwd1.bam

# exclude reads that are mapped to the reverse strand (16) and
# first in a pair (64): 64 + 16 = 80
$ samtools view -b -f 80 a.bam > a.fwd2.bam

# combine the temporary files
$ samtools merge -f fwd.bam a.fwd1.bam a.fwd2.bam

# index the filtered BAM file
$ samtools index fwd.bam

# run bamCoverage
$ bamCoverage -b fwd.bam -o a.fwd.bigWig

# remove the temporary files
$ rm a.fwd*.bam

To get the file for transcripts that originated from the reverse strand:

# include reads that are second in a pair (128)
# and map to the reverse strand (16): 128 + 16 = 144
$ samtools view -b -f 144 a.bam > a.rev1.bam

# include reads that are first in a pair (64), but
# exclude those ones that map to the reverse strand (16)
$ samtools view -b -f 64 -F 16 a.bam > a.rev2.bam

# merge the temporary files
$ samtools merge -f rev.bam a.rev1.bam a.rev2.bam

# index the merged, filtered BAM file
$ samtools index rev.bam

# run bamCoverage
$ bamCoverage -b rev.bam -o a.rev.bw

# remove temporary files
$ rm a.rev*.bam
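For completeness, the --MNase mode described in the options above could be invoked roughly like this (an untested sketch with placeholder file names; it requires paired-end data, and a bin size of 1 is recommended):

bamCoverage -b mnase.bam -o mnase.nucleosome_coverage.bw --MNase --binSize 1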
|
2022-09-28 04:19:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22765375673770905, "perplexity": 7765.717527546172}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00553.warc.gz"}
|
http://vkundeti.blogspot.com/2009/
|
## Thursday, December 24, 2009
### [TECH] An interesting problem on the mailing list [[email protected]]
Someone asked the following question on the GDB mailing list.
Hi,
I was wondering how come the call to "call cos(.2)" returns the wrong value:
#include "math.h"
double (*fcn)(double);
main()
{
double t;
fcn = cos;
t=cos(.2);
t=cos(-.2);
}
(gdb) call cos(.2)
$16 = 104
(gdb) call fcn(.2)
$17 = 0.98006657784124163
Thanks,
--
View this message in context: http://old.nabble.com/Why-%22call-cos%28.2%29%22-returns-garbage--tp26909516p26909516.html
Sent from the Gnu - gdb - General mailing list archive at Nabble.com.
Here is my analysis.
Hello,
Looks like 'gdb' is missing debug information about the function 'cos',
and the debugger implicitly assumes that all functions return an 'int'
if the function is not compiled in debug mode.
I derive my explanation from the following test case.
1. Create a function 'double test_cos(double)' and compile it in optimized mode with -O3 using gcc:
   $ gcc -c -O3 test1.c
2. Compile the file containing the 'main' function in debug mode:
   $ gcc -g test.c test1.o -lm
============[gdb a.out]====================
(gdb) list 0
1 #include "math.h"
2 #include "test1.h"
3 double (*fcn)(double);
4 main()
5 {
6 double t;
7 fcn = cos;
8 t=cos(.2);
9 t=cos(-.2);
10 test_cos(.2);
(gdb) call fcn(0.2)
$4 = 0.98006657784124163
(gdb) call cos(0.2)
$5 = -1073792992
(gdb) call test_cos(0.2)
$6 = -1073792992
=====================================
============[test1.h]=================
#ifndef _test_1_h
#define _test_1_h
double test_cos(double);
#endif
====================================
============[test1.c]==================
#include "math.h"
#include "test1.h"
double test_cos(double a){
  return cos(a);
}
====================================
============[test.c]==================
#include "math.h"
#include "test1.h"
double (*fcn)(double);
main()
{
  double t;
  fcn = cos;
  t=cos(.2);
  t=cos(-.2);
  test_cos(.2);
}
===================================
Maybe the developers of gdb can give you more details. Also:
=======================================
(gdb) call cos
$6 = {text variable, no debug info} 0xc325d0 {cos}
=======================================
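(Not part of the original post, but worth noting: when gdb lacks return-type information for a library function, a commonly used workaround is to cast the symbol to the proper function-pointer type before calling it, e.g.:

(gdb) print ((double (*)(double)) cos)(0.2)

which evaluates the call with a double return type and prints the expected 0.98006657784124163 instead of a garbage integer.)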
## Wednesday, December 23, 2009
### [TECH] Manipulating DNA sequences with bits
The alphabet of DNA sequences consists of 4 symbols {A,T,G,C} and they exist in complementary pairs -- A-T, G-C. Often in sequence assembly algorithms we need to squeeze these into bits to save space, since we have too many of them. For example, a typical input to a sequence assembler consists of at least 10 million reads (strings of fixed length, say 21). In my last post I briefly described the Sequence Assembly (SA) problem. As I pointed out, one of the basic operations on these reads is to obtain the reverse complement of a given read. For example, if our read is ATTACCAG, then its reverse complement is CTGGTAAT -- reverse the original sequence and replace every symbol by its complement (A-T, G-C). The following routines may be useful when we squeeze these symbols into bits using only 2 bits per symbol. While using the bit-wise operators I realized that the == operator has higher precedence than & (bit-wise AND). So a&b==b is parsed as a&(b==b), i.e. a&1, which is not what was intended -- we need to be more careful and write (a&b)==b.
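A quick two-line check of this precedence pitfall (my own illustration, not from the original post):

#include <stdio.h>
int main(void){
  int a = 6, b = 2;
  printf("%d\n", a & b == b);    /* parsed as a & (b == b), i.e. 6 & 1 = 0 */
  printf("%d\n", (a & b) == b);  /* (6 & 2) == 2, i.e. 1 */
  return 0;
}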
#include <stdio.h>

/* 2-bit encoding (assumed, consistent with MASK_A=3 and "T=0" in the original
   comments): A=3, C=2, G=1, T=0, so that complement = 3 - code. */
#define MASK_A 3
#define MASK_C 2
#define MASK_G 1
#define MASK_T 0

/* Let's fix the maximum size of the read at 32 base pairs */
typedef unsigned long long kmer_t;

static inline unsigned char Char2Bit(char c);
static inline char Bit2Char(unsigned char a);

/* Takes a read of length K and returns its bit representation */
kmer_t CreateBKmer(char *kmer, unsigned long K){
  unsigned long i;
  /* A binary k-mer */
  kmer_t bkmer = (kmer_t) 0;
  for(i=0; i<K; i++){
    bkmer <<= 2;
    bkmer |= Char2Bit(kmer[i]);
  }
  return bkmer;
}

/* Return the reverse complement of the k-mer a */
kmer_t ReverseCompliment(kmer_t a, unsigned int K){
  kmer_t rc = 0;
  unsigned int i = 0;
  for(i=0; i<K; i++){
    rc <<= 2;
    rc |= (3 - (a & 3));   /* complement of the least significant symbol */
    a >>= 2;
  }
  return rc;
}

/* Print the ASCII equivalent of the underlying bits */
void PrintKmer(kmer_t a, unsigned int K, FILE *ptr){
  char ntide;
  unsigned int i;
  for(i = K; i > 0; i--){
    unsigned char b = (unsigned char)((a >> (2*(i-1))) & 3);
    if(b == MASK_A){
      ntide = 'A';
    }else if(b == MASK_C){
      ntide = 'C';
    }else if(b == MASK_G){
      ntide = 'G';
    }else{ /* This should be the last because T=0 */
      ntide = 'T';
    }
    fprintf(ptr, "%c", ntide);
  }
  fprintf(ptr, "\n");
}

static inline char Bit2Char(unsigned char a){
  switch(a){
    case MASK_A: return 'A';
    case MASK_G: return 'G';
    case MASK_C: return 'C';
    default:     return 'T';   /* MASK_T */
  }
}

static inline unsigned char Char2Bit(char c){
  switch(c){
    case 'A': return MASK_A;
    case 'T': return MASK_T;
    case 'G': return MASK_G;
    case 'C': return MASK_C;
    default:  return MASK_T;
  }
}
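A minimal driver for the routines above (my own sketch; it assumes the fixed listing above is in the same file and reuses the ATTACCAG example from the opening paragraph):

int main(void){
  char read[] = "ATTACCAG";        /* example read from the post */
  unsigned int K = 8;              /* its length (must be <= 32) */
  kmer_t a  = CreateBKmer(read, K);
  kmer_t rc = ReverseCompliment(a, K);
  PrintKmer(a,  K, stdout);        /* prints ATTACCAG */
  PrintKmer(rc, K, stdout);        /* prints CTGGTAAT */
  return 0;
}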
## Friday, December 18, 2009
### [TECH] Loops in Bi-directed De Bruijn Graphs
De Bruijn graphs recently found applications in Sequence Assembly (SA). The SA problem is close to the Shortest Common Superstring (SCS) problem, however there are some fundamental differences. Given a set of input strings S = {s_1, s_2, ..., s_n}, the SCS problem outputs a string s such that every s_i is a substring of s and |s| is minimized. On the other hand, the input to the SA problem is a set of strings S where every s_i is a randomly chosen substring of a bigger string G -- called the genome. Unfortunately we do not know what G is, so given S the SA problem asks to find a G such that we can explain every s_i. Adding to the complexity of the SA problem, G may contain several repeating regions, and the direct application of SCS to solve SA would result in collapsing the repeating regions in G -- so we cannot directly reduce the SA problem to SCS. Also the string G is a special string coming from DNA, so it is double stranded and each s_i may originate from either the forward or the reverse complement strand of G.
A De Bruijn graph on a set of strings S is a graph whose vertices correspond to the unique k-mers (strings of length k) from all the s_i, and whose edges correspond to some (k+1)-mer in some s_i. Also, to account for the double-strandedness, we include the reverse complement of every k-mer in the graph. In the following simple example I show how a loop (i.e. the forward and reverse complements overlapping each other by k-1) can originate in the graph. This interesting situation seems to be referred to as a hairpin loop.
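A concrete illustration (my own, not from the original post) of one simple way such a situation can arise: for k = 4 the k-mer ACGT is its own reverse complement -- complementing gives TGCA and reversing that gives ACGT again -- so the vertex for the k-mer and the vertex for its reverse complement coincide, and the bidirected graph picks up a self-loop of exactly the hairpin kind described above.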
## Saturday, December 12, 2009
### [TECH] Simplified Proof For The Application Of Freivalds' Technique to Verify Matrix Multiplication
We first give a simple and alternative proof for Theorem- in [Randomized Algorithms, Motwani R. and Raghavan P., pp 162--163]. Later in Theorem 2 we show that the assumption on the uniformness is not necessary.
Proof. Let be a matrix and be the column vectors of . Then . This means that multiplying a vector with a matrix is linear combination of the columns, the coefficient is the component of . Since is a boolean and acts as an indicator variable on the selection of column . So if is chosen from a uniform distribution .
Now let and be the column vectors of , similarly let be the column vectors of . Let , clearly since . Then since . Intuitively this means we select our random vector such that for all , such a selection will always ensure even though .
Proof. Continuing with the proof of Theorem- 1 , .
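The garbled math above aside, the verification procedure that the post analyses is easy to state in code. A minimal sketch in C (my own illustration, not the post's original argument): pick a random 0/1 vector r and compare A(Br) with Cr; if AB != C, a uniformly random r detects the mismatch with probability at least 1/2, so repeating the test k times drives the error probability below 2^-k.

#include <stdio.h>
#include <stdlib.h>

/* Freivalds' check for n x n row-major matrices A, B, C.
   Returns 1 if A*(B*r) == C*r for a random r in {0,1}^n, else 0. */
int freivalds_check(const double *A, const double *B, const double *C, int n){
  double *r   = malloc(n * sizeof *r);
  double *Br  = malloc(n * sizeof *Br);
  double *ABr = malloc(n * sizeof *ABr);
  double *Cr  = malloc(n * sizeof *Cr);
  int i, j, ok = 1;
  for(j = 0; j < n; j++) r[j] = rand() & 1;            /* random 0/1 entries */
  for(i = 0; i < n; i++){                              /* Br = B*r, Cr = C*r */
    Br[i] = Cr[i] = 0.0;
    for(j = 0; j < n; j++){
      Br[i] += B[i*n+j] * r[j];
      Cr[i] += C[i*n+j] * r[j];
    }
  }
  for(i = 0; i < n; i++){                              /* ABr = A*(B*r) */
    ABr[i] = 0.0;
    for(j = 0; j < n; j++) ABr[i] += A[i*n+j] * Br[j];
  }
  for(i = 0; i < n; i++) if(ABr[i] != Cr[i]) ok = 0;   /* exact test; fine for integer-valued input */
  free(r); free(Br); free(ABr); free(Cr);
  return ok;   /* 1 = consistent with A*B == C, 0 = certainly A*B != C */
}

int main(void){
  double A[] = {1,2,3,4}, B[] = {5,6,7,8};
  double C_good[] = {19,22,43,50};   /* = A*B */
  double C_bad[]  = {19,22,43,51};
  srand(1);
  printf("good: %d\n", freivalds_check(A, B, C_good, 2));  /* always 1 */
  printf("bad:  %d\n", freivalds_check(A, B, C_bad, 2));   /* 0 with probability >= 1/2 */
  return 0;
}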
|
2017-09-24 06:50:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6715890169143677, "perplexity": 3178.945333568242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689897.78/warc/CC-MAIN-20170924062956-20170924082956-00281.warc.gz"}
|
https://www.physicsforums.com/threads/drawing-domain-of-r-2-integral.402407/
|
# Drawing domain of R^2 integral
1. May 11, 2010
### Jerbearrrrrr
$$\int _1 ^e \int _{1/e} ^{1/y} f dx dy$$
If asked how to sketch the bounds for that:
"x starts at x=1/e and ends at x=1/y
1/e is just a constant so that's a straight line
and x=1/y is the exact same line as y=1/x"
That's a decent enough engineer's explanation right?
(no one in university should need the y-bounds explained)
2. May 11, 2010
### vandanak
Well, y=1/x is not a straight line; it's a rectangular hyperbola lying in the 1st and 3rd quadrants, but we only have to consider the 1st quadrant since the limits of x and y are positive.
3. May 11, 2010
### kiwakwok
The domain in the first quadrant is bounded by the curves y = 1, y = e and x = 1/y.
The points of intersection are (1/e, 1), (1/e, e) and (1, 1).
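For what it's worth (not part of the original thread), once the region is sketched you can sanity-check it by reversing the order of integration over the same region:
$$\int _1 ^e \int _{1/e} ^{1/y} f \, dx \, dy = \int _{1/e} ^{1} \int _{1} ^{1/x} f \, dy \, dx$$
since for a fixed x in [1/e, 1] the region runs from y = 1 up to the hyperbola y = 1/x.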
|
2017-12-13 13:53:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7079971432685852, "perplexity": 2625.5687136329493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948523222.39/warc/CC-MAIN-20171213123757-20171213143757-00458.warc.gz"}
|
https://treningmedica.com.pl/241_molecular-distillation-xenon.html
|
## xenon | Definition, Properties, Atomic Mass, …
Xenon occurs in slight traces in gases within Earth and is present to an extent of about 0.0000086 percent, or about 1 part in 10 million by volume of dry air. Like several other noble gases, xenon is present in meteorites. Xenon is manufactured on a small scale by the fractional distillation of liquid air. It is the least volatile (boiling point, −108.0 °C [−162.4 °F]) of the noble gases …
## Molecular Distillation Xenon
Ramsay and his coworkers searched for related gases and by fractional distillation of liquid air discovered krypton, neon, and xenon, all in 1898. Radon was first identified in 1900 by German chemist Friedrich E. Dorn; it was established as a member of the noble-gas group in 1904. Rayleigh and Ramsay won Nobel Prizes in 1904 for their work.
## What Is Molecular Distillation? - WkieLab.com
Jul 08, 2021 · Molecular distillation is a type of short-path vacuum distillation, characterized by an extremely low vacuum pressure, 0.01 torr or below, which is performed using a molecular still. It is a process of separation, purification and concentration of natural products, complex and thermally sensitive molecules for example vitamins and polyunsaturated fatty acids.
## Molecular distillation - ScienceDirect
Molecular distillation is a special liquid-liquid separation technology operating under extremely high vacuum. It relies on the fact that molecules of different materials have different mean free paths, and achieves separation on that basis; the liquid is separated at a temperature below its boiling point, which makes the method especially suitable for the …
## Molecular Distillation — Vobis, LLC
Molecular Distillation. The Vobis pilot-scale wiped-film molecular distillation systems that we design and manufacture for clients are also called short-path evaporator systems. They are agitated-film evaporation systems that can be implemented with various rolling and wiper systems, and they have an internal condenser inside the evaporator whose …
## Distillation of liquid xenon to remove krypton
The molecular distillation system is a special liquid-liquid separation technology, which is different from the principle of traditional distillation relying on boiling point difference separation. This is a process of distilling and purifying heat-sensitive substances or high-boiling-point substances by using the differences in the free paths of the molecules of different substances …
## Distillation of Liquid Xenon to Remove Krypton – arXiv Vanity
Molecular distillation became an important industry during the past fifteen years, and has enabled many thermally sensitive materials to be prepared on a commercial scale, but the development of stills appears now to have reached a resting-point. After a short introduction, the author discusses in turn the various forms of molecular still existing at the present time, …
## Removing krypton from xenon by cryogenic …
The Vobis molecular distillation or short path distillation systems with evaporator heat transfer surface area to 100m2 that we design, construct and provide to clients are also called short path evaporator systems and are similar to our wiped film evaporation systems except that they have an internal condenser which is inside of the evaporator and whose condensing surfaces are …
## A cryogenic distillation column for the XENON1T experiment
May 01, 2009 · In the commercial production of xenon, krypton is removed by distillation or adsorption. But the commercially available xenon contains 10⁻⁹–10⁻⁶ mol/mol of krypton, because the removal starts from a Kr/Xe ratio of about 10 and such purity is enough for most applications of xenon. ⁸⁵Kr is a radioactive nucleus which decays into rubidium-85 with a …
## PandaX-4T cryogenic distillation system for removing krypton from xenon ...
Before entering the distillation apparatus, the xenon flows through a getter SPRG-100H-00030X (Taiyo Toyo Sanso Co.), which is able to decrease concentrations of N₂, CH₄, O₂, CO₂, and CO to below 1 × 10⁻¹⁰ mol/mol, and H₂ and H₂O below 5 × 10⁻¹⁰ mol/mol. The xenon then flows through a heat exchanger which pre-cools the xenon.
## Chemical Precipitation and Membrane Distillation Process …
May 02, 2017 · The distillation process as described in [12, 14] is based on the difference of vapour pressures of the two components in an ideal binary mixture, in our case krypton and xenon.Assuming a static liquid xenon reservoir in …
## BYJUS
Considering some samples drawn from the output of the XENON100 cryogenic distillation column, the krypton concentration was measured to be (0.95 ± 0.16) ppt, the purest xenon target ever employed ...
## Utilization of cationic microporous metal-organic framework …
Dec 20, 2021 · This distillation system is designed to reduce the concentration of krypton in commercial xenon from 5 × 10⁻⁷ to ∼10⁻¹⁴ mol/mol with 99% xenon collection efficiency at a maximum flow rate of 10 kg/h. The offline distillation operation has been completed and 5.75 tons of ultra-high purity xenon was produced, which is used as the ...
## Experimental determination of the xenon isotopic fractionation …
1 day ago · Chemical precipitation with NaOH, CaCO 3, and Ca(OH) 2 followed by membrane distillation was used to treat acidic wastewater from an anodized plating facility. The precipitation process achieved over 50% and 60% sulfate and conductivity removal, respectively. • The removal of conductivity, sulfate, aluminum, and iron with the MD process was generally 90% …
## Short Path Cannabinoid Distillation | Wiped Film Stills
Jun 03, 2022 · The separation of xenon/krypton (Xe/Kr) mixtures plays a vital role in the industrial process of manufacturing high-purity xenon. Compared with energy-intensive cryogenic distillation, porous materials based on physical adsorption are very promising in the low-cost and energy-saving separation processes. Herein, we show that a cationic metal-organic framework …
## Cryogenic system with GM cryocooler for krypton, xenon
Aug 14, 2013 · Assuming a Rayleigh distillation, for instance if surface sediments having adsorbed xenon would be continuously recycled into the mantle, does not solve the problem. In equation 1, the fraction f corresponding to a 20-fold depletion is 0.05 and the ratio R f /R i corresponding to a 3.5% per u is 1.035. Therefore, the fractionation factor α ...
## Xenon (Xe) - Discovery, Occurrence, Production, Properties …
Molecular distillation is a technology made exclusively for liquid-liquid separation. While traditional distillation does so with the boiling point difference separation principle, this method separates substances at the molecular level. This allows for a safer and purer technique of distillation. This method also produces a shorter residence ...
## Distillation of liquid xenon to remove krypton
Backed by over 50 years of industry processing experience: Pope Scientific’s Wiped-Film Short Path Molecular Stills are recognized as the premier machines in the marijuana industry for cannabis distillation, including hemp oil distillation and terpene recovery after extraction. We have led the way worldwide in high-vacuum molecular distillation and wiped-film evaporation …
## Xenon- the light at the end of the tunnel - Scientific Update - UK
Jan 29, 2014 · The intrinsic contamination of the xenon with radioactive {sup 85}Kr makes a significant background for these kinds of low count-rate experiments and has to be removed beforehand. This can be achieved by cryogenic distillation, a technique widely used in industry, using the different vapor pressures of krypton and xenon. In this paper, we ...
## Xenon - Sciencemadness Wiki
Nine naturally occurring isotopes of xenon include xenon-124, xenon-126, xenon-128, xenon-129, xenon-130, xenon-131, xenon-132, xenon-134, and xenon-136. Production Xenon is produced from the residues of liquefied air via fractional distillation, as it …
## How Molecular Distillation You Need - beautydebska.pl
Feb 11, 2019 · Xenon (Xe, Element 54) Xenon, from the Greek for ‘stranger’ is a colourless, odourless group 18 noble gas. Discovered in 1898 in London by William Ramsay, xenon is produced commercially by the fractional distillation of liquid air and is isolated as a by-product of the cryogenic production of oxygen and nitrogen.
## Rare Gases: Krypton and Xenon Applications
Xenon compounds rarely appear for sale at chemical suppliers, limiting our ability to explore the unique chemistry of this element. Xenon lamps are also a source, though a poor one. Isolation. Xenon can be isolated from the fractional distillation of air, but this process is too expensive for the amateur chemist to be feasible. Projects. Deep voice
## A microporous metal–organic framework with commensurate adsorption …
Jun 13, 2016 · The existing technology to remove these radioactive noble gases is a costly cryogenic distillation; alternatively, porous materials such as metal–organic frameworks have demonstrated the ability ...
## Chemical Precipitation and Membrane Distillation Process …
Cannabis Distillation Guide - HumBay Technology. Jan 21, 2020 Wiped film distillation is so far the most effective molecular distillation technique. Wiped film distillation is a generally compact. Such distillation process starts with the feed material within a flask being pumped into a heated evaporation column.
## Xenon Tetrafluoride (XeF4)-Chemical Compound - What's …
Dec 17, 2020 · Xenon is used in a wide variety of applications. It is a component of excimer laser mixes to produce certain wavelengths (282nm XeBr, 308nm XeCl, 351nm XeF). Xenon is used in radiation detectors, that are designed to measure X-rays and gamma-rays. It is used in sputter deposition, especially when depositing coatings with high molecular weights.
## Molecular Structure of Xenon Tetroxide, XeO4 - Semantic …
Feb 07, 2018 · Temperature dependent isotherms, adsorption kinetics experiments, single column breakthrough curves and molecular simulation studies collaboratively support the claim, underlining the potential of this material for energy and cost-effective removal of xenon from nuclear fuel reprocessing plants compared with cryogenic distillation.
## Utilization of cationic microporous metal-organic framework …
Xenon tetrafluoride (XeF4) is a crystalline compound that is normally colourless/white. It is composed of xenon (a noble gas) and fluoride (a naturally occurring mineral). XeF4 can be used to detect and analyse trace metals that contaminate silicone rubber. XeF4 is a chemical compound made up of Xenon and Fluorine atoms.
## Short Path Cannabinoid Distillation | Wiped Film Stills
The molecular structure of XeO4 has been investigated in the gas phase by electron diffraction. The data are completely compatible with the tetrahedral structure proposed from analysis of the infrared spectrum. Refinement of the structure by least squares based upon intensity functions, treating each distance and amplitude as independent parameters, yielded the results rXe–O = …
## Xenon (Xe) - Discovery, Occurrence, Production, Properties …
Dry, solid xenon trioxide, XeO₃, is extremely explosive—it will spontaneously detonate. Both XeF₆ and XeO₃ disproportionate in basic solution, producing xenon, oxygen, and salts of the perxenate ion, $\text{XeO}_6^{4-}$, in which xenon reaches its maximum oxidation state of 8+. Radon apparently forms RnF₂—evidence of this compound comes from radiochemical …
|
2022-07-04 12:37:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5415241122245789, "perplexity": 9135.00154691271}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104375714.75/warc/CC-MAIN-20220704111005-20220704141005-00553.warc.gz"}
|
https://www.physicsforums.com/threads/the-relation-between-the-natural-log-composed-with-hyperbolic-tangent-and-this-ratio.415560/
|
# The relation between the natural log composed with hyperbolic tangent and this ratio
1. Jul 12, 2010
### afbase
Hello,
Consider $$x \in (0,1)$$, that is x between 0 and 1. Can someone explain why the following is true:
$$\frac{x-1}{x+1} = \tanh \left( \ln \left( \frac{x}{2} \right) \right)$$
2. Jul 12, 2010
### Mute
Re: The relation between the natural log composed with hyperbolic tangent and this ra
It's not true. That equality doesn't hold. The correct expression is
$$\frac{x-1}{x+1} = \mbox{tanh}\left(\frac{\ln x}{2}\right)$$
This follows from the identity
$$\mbox{artanh}(x) = \frac{1}{2} \ln \left( \frac{1+x}{1-x}\right)$$
You can get from this to the other form by making the replacement $y = (1+x)/(1-x)$. To derive this identity, solve the following for w:
$$z = \mbox{tanh}(w) = \frac{e^w-e^{-w}}{e^w+e^{-w}}$$
3. Jul 12, 2010
### afbase
Re: The relation between the natural log composed with hyperbolic tangent and this ra
Ah thank you!
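A quick numerical confirmation of the corrected identity (numpy is my own choice here, not part of the thread):

```python
import numpy as np

# Check (x - 1)/(x + 1) == tanh(ln(x)/2) on (0, 1),
# and that the originally posted form tanh(ln(x/2)) is NOT equal to it.
x = np.linspace(0.01, 0.99, 99)

lhs       = (x - 1) / (x + 1)
correct   = np.tanh(np.log(x) / 2)
as_posted = np.tanh(np.log(x / 2))

print(np.max(np.abs(lhs - correct)))    # ~1e-16: identity holds
print(np.max(np.abs(lhs - as_posted)))  # O(1): the posted form differs
```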
|
2018-09-22 16:39:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7868264317512512, "perplexity": 830.8895258126174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158609.70/warc/CC-MAIN-20180922162437-20180922182837-00197.warc.gz"}
|
http://ask.learncbse.in/t/consider-a-closed-loop-c-in-a-magnetic-field-as-shown-in-the-figure/12747
|
# Consider a closed loop C in a magnetic field as shown in the figure
#1
Consider a closed loop C in a magnetic field as shown in the figure. The flux passing through the loop is defined by choosing a surface whose edge coincides with the loop and using the formula $\phi = \int \vec{B} \cdot d\vec{A}$. Now, if we choose two different surfaces having C as their edge, would we get the same answer for the flux? Justify your answer.
#2
The magnetic flux linked with a surface can be thought of as the number of magnetic field lines passing through it. So let $d\phi = B\,dA$ represent the field lines through an area element $dA$ in a field $B$.
Because magnetic field lines are continuous and cannot start or end in space, the number of lines passing through surface $S_1$
must be the same as the number passing through surface $S_2$. Therefore, in both cases we get the same answer for the flux.
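A small numerical illustration of this (my own addition; the divergence-free field and the two surfaces are arbitrary choices): the flux of a divergence-free field through a flat unit disk and through a hemisphere sharing the same boundary circle comes out the same.

```python
import numpy as np

def B(x, y, z):
    """A divergence-free field: (x, y, -2z) plus a uniform (0, 0, 1.5)."""
    return x, y, -2.0 * z + 1.5

n = 800
# Surface 1: flat unit disk in the plane z = 0, oriented along +z.
r   = (np.arange(n) + 0.5) / n                       # midpoints in [0, 1]
phi = (np.arange(n) + 0.5) / n * 2.0 * np.pi
dr, dphi = 1.0 / n, 2.0 * np.pi / n
R, PHI = np.meshgrid(r, phi, indexing="ij")
_, _, Bz = B(R * np.cos(PHI), R * np.sin(PHI), 0.0 * R)
flux_disk = np.sum(Bz * R) * dr * dphi               # dA = r dr dphi

# Surface 2: upper unit hemisphere with the same boundary circle,
# outward normal (x, y, z); dA = sin(theta) dtheta dphi.
th  = (np.arange(n) + 0.5) / n * (np.pi / 2)
dth = (np.pi / 2) / n
TH, PHI2 = np.meshgrid(th, phi, indexing="ij")
X, Y, Z = np.sin(TH) * np.cos(PHI2), np.sin(TH) * np.sin(PHI2), np.cos(TH)
Bx, By, Bz2 = B(X, Y, Z)
flux_hemi = np.sum((Bx * X + By * Y + Bz2 * Z) * np.sin(TH)) * dth * dphi

print(flux_disk, flux_hemi, 1.5 * np.pi)   # all three agree closely
```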
|
2018-10-19 22:03:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7284881472587585, "perplexity": 143.90275930850922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512460.41/warc/CC-MAIN-20181019212313-20181019233813-00399.warc.gz"}
|
http://physics.stackexchange.com/tags/electromagnetism/hot
|
# Tag Info
18
First, the strong force acts on scales where our classical idea of forces as something that obeys Newton's laws breaks down anyway. The proper description of the strong force is as a quantum field theory. On the level of quarks, this is a theory of gluons, but on scales of the nucleus, only a "residual strong force", the nuclear force remains, which can be ...
12
An "RF cage" is commonly used to keep signals IN as well as OUT (see for example the mesh door of a microwave oven.) The short answer is - reciprocity says "if it works in one direction, it works in the opposite direction". The induced charges on the sphere (in the case of a charge inside it) are just enough to cancel the field outside exactly - because ...
9
They are technically units for incommensurate quantities, but in practice this is often just a technicality. The magnetic field that makes sense ($B$) is measured in teslas (SI) or gauss (CGS), and the magnetic field that people spoke about 100 years ago ($H$) is measured in amps per meter (SI, also equivalent to a number of other things) or oersteds (CGS). ...
7
The radiation pattern of any dipole antenna looks similar to what you are showing in the 2D plots - but in your interpretation of the 3D pattern you have the axes wrong. A dipole antenna with the main axis vertical will transmit power in the horizontal plane, with less and less power as you go further away (inverse square law). If you measure the power as a ...
6
This is a relatively tricky one, because it involves the differences between the $\mathbf B$ field and the $\mathbf H$ field in the SI and CGS systems, and those relationships change in the different systems. In short: Oersteds are used to measure the $\mathbf H$ field in CGS units. Teslas are used to measure the $\mathbf B$ field in SI units. In the SI ...
5
Your existing answer talks about quark confinement, but stable nuclei can't really be described using quark and gluon degrees of freedom. Also your existing answer doesn't answer your title question: why don't nuclei collapse to a point? To first approximation, nuclei do collapse into a point. The diameter of a nucleus is typically about $10^{-5}$ the ...
4
From a quick google search, it seems that Oersteds are used for defining magnetic field strength and Teslas are used for defining magnetic field strength in terms of flux density. They seem to not really be meant to be converted between, though you technically can (as evidenced by the other answers here). This website and this website might be helpful to ...
3
Any electric charge would experience a force due to an electric field. Therefore, the electric field in electromagnetic waves produces currents in antennas. It happens all the time in wireless communication.
3
All objects and fields that have a nonzero mass, energy, or momentum interact gravitationally, and so do neutrinos – although they're very light and hard to produce so the gravitational force from any neutrinos we know is undetectable at this time. Neutrinos also have negligible but nonzero interactions with the electromagnetic field. They're uncharged and ...
3
The problem lies in what we learn about good old constrained dynamics from traditional Dirac approach is not complete and is somehow inconsistent, and the above is one example of this. This was the message of Pitts' paper mentioned in the question above, who reviewed a bunch of previous work on this very matter. I will mention couple of references from that ...
3
In general, it is not. Assume a constant current flowing through a cylindrical conductor. Applying Ampere's circuital law for a surface inside the cylinder: $$\oint \vec B \cdot d\vec l = \mu_0 \iint \vec J \cdot d\vec s$$ $$B\,2\pi r = \mu_0 J\,\frac{\pi r^2}{\pi a^2}$$ $$B = \frac{\mu_0}{2\pi a^2}\,J\,r$$
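A quick numeric companion to the excerpt above (my own sketch; it treats the J in the final expression as the total current I carried by the wire, and the current, radius, and grid are assumed values): the interior field B(r) = μ0 I r/(2π a²) grows linearly and matches the exterior field μ0 I/(2π r) at r = a.

```python
import numpy as np

mu0 = 4e-7 * np.pi        # vacuum permeability, T*m/A
I   = 10.0                # total current through the wire, A (assumed)
a   = 0.002               # wire radius, m (assumed)

def B_inside(r):
    # uniform current density: enclosed current scales as (r/a)**2
    return mu0 * I * r / (2.0 * np.pi * a**2)

def B_outside(r):
    return mu0 * I / (2.0 * np.pi * r)

for ri in np.linspace(1e-6, 3 * a, 7):
    B = B_inside(ri) if ri <= a else B_outside(ri)
    print(f"r = {ri:.4e} m   B = {B:.4e} T")

# the two expressions agree on the surface of the wire
print(np.isclose(B_inside(a), B_outside(a)))   # True
```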
2
1) It is necessary for a plane EM wave. If one assumes solutions to Maxwell's equations to be plane waves, it is not hard to show that $\vec B \cdot \vec E = 0$. Namely, take the third Maxwell's equation and dot both sides with $\vec E$: $$\nabla \times \vec E = - \frac{\partial \vec B}{\partial t}$$ $$i\vec k \times \vec E = i\omega \vec B \ldots$$
2
In stimulated emission in a laser the emitted radiation has the same phase (and hence direction) as the incident radiation. The mirrors select some of those for regeneration back through the amplifier, where the process continues, and intense radiation builds up between the mirrors. Some of the atoms will decay by spontaneous emission, and that radiation ...
2
So the total current supplied to your two solenoids will be 4 times larger. And that means that you are heating the supply lines or frying your circuit breakers.
2
If you have a single tube, the current will flow along it directly without making the $N$ loops. This results in a different direction, i.e. a different magnetic field, and that field will be much weaker. With the loops, the magnetic fields created by the individual loops add up. Actually, you use "the same current" $N$ times to produce the ...
2
Even a "perfect" Faraday cage does not block EM radiation. If a EM radiation hits a cage the incoming photons or get dissipated in the mesh and re-transmitted with longer wavelength to both sides of the mesh (in principle a sort of black body radiation), heating the inside and the surrounding of the cage, or some amount of photons are going through the mesh ...
2
https://en.wikipedia.org/wiki/Magnetic_monopole You are correct in your intuition that there would be a "Magnetic Current" if monopoles existed. Indeed it is fun to imagine a beautiful symmetry between the Electric Force and Magnetic Force, but alas, nature only gives us half of it. No, it isn't possible to simplify this relationship mathematically. It takes ...
1
According to Wikipedia In perfect conductors, the interior magnetic field must remain fixed but can have a zero or nonzero value. Require a constant magnetic flux - the magnetic flux within the perfect conductor must be constant with time. Any external field applied to a perfect conductor will have no effect on its internal field configuration. ...
1
The magnetic field is not 0, but the integral around the contour is. The reason you cannot apply Ampere's law in this case is that the magnetic field is not homogeneous along the circle
1
According to this website, A Birkeland current is a set of currents that flow along geomagnetic field lines connecting the Earth’s magnetosphere to the Earth's high latitude ionosphere. These then are driven by solar wind in the Earth's magnetosphere. Birkeland currents are caused by the movement of plasma perpendicular to the magnetic field. They ...
1
In the book "Plasmonics and Plasmonic Metamaterials: Analysis and Applications" edited by G. Shvets, Igor Tsukerman, we read in section 2.1: In other words - they clearly state that the enhanced reflectivity is a result of the presence of a inverted dye - that is, a dye with a population inversion, meaning that it can be subject to stimulated emission. ...
1
Once in, it won't come out; it just becomes part of the BH, adding mass, charge (plus or minus), and probably angular momentum to the BH. When something gets inside the horizon, it can never escape out of any side of the BH. There is another process where one can use charged particles to extract energy from the electric energy of the BH. This is ...
1
No. The reason: Below the event horizon, there is no outgoing timelike direction. It means, it doesn't matter how do you accelerate a particle, it will move inwardly: The cones are the light cones of the object, what means if it would send out a radio signal in every direction, it would go in these cones. The best reachable orbit (-> the most delayed ...
1
A neutrino is thought to interact only through the weak force and gravity. They interact primarily, though, through the weak force (perhaps explaining the Martin/Shaw comment). Interestingly, since the neutrino has a minuscule mass (as opposed to none at all), it could have a tiny neutrino magnetic moment, therefore allowing the possibility that it could ...
1
You understand that mechanical devices such as levers, gears, springs and pulleys all conserve energy. Do you think that some elaborate combination of such devices can violate conservation of energy? The same applies if magnets are included - we know that interactions between magnets conserve energy, so any combination of mechanical devices and magnets also ...
1
Google mathematical methods in the physical sciences pdf and you will be able to download an ebook by Mary Boas, which was written for people like yourself. As Jacob says above, calculus is a must-learn, and lots of websites give you examples of different levels of calculus problems. Conceptually, a good textbook is Halliday and Resnick's Physics, which sets ...
1
The statement "On the cylindrical surface $\mathbf{J}\cdot\hat{n}=0$..." refers to just inside the wire so $\sigma\neq0$ and $\mathbf{E}$ cannot be whatever it wants.
1
The problem with this question is all of the assumptions that go into it. When we're taught physics, we are given analogies that help our understanding, but mislead us when we try to dig deeper. Firstly, charged particles like electrons are always surrounded by an electromagnetic field. Changes in that field propagate through space at the speed of light and ...
1
I would like to add that if we do not consider the elementary particles but think of those charged spheres made of metal, they can actually break. If you keep on removing electrons from a material block and protect the discharge from the neighboring atmosphere, after a stage the repulsion among the like charges become stronger than their cohesive force of ...
1
If the charges are moving at (near) c relative to a given reference frame (there is no mention of a reference frame in the question, but there must be one, otherwise we wouldn't know there is any - inertial - movement whatsoever), but they are at rest to each other, then according to SR postulates we may as well assume that they are simply not moving at all. ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
2016-07-28 02:54:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6813896894454956, "perplexity": 398.17441067755055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827782.42/warc/CC-MAIN-20160723071027-00076-ip-10-185-27-174.ec2.internal.warc.gz"}
|
https://www.transtutors.com/questions/a-let-b-be-any-vector-prove-that-the-matrix-b-a-vbt-also-has-v-as-an-eigenvector-no-3986820.htm
|
# (a) Let b be any vector. Prove that the matrix B = A − vb^T also has v as an eigenvector, now with...
(a) Let b be any vector. Prove that the matrix B = A − vb^T also has v as an eigenvector, now with eigenvalue λ − β where β = v • b.
(b) Prove that if μ ≠ λ – β is any other eigenvalue of A, then it is also an eigenvalue of B.
(c) Given a nonsingular matrix A with eigenvalues λ1, λ2, … , λn and λ1 ≠ λj, j ≥ 2, explain how to construct a deflated matrix B whose eigenvalues are 0, λ2, …, λn.
(d) Try out your method on the matrices
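Not the textbook's solution, but one standard choice for part (c) is to take b = λ1 v with v normalized, so that β = v • b = λ1 and the dominant eigenvalue is sent to 0. A short numpy sketch; the random symmetric test matrix stands in for the matrices of part (d), which are not reproduced here.

```python
import numpy as np

def deflate(A, eigval, eigvec):
    """Deflation sketch: B = A - v b^T with b chosen so that v . b = eigval,
    sending the eigenvalue `eigval` to 0 while (per parts (a)-(b)) leaving
    the remaining eigenvalues unchanged."""
    v = eigvec / np.linalg.norm(eigvec)
    b = eigval * v                      # then beta = v . b = eigval
    return A - np.outer(v, b)

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
A = A + A.T                             # symmetric, so eigenvectors are real

w, V = np.linalg.eigh(A)
i = np.argmax(np.abs(w))                # deflate the dominant eigenvalue
B = deflate(A, w[i], V[:, i])

print(np.sort(w))                            # spectrum of A
print(np.sort(np.linalg.eigvals(B).real))    # same, but w[i] replaced by 0
```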
|
2021-06-15 22:55:18
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8694129586219788, "perplexity": 1390.4379756441838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621627.41/warc/CC-MAIN-20210615211046-20210616001046-00205.warc.gz"}
|
https://testbook.com/question-answer/a-man-can-row-at-the-speed-of-10-kmh-in-still-wat--5fb7a1ddfc52bb505d4a9714
|
# A man can row at the speed of 10 km/h in still water. If the man takes 1 hour to row to a place and back, in a river flowing at the speed of 3 km/h, then, calculate the distance he travels.
This question was previously asked in
MP Police Constable Previous Paper 13 (Held On: 26 Aug 2017 Shift 1)
1. 2.3 km
2. 4.34 km
3. 9.1 km
4. 4.55 km
Option 3 : 9.1 km
## Detailed Solution
Given:
A man can row at the speed of 10 km/h in still water. If the man takes 1 hour to row to a place and back, in a river flowing at the speed of 3 km/h.
Concept used:
Boat and Stream
Calculation:
Let the speed of the boat in still water be x
Let the speed of the river flow (stream) be y
Speed of boat Downstream = (x + y) km/hr
Speed of boat Upstream = (x - y) km/hr
Downstream = 10 + 3 = 13 km/hr
Upstream = 10 - 3 = 7 km/hr
Time = $$\frac{\text{Distance}}{\text{Speed}}$$
Let the Distance covered be D
⇒ $$\frac{D}{{13}} + \frac{D}{7} = 1$$
⇒ $$\frac{{20D}}{{91}} = 1$$
⇒ D = $$\frac{{91}}{{20}}$$
Distance travelled is 4.55 km × 2 = 9.1 km.
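A short sympy check of the arithmetic above (my own addition for illustration):

```python
from sympy import symbols, solve

D = symbols("D", positive=True)          # one-way distance in km
downstream, upstream = 10 + 3, 10 - 3    # speeds in km/h

d = solve(D / downstream + D / upstream - 1, D)[0]   # total time = 1 hour
print(d, float(d))          # 91/20 = 4.55 km one way
print(2 * d, float(2 * d))  # 91/10 = 9.1 km total distance travelled
```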
|
2022-01-27 08:05:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2602227032184601, "perplexity": 2436.598977454017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305242.48/warc/CC-MAIN-20220127072916-20220127102916-00534.warc.gz"}
|
http://math.stackexchange.com/questions/262288/functionals-defined-on-curves
|
# Functionals defined on curves
I am looking for classical (real) functionals defined on rectifiable curves (neither necessarily simple nor closed) in either $\mathbb{R}^2$ or $\mathbb{R}^3$.
There's length, turning number, total curvature, mean curvature, ... What else?
This question is quite loosely stated, I'm sorry, but I'm just in a pre-hunting phase, looking for (real) numbers associated to curves :)
Thanks!
Energy is pretty classical, but very similar to length though. – wspin Dec 19 '12 at 21:28
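Since the question just asks for real numbers attached to curves, here is a small numerical sketch (my own illustration, not from the thread) evaluating two of the functionals already named, length and turning number, on a discretized ellipse:

```python
import numpy as np

# Discretize a closed planar curve (an ellipse, as an example) and evaluate
# two functionals: arc length and turning number.
t = np.linspace(0.0, 2.0 * np.pi, 2001)[:-1]     # closed, no duplicate point
x, y = 2.0 * np.cos(t), 1.0 * np.sin(t)

dx, dy = np.diff(np.r_[x, x[0]]), np.diff(np.r_[y, y[0]])
length = np.sum(np.hypot(dx, dy))                # approximates arc length

# turning number = (total signed turning angle) / (2*pi)
angles = np.arctan2(dy, dx)
turn = np.sum((np.diff(np.r_[angles, angles[0]]) + np.pi) % (2 * np.pi) - np.pi)

print(length)              # ~9.688, the ellipse perimeter
print(turn / (2 * np.pi))  # ~1.0, the turning number
```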
|
2014-03-08 11:45:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7311171293258667, "perplexity": 2715.710433400462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999654396/warc/CC-MAIN-20140305060734-00022-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://tug.org/pipermail/xetex/2010-July/017502.html
|
[XeTeX] xunicode and TIPA
Ulrike Fischer news3 at nililand.de
Wed Jul 28 20:03:53 CEST 2010
Am Wed, 28 Jul 2010 11:25:30 -0400 schrieb Alan Munn:
> Hi
>
> The xunicode package provides a textipa command which recognizes (most
> of?) the commands from the tipa package. This is very useful, since
> it allows one to convert legacy documents containing IPA to xelatex
> with minimal trouble. However, the tipa package also provided an IPA
> environment. This is not supplied by xunicode. Is there a way to
> emulate that too? Here's a minimal document:
>
> I know the definition for the IPA environment isn't correct; what I
> want is characters inside that environment to be interpreted in the
> same way that they are within the \textipa command provided by xunicode.
You must use a name with small letters for your environment (tipa
activates the others)
\documentclass{article}
\usepackage{xltxtra}
\newfontfamily{\ipafont}{Doulos SIL}
\def\useTIPAfont{\ipafont}
\newenvironment{ipa}{%
\let\stone\TIPAstonebar
\let\tone\TIPAtonebar
\setTIPAcatcodes\activatetipa
\csname useTIPAfont\endcsname
}{}
\begin{document}
\textipa{RPAQIOE} % This will give you correct phonetic characters
\begin{ipa}
RPAQIOE
\end{ipa}
\end{document}
--
Ulrike Fischer
|
2023-03-31 16:00:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9103843569755554, "perplexity": 10897.038264050754}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949644.27/warc/CC-MAIN-20230331144941-20230331174941-00485.warc.gz"}
|
https://zbmath.org/?q=ci%3A5778485
|
## Collected papers III. 1988–2012. Edited by Joachim Schwermer, Silke Wimmer-Zagier and Don Zagier. (Gesammelte Abhandlungen III. 1988–2012.)(English)Zbl 1442.01030
Cham: Springer (ISBN 978-3-030-02915-9/hbk). xv, 659 p. (2019).
The third volume of Friedrich Hirzebruch’s collected papers (for Volumes I and II see [Zbl 0627.01044; Reprint: Zbl 1281.01012]) contains manuscripts published between 1988 and 2012; of these the following have been reviewed: [Zbl 0667.32009; Zbl 0697.94010; Zbl 0679.14006; Zbl 0712.57010; Zbl 0743.00019; Zbl 0752.57013; Zbl 0767.57014; Zbl 1453.14001; Zbl 0966.01503; Zbl 1288.01019; Zbl 0994.01014; Zbl 0928.01012; Zbl 0938.57025; Zbl 0972.14014; Zbl 1061.58022; Zbl 1052.01012; Zbl 1187.14009; Zbl 1156.01340; Zbl 1244.01033; Zbl 1255.11004; Zbl 1171.57300; Zbl 1209.01018; Zbl 1195.01029].
In addition, this volume contains the opening address of the meeting of the DMV in 1990, an article Kombinatorik in der Geometrie on the occurrence of Euler polynomials in geometry, a review of P. Deligne and G. D. Mostow [Commensurabilities among lattices in $$\text{PU}(1,n)$$. Princeton, NJ: Princeton University Press (1993; Zbl 0826.22011)], the handwritten abstract of a talk on Hilbert modular surfaces and the icosahedron in Kyoto 1996, a talk at the MFO on Chern characteristic classes in topology and algebraic geometry in 2009, memories of Henri Cartan 1904–2008 published in the Notices of the AMS in 2010, and Why do I like Chern, and why do I like Chern classes? from the same journal in 2011. The last part collects various addresses, beginning with Hirzebruch’s laudatio for Jacques Tits, a section with short comments on the papers in this volume, an article by Don Zagier on the Life and work of Friedrich Hirzebruch, an interview with Hirzebruch published in the newsletter of the EMS (1998), and a list of publications.
### MSC:
01A75 Collected or selected works; reprintings or translations of classics
14-XX Algebraic geometry
55-XX Algebraic topology
54-XX General topology
57-XX Manifolds and cell complexes
58-XX Global analysis, analysis on manifolds
|
2022-09-29 23:14:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39574337005615234, "perplexity": 2491.747242363616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00531.warc.gz"}
|
https://www.quizover.com/online/course/1-3-simple-harmonic-motion-a-special-periodic-motion-by-openstax?page=3
|
# 1.3 Simple harmonic motion: a special periodic motion (Page 4/7)
## Conceptual questions
What conditions must be met to produce simple harmonic motion?
(a) If frequency is not constant for some oscillation, can the oscillation be simple harmonic motion?
(b) Can you think of any examples of harmonic motion where the frequency may depend on the amplitude?
Give an example of a simple harmonic oscillator, specifically noting how its frequency is independent of amplitude.
Explain why you expect an object made of a stiff material to vibrate at a higher frequency than a similar object made of a spongy material.
As you pass a freight truck with a trailer on a highway, you notice that its trailer is bouncing up and down slowly. Is it more likely that the trailer is heavily loaded or nearly empty? Explain your answer.
Some people modify cars to be much closer to the ground than when manufactured. Should they install stiffer springs? Explain your answer.
## Problems&Exercises
A type of cuckoo clock keeps time by having a mass bouncing on a spring, usually something cute like a cherub in a chair. What force constant is needed to produce a period of 0.500 s for a 0.0150-kg mass?
2.37 N/m
If the spring constant of a simple harmonic oscillator is doubled, by what factor will the mass of the system need to change in order for the frequency of the motion to remain the same?
A 0.500-kg mass suspended from a spring oscillates with a period of 1.50 s. How much mass must be added to the object to change the period to 2.00 s?
0.389 kg
By how much leeway (both percentage and mass) would you have in the selection of the mass of the object in the previous problem if you did not wish the new period to be greater than 2.01 s or less than 1.99 s?
Suppose you attach the object with mass $m$ to a vertical spring originally at rest, and let it bounce up and down. You release the object from rest at the spring’s original rest length. (a) Show that the spring exerts an upward force of $2.00\,mg$ on the object at its lowest point. (b) If the spring has a force constant of $10.0\ \text{N/m}$ and a 0.25-kg-mass object is set in motion as described, find the amplitude of the oscillations. (c) Find the maximum velocity.
A diver on a diving board is undergoing simple harmonic motion. Her mass is 55.0 kg and the period of her motion is 0.800 s. The next diver is a male whose period of simple harmonic oscillation is 1.05 s. What is his mass if the mass of the board is negligible?
94.7 kg
Suppose a diving board with no one on it bounces up and down in a simple harmonic motion with a frequency of 4.00 Hz. The board has an effective mass of 10.0 kg. What is the frequency of the simple harmonic motion of a 75.0-kg diver on the board?
The device pictured in [link] entertains infants while keeping them from wandering. The child bounces in a harness suspended from a door frame by a spring.
(a) If the spring stretches 0.250 m while supporting an 8.0-kg child, what is its spring constant?
(b) What is the time for one complete bounce of this child? (c) What is the child’s maximum velocity if the amplitude of her bounce is 0.200 m?
A 90.0-kg skydiver hanging from a parachute bounces up and down with a period of 1.50 s. What is the new period of oscillation when a second skydiver, whose mass is 60.0 kg, hangs from the legs of the first, as seen in [link] .
1.94 s
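The numerical answers quoted above follow from T = 2π√(m/k); a short check (my own addition, not part of the original exercises):

```python
import numpy as np

# Cuckoo clock: period 0.500 s with m = 0.0150 kg
k = 4 * np.pi**2 * 0.0150 / 0.500**2
print(k)                                     # ~2.37 N/m

# Mass on a spring: m = 0.500 kg, T = 1.50 s; added mass for T = 2.00 s
k2 = 4 * np.pi**2 * 0.500 / 1.50**2
m_new = k2 * 2.00**2 / (4 * np.pi**2)
print(m_new - 0.500)                         # ~0.389 kg

# Divers on the same board: m2 = m1 * (T2/T1)**2
print(55.0 * (1.05 / 0.800)**2)              # ~94.7 kg

# Skydivers: T2 = T1 * sqrt((m1 + m2)/m1)
print(1.50 * np.sqrt((90.0 + 60.0) / 90.0))  # ~1.94 s
```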
#### Questions & Answers
how to know photocatalytic properties of tio2 nanoparticles...what to do now
it is a goid question and i want to know the answer as well
Maciej
Do somebody tell me a best nano engineering book for beginners?
what is fullerene does it is used to make bukky balls
are you nano engineer ?
s.
what is the Synthesis, properties,and applications of carbon nano chemistry
Mostly, they use nano carbon for electronics and for materials to be strengthened.
Virgil
is Bucky paper clear?
CYNTHIA
so some one know about replacing silicon atom with phosphorous in semiconductors device?
Yeah, it is a pain to say the least. You basically have to heat the substarte up to around 1000 degrees celcius then pass phosphene gas over top of it, which is explosive and toxic by the way, under very low pressure.
Harper
Do you know which machine is used to that process?
s.
how to fabricate graphene ink ?
for screen printed electrodes ?
SUYASH
What is lattice structure?
of graphene you mean?
Ebrahim
or in general
Ebrahim
in general
s.
Graphene has a hexagonal structure
tahir
On having this app for quite a bit time, Haven't realised there's a chat room in it.
Cied
what is biological synthesis of nanoparticles
what's the easiest and fastest way to the synthesize AgNP?
China
Cied
types of nano material
I start with an easy one. carbon nanotubes woven into a long filament like a string
Porter
many many of nanotubes
Porter
what is the k.e before it land
Yasmin
what is the function of carbon nanotubes?
Cesar
I'm interested in nanotube
Uday
what is nanomaterials and their applications of sensors.
what is nano technology
what is system testing?
preparation of nanomaterial
Yes, Nanotechnology has a very fast field of applications and their is always something new to do with it...
what is system testing
what is the application of nanotechnology?
Stotaw
In this morden time nanotechnology used in many field . 1-Electronics-manufacturad IC ,RAM,MRAM,solar panel etc 2-Helth and Medical-Nanomedicine,Drug Dilivery for cancer treatment etc 3- Atomobile -MEMS, Coating on car etc. and may other field for details you can check at Google
Azam
anybody can imagine what will be happen after 100 years from now in nano tech world
Prasenjit
after 100 year this will be not nanotechnology maybe this technology name will be change . maybe aftet 100 year . we work on electron lable practically about its properties and behaviour by the different instruments
Azam
name doesn't matter , whatever it will be change... I'm taking about effect on circumstances of the microscopic world
Prasenjit
how hard could it be to apply nanotechnology against viral infections such HIV or Ebola?
Damian
silver nanoparticles could handle the job?
Damian
not now but maybe in future only AgNP maybe any other nanomaterials
Azam
Hello
Uday
I'm interested in Nanotube
Uday
this technology will not going on for the long time , so I'm thinking about femtotechnology 10^-15
Prasenjit
can nanotechnology change the direction of the face of the world
how did you get the value of 2000N.What calculations are needed to arrive at it
Privacy Information Security Software Version 1.1a
Good
Berger describes sociologists as concerned with
Got questions? Join the online conversation and get instant answers!
|
2018-10-16 06:35:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.434153288602829, "perplexity": 1644.2208417399822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510019.12/warc/CC-MAIN-20181016051435-20181016072935-00393.warc.gz"}
|
https://retype.com/components/math-formulas/
|
# #Math formulas
Retype supports rendering math formulas written according to the LaTeX grammar. Internally, Retype is powered by \KaTeX and supports all syntax of the library.
Math equations can be rendered inline by wrapping the equation in `$` characters, or as separate blocks by fencing the equation in `$$` characters.

## #Inline syntax

An inline math equation is wrapped in `$` characters.

Inline formula: $\displaystyle \left( \sum_{k=1}^n a_k b_k \right)^2 \leq \left( \sum_{k=1}^n a_k^2 \right) \left( \sum_{k=1}^n b_k^2 \right)$

This formula $\displaystyle \left( \sum_{k=1}^n a_k b_k \right)^2 \leq \left( \sum_{k=1}^n a_k^2 \right) \left( \sum_{k=1}^n b_k^2 \right)$ is inlined with text.

## #Block syntax

A block math equation is wrapped with `$$` characters. Block equations are center aligned when rendered to the finished page.

The multiline formula block:

$$
\displaystyle {1 + \frac{q^2}{(1-q)}+\frac{q^6}{(1-q)(1-q^2)}+\cdots }= \prod_{j=0}^{\infty}\frac{1}{(1-q^{5j+2})(1-q^{5j+3})}, \quad\quad \text{for }\lvert q\rvert<1.
$$

## #LaTeX code highlighting

Math formula blocks can benefit from syntax highlighting by adding the `latex` language specifier to code blocks.

Demo:

$$
\bigg\{ \;\mathbb{F}[x]\text{-modules } V\; \bigg\} \longleftrightarrow \bigg\{ \substack{\text{$\mathbb{F}$-vector spaces $V$ with a} \\ \text{linear map $T : V \rightarrow V$}} \bigg\}
$$

Source (a `latex` code block):

    \bigg\{ \;\mathbb{F}[x]\text{-modules } V\; \bigg\} \longleftrightarrow \bigg\{ \substack{\text{$\mathbb{F}$-vector spaces $V$ with a} \\ \text{linear map $T : V \rightarrow V$}} \bigg\}
|
2023-01-31 04:54:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9328234195709229, "perplexity": 10592.96525039638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499842.81/warc/CC-MAIN-20230131023947-20230131053947-00119.warc.gz"}
|
https://chemistry.stackexchange.com/questions/154261/g-tensor-and-hyperfine-tensor-for-nitroxide-spin-label-with-15n
|
# g-tensor and hyperfine tensor for nitroxide spin label with 15N
Does anyone know where I can find numerical data for the $$g$$ tensor and hyperfine tensor for a nitroxide spin label where the nitrogen-14 atom has been replaced by a nitrogen-15 atom?
I'm really getting lost in the literature. I come from outside the field of biological EPR so it's very difficult to find the information I need from the plots and the jargon.
• Some data here might be of use - it's actually a good paper with a large theoretical component and you can at least get ballpark numbers for $g$, A, and D. Jul 23, 2021 at 13:40
• This could certainly be useful for getting familiar with the nomenclature, thank you. However, I don't think it discusses the case of 15N, unless I'm mistaken. Maybe the numbers are not too different from 14N, but I don't know.
– Ben
Jul 23, 2021 at 13:57
|
2023-03-28 21:10:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7460786700248718, "perplexity": 598.8095233800966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948871.42/warc/CC-MAIN-20230328201715-20230328231715-00208.warc.gz"}
|
https://socratic.org/questions/what-is-the-chemical-formula-of-baking-soda
|
# What is the chemical formula of baking soda?
Baking soda is largely $\text{NaHCO}_3$; some other salts are present.
|
2020-01-20 12:14:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5649867653846741, "perplexity": 5310.0020563129465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598726.39/warc/CC-MAIN-20200120110422-20200120134422-00211.warc.gz"}
|
https://cloud.r-project.org/web/packages/CVXR/vignettes/version1.html
|
## What is new?
Version 1.0 is a major upgrade. The improvements include:
• A new reduction framework that automatically chooses the most appropriate solver for a problem
• Addition of a large number of solvers using native R interfaces
• Facilities for geometric programming
• Improvements in speed.
## What has changed?
Implementation changes include:
• Default QP solver: now OSQP, which means that some results will differ slightly from previous runs. However, ECOS can be explicitly specified if exact replication of old results is desired
• Strict inequalities are not allowed in constraints.
Syntax changes (to match cvxpy 1.x) include:
• Int(m, n) is now Variable(m, n, integer = TRUE)
• Bool(m, n) is now Variable(m, n, boolean = TRUE)
• Semidef(n, n) is now Variable(n, n, PSD = TRUE)
and so on. Details may be found on the CVXR Website.
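As a quick orientation, here is a minimal sketch of the 1.0-style syntax; it is not taken from the vignette, and the variable names and toy problem are illustrative:

```r
library(CVXR)

x  <- Variable(5)                      # continuous vector variable
xi <- Variable(3, 2, integer = TRUE)   # formerly Int(3, 2)
xb <- Variable(3, 2, boolean = TRUE)   # formerly Bool(3, 2)
S  <- Variable(4, 4, PSD = TRUE)       # formerly Semidef(4, 4)

# Only non-strict inequalities are allowed in constraints.
prob <- Problem(Minimize(sum_squares(x - 1:5)), list(x >= 0))

res_default <- solve(prob)                   # solver chosen by the reduction framework (typically OSQP for a QP like this)
res_ecos    <- solve(prob, solver = "ECOS")  # force ECOS to replicate pre-1.0 results
res_ecos$getValue(x)
```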
|
2020-09-24 04:51:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5059220790863037, "perplexity": 9004.576221052857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400213454.52/warc/CC-MAIN-20200924034208-20200924064208-00043.warc.gz"}
|
http://www.cjmenet.com.cn/CN/Y1994/V30/I5/47
|
• CN:11-2187/TH
• ISSN:0577-6686
›› 1994, Vol. 30 ›› Issue (5): 47-49.
• Paper •
### New Theory of Stretching a Blank with a Rectangular Cross Section Between Flat Platens
1. Northeast Heavy Machinery Institute
• Published: 1994-09-01
### NEW THEORY OF STRETCHING A BLANK WITH A RECTANGULAR CROSS SECTION BETWEEN FLAT PLATENS
Liu Zhubai
1. Northeast Heavy Machinery Institute
• Published:1994-09-01
Abstract: With respect to the theory of stretching technology, it is demonstrated that a single technological parameter, the tool width ratio, is inadequate; only when a second technological parameter, the blank width ratio, is added can the stress state in the center zone of the stretched blank be correctly described and the quality of the forging be effectively controlled. This argument has been verified by experiments. The stretching technology in which the two technological parameters, blank width ratio and tool width ratio, are used simultaneously to control the internal quality of the forging is called LZ technology.
|
2021-11-28 15:14:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29609885811805725, "perplexity": 7978.593815924661}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358560.75/warc/CC-MAIN-20211128134516-20211128164516-00466.warc.gz"}
|
https://socratic.org/questions/59e8aaec11ef6b0be000a7e5#492376
|
# Question 0a7e5
Oct 19, 2017
Let's have a look.
#### Explanation:
Let the length be $x$ and the breadth be $y$.
Given perimeter, $P = 200$.
$\therefore P = 2 x + 2 y$
$\therefore 2 x + 2 y = 200$
$\therefore x + y = 100$..........(1).
Now let the area be $A$.
Hence, $A = x \cdot y$
Now, please note that
$4 x y = {\left(x + y\right)}^{2} - {\left(x - y\right)}^{2}$
$\therefore 4 x y = {\left(100\right)}^{2} - {\left(x - y\right)}^{2}$
$\therefore x y = \frac{10000}{4} - {\left(x - y\right)}^{2} / 4$..........From (1).
$\therefore A = 2500 - {\left(x - y\right)}^{2} / 4$.
Now, for the area $\left(A\right)$ to be maximum, ${\left(x - y\right)}^{2}$ should be minimum.
Since ${\left(x - y\right)}^{2} \ge 0$, its minimum value is zero, which is possible only if $x = y$.
Hence, a rectangle has maximum area only if it is a square.
Thus, $x = y = \frac{100}{2}$
$\therefore x = y = 50$.
Therefore the dimensions of the field should be $50 m , 50 m$. (Answer).
Oct 19, 2017
A square, 50 m on a side...
#### Explanation:
...There is a calculus derivation you can do to prove this (included at the end). This problem is filed under algebra, so you may or may not be able to follow along with that, so here's just a quick observation first:
You could fence off a rectangle of width 10, length 90, and that would have a perimeter length of 200, which would use up all your fencing. This would have an area of 900 square meters.
A square of 50 m on a side would also have perimeter 200, but would have area of 2500 square meters.
Incidentally - the problem statement as given says you have a rectangular field, but does not explicitly state that you are restricted to fencing off a rectangular portion of it.
So if you could cheat by laying out your 200 m of fencing in a circle, this would have radius 31.8 meters (the radius is $\frac{200}{2 \pi}$), and would have an area of about 3177 sq. meters, which would be the actual maximum fenced-in area.
Now, calculus: Your rectangular field will have dimensions $l$ and $w$, such that:
$2 l + 2 w = 200$ (therefore, $l + w = 100$)
and $l \cdot w = A$
We can eliminate one variable: $l = 100 - w$
So area $A$ is $\left(100 - w\right) w$
$A = 100 w - {w}^{2}$
We know that A will be at a maximum/minimum when the first derivative of this function is zero:
$\frac{\mathrm{dA}}{\mathrm{dw}} = 100 - 2 w = 0$
$100 = 2 w$
$w = \frac{100}{2} = 50$ so $l = 50 , w = 50$
And we can show that the rectangle with these dimensions has the maximum area by taking the 2nd derivative:
$\frac{{d}^{2} A}{{\mathrm{dw}}^{2}} = - 2$
Since this value is negative, we know that area A is at a maximum with l = w = 50.
GOOD LUCK
|
2022-08-16 03:25:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 35, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8494644165039062, "perplexity": 629.0285056513336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572220.19/warc/CC-MAIN-20220816030218-20220816060218-00112.warc.gz"}
|
https://paperswithcode.com/paper/a-note-on-the-quasiconvex-jensen-divergences
|
# A note on the quasiconvex Jensen divergences and the quasiconvex Bregman divergences derived thereof
We first introduce the class of strictly quasiconvex and strictly quasiconcave Jensen divergences which are oriented (asymmetric) distances, and study some of their properties. We then define the strictly quasiconvex Bregman divergences as the limit case of scaled and skewed quasiconvex Jensen divergences, and report a simple closed-form formula which shows that these divergences are only pseudo-divergences at countably many inflection points of the generators...
|
2020-05-28 23:41:32
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9155661463737488, "perplexity": 982.8816693440743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347401004.26/warc/CC-MAIN-20200528232803-20200529022803-00046.warc.gz"}
|
http://cda.psych.uiuc.edu/latex_class_2014/wp.html
|
# Word Processors: Stupid and Inefficient
### Allin Cottrell
April 2003: German translation
June 2003: Spanish translation
April 2004: French translation
April 2005: Hebrew translation
May 2005: Korean translation (HTML or PDF)
### Contents
1 The claim
2 Printed documents
2.1 Composition versus typesetting
2.2 The evils of WYSIWYG
2.3 Document structure
2.4 Text editors
2.5 The virtues of ASCII
2.6 The typesetter
2.7 Putting it together
3 Digital dissemination
3.1 Simple documents
3.2 Complex documents
4 Qualification
5 Rant, rant
6 References
## 1 The claim
The word processor is a stupid and grossly inefficient tool for preparing text for communication with others. That is the claim I shall defend below. It will probably strike you as bizarre at first sight. If I am against word processors, what do I propose: that we write in longhand, or use a mechanical typewriter? No. While there are things to be said in favor of these modes of text preparation I take it for granted that most readers of this essay will do most of their writing using a computer, as I do. My claim is that there are much better ways of preparing text, using a computer, than the word processor.
The wording of my claim is intended to be provocative, but let me be clear: when I say word processors are stupid I am not saying that you, if you are a user of a word processor, are stupid. I am castigating a technology, but one that is assiduously promoted by the major software vendors, and that has become a de facto standard of sorts. Unless you happen to have been in the right place at the right time, you are likely unaware of the existence of alternatives. The alternatives are not promoted by the major vendors, for good reason: as we shall see, they are available for free.
Let's begin by working back from the end product. Text that is designed to communicate ideas and information to others is disseminated in two main ways:
1. As "hard copy", that is, in the form of traditional printed documents.
2. By digital means: electronic mail, web pages, documents designed to be viewable on screen.
There is some overlap here. For instance, a document that is intended for printing may be distributed in digital form, in the hope that the recipient has the means to print the file in question. But let us consider these two modes of dissemination in turn.
## 2 Printed documents
You want to type a document at your computer keyboard, and have it appear in nicely printed form at your computer's printer. Naturally you don't want this to happen in real time (the material appearing at the printer as you type). You want to type the document first and "save" it in digital form on some storage medium. You want to be able to retrieve the document and edit it at will, and to send it to the printer when the time is right. Surely a word processor (such as the market leader, Microsoft Word) is the "natural" way to do all this? Well, it's one way, but not the best. Why not?
### 2.1 Composition versus typesetting
Preparing printable text using a word processor effectively forces you to conflate two tasks that are conceptually distinct and that, to ensure that people's time is used most effectively and that the final communication is most effective, ought also to be kept practically distinct. The two tasks are
1. The composition of the text itself. By this I mean the actual choice of words to express one's ideas, and the logical structuring of the text. The latter may take various forms depending on the nature of the document. It includes matters such as the division of the text into paragraphs, sections or chapters, the choice of whether certain material will appear as footnotes or in the main text, the adding of special emphasis to certain portions of the text, the representation of some pieces of text as block quotations rather than as the author's own words, and so on.
2. The typesetting of the document. This refers to matters such as the choice of the font family in which the text is to be printed, and the way in which structural elements will be visually represented. Should section headings be in bold face or small capitals? Should they be flush left or centered? Should the text be justified or not? Should the notes appear at the foot of the page or at the end? Should the text be set in one column or two? And so on.
The author of a text should, at least in the first instance, concentrate entirely on the first of these sets of tasks. That is the author's business. Adam Smith famously pointed out the great benefits that flow from the division of labor. Composition and logical structuring of text is the author's specific contribution to the production of a printed text. Typesetting is the typesetter's business. This division of labour was of course fulfilled in the traditional production of books and articles in the pre-computer age. The author wrote, and indicated to the publisher the logical structure of the text by means of various annotations. The typesetter translated the author's text into a printed document, implementing the author's logical design in a concrete typographical design. One only has to imagine, say, Jane Austen wondering in what font to put the chapter headings of Pride and Prejudice to see how ridiculous the notion is. Jane Austen was a great writer; she was not a typesetter.
You may be thinking this is beside the point. Jane Austen's writing was publishable; professional typesetters were interested in laying it out and printing it. You and I are not so lucky; if we want a printed article we will have to do it ourselves (and besides, we want it done much faster than via traditional typesetting). Well, yes and no. We will in a sense have to do it ourselves (on our own computers), but we have a lot of help at our disposal. In particular we have a professional-quality typesetting program available. This program (or set of programs) will in effect do for us, for free and in a few seconds or fractions of a second, the job that traditional typesetters did for Shakespeare, Jane Austen, Sir Walter Scott and all the rest. We just have to supply the program with a suitably marked-up text, as the traditional author did.
I am suggesting, therefore, that there should be two distinct "moments" in the production of a printed text using a computer. First one types one's text and gets its logical structure right, indicating this structure in the text via simple annotations. This is accomplished using a text editor, a piece of software not to be confused with a word processor. (I will explain this distinction more fully below.) Then one "hands over" one's text to a typesetting program, which in a very short time returns beautifully typeset copy.
### 2.2 The evils of WYSIWYG
These two jobs are rolled into one with the modern WYSIWYG ("What You See Is What You Get") word processor. You type your text, and as you go the text is given, on the computer screen, a concrete typographical representation which supposedly corresponds closely to what you will see when you send the document to the printer (although for various reasons it does not always do so). In effect, the text is continuously typeset as you key it in. At first sight this may seem to be a great convenience; on closer inspection it is a curse. There are three aspects to this.
1. The author is distracted from the proper business of composing text, in favor of making typographical choices in relation to which she may have no expertise ("fiddling with fonts and margins" when she should be concentrating on content).
2. The typesetting algorithm employed by a WYSIWYG word processor sacrifices quality to the speed required for the setting and resetting of the user's input in real time. The final product is greatly inferior to that of a real typesetting program.
3. The user of a word processor is under a strong temptation to lose sight of the logical structure of the text and to conflate this with superficial typographical elements.
The first two points above should be self-explanatory. Let me expand on the third. (Its importance depends on the sort of document under consideration.)
### 2.3 Document structure
Take for instance a section heading. So far as the logical structure of a document is concerned, all that matters is that a particular piece of text should be "marked" somehow as a section heading. One might for instance type \section{Text of heading}. How section headings will be implemented typographically in the printed version is a separate question. When you're using a word processor, though, what you see is (all!) you get. You are forced to decide on a specific typographical appearance for your heading as you create it.
Suppose you decide you'd like your headings in boldface, and slightly larger than the rest of the text. How are you going to achieve this appearance? There's more than one way to do it, but for most people the most obvious and intuitive way (given the whole WYSIWYG context) is to type the text of the heading, highlight it, click the boldface icon, pull down the little box of point sizes for the type, and select a larger size. The heading is now bold and large.
Great! But what says it's a heading? There's nothing in your document that logically identifies this little bit of text as a section heading. Suppose at some later date you decide that you'd actually prefer to have the headings in small caps, or numbered with roman numerals, or centered, or whatever. What you'd like to say is "Please make such-and-such a change in the setting of all section headings." But if you've applied formatting as described above, you'll have to go through your entire document and alter each heading manually.
Now there is a way of specifying the structural status of bits of text in (for instance) Microsoft Word. You can, if you are careful, achieve effects such as changing the appearance of all section headings with one command. But few users of Word exploit this consistently, and that is not surprising: the WYSIWYG approach does not encourage concern with structure. You can easily (all too easily) "fake" structure with low-level formatting commands. When typing one's text using a text editor, on the other hand, the need to indicate structure is immediately apparent.
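To make the contrast concrete, here is a minimal sketch (my own, not from the original essay) of what marking structure rather than appearance looks like in LaTeX:

```latex
\documentclass[12pt]{article} % swap the class or its options to restyle the whole document

\begin{document}

\section{Text of heading} % logically a heading; the class decides how headings look
Body text, with \emph{emphasis} marked logically rather than as raw italics.

\subsection{A subsection}
Numbering, fonts and spacing of every heading follow from the class, so changing
your mind later is a one-line edit in the preamble, not a hunt through the document.

\end{document}
```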
### 2.4 Text editors
OK, it's probably time to explain what a text editor is, and how it differs from a word processor. A modern text editor looks a bit like a word processor. It has the usual apparatus of pull-down menus and/or clickable icons for functions like opening and saving files, searching and replacing, checking spelling, and so on. But it has no typesetting functionality. The text you type appears on screen in a clear visual representation, but with no pretense at representing the final printed appearance of the document.
When you save your document, it is saved in the form of plain text, which in the US context usually means in "ASCII" (the American Standard Code for Information Interchange). ASCII is composed of 128 characters (this is sometimes referred to as a "7-bit" character set, since it requires 7 binary digits for its encoding: 2 to the seventh power = 128). It includes the numerals 0 through 9, the roman alphabet in both upper and lower case, the standard punctuation marks, and a number of special characters. ASCII is the lowest common denominator of textual communication in digital form. An ASCII message will be "understandable" by any computer in the world. If you send such a message, you can be sure that the recipient will see precisely what you typed.
By contrast, when you save a file from a word processor, the file contains various "control" characters, outside of the ASCII range. These characters represent the formatting that you applied (e.g. boldface or italics) plus various sorts of internal "business" relating to the mechanics of the word processor. They are not universally "understandable". To make sense of them, you need a copy of the word processor with which the document was created (or some suitable conversion filter). If you open a word processor file in a text editor, you will see (besides the text, or bits of it) a lot of "funny looking stuff": this is the binary formatting code.
Since a text editor does not insert any binary formatting codes, if you want to represent features such as italics you have to do this via mark-up. That is, you type in an annotation (using nothing but ASCII), which will tell the typesetter to put the specified text into italics. For example, for the LaTeX typesetter (more on this below) you would type \textit{stuff you want in italics}. Actually, if you are using a text editor which is designed to cooperate with LaTeX you would not have to type this yourself. You'd type some kind of shortcut sequence, select from a menu, or click an icon, and the appropriate annotation would be inserted for you; the mechanics of typing an ASCII document suitable for feeding to LaTeX are not much different from typing in a modern word processor.
### 2.5 The virtues of ASCII
The approach of composing your text in plain ASCII using a text editor, then typesetting it with a separate program, has several "incidental" virtues.
1. Portability: as mentioned above, anybody, using any computer platform, will be able to read your marked-up text, even if they don't have the means to view or print the typeset version. By contrast your Snazz 9.0 word processor file can be completely incomprehensible to a recipient who doesn't have the same brand and version of word processor as you, unless he or she is quite knowledgeable about computers and is able to extract the actual text from the binary "garbage". And this applies to you over time, as much as to you and a correspondent at one time. You may well have difficulty reading Snazz 8.0 files using Snazz 9.0, or vice versa, but you'll never have any trouble reading old ASCII files.
2. Compactness: an ASCII file represents your ideas, and not a lot of word processor "business". For small documents in particular, word processor files can be as much as 10 times as large as a corresponding ASCII file containing the same relevant information.
3. Security: the "text editor to typesetter" approach virtually guarantees that you will never have any problem of corruption of your documents (unless you suffer a hard disk crash or some comparable calamity). The source text will always be there, even if the typesetter fails for some reason. If you regularly use a word processor and have not had a problem with file corruption then you're very lucky!
(Further reading: Sam Steingold's page "No Proprietary Binary Data Formats".)
### 2.6 The typesetter
By this time I owe you a bit more detail on the typesetter part of the strategy I'm advocating. I won't go into technical details here, but will try to say enough to give you some idea of what I'm talking about.
The basic typesetting program that I have in mind is called TeX, and was written by Donald Knuth of Stanford University. TeX is available for free (via downloading from many Internet sites), in formats suitable for just about every computer platform. (You can if you wish purchase a CDROM with a complete set of TeX files for a very modest price.) Knuth started work on TeX in 1977; in 1990 he announced that he no longer intended to develop the program-not because of lack of interest, but rather because by this time the program was essentially perfected. It is as bug-free as any computer program can be, and it does a superb job of typesetting just about any material, from simple text to the higher mathematics.
I referred above to LaTeX. If TeX is the basic typesetting engine, LaTeX is a large set of macros, initially developed by Leslie Lamport in the 1980s and now maintained by an international group of experts. These macros make life a lot easier for the average user of the system. LaTeX is still under active development, as new capabilities and packages are built on top of the underlying typesetter. Various "add-ons" for TeX are also under development, such as a system which allows you to make PDF (Adobe's "Portable Document Format") files directly from your ASCII source files. (I say "under development" but by this I just mean that they are continuously being improved. The programs are already very stable and full-featured.)
As mentioned above, you indicate the desired structure and formatting of your document to LaTeX in the form of a set of annotations. There are many books (and web-based guides) that give the details of these annotations, and I will not go into them here. The common annotations are simple and easily remembered, besides which LaTeX-friendly text editors (of which there are many) offer you a helping hand.
One very attractive feature of LaTeX is the ability to change the typeset appearance of your text drastically and consistently with just a few commands. The overall appearance is controlled by
1. The "document class" that you choose (e.g. report, letter, article, book).
2. The "packages" or style files that you decide to load.
You can, for instance, completely change the font family (consistently across text, section headings, footnotes and all) and/or the sizes of the fonts used, by altering just one or two parameters in the "preamble" of your ASCII source file. Similarly, you can put everything into two-column format, or rotate it from portrait to landscape. It may be possible to accomplish something similar using a word processor, but generally it's much less convenient and you are far more likely to mess up and introduce unintended inconsistencies of formatting.
You can get as complex as you care to, typesetting with LaTeX. You can choose a "hands off" approach: just specify a document class and leave the rest up to the default macros. Generally this produces good results, the typesetting being of much higher quality than any word processor. (Naturally, things like numbering of chapters, sections and footnotes, cross-references and so on, are all taken care of automatically.) Or you can take a more "interventionist" approach, loading various packages (or even writing your own) to control various aspects of the typography. If this is your inclination, you can produce truly beautiful and individual output.
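For instance, the kind of one- or two-line preamble change described above might look like the following sketch (the particular class options and packages are just illustrative choices):

```latex
\documentclass[11pt,twocolumn]{article}       % one option switches the whole text to two columns
\usepackage{mathpazo}                          % resets the fonts (text, headings, math) to Palatino
\usepackage[landscape,margin=2.5cm]{geometry}  % rotates the page and resets all margins at once

\begin{document}
\section{Introduction}
The same source text, completely restyled without touching the body.
\end{document}
```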
### 2.7 Putting it together
Let me give you just a brief idea of how this all works. If you have a good TeX setup it's like this: You type your text into a TeX-aware editor. You can type the required annotations directly or have the editor insert them via menus or buttons. When you reach a point where you'd like to take a look at the typeset version you make a menu choice or click a button in the editor to invoke the typesetter. Another menu item or button will open a previewer in which you see the text as it will appear at the printer. And generally this is true "WYSIWYG": the previewer will show a highly accurate representation of the printed output. You can zoom in or out, page around, and so on. You send the output to the printer with another menu choice or button, or go back to editing.
At some later point in the process you want to preview the updated file. Click the typesetter button again. This time you don't have to invoke the previewer again: if you've left it running in the background it will now automatically display the updated typeset version. When you're done with an editing session you can delete the typeset version of the file to conserve disk space. You just need to save the ASCII source file; the typeset version can easily be recreated whenever you need it.
## 3 Digital dissemination
The previous section was mostly angled towards producing good-looking typeset output at the printer. Some other considerations arise when you're preparing a document with digital transmission in mind (email, web pages and so on).
Take email first. Typically if people wish to send a short, ad hoc, message they type that message directly into an email client program, whether it be a "traditional" text-based client such as Pine or a GUI (Graphical User Interface) program such as Netscape or Eudora. In that case the message probably goes out in the form of ASCII (or perhaps in HTML, i.e. HyperText Mark-up Language, the language of Web pages, which is itself mostly composed of ASCII). But what if you want to send a longer piece of text that you have already prepared independently of your email client program?
For this purpose it is increasingly common to "attach" a document in a word processor format. How does the alternative strategy work in this case?
Well, we have to distinguish between two situations: Is the text in question relatively short and uncomplicated (a memo, a letter, minutes of a meeting, a listing of agenda, a schedule for a visit) or is it more complex (an academic paper, perhaps with a lot of mathematics; a report with illustrations; a book manuscript)? The "ASCII plus typesetter" approach leads to different suggestions in these two cases.
### 3.1 Simple documents
With simple documents, we have to ask: Do we really need the typesetting, the font information and all that? Is it not more efficient, more in the interest of effective and economical communication, just to post plain ASCII text, with the minimal formatting that ASCII allows? This both conserves communications bandwidth (remember that word processor files can be much bigger than ASCII files containing the same actual text) and ensures that nobody will be frozen out of the communication effort because they happen not to be running Snazz 9.0. You can attach an ASCII file, created in a text editor, in the same way that you'd attach a word processor file, or you can simply paste it into the body of your email (since it's nothing but plain text). Since TeX source files are nothing but ASCII (and if we're talking about a simple document there won't be too many annotations, and those pretty self-explanatory), they can be treated in the same way.
### 3.2 Complex documents
Longer and more complicated documents may well be easier to read in typeset form. Math may be hard to convey in ASCII and of course complex diagrams and images are out altogether. So what about TeX, in this context? I have argued that word processor files can be problematic, because your correspondent might not have Snazz 9.0 like you do. But doesn't this cut both ways? Even if you're fired up enough about TeX to give it a try, how many of your correspondents have a TeX installation? This is a reasonable query, but it is answerable. If you want your correspondent to be able to see a typeset version of your file, and she doesn't have a TeX installation, you have these options:
1. Convert the TeX source file to HTML. There are good conversion programs for this purpose. (HTML and TeX actually have a strong family resemblance, in that they both involve logical mark-up, so inter-conversion can be accomplished with a high degree of fidelity.1) Then your correspondent can read your text using a web browser.
2. Does your correspondent have access to a Postscript printer? In an academic or business environment this is quite likely. In that case you could send a fully typeset version of your document in the form of a postscript file, which she can just send to the printer. And/or she can view it on screen if she installs the "ghostview" program (free for downloading from the Internet).
3. Does your correspondent have the "Acroread" reader for Adobe PDF files installed? (Again, it's a free download.) If so, you can send a PDF version of your typeset document.
In discussing the options for transmitting text via email, we've already hit on the issue of preparing text for web pages. You have the option of writing HTML directly. If you don't want to do that, you can write HTML indirectly using a suitable GUI editor, Netscape Communicator for example. Sure, you can also produce HTML using MS Word (incidentally, horrible HTML, full of extraneous tags that make it awkward to edit using any other application). If you're in TeX mode it's easy to convert your documents into (clean, standard-compliant) HTML.
## 4 Qualification
I have attempted to make a strong pitch for the "ASCII plus typesetter" alternative to word processors. I will admit, however, that there are some sorts of documents for which a WYSIWYG word processor is indeed the natural tool. I'm thinking of short, ad hoc, documents which have a high ratio of "formatting business" to textual content: flyers, posters, party invitations and the like. You could do these in TeX, but it would not be efficient. The standard LaTeX document classes (report, article, etc.) would be of little use to you. And while LaTeX is very smart at handling automatically the range of fonts that you're likely to want in a formal text, it's not geared toward the sort of "mixing and matching" of jolly fonts that you might want in a casual production. Logical structure is not really an issue: you're interested in "raw formatting". You want to know, for instance: if I put that line into a 36-point font, will that push my last line onto the next page, which I don't want? WYSIWYG is your man.
If most of your word-processing work is of this kind, you probably stopped reading a long time ago. If most of your text preparation work involves the production of relatively formal documents, this qualification doesn't affect the essentials of my case.
## 5 Rant, rant
It may not have escaped your notice that I'm a bit worked up about this theme. Yes, I am. The point is that it's not just a matter of an academic debate between alternative modes of text preparation. It's a set of scales in which the might and wealth of the major software vendors is all on one side. To be blunt, we're looking at a situation in which MS Word is poised to become, for much of the world, the standard for the preparation of documents using computers. But Word is a standard that has little to commend it other than the fact that it is (or aspires to be) a standard.
It's a bit like QWERTY. Do you know that story? Why the standard arrangement of keys on typewriter keyboards (and by extension computer keyboards) has QWERTYUIOP along the top line? That was not the original arrangement of typewriter keys. It was designed for a purpose, namely to slow typists down. The problem was that the expertise of the early typists quickly outran the capabilities of the early mechanical typewriters: a fast typist could jam the keys, hitting them faster than they could return after striking the ribbon. QWERTY distributed the keys so they couldn't go so fast. This is clearly a crazy arrangement for the keys on an electronic keyboard, but it's too late to change: QWERTY is standard, and all attempts to rationalize the keyboard have failed in the face of that reality.
Similarly, I'm arguing that MS Word has no right to be a standard for document preparation, since it's clearly less efficient (for most purposes) than readily available alternatives. I'm hoping that it's not too late in this case, that there's still the opportunity of saying No to Word. Actually, in a sense Word is worse than QWERTY: it's not a real standard, but rather an escalator. The Microsoft "standard" for the binary representation of document formatting is something that is variable at the whim of Microsoft Corporation. The MS Word quasi-monopoly piggybacks off the Microsoft Windows quasi-monopoly (an issue which I will not get into here). And so long as they are not hard-pressed by commercial rivals, Microsoft has no particular interest in establishing any sort of long-term standard for the binary representation of formatting. On the contrary, they have a strong interest in forcing you to "upgrade" Word at regular intervals. Oh dear, Word N.0 won't read the document your colleague just sent you, prepared using N+1? Well, you'd better update then, hadn't you? Even if there are no features in N+1, that were not present in N, that are of any real value to you.2
## 6 References
If you've come with me this far, you might be interested in more details about good text editors, the TeX typesetting system and so on.
The best place to start for info about TeX and friends is probably the TUG homepage (TUG is the TeX Users Group). This will provide all the links you might need; one of the main ones is to the Comprehensive TeX Archive Network (CTAN) sites, from which you can download complete TeX systems for just about all computer platforms. Such systems include the actual typesetter, a large collection of macros, a previewer, and software for generating printable files.
TeX packages (free ones at any rate) do not generally include the text editor that you'll also need (unless you already have one that you like). There are many choices, but my personal favorite for working with TeX files is Emacs, along with the AUC TeX package. The latter makes Emacs very TeX-friendly: it will highlight TeX syntax so you can see any errors in your mark-up at a glance, and it also offers a wide range of TeX-related commands on convenient menus.
In case you're interested, here's a screen shot of a TeX editing session using Emacs (PNG, 40678 bytes).
### Footnotes:
1 The binary coding used by word processors is a quite different animal, so inter-conversion between TeX and word processor formats is not easy. In addition, since TeX is a superior typesetting engine, it is in principle impossible to convert a TeX document to, say, Word without loss of information.
2 For what it is worth, in my opinion as somebody who used Word for several years before switching to TeX, and who has a keen interest in typesetting, no worthwhile features have been introduced into MS Word for Windows since version 2.0 of circa 1990.
File translated from TEX by TTH, version 1.93.
On 29 Jun 1999, 14:47.
|
2017-07-23 10:48:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6951372623443604, "perplexity": 1272.458279137945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424549.12/warc/CC-MAIN-20170723102716-20170723122716-00073.warc.gz"}
|
https://hrj.odkryjswojzawod.pl/matlab-latex-title.html
|
# Matlab latex title
Search: Bifurcation Matlab. In order to create a bifurcation diagram for a continuous system you have to create a Poincaré section for each variation of the parameter and then plot the result (removing transients); alternatively, you can use MatCont.
Sep 23, 2017 · Latex in plot title with variables. Learn more about matlab plot, latex in plot, latex plot, title latex variable.
You can't combine LaTeX and TeX in a title. You have to use one or the other (i.e. whichever one you set for the 'Interpreter' property). The following will work: title(['$\overline{\beta}=$' num2str(beta_b) ' TEO , $\lambda=$' num2str(lambda*1e6) ' $\mu$m'], 'Interpreter','latex');
I have, but if I am not mistaken, the difference is in the LaTeX typesetting, which can be accomplished via TeXForm or similar features. Further, an equation on a frame label can just be written as one would write it in Mathematica. It will look so similar to "proper" LaTeX typesetting that it should be good even for a thesis project.
latex MATLAB multi-line response step titles: So by combining several tips and tricks from past articles on this website, I have come up with some code in an attempt to make multi-line titles of multiple step responses (subplots) in one figure with a variable that changes. All of this inside a for loop. Then I got stuck.
May 04, 2012 · LaTeX labels in Matlab and Octave. Very happy to've finally discovered the LaTeX interpreter option in Matlab labels this evening. Previously I'd been so frustrated when wanting to use something mathy like a partial derivative expression to label a plot axis, because while I can specify a few LaTeX-like things in there like partials and sub ....
Matplotlib is a comprehensive library for creating static, animated, and interactive visualizations in Python. Matplotlib makes easy things easy and hard things possible. Create publication quality plots . Make interactive figures that can zoom, pan, update. Customize visual style and layout.
Referencing an appendix in LaTeX is as easy as any other chapter or object. You just have to put an anchor to it using \label {name} and then you can reference the appendix using \ref {name}. Here is a minimal working example of how you could implement this: % Reference an appendix in LaTeX. \documentclass{book}.
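The example in that snippet is cut off; a minimal sketch of the idea (my reconstruction, not the original author's code) would be:

```latex
\documentclass{book}
\begin{document}

\chapter{Main text}
See Appendix~\ref{app:data} for the raw data.

\appendix
\chapter{Data tables}\label{app:data}

\end{document}
```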
Steps to plot the graph of a function. There are certain steps that you need to follow for a MATLAB function plot, and these are: define the variable x by specifying the range of values of x for which the function is to be plotted; define the function, y = f(x); call the plot command, plot(x, y...
While writing a paper, I had to cite MATLAB. Simple thing, no? The whole problem is that MATLAB is a piece of software (@electronic, right?), but it has a corporate publisher and a specified version. So one would expect an output like [1] MATLAB version 7.10.0. Natick, Massachusetts: The MathWorks Inc., 2010. Where: Title of software: MATLAB.
Way out 1. You may try matlab2tikz and export your figures into TikZ code. Way out 2. Try matfig2pgf and get PGF code for your figure. matfig2pgf also provides a nice menu in the MATLAB figure window, and is thus somewhat more friendly. Way out 3. If you are a PSTricks fan, try fig2texps and get PSTricks code. Way out 4..
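A hedged sketch of the matlab2tikz route mentioned in "Way out 1" (assuming matlab2tikz is on the MATLAB path; the file name is illustrative):

```matlab
x = linspace(0, 2*pi, 200);
plot(x, sin(x));
xlabel('$x$', 'Interpreter', 'latex');
title('$\sin(x)$', 'Interpreter', 'latex');

matlab2tikz('sine.tikz');  % writes TikZ/pgfplots code; \input{sine.tikz} it from a LaTeX document
```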
The Octave interpreter can be run in GUI mode, as a console, or invoked as part of a shell script. More Octave examples can be found in the Octave wiki. Solve systems of equations with linear algebra operations on vectors and matrices . b = [4; 9; 2] # Column vector A = [ 3 4 5; 1 3 1; 3 5 9 ] x = A \ b # Solve the system Ax = b.
1.3 Environments. 1.3.2 Homework, definitions, theorems, and proof. The important statements in mathematics each have their own environment; these include defny, thmy, propy, lemy, and conjy. These key words are intrinsic to my .tex file; if you use someone else's...
Using LaTeX is a user's guide for LaTeX. Using LaTeX to format documents tells you how to format a complete document using LaTeX. Using LaTeX in a wiki describes the $...$ subset of LaTeX. LaTeX language reference: Document classes, Preamble, Environments, Symbols, Commands. The LaTeX Wiki is a work in progress. You can help by creating wanted articles and expanding short articles marked...
The legend() function in MATLAB/Octave allows you to add descriptive labels to your plots. The simplest way to use the function is to pass in a character string for each line on the plot. The basic syntax is: legend('Description 1', 'Description 2', ...). For the examples in this section, we will generate a sample figure using the... MATLAB is a suitable platform for both new and experienced programmers that are in need of visualizing their matrix and array mathematics. The four-paneled interface helps you decide which tools you need at any given time. In addition, its two native file formats allow the program to easily identify commands and other visual aids.
...code-section titles, system commands. Additional features include three predefined styles: standard, black & white, and a MatlabLexer-like style for mimicking Pygments (minted) output; seamless compatibility with listings' environments and macros; manual highlighting of variables with shared scope; manual highlighting of unquoted strings; ...
Nov 11, 2015 · M-code LaTeX Package. Easily include nicely syntax-highlighted m-code in your LaTeX documents. Added support for µ, a nicer looking tilde character, and Stefan Karlsson's fix for the minus character issue (thanks!!). Renamed internal variables in order to prevent conflicts with other packages. Matplotlib is an excellent 2D and 3D graphics library for generating scientific figures. Some of the many advantages of this library include: easy to get started; support for $\LaTeX$ formatted labels and texts; great control of every element in a figure, including figure size and DPI.
Using standard cross-referencing in LaTeX only produces the label number; a name describing the label, such as figure, chapter or equation, has to be added manually. The cleveref package overcomes this limitation by automatically producing the label name and number. It further allows cross-referencing ranges of labels and multiple labels of the...
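A minimal cleveref sketch of the behaviour described above (the label names are illustrative):

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{hyperref}
\usepackage{cleveref}   % load after hyperref

\begin{document}
\begin{equation}\label{eq:first}  a^2 + b^2 = c^2  \end{equation}
\begin{equation}\label{eq:second} e^{i\pi} + 1 = 0 \end{equation}

\cref{eq:first} prints the label name and number for you,
and \crefrange{eq:first}{eq:second} handles a whole range in one command.
\end{document}
```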
In this series, you can also learn to use XFig, the diagram component of LaTeX. \title{TITLE} \author{NAME} %\date{} \begin{document} \maketitle \end{document} Paragraphs are separated by a blank line of input. ... As of MATLAB R2016a, equations in LaTeX only support basic TeX math mode commands. The math environment is equivalent...
Blog post: "Nabla (∇) in matlab plot title". Tags: figure, font, latex, matlab, plot, tex, title.
Sep 26, 2013 · Keep in mind for LaTeX tables, the entries are separated by & and the rows are terminated with \\. In MATLAB, to print a \, you must actually use the backslash command, which is \\. To get MATLAB to go to the next line, you need to use the command. The following loop might be used to create the main parts of the LaTeX table for the matrices ....
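The loop referred to there is not shown; a small illustrative version (my own sketch) that prints a matrix as the body of a LaTeX tabular could look like this:

```matlab
A = magic(3);              % example matrix
[nr, nc] = size(A);
for i = 1:nr
    for j = 1:nc
        fprintf('%g', A(i, j));
        if j < nc
            fprintf(' & ');        % column separator
        end
    end
    fprintf(' \\\\\n');            % '\\\\' prints \\, '\n' starts a new row
end
```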
Accepted Answer: Wayne King. Hi, I am making a graph and the below command is not writing a fraction when frac is used: title('Q \geq frac {I_h H} {I_h H+I_z C}, b_1 \geq b_2'); The entire code is: set(0,'DefaultAxesFontSize',10) time=140; Ct0=0.86; % Initial efficacy of Deltamethrin.
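One way to get such a fraction to render (a hedged sketch, not the accepted answer itself) is to use the latex interpreter and backslash-escape the TeX commands:

```matlab
figure;
title('$Q \geq \frac{I_h H}{I_h H + I_z C}, \; b_1 \geq b_2$', ...
      'Interpreter', 'latex');
```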
|
2022-08-08 10:44:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7179372906684875, "perplexity": 7853.218867653194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00124.warc.gz"}
|
https://www.x-mol.com/paper/1277744982739013632
|
Journal of Dynamics and Differential Equations ( IF 1.473 ) Pub Date : 2020-06-29 , DOI: 10.1007/s10884-020-09862-3
Min Lu, Jicai Huang, Shigui Ruan, Pei Yu
A susceptible-infectious-recovered (SIRS) epidemic model with a generalized nonmonotone incidence rate $$\frac{kIS}{1+\beta I+\alpha I^2}$$ ($$\beta >-2 \sqrt{\alpha }$$ such that $$1+\beta I+\alpha I^{2}>0$$ for all $$I\ge 0$$) is considered in this paper. It is shown that the basic reproduction number $$R_0$$ does not act as a threshold value for the disease spread anymore, and there exists a sub-threshold value $$R_*(<1)$$ such that: (i) if \(R_0
|
2020-07-08 11:53:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6594626307487488, "perplexity": 1074.835332185307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896932.38/warc/CC-MAIN-20200708093606-20200708123606-00050.warc.gz"}
|
https://dustingmixon.wordpress.com/2013/10/10/a-fully-automatic-problem-solver-with-human-style-output/
|
# A fully automatic problem solver with human-style output
I think most would agree that the way we do math research has completely changed with technology in the last few decades. Today, I type research notes in LaTeX, I run simulations in MATLAB and Mathematica, I email with collaborators on a daily basis, I read the arXiv and various math blogs to keep in the know, and when I get stuck on something that’s a little outside my expertise, I ask a question on MathOverflow. With this in mind, can you think of another step we can take with technology that will revolutionize the way we conduct math research? It might sound ambitious, but the following paper is looking to make one such step:
A fully automatic problem solver with human-style output
M. Ganesalingam, W. T. Gowers
The vision of this paper is to make automated provers extremely mathematician-friendly so that they can be used on a day-to-day basis to help prove various lemmas and theorems. Today, we might use computers as a last resort to verify a slew of cases (e.g., to prove the four color theorem or the Kepler conjecture). The hope is that in the future, we will be able to seamlessly interface with computers to efficiently implement human-type logic (imagine HAL 9000 as a collaborator).
The paper balances its ambitious vision with a modest scope: The goal is to produce an algorithm which emulates the way human mathematicians (1) prove some of the simplest results that might appear in undergraduate-level math homework, and (2) write the proofs in LaTeX. To be fair, the only thing modest about this scope is the hardness of the results that are attempted, and as a first step toward the overall vision, this simplification is certainly acceptable. (Examples of attempted results include “a closed subset of a complete metric space is complete” and “the intersection of open sets is open.”)
The first section of the paper extensively describes the state of the art in automatic theorem proving. My favorite part is the discussion of how a modern automated prover proves Problem 1: “sets which are closed under differences are also closed under inverses.” The discussion is a revealing portrayal of how alien the computer’s thought process really is, and considering the paper’s vision, it illustrates that there’s really a lot of work to be done. This section also discusses the challenges of producing an output with a reader-friendly natural-language format. This helps in clarifying the goal of mimicking human authors (for example, statements like “by modus ponens” should be avoided, despite being the reasoning behind the next line of a proof).
Section 2 starts by systematizing the thought process of human mathematicians. From my perspective, this is interesting in its own right because it helps me to better understand how I think as a mathematician (and perhaps I can exploit that understanding to prove future theorems faster). For the sake of illustration, the authors focus on the result, “a closed subset of a complete metric space is complete.” After sketching a high-level description of a plausible thought process, they proceed to break the thought process down into basic “moves” and they discuss logical reasons for doing these moves. For example:
• If a property is satisfied for all $x$, then just pick an $x$ and note that it enjoys the property.
• Expand a statement (e.g., replace “$X$ is complete” with “every Cauchy sequence in $X$ converges”) only if a mathematician would. In particular, if you can’t do any other move, expand a statement in order to have other objects (e.g., a Cauchy sequence) to do further moves with.
• Remove a statement from working memory if it’s already been used enough times (one or two, say).
Note that these last two examples don’t have well-defined criteria for implementing the moves. Indeed, formulating reasonable criteria is the artistic portion of this project, and the authors discuss the heuristics they decided on in subsection 2.4. Actually, the main heuristic seems rather natural: The authors devised a prioritized list of moves such that each successive move should be the highest legal move on the list. As an overarching principle, the authors prioritized moves so that the “safest” legal move is always next. After explaining each move type in gory detail, the authors give an example of the automated prover in action (they prove “the intersection of open sets is open”). More examples of their automated proofs can be found here.
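To make the control flow concrete, here is a minimal Python sketch (my own illustration, not the authors' implementation) of the "highest legal move on the list" loop described above; the ProofState class and the shape of a move are hypothetical stand-ins for whatever representation the real program uses.

# Minimal sketch of a prioritised-move proof loop; all names are hypothetical.
class ProofState:
    """Holds the hypotheses and goals of the proof search at one moment."""
    def __init__(self, hypotheses, goals):
        self.hypotheses = list(hypotheses)
        self.goals = list(goals)

def run_prover(state, moves, max_steps=1000):
    """Apply the highest-priority legal move until no goals remain.

    `moves` is an ordered list of (name, is_legal, apply_move) triples, with the
    "safest" moves listed first, mirroring the paper's prioritisation heuristic.
    Returns the list of move names used, which could later be rendered as prose.
    """
    transcript = []
    for _ in range(max_steps):
        if not state.goals:
            return transcript                # proof finished
        for name, is_legal, apply_move in moves:
            if is_legal(state):
                transcript.append(name)
                state = apply_move(state)
                break
        else:
            return transcript                # no legal move: the search is stuck
    return transcript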
At the end of section 2, the authors also describe how to express the proof moves in human speak. Here, the easiest thing to do is replace computer jargon like $\mathtt{compose(f,g)}$ with $f\circ g$. Also, each sentence should sound different by using synonyms for “therefore” and “for all” and by playing with sentence structure (as much as a mathematician might). Now that the sentences are individually good, they need to flow well together. For example, it doesn’t sound good to declare the same hypothesis at the beginning of two consecutive sentences. This is yet another artistic portion of the project, but they managed to devise an algorithm that deterministically converts their automated proofs into reader-friendly text (actually, LaTeX code).
|
2017-12-14 04:07:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7038256525993347, "perplexity": 525.9190841300791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948539745.30/warc/CC-MAIN-20171214035620-20171214055620-00173.warc.gz"}
|
https://terrytao.wordpress.com/2008/12/
|
You are currently browsing the monthly archive for December 2008.
In 1917, Soichi Kakeya posed the following problem:
Kakeya needle problem. What is the least amount of area required to continuously rotate a unit line segment in the plane by a full rotation (i.e. by $360^\circ$)?
In 1928, Besicovitch showed that given any $\varepsilon > 0$, there exists a planar set of area at most $\varepsilon$ within which a unit needle can be continuously rotated; the proof relies on the construction of what is now known as a Besicovitch set – a set of measure zero in the plane which contains a unit line segment in every direction. So the answer to the Kakeya needle problem is “zero”.
I was recently asked (by Claus Dollinger) whether one can take $\varepsilon = 0$; in other words,
Question. Does there exist a set of measure zero within which a unit line segment can be continuously rotated by a full rotation?
This question does not seem to be explicitly answered in the literature. In the papers of von Alphen and of Cunningham, it is shown that it is possible to continuously rotate a unit line segment inside a set of arbitrarily small measure and of uniformly bounded diameter; this result is of course implied by a positive answer to the above question (since continuous functions on compact sets are bounded), but the converse is not true.
Below the fold, I give the answer to the problem… but perhaps readers may wish to make a guess as to what the answer is first before proceeding, to see how good their real analysis intuition is. (To partially prevent spoilers for those reading this post via RSS, I will be whitening the text; you will have to highlight the text in order to see it. Unfortunately, I do not know how to white out the LaTeX in such a way that it is visible upon highlighting, so RSS readers may wish to stop reading right now; but I suppose one can view the LaTeX as supplying hints to the problem, without giving away the full solution.)
[Update, March 13: a non-whitened version of this article can be found as part of this book.]
Title: Use basic examples to calibrate exponents
Motivation: In the more quantitative areas of mathematics, such as analysis and combinatorics, one has to frequently keep track of a large number of exponents in one’s identities, inequalities, and estimates. For instance, if one is studying a set of N elements, then many expressions that one is faced with will often involve some power $N^p$ of N; if one is instead studying a function f on a measure space X, then perhaps it is an $L^p$ norm $\|f\|_{L^p(X)}$ which will appear instead. The exponent $p$ involved will typically evolve slowly over the course of the argument, as various algebraic or analytic manipulations are applied. In some cases, the exact value of this exponent is immaterial, but at other times it is crucial to have the correct value of $p$ at hand. One can (and should) of course carefully go through one’s arguments line by line to work out the exponents correctly, but it is all too easy to make a sign error or other mis-step at one of the lines, causing all the exponents on subsequent lines to be incorrect. However, one can guard against this (and avoid some tedious line-by-line exponent checking) by continually calibrating these exponents at key junctures of the arguments by using basic examples of the object of study (sets, functions, graphs, etc.) as test cases. This is a simple trick, but it lets one avoid many unforced errors with exponents, and also lets one compute more rapidly.
Quick description: When trying to quickly work out what an exponent p in an estimate, identity, or inequality should be without deriving that statement line-by-line, test that statement with a simple example which has non-trivial behaviour with respect to that exponent p, but trivial behaviour with respect to as many other components of that statement as one is able to manage. The “non-trivial” behaviour should be parametrised by some very large or very small parameter. By matching the dependence on this parameter on both sides of the estimate, identity, or inequality, one should recover p (or at least a good prediction as to what p should be).
General discussion: The test examples should be as basic as possible; ideally they should have trivial behaviour in all aspects except for one feature that relates to the exponent p that one is trying to calibrate, thus being only “barely” non-trivial. When the object of study is a function, then (appropriately rescaled, or otherwise modified) bump functions are very typical test objects, as are Dirac masses, constant functions, Gaussians, or other functions that are simple and easy to compute with. In additive combinatorics, when the object of study is a subset of a group, then subgroups, arithmetic progressions, or random sets are typical test objects. In graph theory, typical examples of test objects include complete graphs, complete bipartite graphs, and random graphs. And so forth.
This trick is closely related to that of using dimensional analysis to recover exponents; indeed, one can view dimensional analysis as the special case of exponent calibration when using test objects which are non-trivial in one dimensional aspect (e.g. they exist at a single very large or very small length scale) but are otherwise of a trivial or “featureless” nature. But the calibration trick is more general, as it can involve parameters (such as probabilities, angles, or eccentricities) which are not commonly associated with the physical concept of a dimension. And personally, I find example-based calibration to be a much more satisfying (and convincing) explanation of an exponent than a calibration arising from formal dimensional analysis.
When one is trying to calibrate an inequality or estimate, one should try to pick a basic example which one expects to saturate that inequality or estimate, i.e. an example for which the inequality is close to being an equality. Otherwise, one would only expect to obtain some partial information on the desired exponent p (e.g. a lower bound or an upper bound only). Knowing the examples that saturate an estimate that one is trying to prove is also useful for several other reasons – for instance, it strongly suggests that any technique which is not efficient when applied to the saturating example, is unlikely to be strong enough to prove the estimate in general, thus eliminating fruitless approaches to a problem and (hopefully) refocusing one’s attention on those strategies which actually have a chance of working.
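Here is a worked instance of this trick (my example, not one from the article). Suppose one has forgotten the exponent $\alpha$ in the finite measure space estimate $\|f\|_{L^p(X)} \leq \mu(X)^{\alpha} \|f\|_{L^q(X)}$, valid for $1 \leq p < q \leq \infty$. Test it with the constant function $f = 1_X$, which one expects to saturate the estimate: since $\|1_X\|_{L^p(X)} = \mu(X)^{1/p}$ and $\|1_X\|_{L^q(X)} = \mu(X)^{1/q}$, matching the powers of $\mu(X)$ on both sides forces $\alpha = \frac{1}{p} - \frac{1}{q}$, which is indeed the exponent supplied by Hölder's inequality.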
Calibration is best used for the type of quick-and-dirty calculations one uses when trying to rapidly map out an argument that one has roughly worked out already, but without precise details; in particular, I find it especially useful when writing up a rapid prototype. When the time comes to write out the paper in full detail, then of course one should instead carefully work things out line by line, but if all goes well, the exponents obtained in that process should match up with the preliminary guesses for those exponents obtained by calibration, which adds confidence that no exponent errors have been committed.
A dynamical system is a space X, together with an action $(g,x) \mapsto gx$ of some group $G = (G,\cdot)$. [In practice, one often places topological or measure-theoretic structure on X or G, but this will not be relevant for the current discussion. In most applications, G is an abelian (additive) group such as the integers ${\Bbb Z}$ or the reals ${\Bbb R}$, but I prefer to use multiplicative notation here.] A useful notion in the subject is that of an (abelian) cocycle; this is a function $\rho: G \times X \to U$ taking values in an abelian group $U = (U,+)$ that obeys the cocycle equation
$\rho(gh, x) = \rho(h,x) + \rho(g,hx)$ (1)
for all $g,h \in G$ and $x \in X$. [Again, if one is placing topological or measure-theoretic structure on the system, one would want $\rho$ to be continuous or measurable, but we will ignore these issues.] The significance of cocycles in the subject is that they allow one to construct (abelian) extensions or skew products $X \times_\rho U$ of the original dynamical system X, defined as the Cartesian product $\{ (x,u): x \in X, u \in U \}$ with the group action $g(x,u) := (gx,u + \rho(g,x))$. (The cocycle equation (1) is needed to ensure that one indeed has a group action, and in particular that $(gh)(x,u) = g(h(x,u))$.) This turns out to be a useful means to build complex dynamical systems out of simpler ones. (For instance, one can build nilsystems by starting with a point and taking a finite number of abelian extensions of that point by a certain type of cocycle.)
A special type of cocycle is a coboundary; this is a cocycle $\rho: G \times X \to U$ that takes the form $\rho(g,x) := F(gx) - F(x)$ for some function $F: X \to U$. (Note that the cocycle equation (1) is automatically satisfied if $\rho$ is of this form.) An extension $X \times_\rho U$ of a dynamical system by a coboundary $\rho(g,x) := F(gx) - F(x)$ can be conjugated to the trivial extension $X \times_0 U$ by the change of variables $(x,u) \mapsto (x,u-F(x))$.
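(Spelling out the parenthetical verification: if $\rho(g,x) := F(gx) - F(x)$, then $\rho(h,x) + \rho(g,hx) = (F(hx) - F(x)) + (F(ghx) - F(hx)) = F(ghx) - F(x) = \rho(gh,x)$, which is exactly (1).)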
While every coboundary is a cocycle, the converse is not always true. (For instance, if X is a point, the only coboundary is the zero function, whereas a cocycle is essentially the same thing as a homomorphism from G to U, so in many cases there will be more cocycles than coboundaries. For a contrasting example, if X and G are finite (for simplicity) and G acts freely on X, it is not difficult to see that every cocycle is a coboundary.) One can measure the extent to which this converse fails by introducing the first cohomology group $H^1(G,X,U) := Z^1(G,X,U) / B^1(G,X,U)$, where $Z^1(G,X,U)$ is the space of cocycles $\rho: G \times X \to U$ and $B^1(G,X,U)$ is the space of coboundaries (note that both spaces are abelian groups). In my forthcoming paper with Vitaly Bergelson and Tamar Ziegler on the ergodic inverse Gowers conjecture (which should be available shortly), we make substantial use of some basic facts about this cohomology group (in the category of measure-preserving systems) that were established in a paper of Host and Kra.
The above terminology of cocycles, coboundaries, and cohomology groups of course comes from the theory of cohomology in algebraic topology. Comparing the formal definitions of cohomology groups in that theory with the ones given above, there is certainly quite a bit of similarity, but in the dynamical systems literature the precise connection does not seem to be heavily emphasised. The purpose of this post is to record the precise fashion in which dynamical systems cohomology is a special case of cochain complex cohomology from algebraic topology, and more specifically is analogous to singular cohomology (and can also be viewed as the group cohomology of the space of scalar-valued functions on X, when viewed as a G-module); this is not particularly difficult, but I found it an instructive exercise (especially given that my algebraic topology is extremely rusty), though perhaps this post is more for my own benefit than for anyone else.
Starting on January 5th, the beginning of the winter quarter here at UCLA, I will be teaching Math 245B, a graduate course on real analysis. As the name suggests, the course is a continuation of the Math 245A course that just concluded in this fall quarter, taught by Jim Ralston, who covered the basics of measure theory and $L^p$ spaces. In this quarter, I plan to cover more of the foundational theory of graduate real analysis, specifically
I will be using Folland’s “Real analysis” as a primary text and Stein-Shakarachi’s “Real analysis” as a secondary text. These two texts already do quite a good job of covering the above material, but it is likely that I will supplement them as the course progresses with my own lecture notes, which I will post here, though I do not intend to make these notes nearly as self-contained and structurally interlinked as my notes on ergodic theory or on the Poincaré conjecture, being supporting material for the main texts rather than a substitute for them.
Now that the project to upgrade my old multiple choice applet to a more modern and collaborative format is underway (see this server-side demo and this javascript/wiki demo, as well as the discussion here), I thought it would be a good time to collect my own personal opinions and thoughts regarding how multiple choice quizzes are currently used in teaching mathematics, and on the potential ways they could be used in the future. The short version of my opinions is that multiple choice quizzes have significant limitations when used in the traditional classroom setting, but have a lot of interesting and underexplored potential when used as a self-assessment tool.
I was recently at an international airport, trying to get from one end of a very long terminal to another. It inspired in me the following simple maths puzzle, which I thought I would share here:
Suppose you are trying to get from one end A of a terminal to the other end B. (For simplicity, assume the terminal is a one-dimensional line segment.) Some portions of the terminal have moving walkways (in both directions); other portions do not. Your walking speed is a constant $v$, but while on a walkway, it is boosted by the speed $u$ of the walkway for a net speed of $v+u$. (Obviously, given a choice, one would only take those walkways that are going in the direction one wishes to travel in.) Your objective is to get from A to B in the shortest time possible.
1. Suppose you need to pause for some period of time, say to tie your shoe. Is it more efficient to do so while on a walkway, or off the walkway? Assume the period of time required is the same in both cases.
2. Suppose you have a limited amount of energy available to run and increase your speed to a higher quantity $v'$ (or $v'+u$, if you are on a walkway). Is it more efficient to run while on a walkway, or off the walkway? Assume that the energy expenditure is the same in both cases.
3. Do the answers to the above questions change if one takes into account the various effects of special relativity? (This is of course an academic question rather than a practical one. But presumably it should be the time in the airport frame that one wants to minimise, not time in one’s personal frame.)
It is not too difficult to answer these questions on both a rigorous mathematical level and a physically intuitive level, but ideally one should be able to come up with a satisfying mathematical explanation that also corresponds well with one’s intuition.
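For readers who would like to experiment numerically before committing to a guess, here is a small Python sketch (mine, not from the post) for question 1; the distances, speeds, and pause length are made-up illustrative inputs, and the layout is simplified to a single walkway stretch plus a single plain stretch.

# Total time from A to B when a pause of length t_pause is taken either on or
# off a walkway; all numerical inputs below are arbitrary illustrative choices.
def travel_time(d_walkway, d_plain, v, u, t_pause, pause_on_walkway):
    # Baseline walking time plus the stationary pause.
    t = d_walkway / (v + u) + d_plain / v + t_pause
    if pause_on_walkway:
        # While paused on the walkway you are still carried forward u * t_pause,
        # so that stretch of walkway no longer has to be traversed at speed v + u.
        # (Assumes u * t_pause <= d_walkway.)
        t -= (u * t_pause) / (v + u)
    return t

if __name__ == "__main__":
    for on_walkway in (True, False):
        print(on_walkway,
              travel_time(d_walkway=200.0, d_plain=300.0,
                          v=1.4, u=0.8, t_pause=30.0,
                          pause_on_walkway=on_walkway))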
[Update, Dec 11: Hints deleted, as they were based on an incorrect calculation of mine.]
This week I am in Seville, Spain, for a conference in harmonic analysis and related topics. My talk is titled “the uniform uncertainty principle and compressed sensing“. The content of this talk overlaps substantially with my Ostrowski lecture on the same topic; the slides I prepared for the Seville lecture can be found here.
[Update, Dec 6: Some people have asked about my other lecture given in Seville, on structure and randomness in the prime numbers. This lecture is largely equivalent to the one posted here.]
This post is going to be of a technological nature rather than a mathematical one.
Several years ago (long before the advent of “Web 2.0“), I wrote a number of Java applets (in Java 1.0!) to illustrate various mathematical operations (e.g. Mobius transforms or honeycombs, the latter being a joint project with Allen Knutson), or to help my undergraduate students self-test their mathematical knowledge (via my multiple choice applet that tested them on several prepared quizzes). I had generally received quite positive feedback on these (the logic quiz on my multiple choice applet, in particular, seemed to be particularly good at identifying weaknesses in one’s mathematical reasoning skills). However, being largely self-taught in my programming skills (beyond a handful of undergraduate CS classes), I did not manage to make my code easy to maintain or upgrade, and indeed I have not made any attempt to modernise these applets in any way for many years now.
And now it appears that these applets are slowly becoming incompatible with modern browsers. (IE7 still seems to run these applets well enough, but my version of Firefox no longer does (in fact, I should warn you that it actually freezes up on some of these applets), and only certain versions of Sun Java Virtual Machine seem to run the applets properly.) It also appears that the Java language itself has changed over the years; I have found that the most recent Java compilers have some minor issues with some of my old code.
But I would be interested in seeing some version of these applets (or at least the concepts behind these applets) to persist in some more stable and modern form (which presumably would mean using a different language than Java 1.0), as they seem to occupy a niche (viz., interactive demonstrations and testing of higher mathematical concepts) that does not appear to be well represented on the internet today. Given that I have much less time to devote to these things nowadays, I would be more than happy to somehow donate these applets to some collaborative or open source project that might be able to develop them in some better (and more Web 2.0 compatible) format, but I do not have much experience with these things and am not sure how to proceed. So I’m asking my technologically inclined readers if they have any suggestions for what to do with these old pieces of code. (I’m particularly interested in building up my multiple choice applet, as this seems well suited to some collaborative project in which multiple contributors can upload their own quizzes on various topics (which need not be restricted to mathematics) that could aid in students who wish to self-test their understanding of a given topic. And there are various features I would have liked – e.g. support for mathematical symbols – that I simply did not have the time or expertise to put into the applet, but which would have been very nice to have.) I would particularly prefer suggestions which might require some one-time work on my part, but not a continued obligation to maintain code indefinitely.
[Also, any suggestions for relatively quick fixes that would allow these applets to be runnable on most modern browsers without too much recoding on my part would be greatly appreciated.]
|
2018-10-20 02:23:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 42, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7148357629776001, "perplexity": 459.3673039645256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512501.27/warc/CC-MAIN-20181020013721-20181020035221-00380.warc.gz"}
|
https://csedoubts.gateoverflow.in/16341/previous-year-made-easy?show=16354
|
int f1(int n)
{
    if (n == 0 || n == 1)
        return n;
    else
        return 2 * f1(n - 1) + 3 * f1(n - 2);
}
What is the time complexity of the above function?
The running time satisfies the Fibonacci-type recurrence $T(n) = T(n-1) + T(n-2) + c$;
the constant factors 2 and 3 multiplying $f1(n-1)$ and $f1(n-2)$ only affect the value returned, not the number of recursive calls.
Hence the time complexity will be $O(2^n)$.
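In slightly more detail (my elaboration, not part of the original answer): if $T(n)$ denotes the number of calls made by f1(n), then since $T$ is nondecreasing, $T(n) \le 2T(n-1) + c$ gives $T(n) = O(2^n)$, while $T(n) \ge 2T(n-2)$ gives $T(n) = \Omega(2^{n/2})$. The exact growth rate is $\Theta(\varphi^n)$ with $\varphi = (1+\sqrt{5})/2 \approx 1.618$, the same rate at which the Fibonacci numbers grow.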
|
2020-04-04 23:48:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8584845662117004, "perplexity": 8338.07946631545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370526982.53/warc/CC-MAIN-20200404231315-20200405021315-00171.warc.gz"}
|
https://studydaddy.com/question/what-volume-of-0-255-m-k2s-solution-is-required-to-completely-react-with-170-ml
|
# What volume of 0.255 M K2S solution is required to completely react with 170 mL of 0.110 M Co(NO3)2?
"73 mL"
Take a look at the balanced chemical equation for this double replacement reaction:

$$\text{K}_2\text{S}_{(aq)} + \text{Co(NO}_3)_{2(aq)} \rightarrow 2\,\text{KNO}_{3(aq)} + \text{CoS}_{(s)} \downarrow$$

Notice that you have a 1:1 mole ratio between potassium sulfide, $\text{K}_2\text{S}$, and cobalt(II) nitrate, $\text{Co(NO}_3)_2$. This tells you that the reaction will consume equal numbers of moles of the two reactants.

Now, you are given the volume and molarity of the cobalt(II) nitrate solution. As you know, a solution's molarity is defined as the number of moles of solute divided by the volume of the solution - expressed in liters:

$$c = \frac{n}{V}$$

This means that you can rearrange the above equation and solve for $n$, the number of moles of cobalt(II) nitrate you have in that solution - do not forget to convert the volume from milliliters to liters:

$$c = \frac{n}{V} \implies n = c \cdot V$$

$$n = 0.110\ \text{M} \times 170 \times 10^{-3}\ \text{L} = 0.0187\ \text{moles Co(NO}_3)_2$$

Since you've established that you need equal numbers of moles of cobalt(II) nitrate and potassium sulfide, all you need to do now is figure out what volume of the 0.255 M $\text{K}_2\text{S}$ solution will contain 0.0187 moles of $\text{K}_2\text{S}$:

$$c = \frac{n}{V} \implies V = \frac{n}{c}$$

$$V = \frac{0.0187\ \text{moles}}{0.255\ \text{moles/L}} = 0.0733\ \text{L}$$

You need to round the answer to two significant figures, the number of sig figs you have for the volume of the cobalt(II) nitrate solution; expressed in milliliters, the answer will be

$$V = 73\ \text{mL}$$
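A quick numerical check of the arithmetic above (my own Python sketch, not part of the original answer; the variable names are mine):

# 1:1 mole ratio, so moles of K2S needed = moles of Co(NO3)2 present.
c_k2s = 0.255          # mol/L
c_co_no3_2 = 0.110     # mol/L
v_co_no3_2 = 170e-3    # L

n_needed = c_co_no3_2 * v_co_no3_2     # moles of Co(NO3)2 = moles of K2S required
v_k2s_mL = n_needed / c_k2s * 1000     # volume of K2S solution, in mL
print(round(v_k2s_mL))                 # -> 73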
|
2019-05-24 21:53:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7015494704246521, "perplexity": 1972.16033604575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257767.49/warc/CC-MAIN-20190524204559-20190524230559-00348.warc.gz"}
|
http://mathematica.stackexchange.com/questions/51705/second-derivative-implicit-differentiation-using-wolfram-alpha-input
|
# Second derivative implicit differentiation using Wolfram Alpha input?
How would you perform second derivative implicit differentiation using Wolfram Alpha input? The reason that I'm using WA input is that it gives you step-by-step solutions and I'm a first year calculus student trying to figure things out.
I've tried all of the obvious queries that I can think of without getting the desired results. If I type just the equation in it will give the results I'm seeking but without the step-by-step solution since it is not the primary output for the query.
Here's an example of the results I get from just entering the equation x^2 + xy = 5 (w/desired result circled):
P.S. Cross-posted to community.wolfram.com/groups/-/m/t/283665?p_p_auth=kD3FBYSv
fyi, cross posted community.wolfram.com/groups/-/m/t/283665?p_p_auth=kD3FBYSv please mention this in your question on both sites so not to waste people time duplicating answers and efforts. – Nasser Jun 27 '14 at 16:57
Done. Thanks for mentioning that. – WXB13 Jun 27 '14 at 20:33
D[x^2 + x y[x] == 5, {x, 1}]
@Gary If you want the most "granular" thing: Stay with Wolfram Alpha (although I think that your question received a nice answer). – eldo Jun 27 '14 at 20:52
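For what it's worth, the second implicit derivative can also be cross-checked outside the Wolfram ecosystem (so this is not an answer to the Wolfram Alpha question itself); here is a small SymPy sketch, with variable names of my own choosing:

from sympy import symbols, Function, diff, solve, simplify

x = symbols('x')
y = Function('y')

expr = x**2 + x*y(x) - 5                  # the relation x^2 + x*y = 5, written as expr == 0

d1 = diff(expr, x)                        # 2*x + y(x) + x*y'(x)
yp = solve(d1, diff(y(x), x))[0]          # first implicit derivative y'(x)
ypp = simplify(diff(yp, x).subs(diff(y(x), x), yp))   # second implicit derivative
print(yp)                                 # y'(x) in terms of x and y(x)
print(ypp)                                # y''(x) in terms of x and y(x)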
|
2016-04-28 21:54:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7193121314048767, "perplexity": 729.6529777907397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860109830.69/warc/CC-MAIN-20160428161509-00177-ip-10-239-7-51.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/what-is-a-linear-equation.762902/
|
# What is a linear equation
1. Jul 23, 2014
### Greg Bernhardt
Definition/Summary
A first-order polynomial equation in one variable; its general form is $Mx+B=0$, where $x$ is the variable. The quantities $M$ and $B$ are constants, with $M\neq 0$.
Equations
$$Mx+B=0$$
Extended explanation
Since $M\neq 0$ the solution is given by
$$x=-B/M\;.$$
The variable x does not have to be a number. For example, x and B could be vectors and M could be a matrix.
In this case the condition for a solution to exist is
$$\det(M)\neq 0\;,$$
and the solution is given by
$$\vec x = -M^{-1}\vec B\;,$$
where $M^{-1}$ is the matrix inverse of M.
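As a minimal computational sketch of the matrix case (the specific numbers below are arbitrary illustrative values, not from the entry):

# Solve M x = -B, i.e. M x + B = 0, for a 2x2 example with det(M) != 0.
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([4.0, -1.0])

assert abs(np.linalg.det(M)) > 1e-12   # the solvability condition det(M) != 0
x = np.linalg.solve(M, -B)             # numerically preferable to forming M^{-1} explicitly
print(x, M @ x + B)                    # the residual M x + B should be ~ 0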
Another (more abstract) example is the Green's function equation for the time-dependent Schrödinger equation. In this case x is a Green's function, B is a (Dirac) delta function in time, and M is the operator
$$M=\left(\frac{i}{\hbar}\frac{\partial}{\partial t}-\hat H\right)\;,$$
where $\hat H$ is the Hamiltonian.
* This entry is from our old Library feature. If you know who wrote it, please let us know so we can attribute a writer. Thanks!
|
2018-01-22 22:36:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.615763783454895, "perplexity": 313.77874322636677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891543.65/warc/CC-MAIN-20180122213051-20180122233051-00695.warc.gz"}
|
http://mathhelpforum.com/geometry/135488-how-calculate-circle-s-radius-when-tangent-2-intersecting-lines.html
|
# Thread: How to calculate a circle's radius when tangent to 2 intersecting lines
1. ## How to calculate a circle's radius when tangent to 2 intersecting lines
In the diagram below, circle c is tangent to lines p, q, and r. Lines q and s are perpendicular to each other. Angle a is 20 degrees (the subtended angle is 40 degrees). What is the radius of circle c, and how did you calculate it? Thanks.
Involute
2. If I understand the diagram correctly, this is true.
$\sin(20^\circ)=\frac{r}{1-r}$.
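Solving that equation for $r$ (with the unit of length implicit in the diagram): $\sin(20^\circ)=\frac{r}{1-r} \implies r\,(1+\sin(20^\circ))=\sin(20^\circ) \implies r=\frac{\sin(20^\circ)}{1+\sin(20^\circ)} \approx \frac{0.342}{1.342} \approx 0.255$.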
3. Looks right to me. Thanks!
|
2013-05-22 18:50:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7514051198959351, "perplexity": 497.496865041925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702298845/warc/CC-MAIN-20130516110458-00078-ip-10-60-113-184.ec2.internal.warc.gz"}
|