69088046
Probabilistic numerics
Scientific field at the intersection of statistics, machine learning and applied mathematics Probabilistic numerics is an active field of study at the intersection of applied mathematics, statistics, and machine learning centering on the concept of uncertainty in computation. In probabilistic numerics, tasks in numerical analysis such as finding numerical solutions for integration, linear algebra, optimization, simulation, and differential equations are seen as problems of statistical, probabilistic, or Bayesian inference. Introduction. A numerical method is an algorithm that "approximates" the solution to a mathematical problem (examples below include the solution to a linear system of equations, the value of an integral, the solution of a differential equation, the minimum of a multivariate function). In a "probabilistic" numerical algorithm, this process of approximation is thought of as a problem of "estimation", "inference" or "learning" and realised in the framework of probabilistic inference (often, but not always, Bayesian inference). Formally, this means casting the setup of the computational problem in terms of a prior distribution, formulating the relationship between numbers computed by the computer (e.g. matrix-vector multiplications in linear algebra, gradients in optimization, values of the integrand or the vector field defining a differential equation) and the quantity in question (the solution of the linear problem, the minimum, the integral, the solution curve) in a likelihood function, and returning a posterior distribution as the output. In most cases, numerical algorithms also take internal adaptive decisions about which numbers to compute, which form an active learning problem. Many of the most popular classic numerical algorithms can be re-interpreted in the probabilistic framework. This includes the method of conjugate gradients, Nordsieck methods, Gaussian quadrature rules, and quasi-Newton methods. In all these cases, the classic method is based on a regularized least-squares estimate that can be associated with the posterior mean arising from a Gaussian prior and likelihood. In such cases, the variance of the Gaussian posterior is then associated with a worst-case estimate for the squared error. Probabilistic numerical methods promise several conceptual advantages over classic, point-estimate based approximation techniques. These advantages are essentially the equivalent of similar functional advantages that Bayesian methods enjoy over point estimates in machine learning, applied or transferred to the computational domain. Numerical tasks. Integration. Probabilistic numerical methods have been developed for the problem of numerical integration, with the most popular method called "Bayesian quadrature". In numerical integration, function evaluations formula_0 at a number of points formula_1 are used to estimate the integral formula_2 of a function formula_3 against some measure formula_4. Bayesian quadrature consists of specifying a prior distribution over formula_5 and conditioning this prior on formula_0 to obtain a posterior distribution over formula_5, then computing the implied posterior distribution on formula_2. The most common choice of prior is a Gaussian process, as this allows us to obtain a closed-form posterior distribution on the integral, which is a univariate Gaussian distribution. Bayesian quadrature is particularly useful when the function formula_3 is expensive to evaluate and the dimension of the data is small to moderate. Optimization. 
Probabilistic numerical methods have also been developed for mathematical optimization, which consists of finding the minimum or maximum of some objective function formula_3 given (possibly noisy or indirect) evaluations of that function at a set of points. Perhaps the most notable effort in this direction is Bayesian optimization, a general approach to optimization grounded in Bayesian inference. Bayesian optimization algorithms operate by maintaining a probabilistic belief about formula_3 throughout the optimization procedure; this often takes the form of a Gaussian process prior conditioned on observations. This belief then guides the algorithm in obtaining observations that are likely to advance the optimization process. Bayesian optimization policies are usually realized by transforming the objective function posterior into an inexpensive, differentiable "acquisition function" that is maximized to select each successive observation location. One prominent approach is to model optimization via Bayesian sequential experimental design, seeking to obtain a sequence of observations yielding the most optimization progress as evaluated by an appropriate utility function. A welcome side effect of this approach is that uncertainty in the objective function, as measured by the underlying probabilistic belief, can guide an optimization policy in addressing the classic exploration vs. exploitation tradeoff. Local optimization. Probabilistic numerical methods have been developed in the context of stochastic optimization for deep learning, in particular to address issues such as learning rate tuning and line searches, batch-size selection, early stopping, pruning, and first- and second-order search directions. In this setting, the optimization objective is often an empirical risk of the form formula_6 defined by a dataset formula_7, and a loss formula_8 that quantifies how well a predictive model formula_9 parameterized by formula_10 performs on predicting the target formula_11 from its corresponding input formula_12. Epistemic uncertainty arises when the dataset size formula_13 is large and cannot be processed at once, meaning that local quantities (given some formula_14) such as the loss function formula_15 itself or its gradient formula_16 cannot be computed in reasonable time. Hence, mini-batching is generally used to construct estimators of these quantities on a random subset of the data. Probabilistic numerical methods model this uncertainty explicitly and allow for automated decisions and parameter tuning. Linear algebra. Probabilistic numerical methods for linear algebra have primarily focused on solving systems of linear equations of the form formula_17 and the computation of determinants formula_18. A large class of methods are iterative in nature and collect information about the linear system to be solved via repeated matrix-vector multiplication formula_19 with the system matrix formula_20 and different vectors formula_21. Such methods can be roughly split into a solution- and a matrix-based perspective, depending on whether belief is expressed over the solution formula_22 of the linear system or the (pseudo-)inverse of the matrix formula_23. The belief update uses the fact that the inferred object is linked to matrix multiplications formula_24 or formula_25 via formula_26 and formula_27. Methods typically assume a Gaussian distribution, due to its closedness under linear observations of the problem. 
While conceptually different, these two views are computationally equivalent and inherently connected via the right-hand side through formula_28. Probabilistic numerical linear algebra routines have been successfully applied to scale Gaussian processes to large datasets. In particular, they enable "exact" propagation of the approximation error to a combined Gaussian process posterior, which quantifies the uncertainty arising from both the "finite number of data" observed and the "finite amount of computation" expended. Ordinary differential equations. Probabilistic numerical methods for ordinary differential equations formula_29 have been developed for initial and boundary value problems. Many different probabilistic numerical methods designed for ordinary differential equations have been proposed, and these can broadly be grouped into two categories: those based on randomisation of an underlying classical method and those based on Gaussian process regression. The boundary between these two categories is not sharp; indeed, a Gaussian process regression approach based on randomised data was developed as well. These methods have been applied to problems in computational Riemannian geometry, inverse problems, latent force models, and to differential equations with a geometric structure such as symplecticity. Partial differential equations. A number of probabilistic numerical methods have also been proposed for partial differential equations. As with ordinary differential equations, the approaches can broadly be divided into those based on randomisation, generally of some underlying finite-element mesh, and those based on Gaussian process regression. Probabilistic numerical PDE solvers based on Gaussian process regression recover classical methods on linear PDEs for certain priors, in particular methods of mean weighted residuals, which include Galerkin methods, finite element methods, as well as spectral methods. History and related fields. The interplay between numerical analysis and probability is touched upon by a number of other areas of mathematics, including average-case analysis of numerical methods, information-based complexity, game theory, and statistical decision theory. Precursors to what is now being called "probabilistic numerics" can be found as early as the late 19th and early 20th century. The origins of probabilistic numerics can be traced to a discussion of probabilistic approaches to polynomial interpolation by Henri Poincaré in his "Calcul des Probabilités". In modern terminology, Poincaré considered a Gaussian prior distribution on a function formula_35, expressed as a formal power series with random coefficients, and asked for "probable values" of formula_36 given this prior and formula_37 observations formula_38 for formula_39. A later seminal contribution to the interplay of numerical analysis and probability was provided by Albert Suldin in the context of univariate quadrature. The statistical problem considered by Suldin was the approximation of the definite integral formula_40 of a function formula_41, under a Brownian motion prior on formula_34, given access to pointwise evaluation of formula_34 at nodes formula_42. Suldin showed that, for given quadrature nodes, the quadrature rule with minimal mean squared error is the trapezoidal rule; furthermore, this minimal error is proportional to the sum of cubes of the inter-node spacings. As a result, one can see the trapezoidal rule with equally-spaced nodes as statistically optimal in some sense — an early example of the average-case analysis of a numerical method. 
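Suldin's observation can be checked numerically. The sketch below assumes a slightly specialised version of his setting: a standard Brownian motion prior on [0, 1] with covariance k(s, t) = min(s, t), so that the value of the integrand at the left endpoint is fixed to zero, and an arbitrary choice of nodes that includes the right endpoint. Under these assumptions it verifies that the Bayesian quadrature weights coincide with the trapezoidal-rule weights on the induced partition of [0, 1], and that the posterior variance of the integral equals one twelfth of the sum of cubed spacings.

import numpy as np

# Brownian motion prior on [0, 1]: k(s, t) = min(s, t), so u(0) = 0 almost surely.
nodes = np.array([0.2, 0.45, 0.7, 1.0])    # evaluation nodes; the right endpoint is included
K = np.minimum.outer(nodes, nodes)         # Gram matrix k(t_i, t_j)
kernel_mean = nodes - nodes**2 / 2         # int_0^1 min(y, x) dy = x - x^2 / 2
double_integral = 1.0 / 3.0                # int_0^1 int_0^1 min(x, y) dx dy

weights = np.linalg.solve(K, kernel_mean)  # quadrature weights of the posterior mean
variance = double_integral - kernel_mean @ weights

# Trapezoidal rule on the partition 0 = t_0 < t_1 < ... < t_n = 1 with u(0) = 0:
# node t_i receives the weight (t_{i+1} - t_{i-1}) / 2, where t_{n+1} is taken to be t_n.
padded = np.concatenate(([0.0], nodes))
trapezoid = np.array([(padded[min(i + 1, len(nodes))] - padded[i - 1]) / 2
                      for i in range(1, len(padded))])

print(np.allclose(weights, trapezoid))                           # True
print(np.isclose(variance, np.sum(np.diff(padded)**3) / 12))     # True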
Suldin's point of view was later extended by Mike Larkin. Note that Suldin's Brownian motion prior on the integrand formula_34 is a Gaussian measure and that the operations of integration and of pointwise evaluation of formula_34 are both linear maps. Thus, the definite integral formula_40 is a real-valued Gaussian random variable. In particular, after conditioning on the observed pointwise values of formula_34, it follows a normal distribution with mean equal to the trapezoidal rule and variance equal to formula_43. This viewpoint is very close to that of Bayesian quadrature, seeing the output of a quadrature method not just as a point estimate but as a probability distribution in its own right. As noted by Houman Owhadi and collaborators, interplays between numerical approximation and statistical inference can also be traced back to Palasti and Renyi, Sard, Kimeldorf and Wahba (on the correspondence between Bayesian estimation and spline smoothing/interpolation) and Larkin (on the correspondence between Gaussian process regression and numerical approximation). Although the approach of modelling a perfectly known function as a sample from a random process may seem counterintuitive, a natural framework for understanding it can be found in information-based complexity (IBC), the branch of computational complexity founded on the observation that numerical implementation requires computation with partial information and limited resources. In IBC, the performance of an algorithm operating on incomplete information can be analyzed in the worst-case or the average-case (randomized) setting with respect to the missing information. Moreover, as Packel observed, the average case setting could be interpreted as a mixed strategy in an adversarial game obtained by lifting a (worst-case) minmax problem to a minmax problem over mixed (randomized) strategies. This observation leads to a natural connection between numerical approximation and Wald's decision theory, evidently influenced by von Neumann's theory of games. To describe this connection, consider the optimal recovery setting of Micchelli and Rivlin, in which one tries to approximate an unknown function from a finite number of linear measurements on that function. Interpreting this optimal recovery problem as a zero-sum game where Player I selects the unknown function and Player II selects its approximation, and using relative errors in a quadratic norm to define losses, Gaussian priors emerge as optimal mixed strategies for such games, and the covariance operator of the optimal Gaussian prior is determined by the quadratic norm used to define the relative error of the recovery. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "f(x_1), \\ldots, f(x_n)" }, { "math_id": 1, "text": "x_1, \\ldots, x_n" }, { "math_id": 2, "text": " \\textstyle \\int f(x) \\nu(dx) " }, { "math_id": 3, "text": " f " }, { "math_id": 4, "text": " \\nu " }, { "math_id": 5, "text": "f" }, { "math_id": 6, "text": "\\textstyle L(\\theta) = \\frac{1}{N}\\sum_{n=1}^N \\ell(y_n, f_{\\theta}(x_n))" }, { "math_id": 7, "text": "\\textstyle \\mathcal{D}=\\{(x_n, y_n)\\}_{n=1}^N" }, { "math_id": 8, "text": "\\ell(y, f_{\\theta}(x)) " }, { "math_id": 9, "text": " f_{\\theta}(x)" }, { "math_id": 10, "text": " \\theta" }, { "math_id": 11, "text": " y " }, { "math_id": 12, "text": " x " }, { "math_id": 13, "text": " N " }, { "math_id": 14, "text": " \\theta " }, { "math_id": 15, "text": " L(\\theta) " }, { "math_id": 16, "text": "\\nabla L(\\theta)" }, { "math_id": 17, "text": "Ax=b" }, { "math_id": 18, "text": "|A|" }, { "math_id": 19, "text": "v \\mapsto Av" }, { "math_id": 20, "text": "A" }, { "math_id": 21, "text": "v" }, { "math_id": 22, "text": "x" }, { "math_id": 23, "text": "H=A^{\\dagger}" }, { "math_id": 24, "text": " y= Av " }, { "math_id": 25, "text": " z=A^\\intercal v " }, { "math_id": 26, "text": " b^\\intercal z = x^\\intercal v " }, { "math_id": 27, "text": " v = A^{-1} y " }, { "math_id": 28, "text": "x = A^{-1}b" }, { "math_id": 29, "text": " \\dot{y}(t) = f(t, y(t)) " }, { "math_id": 30, "text": " \\mathrm{d}x(t) = A x(t) \\, \\mathrm{d} t + B \\, \\mathrm{d}v(t) " }, { "math_id": 31, "text": " x(t) " }, { "math_id": 32, "text": " y(t) " }, { "math_id": 33, "text": " v(t) " }, { "math_id": 34, "text": "u" }, { "math_id": 35, "text": "f \\colon \\mathbb{R} \\to \\mathbb{R}" }, { "math_id": 36, "text": "f(x)" }, { "math_id": 37, "text": "n \\in \\mathbb{N}" }, { "math_id": 38, "text": "f(a_{i}) = B_{i}" }, { "math_id": 39, "text": "i = 1, \\dots, n" }, { "math_id": 40, "text": "\\textstyle \\int u(t) \\, \\mathrm{d} t" }, { "math_id": 41, "text": "u \\colon [a, b] \\to \\mathbb{R}" }, { "math_id": 42, "text": "t_{1}, \\dots, t_{n} \\in [a, b]" }, { "math_id": 43, "text": "\\textstyle \\frac{1}{12} \\sum_{i = 2}^{n} (t_{i} - t_{i - 1})^{3}" } ]
https://en.wikipedia.org/wiki?curid=69088046
69112876
Bayesian quadrature
Method in statistics Bayesian quadrature is a method for approximating intractable integration problems. It falls within the class of probabilistic numerical methods. Bayesian quadrature views numerical integration as a Bayesian inference task, where function evaluations are used to estimate the integral of that function. For this reason, it is sometimes also referred to as "Bayesian probabilistic numerical integration" or "Bayesian numerical integration". The name "Bayesian cubature" is also sometimes used when the integrand is multi-dimensional. A potential advantage of this approach is that it provides probabilistic uncertainty quantification for the value of the integral. Bayesian quadrature. Numerical integration. Let formula_0 be a function defined on a domain formula_1 (where typically formula_2). In numerical integration, function evaluations formula_3 at distinct locations formula_4 in formula_1 are used to estimate the integral of formula_5 against a measure formula_6: i.e. formula_7 Given weights formula_8, a quadrature rule is an estimator of formula_9 of the form formula_10 Bayesian quadrature consists of specifying a prior distribution over formula_11, conditioning this prior on formula_3 to obtain a posterior distribution over formula_11, then computing the implied posterior distribution on formula_12. The name "quadrature" comes from the fact that the posterior mean on formula_12 sometimes takes the form of a quadrature rule whose weights are determined by the choice of prior. Bayesian quadrature with Gaussian processes. The most common choice of prior distribution for formula_5 is a Gaussian process, as this permits conjugate inference to obtain a closed-form posterior distribution on formula_12. Suppose we have a Gaussian process with prior mean function formula_13 and covariance function (or kernel function) formula_14. Then, the posterior distribution on formula_5 is a Gaussian process with mean formula_15 and kernel formula_16 given by: formula_17 where formula_18, formula_19, formula_20 and formula_21. Furthermore, the posterior distribution on formula_12 is a univariate Gaussian distribution with mean formula_22 and variance formula_23 given by formula_24 The function formula_25 is the kernel mean embedding of formula_26 and formula_27 denotes the integral of formula_26 with respect to both inputs. In particular, note that the posterior mean is a quadrature rule with weights formula_28 and the posterior variance provides a quantification of the user's uncertainty over the value of formula_12. In more challenging integration problems, where the prior distribution cannot be relied upon as a meaningful representation of epistemic uncertainty, it is necessary to use the data formula_3 to set the kernel hyperparameters using, for example, maximum likelihood estimation. The estimation of kernel hyperparameters introduces adaptivity into Bayesian quadrature. Example. Consider estimation of the integral formula_29 using a Bayesian quadrature rule based on a zero-mean Gaussian process prior with the Matérn covariance function of smoothness formula_30 and correlation length formula_31. This covariance function is formula_32 It is straightforward (though tedious) to compute that formula_33 formula_34 As formula_11 is evaluated at more and more points, the Bayesian quadrature point estimate formula_35 converges to the true integral formula_9 and the posterior mass, as quantified by formula_36, concentrates around it. 
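These closed forms make the example easy to reproduce numerically. The sketch below transcribes the kernel formula_32, the kernel mean formula_33 and the initial error formula_34 as quoted above, and evaluates the integrand at equispaced points on [0, 1] (an arbitrary choice of design); the point estimate approaches the true value of roughly 1.79 and the posterior variance shrinks as more evaluations are used.

import numpy as np

rho = 1 / 5                                     # correlation length of the Matern-3/2 kernel
f = lambda x: (1 + x**2) * np.sin(5 * np.pi * x) + 8 / 5

def k(x, y):
    d = np.sqrt(3) * np.abs(np.subtract.outer(x, y)) / rho
    return (1 + d) * np.exp(-d)

def kernel_mean(x):                             # nu[k(., x)] on [0, 1], as quoted above
    s3 = np.sqrt(3)
    return (4 * rho / s3
            - np.exp(s3 * (x - 1) / rho) * (3 + 2 * s3 * rho - 3 * x) / 3
            - np.exp(-s3 * x / rho) * (3 * x + 2 * s3 * rho) / 3)

def initial_error():                            # nu nu[k] over [0, 1] x [0, 1], as quoted above
    s3 = np.sqrt(3)
    return 2 * rho / 3 * (2 * s3 - 3 * rho + np.exp(-s3 / rho) * (s3 + 3 * rho))

for n in [5, 10, 20, 40]:
    X = np.linspace(0, 1, n)                    # equispaced design (one possible choice)
    weights = np.linalg.solve(k(X, X), kernel_mean(X))
    mean = weights @ f(X)
    var = initial_error() - weights @ kernel_mean(X)
    print(n, mean, var)                         # mean tends towards ~1.79, var shrinks towards 0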
Advantages and disadvantages. Since Bayesian quadrature is an example of probabilistic numerics, it inherits certain advantages compared with traditional numerical integration methods, but it also possesses certain limitations. Algorithmic design. Prior distributions. The most commonly used prior for formula_11 is a Gaussian process prior. This is mainly due to the advantage provided by Gaussian conjugacy and the fact that Gaussian processes can encode a wide range of prior knowledge including smoothness, periodicity and sparsity through a careful choice of prior covariance. However, a number of other prior distributions have also been proposed. This includes multi-output Gaussian processes, which are particularly useful when tackling multiple related numerical integration tasks simultaneously or sequentially, and tree-based priors such as Bayesian additive regression trees, which are well suited for discontinuous formula_5. Additionally, Dirichlet process priors have also been proposed for the integration measure formula_6. Point selection. The points formula_41 are either considered to be given, or can be selected so as to ensure the posterior on formula_9 concentrates at a faster rate. One approach consists of using point sets from other quadrature rules. For example, taking independent and identically distributed realisations from formula_42 recovers a Bayesian approach to Monte Carlo, whereas using certain deterministic point sets such as low-discrepancy sequences or lattices recovers a Bayesian alternative to quasi-Monte Carlo. It is of course also possible to use point sets specifically designed for Bayesian quadrature; see for example work which exploited symmetries in point sets to obtain scalable Bayesian quadrature estimators. Alternatively, points can also be selected adaptively following principles from active learning and Bayesian experimental design so as to directly minimise posterior uncertainty, including for multi-output Gaussian processes. Kernel mean and initial error. One of the challenges when implementing Bayesian quadrature is the need to evaluate the function formula_43 and the constant formula_44. The former is commonly called the kernel mean, and is a quantity which is key to the computation of kernel-based distances such as the maximum mean discrepancy. The latter is commonly called the initial error since it provides an upper bound on the integration error before any function values are observed. Unfortunately, the kernel mean and initial error can only be computed for a small number of formula_45 pairs; see for example Table 1 in the references. Theory. There have been a number of theoretical guarantees derived for Bayesian quadrature. These usually require Sobolev smoothness properties of the integrand, although recent work also extends to integrands in the reproducing kernel Hilbert space of the Gaussian kernel. Most of the results apply to the case of Monte Carlo or deterministic grid point sets, but some results also extend to adaptive designs. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "f:\\mathcal{X} \\rightarrow \\mathbb{R}" }, { "math_id": 1, "text": "\\mathcal{X}" }, { "math_id": 2, "text": "\\mathcal{X}\\subseteq \\mathbb{R}^d" }, { "math_id": 3, "text": "f(x_1), \\ldots, f(x_n)" }, { "math_id": 4, "text": "x_1, \\ldots, x_n" }, { "math_id": 5, "text": " f " }, { "math_id": 6, "text": " \\nu " }, { "math_id": 7, "text": " \\textstyle \\nu[f] := \\int_{\\mathcal{X}} f(x) \\nu(\\mathrm{d}x). " }, { "math_id": 8, "text": "w_1, \\ldots, w_n \\in \\mathbb{R}" }, { "math_id": 9, "text": "\\nu[f]" }, { "math_id": 10, "text": " \\textstyle \\hat{\\nu}[f] := \\sum_{i=1}^n w_i f(x_i). " }, { "math_id": 11, "text": "f" }, { "math_id": 12, "text": " \\nu[f] " }, { "math_id": 13, "text": " m: \\mathcal{X} \\rightarrow \\mathbb{R} " }, { "math_id": 14, "text": " k: \\mathcal{X} \\times \\mathcal{X} \\rightarrow \\mathbb{R} " }, { "math_id": 15, "text": " m_n:\\mathcal{X} \\rightarrow \\mathbb{R} " }, { "math_id": 16, "text": " k_n:\\mathcal{X} \\times \\mathcal{X} \\rightarrow \\mathbb{R} " }, { "math_id": 17, "text": " m_n(x) = m(x) + k(x,X)k(X,X)^{-1} f(X) \\qquad \\text{and} \\qquad k_n(x,y) = k(x,y)-k(x,X)k(X,X)^{-1}k(X,y). " }, { "math_id": 18, "text": " (k(X,X))_{ij} = k(x_i,x_j)" }, { "math_id": 19, "text": " (f(X))_{i} = f(x_i)" }, { "math_id": 20, "text": " (k(\\cdot,X))_i = k(\\cdot,x_i)" }, { "math_id": 21, "text": " (k(X,\\cdot))_i = k(x_i,\\cdot)" }, { "math_id": 22, "text": " \\mathbb{E}[\\nu[f]] " }, { "math_id": 23, "text": " \\mathbb{V}[\\nu[f]] " }, { "math_id": 24, "text": " \\mathbb{E}[\\nu[f]] = \\nu[m]+ \\nu[k(\\cdot,X)]k(X,X)^{-1} f(X) \\qquad \\text{and} \\qquad \\mathbb{V}[\\nu[f]] = \\nu\\nu[k]-\\nu[k(\\cdot,X)]k(X,X)^{-1}\\nu[k(X,\\cdot)]. " }, { "math_id": 25, "text": " \\textstyle \\nu[k(\\cdot, x)] = \\int_\\mathcal{X} k(y, x) \\nu(\\mathrm{d} y)" }, { "math_id": 26, "text": "k" }, { "math_id": 27, "text": " \\textstyle \\nu\\nu[k] = \\int_\\mathcal{X} k(x, y) \\nu(dx) \\nu(\\mathrm{d}y)" }, { "math_id": 28, "text": " \\textstyle w_i = (\\nu[k(\\cdot,X)]k(X,X)^{-1})_i. " }, { "math_id": 29, "text": " \\nu[f] = \\int_0^1 f(x) \\, \\mathrm{d}x \\approx 1.79 \\quad \\text{ of the function } \\quad f(x) = (1 + x^2) \\sin(5 \\pi x) + \\frac{8}{5}" }, { "math_id": 30, "text": "3/2" }, { "math_id": 31, "text": "\\rho = 1/5" }, { "math_id": 32, "text": " \\textstyle k(x, y) = (1 + \\sqrt{3} \\, |x - y| / \\rho ) \\exp( \\! - \\sqrt{3} \\, |x - y|/\\rho ). " }, { "math_id": 33, "text": " \\nu[k(\\cdot, x)] = \\int_0^1 k(y, x) \\,\\mathrm{d}y = \\frac{4\\rho}{\\sqrt{3}} - \\frac{1}{3} \\exp\\bigg(\\frac{\\sqrt{3}(x-1)}{\\rho}\\bigg) \\big(3+2\\sqrt{3}\\,\\rho-3x\\big)-\\frac{1}{3} \\exp\\bigg(-\\frac{\\sqrt{3} \\, x}{\\rho}\\bigg)\\big(3x+2\\sqrt{3}\\,\\rho\\big) " }, { "math_id": 34, "text": " \\nu\\nu[k] = \\int_0^1 \\int_0^1 k(x, y) \\,\\mathrm{d} x \\,\\mathrm{d} y = \\frac{2\\rho}{3} \\Bigg[ 2\\sqrt{3} - 3\\rho + \\exp\\bigg(\\!-\\frac{\\sqrt{3}}{\\rho}\\bigg) \\big( \\sqrt{3} + 3\\rho \\big) \\Bigg]." }, { "math_id": 35, "text": "\\mathbb{E}[\\nu[f]]" }, { "math_id": 36, "text": "\\mathbb{V}[\\nu[f]]" }, { "math_id": 37, "text": "\\nu[k(\\cdot, x)]" }, { "math_id": 38, "text": "\\nu" }, { "math_id": 39, "text": "\\mathcal{O}(n^3)" }, { "math_id": 40, "text": "n \\times n" }, { "math_id": 41, "text": "x_1, \\ldots, x_n " }, { "math_id": 42, "text": "\\nu " }, { "math_id": 43, "text": " \\nu[k(\\cdot,x)] " }, { "math_id": 44, "text": " \\nu\\nu[k] " }, { "math_id": 45, "text": " (k, \\nu) " } ]
https://en.wikipedia.org/wiki?curid=69112876
6911288
List of common astronomy symbols
This is a compilation of symbols commonly used in astronomy, particularly professional astronomy. Astrometry parameters. Astrometry parameters Cosmological parameters. Cosmological parameters Distance description. Distance description for orbital and non-orbital parameters: Galaxy comparison. Galaxy type and spectral comparison: Luminosity comparison. Luminosity comparison: Luminosity of certain object: Mass comparison. Mass comparison: Mass of certain object: Metallicity comparison. Metallicity comparison: Orbital parameters. Orbital Parameters of a Cosmic Object: Radius comparison. Radius comparison: Spectral comparison. Spectral comparison: Temperature description. Temperature description: References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "[\\ce{Fe/H}] = \\log[\\ce{Fe/H}]_* - \\log[\\ce{Fe/H}]_\\odot" } ]
https://en.wikipedia.org/wiki?curid=6911288
69116423
Reducing subspace
Concept in linear algebra In linear algebra, a reducing subspace formula_0 of a linear map formula_1 from a Hilbert space formula_2 to itself is an invariant subspace of formula_3 whose orthogonal complement formula_4 is also an invariant subspace of formula_5 That is, formula_6 and formula_7 One says that the subspace formula_0 reduces the map formula_5 One says that a linear map is reducible if it has a nontrivial reducing subspace. Otherwise one says it is irreducible. If formula_2 is of finite dimension formula_8 and formula_0 is a reducing subspace of the map formula_1 represented under the basis formula_9 by the matrix formula_10, then formula_11 can be expressed as the sum formula_12 where formula_13 is the matrix of the orthogonal projection from formula_2 to formula_0 and formula_14 is the matrix of the projection onto formula_15 (Here formula_16 is the identity matrix.) Furthermore, formula_2 has an orthonormal basis formula_17 with a subset that is an orthonormal basis of formula_0. If formula_18 is the transition matrix from formula_9 to formula_17, then with respect to formula_17 the matrix formula_19 representing formula_3 is a block-diagonal matrix formula_20 with formula_21 where formula_22, and formula_23 References. <templatestyles src="Reflist/styles.css" />
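As a small numerical illustration of the decomposition formula_12 and the block-diagonal form formula_20, the sketch below uses an arbitrary symmetric matrix; for a symmetric (self-adjoint) map, the orthogonal complement of any invariant subspace is again invariant, so the span of any set of eigenvectors is a reducing subspace.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = (A + A.T) / 2                      # an arbitrary symmetric map T on R^5

# Take W to be the span of two eigenvectors of M; its orthogonal complement is then also invariant.
eigvals, eigvecs = np.linalg.eigh(M)
W = eigvecs[:, :2]                     # orthonormal basis of W (d = 2)
W_perp = eigvecs[:, 2:]                # orthonormal basis of the orthogonal complement

# Orthogonal projections and the decomposition M = P_W M P_W + P_perp M P_perp.
P_W = W @ W.T
P_perp = np.eye(5) - P_W
print(np.allclose(M, P_W @ M @ P_W + P_perp @ M @ P_perp))   # True

# In the adapted orthonormal basis B' the matrix of T becomes block diagonal
# (here even diagonal, since the basis consists of eigenvectors).
Q = np.hstack([W, W_perp])             # transition matrix to the adapted basis
block = Q.T @ M @ Q                    # Q is orthogonal, so Q^{-1} = Q^T
print(np.allclose(block[:2, 2:], 0), np.allclose(block[2:, :2], 0))   # True True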
[ { "math_id": 0, "text": "W" }, { "math_id": 1, "text": "T:V\\to V" }, { "math_id": 2, "text": "V" }, { "math_id": 3, "text": "T" }, { "math_id": 4, "text": "W^\\perp" }, { "math_id": 5, "text": "T." }, { "math_id": 6, "text": "T(W) \\subseteq W" }, { "math_id": 7, "text": "T(W^\\perp) \\subseteq W^\\perp." }, { "math_id": 8, "text": "r" }, { "math_id": 9, "text": "B" }, { "math_id": 10, "text": "M \\in\\R^{r\\times r}" }, { "math_id": 11, "text": "M" }, { "math_id": 12, "text": " M = P_W M P_W + P_{W^\\perp} M P_{W^\\perp}" }, { "math_id": 13, "text": "P_W \\in\\R^{r\\times r}" }, { "math_id": 14, "text": "P_{W^\\perp} = I - P_{W}" }, { "math_id": 15, "text": "W^\\perp." }, { "math_id": 16, "text": "I \\in \\R^{r\\times r}" }, { "math_id": 17, "text": "B'" }, { "math_id": 18, "text": "Q \\in \\R^{r\\times r}" }, { "math_id": 19, "text": "Q^{-1}MQ" }, { "math_id": 20, "text": "Q^{-1}MQ = \\left[ \\begin{array}{cc} A & 0 \\\\ 0 & B \\end{array} \\right] " }, { "math_id": 21, "text": " A\\in\\R^{d\\times d}," }, { "math_id": 22, "text": " d= \\dim W" }, { "math_id": 23, "text": " B\\in\\R^{(r-d)\\times(r-d)}." } ]
https://en.wikipedia.org/wiki?curid=69116423
69127462
1 Samuel 6
First Book of Samuel chapter 1 Samuel 6 is the sixth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter describes how the Ark of the Covenant was returned to Israel by the Philistines, a part of the "Ark Narrative" (1 Samuel 4:1–7:1) within a section concerning the life of Samuel (1 Samuel 1:1–7:17). Text. This chapter was originally written in the Hebrew language. It is divided into 21 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 1–13, 16–18, 20–21. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Period. The event in this chapter happened at the end of the judges period in Israel, about 1100 BC. The Ark returned to Israel (6:1–19). The Philistines realized that the Ark of the Covenant had to be returned to Israel to stop the plagues (verse 2, cf. 1 Samuel 5:11), so they consulted their priests and diviners to avoid further humiliation (verses 1–9). Two issues were raised in verse 3. The answer to the first concern is to send gifts (cf. Exodus 3:21) on the basis of value ('gold'), corresponding to the victims ('five' for the five lords of the Philistines) and representing the plagues ('tumors' and 'mice'). The gifts are called a 'guilt offering' ("ʼašām"), serving a double function: as a sacrifice to ensure that YHWH would 'lighten his hand' and as a compensatory tribute to YHWH. They learned from the Exodus tradition 'not to be obstinate and prevent the return of the ark' (verse 6). The answer to the second concern was sought by the use of divination (verses 7–9), utilizing untrained cows, separated from their young calves (therefore inclined to return home), and released unguided, so when the cows went straight to the territory of Israel (in the direction of the border city Beth-shemesh; verses 10–18), the Philistines were convinced that the plagues came from YHWH and their gifts were acceptable (verses 16–18). The Israelites celebrated the return of the ark, and utilized the cows as an appropriate sacrifice for the removal of ritual 'contamination', as the animals and the cart were new, unused, and therefore ritually clean (cf. Numbers 19:2). The sacrifice was performed on a 'large stone of Abel' in the field of an unknown Joshua (verse 18), which afterward also became the resting place for the ark (verse 15). "And the ark of the Lord was in the country of the Philistines seven months." "Therefore you will make images of your tumors and images of your mice that ravage the land. And you will give glory to the God of Israel. Perhaps He will lighten His hand from off you, even from off your gods and from off your land." The Ark at Kirjath Jearim (6:20–21). 
Similar to what happened to the Philistines, the ark caused plagues for the Israelites when they did not show due respect to it, so the ark was moved from Beth-shemesh to Kiriath Jearim ("city of the forests"), probably due to its previous connection with Baal-worship (cf. 'city of Baal', Joshua 18:14, and 'Baalah', Joshua 15:9, 10). The custodian of the city was Eleazar, son of Abinadab; both had names that often appear in levitical lists. "And they sent messengers to the inhabitants of Kiriath Jearim, saying, "The Philistines have brought back the ark of the LORD. Come down, and take it up to you."" Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" /> Sources. Commentaries on Samuel. <templatestyles src="Refbegin/styles.css" /> General. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=69127462
69127586
Chord diagram (mathematics)
Cyclic order and one-to-one pairing of a set of objects In mathematics, a chord diagram consists of a cyclic order on a set of objects, together with a one-to-one pairing (perfect matching) of those objects. Chord diagrams are conventionally visualized by arranging the objects in their order around a circle, and drawing the pairs of the matching as chords of the circle. The number of different chord diagrams that may be given for a set of formula_0 cyclically ordered objects is the double factorial formula_1. There is a Catalan number of chord diagrams on a given ordered set in which no two chords cross each other. The crossing pattern of chords in a chord diagram may be described by a circle graph, the intersection graph of the chords: it has a vertex for each chord and an edge for each two chords that cross. In knot theory, a chord diagram can be used to describe the sequence of crossings along the planar projection of a knot, with each point at which a crossing occurs paired with the point that crosses it. To fully describe the knot, the diagram should be annotated with an extra bit of information for each pair, indicating which point crosses over and which crosses under at that crossing. With this extra information, the chord diagram of a knot is called a Gauss diagram. In the Gauss diagram of a knot, every chord crosses an even number of other chords, or equivalently each pair in the diagram connects a point in an even position of the cyclic order with a point in an odd position, and sometimes this is used as a defining condition of Gauss diagrams. In algebraic geometry, chord diagrams can be used to represent the singularities of algebraic plane curves. References. <templatestyles src="Reflist/styles.css" />
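Both counts, the double factorial formula_1 for all chord diagrams and the Catalan numbers for the non-crossing ones, can be verified by brute force for small n. The sketch below labels the points 0 through 2n − 1 in cyclic order, enumerates all perfect matchings, and counts those with no crossing chords:

from itertools import combinations

def chord_diagrams(n):
    """Enumerate all perfect matchings (chord diagrams) of 2n cyclically ordered points."""
    def rec(remaining):
        if not remaining:
            yield []
            return
        a = remaining[0]
        for b in remaining[1:]:
            rest = [p for p in remaining if p not in (a, b)]
            for matching in rec(rest):
                yield [(a, b)] + matching
    return list(rec(list(range(2 * n))))

def crossing(c1, c2):
    """Two chords cross iff exactly one endpoint of one chord lies strictly between the endpoints of the other."""
    (a, b), (c, d) = sorted(c1), sorted(c2)
    return (a < c < b) != (a < d < b)

for n in range(1, 6):
    diagrams = chord_diagrams(n)
    noncrossing = [m for m in diagrams
                   if not any(crossing(x, y) for x, y in combinations(m, 2))]
    print(n, len(diagrams), len(noncrossing))
    # prints (2n-1)!! = 1, 3, 15, 105, 945 and the Catalan numbers 1, 2, 5, 14, 42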
[ { "math_id": 0, "text": "2n" }, { "math_id": 1, "text": "(2n-1)!!" } ]
https://en.wikipedia.org/wiki?curid=69127586
69127618
Method of equal shares
Method of counting ballots following elections The method of equal shares is a proportional method of counting ballots that applies to participatory budgeting, to committee elections, and to simultaneous public decisions. It can be used when the voters vote via approval ballots, ranked ballots or cardinal ballots. It works by dividing the available budget into equal parts that are assigned to each voter. The method is only allowed to use the budget share of a voter to implement projects that the voter voted for. It then repeatedly finds projects that can be afforded using the budget shares of the supporting voters. In contexts other than participatory budgeting, the method works by equally dividing an abstract budget of "voting power". In 2023, the method of equal shares was being used in a participatory budgeting program in the Polish city of Wieliczka. The program, known as Green Million ("Zielony Milion"), was set to distribute 1 million złoty to ecological projects proposed by residents of the city. It was also used in a participatory budgeting program in the Swiss city of Aarau in 2023 ("Stadtidee"). Use in academic literature. The method of equal shares was first discussed in the context of committee elections in 2019, initially under the name "Rule X". From 2022, the literature has referred to the rule as the "method of equal shares", particularly when referring to it in the context of participatory budgeting algorithms. The method can be described as a member of a class of voting methods called expanding approvals rules introduced earlier in 2019 by Aziz and Lee for ordinal preferences (that include approval ballots). Motivation. The method is an alternative to the knapsack algorithm, which is used by most cities even though it is a disproportional method. For example, if 51 percent of the population support 10 red projects and 49 percent support 10 blue projects, and the money suffices only for 10 projects, knapsack budgeting will choose the 10 red projects supported by the 51 percent, and ignore the 49 percent altogether. In contrast, the method of equal shares would pick 5 blue and 5 red projects. The method guarantees proportional representation: it satisfies a strong variant of the justified representation axiom adapted to participatory budgeting. This says that a group of X percent of the population will have X percent of the budget spent on projects supported by the group (assuming that all members of the group have voted the same or at least similarly). Intuitive explanation. In the context of participatory budgeting the method assumes that the municipal budget is initially evenly distributed among the voters. Each time a project is selected, its cost is divided among those voters who supported the project and who still have money. The savings of these voters are decreased accordingly. If the voters vote via approval ballots, then the cost of a selected project is distributed equally among the supporting voters; if they vote via cardinal ballots, then the cost is distributed proportionally to the utilities the voters enjoy from the project. The rule selects the projects which can be paid this way, starting with those that minimise the voters' marginal costs per utility. Example 1. The following example with 100 voters and 9 projects illustrates how the rule works. In this example the total budget equals $1000; that is, it allows selecting five of the nine available projects. See the animated diagram below, which illustrates the behaviour of the rule. 
The budget is first divided equally among the voters, thus each voter gets $10. Project formula_0 received the most votes, and it is selected in the first round. If we divided the cost of formula_0 equally among the voters who supported formula_0, each of them would pay formula_1. In contrast, if we selected, e.g., formula_2, then the cost per voter would be formula_3. The method first selects the project that minimises the price per voter. Note that in the last step project formula_4 was selected even though there were projects which were supported by more voters, say formula_2. This is because the money that the supporters of formula_2 had the right to control was used previously to justify the selection of formula_0, formula_5, and formula_6. On the other hand, the voters who voted for formula_4 form 20 percent of the population, and so should have the right to decide about 20 percent of the budget. Those voters supported only formula_4, and this is why this project was selected. For a more detailed example including cardinal ballots, see Example 2. Definition. This section presents the definition of the rule for cardinal ballots. See discussion for a discussion on how to apply this definition to approval ballots and ranked ballots. We have a set of projects formula_7, and a set of voters formula_8. For each project formula_9 let formula_10 denote its cost, and let formula_11 denote the size of the available municipal budget. For each voter formula_12 and each project formula_9 let formula_13 denote voter formula_14's cardinal ballot on formula_15, that is the number that quantifies the level of appreciation of voter formula_14 towards project formula_16. The method of equal shares works in rounds. At the beginning it puts an equal part of the budget, formula_17, in each voter's virtual bank account. In each round the method selects one project according to the following procedure. Example 2. The following diagram illustrates the behaviour of the method. Discussion. This section provides a discussion on other variants of the method of equal shares. Other types of ballots. The method of equal shares can be used with other types of voter ballots. Approval ballots. The method can be applied in two ways to the setting where the voters vote by marking the projects they like (see Example 1). Ranked ballots. The method applies to the model where the voters vote by ranking the projects from the most to the least preferred one. Assuming lexicographic preferences, one can use the convention that formula_23 depends on the position of project formula_19 in voter formula_20's ranking, and that formula_24 whenever formula_20 ranks formula_19 as more preferred than formula_25. Formally, the method is defined as follows. For each voter formula_12 let formula_26 denote the ranking of voter formula_14 over the projects. For example, formula_27 means that formula_28 is the most preferred project from the perspective of voter formula_14, formula_29 is the voter's second most preferred project and formula_30 is the least preferred project. In this example we say that project formula_28 is ranked in the first position and write formula_31, project formula_29 is ranked in the second position (formula_32), and formula_30 in the third position (formula_33). Each voter is initially assigned an equal part of the budget formula_17. The rule proceeds in rounds, selecting one project in each round. Committee elections. In the context of committee elections the projects are typically called candidates. 
It is assumed that the cost of each candidate equals one; then, the budget formula_11 can be interpreted as the number of candidates in the committee that should be selected. Unspent budget. The method of equal shares can return a set of projects that does not exhaust the whole budget. There are multiple ways to use the unspent budget. Comparison to other voting methods. In the context of committee elections the method is often compared to Proportional Approval Voting (PAV), since both methods are proportional (they satisfy the axiom of Extended Justified Representation (EJR)). MES is similar to Phragmen's sequential rule. The difference is that in MES the voters are given their budgets upfront, while in Phragmen's sequential rule the voters earn money continuously over time. MES with adjusting initial budget, PAV and Phragmen's voting rules can all be viewed as extensions of the D'Hondt method to the setting where the voters can vote for individual candidates rather than for political parties. MES further extends to participatory budgeting. Implementation. Below is a Python implementation of the method that applies to participatory budgeting. For the model of committee elections, the rule is implemented as part of the Python package "abcvoting".

import math

def method_of_equal_shares(N, C, cost, u, b):
    """Method of Equal Shares
    Args:
        N: a list of voters.
        C: a set of projects (candidates).
        cost: a dictionary that assigns each project its cost.
        b: the total available budget.
        u: a dictionary; u[c][i] is the value that voter i assigns to candidate c.
           an empty entry means that the corresponding value u[c][i] equals 0.
    """
    W = set()
    # every voter starts with an equal share of the budget; for every project record
    # its supporters (voters with positive utility) and its total utility
    budget = {i: b / len(N) for i in N}
    supporters = {c: {i for i in N if u[c].get(i, 0) > 0} for c in C}
    total_utility = {c: sum(u[c].values()) for c in C}
    while True:
        next_candidate = None
        lowest_rho = float("inf")
        for c in C.difference(W):
            if _leq(cost[c], sum([budget[i] for i in supporters[c]])):
                supporters_sorted = sorted(supporters[c], key=lambda i: budget[i] / u[c][i])
                price = cost[c]
                util = total_utility[c]
                for i in supporters_sorted:
                    if _leq(price * u[c][i], budget[i] * util):
                        break
                    # voter i cannot afford the proportional payment: they pay their whole budget
                    price -= budget[i]
                    util -= u[c][i]
                rho = price / util \
                    if not math.isclose(util, 0) and not math.isclose(price, 0) \
                    else budget[supporters_sorted[-1]] / u[c][supporters_sorted[-1]]
                if rho < lowest_rho:
                    next_candidate = c
                    lowest_rho = rho
        if next_candidate is None:
            break
        W.add(next_candidate)
        for i in N:
            budget[i] -= min(budget[i], lowest_rho * u[next_candidate].get(i, 0))
    return _complete_utilitarian(N, C, cost, u, b, W)  # one of the possible completions

def _complete_utilitarian(N, C, cost, u, b, W):
    committee_cost = sum([cost[c] for c in W])
    while True:
        next_candidate = None
        highest_util = float("-inf")
        for c in C.difference(W):
            if _leq(committee_cost + cost[c], b):
                util_c = sum(u[c].values())  # total utility of project c
                if util_c / cost[c] > highest_util:
                    next_candidate = c
                    highest_util = util_c / cost[c]
        if next_candidate is None:
            break
        W.add(next_candidate)
        committee_cost += cost[next_candidate]
    return W

def _leq(a, b):
    return a < b or math.isclose(a, b)

Extensions. Fairstein, Meir and Gal extend MES to a setting in which some projects may be substitute goods. Empirical support. Fairstein, Benade and Gal compare MES to greedy aggregation methods. They find that greedy aggregation leads to outcomes that are highly sensitive to the input format used, and the fraction of the population that participates. In contrast, MES leads to outcomes that are not sensitive to the type of voting format used. 
This means that MES can be used with approval ballots, ordinal ballots or cardinal ballots, without much difference in the outcome. These outcomes are stable even when only 25 to 50 percent of the population participates in the election. Fairstein, Meir, Vilenchik and Gal study variants of MES both on real and synthetic datasets. They find that these variants do very well in practice, both with respect to social welfare and with respect to justified representation. References. <templatestyles src="Reflist/styles.css" />
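As a small, purely illustrative use of the Python implementation given in the Implementation section above (the voters, projects, costs and utilities below are invented for the example and assume the functions are defined in the same module):

# Three projects compete for a budget of 300; each costs 100.
# Voters 1-3 approve "roads", voters 1-4 approve "park", voter 5 approves "library".
N = [1, 2, 3, 4, 5]
C = {"roads", "park", "library"}
cost = {"roads": 100, "park": 100, "library": 100}
u = {
    "roads":   {1: 1, 2: 1, 3: 1},
    "park":    {1: 1, 2: 1, 3: 1, 4: 1},
    "library": {5: 1},
}
print(method_of_equal_shares(N, C, cost, u, b=300))
# "park" and "roads" are bought from the supporters' shares; "library" is added by the
# utilitarian completion, so all three projects fit within the budget in this toy instance.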
[ { "math_id": 0, "text": "\\mathrm{D}" }, { "math_id": 1, "text": "\\$200/66 \\approx \\$3.03" }, { "math_id": 2, "text": "\\mathrm{E}" }, { "math_id": 3, "text": "\\$200/46 \\approx \\$4.34" }, { "math_id": 4, "text": "\\mathrm{H}" }, { "math_id": 5, "text": "\\mathrm{A}" }, { "math_id": 6, "text": "\\mathrm{C}" }, { "math_id": 7, "text": "P = \\{p_1, p_2, \\ldots, p_m\\}" }, { "math_id": 8, "text": "N = \\{1, 2, \\ldots, n\\}" }, { "math_id": 9, "text": "p \\in P" }, { "math_id": 10, "text": "\\mathrm{cost}(p)" }, { "math_id": 11, "text": "b" }, { "math_id": 12, "text": "i \\in N" }, { "math_id": 13, "text": "u_i(p) " }, { "math_id": 14, "text": "i" }, { "math_id": 15, "text": "c" }, { "math_id": 16, "text": "p" }, { "math_id": 17, "text": "b_i = b/n" }, { "math_id": 18, "text": " u_i(p) = \\mathrm{cost}(p) " }, { "math_id": 19, "text": " p " }, { "math_id": 20, "text": " i " }, { "math_id": 21, "text": " u_i(p) = 0 " }, { "math_id": 22, "text": " u_i(p) = 1 " }, { "math_id": 23, "text": " u_i(p) " }, { "math_id": 24, "text": "u_i(p)/u_i(p') \\to \\infty " }, { "math_id": 25, "text": " p' " }, { "math_id": 26, "text": "\\succ_i " }, { "math_id": 27, "text": "Y \\succ_i X \\succ_i Z" }, { "math_id": 28, "text": "Y" }, { "math_id": 29, "text": "X" }, { "math_id": 30, "text": "Z" }, { "math_id": 31, "text": "\\mathrm{pos}_i(Y) = 1" }, { "math_id": 32, "text": "\\mathrm{pos}_i(X) = 2" }, { "math_id": 33, "text": "\\mathrm{pos}_i(Z) = 3" } ]
https://en.wikipedia.org/wiki?curid=69127618
69130803
Yoshiokaite
Yoshiokaite, a mineral formed as shocked crystal fragments in devitrified glass, was discovered in lunar regolith breccia collected from a trench by the Apollo 14 crew in 1971. Although there have been other minerals (armalcolite and tranquillityite) that were originally discovered on the Moon, yoshiokaite is the first new mineral with an origin related to the lunar highlands. Yoshiokaite is considered to be a member of the feldspathoid group. Yoshiokaite was named after the mineralogist Takashi Yoshioka, who synthesized a metastable solid-solution phase between <chem>CaAl2Si2O8 </chem> and <chem>CaAl2O4</chem> with a nepheline-like structure and formula. Yoshioka's research on the synthetic phase helped in understanding the probable origins of the mineral. Yoshiokaite was approved as a mineral in 1989 by the Commission on New Minerals and Mineral Names of the International Mineralogical Association. Occurrence. The regolith breccia, sample 14076, containing the small crystals of yoshiokaite was collected at the bottom of a 30-cm-deep trench about 224 m from the Apollo landing site, Fra Mauro Base. Sample 14076 was described as having two distinct parts: the regolith breccia that was typical of the local Apollo 14 regolith, and the unknown part (called exotic) that was very high in Al with a very small ratio of fine-grained iron metal to ferrous oxide, suggesting that it is from an older unknown regolith. The exotic portion of sample 14076 is described as having the composition of glass that has undergone devitrification, which is considered to be uncommon among glasses found elsewhere on the Moon. The devitrified glass may be caused by shock melting of anorthite. The anorthositic composition of yoshiokaite, along with remote-sensing evidence, adds to the suggestion that pure anorthositic crusts are common on the Moon. Unfortunately, the purest anorthositic crust is found on the far side of the Moon. Physical properties. Yoshiokaite is a colorless, transparent mineral with a vitreous luster and white streak. It is hexagonal, although most crystals found in the regolith breccia are distorted due to strain, likely from shock impact. Its crystals are characterized by having poor cleavage along {100}. Devitrified glasses with 30 wt% <chem>SiO2 </chem> or less have interlocking crystals that are angular and anhedral with no preferred orientation up to 235 formula_0 in size. Devitrified glasses with 34 wt% <chem>SiO2 </chem> are spherical with positive elongation fibers and plumose crystals with inclined extinction. Shock-induced lamellae form sets at {201} and {200}. Crystallography properties. Yoshiokaite has a trigonal crystal system with either a formula_1 or formula_2 space group. Crystal cell parameters are a = 9.939 Å and c = 8.245 Å (ratio a:c = 1:0.83). The unit cell volume is equal to 705.35 ų. It is uniaxial positive with refractive index values formula_3 = 1.560 - 1.580 and formula_4 = 1.620 - 1.640, with a maximum birefringence of formula_5 = 0.060 and a moderate surface relief. Powder X-ray diffraction data. Chemical properties. Crystals and devitrified glass that are high in silica (>34 wt% <chem>SiO2 </chem>) form plumose or spherical crystals that are enriched in Al and Si but are lower in content of Mg, Fe, and Ti. Lower silica crystals and glasses (30 wt% - 27 wt% <chem>SiO2 </chem>) are enriched in Ca but have lower contents of Mg, Al, and Si. 
Devitrified glass with less than 27 wt% <chem>SiO2 </chem> shows varied chemistry in intergranular regions, although these regions are generally very small (<0.5 formula_0). Yoshiokaite's chemical content can be represented on a <chem>CaO-Al2O3-SiO2 </chem> ternary diagram. References. <templatestyles src="Reflist/styles.css" />
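As a consistency check on the crystallographic data quoted above, the cell volume follows from the standard volume formula for a hexagonal (trigonal) cell, assuming the usual relation between the lattice parameters and the volume:

V = \frac{\sqrt{3}}{2} a^2 c = \frac{\sqrt{3}}{2} \, (9.939\ \text{Å})^2 \, (8.245\ \text{Å}) \approx 705.4\ \text{Å}^3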
[ { "math_id": 0, "text": "\\mu m" }, { "math_id": 1, "text": "P3" }, { "math_id": 2, "text": "P\\bar{3}" }, { "math_id": 3, "text": "n_\\omega" }, { "math_id": 4, "text": "n_\\varepsilon" }, { "math_id": 5, "text": "\\delta" } ]
https://en.wikipedia.org/wiki?curid=69130803
69133408
Microswimmer
Microscopic object able to traverse fluid A microswimmer is a microscopic object with the ability to move in a fluid environment. Natural microswimmers are found everywhere in the natural world as biological microorganisms, such as bacteria, archaea, protists, sperm and microanimals. Since the turn of the millennium there has been increasing interest in manufacturing synthetic and biohybrid microswimmers. Although only two decades have passed since their emergence, they have already shown promise for various biomedical and environmental applications. Given the recent nature of the field, there is yet no consensus in the literature for the nomenclature of the microscopic objects this article refers to as "microswimmers". Among the many alternative names such objects are given in the literature, microswimmers, micro/nanorobots and micro/nanomotors are likely the most frequently encountered. Other common terms may be more descriptive, including information about the object shape, e.g., microtube or microhelix, its components, e.g., biohybrid, spermbot, bacteriabot, or micro-bio-robot, or behavior, e.g., microrocket, microbullet, microtool or microroller. Researchers have also named their specific microswimmers e.g., medibots, hairbots, iMushbots, IRONSperm, teabots, biobots, T-budbots, or MOFBOTS. Background. In 1828, the British biologist Robert Brown discovered the incessant jiggling motion of pollen in water and described his finding in his article "A Brief Account of Microscopical Observations…", leading to extended scientific discussion about the origin of this motion. This enigma was resolved only in 1905, when Albert Einstein published his celebrated essay "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen". Einstein not only deduced the diffusion of suspended particles in quiescent liquids, but also suggested these findings could be used to determine particle size — in a sense, he was the world's first microrheologist. Ever since Newton established his equations of motion, the mystery of motion on the microscale has emerged frequently in scientific history, as famously demonstrated by a couple of articles which are discussed briefly below. First, an essential concept, popularized by Osborne Reynolds, is that the relative importance of inertia and viscosity for the motion of a fluid depends on certain details of the system under consideration. The Reynolds number "Re", named in his honor, quantifies this comparison as a dimensionless ratio of characteristic inertial and viscous forces: formula_0 Here, "ρ" represents the density of the fluid; "u" is a characteristic velocity of the system (for instance, the velocity of a swimming particle); "l" is a characteristic length scale (e.g., the swimmer size); and "μ" is the viscosity of the fluid. Taking the suspending fluid to be water, and using experimentally observed values for "u", one can determine that inertia is important for macroscopic swimmers like fish ("Re" = 100), while viscosity dominates the motion of microscale swimmers like bacteria ("Re" = 10⁻⁴). The overwhelming importance of viscosity for swimming at the micrometer scale has profound implications for swimming strategy. This has been discussed memorably by E. M. Purcell, who invited the reader into the world of microorganisms and theoretically studied the conditions of their motion. 
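The Reynolds-number comparison above is easy to reproduce. The short sketch below assumes water as the suspending fluid and uses rough, assumed order-of-magnitude values for the size and speed of a small fish and of a bacterium:

def reynolds(rho, u, l, mu):
    """Reynolds number Re = rho * u * l / mu (ratio of inertial to viscous forces)."""
    return rho * u * l / mu

rho, mu = 1000.0, 1e-3                         # water: density in kg/m^3, viscosity in Pa*s
print(reynolds(rho, u=1e-2, l=1e-2, mu=mu))    # a ~1 cm fish at ~1 cm/s: Re ~ 100
print(reynolds(rho, u=1e-4, l=1e-6, mu=mu))    # a ~1 um bacterium at ~100 um/s: Re ~ 1e-4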
In the first place, propulsion strategies of large scale swimmers often involve imparting momentum to the surrounding fluid in periodic events, such as vortex shedding, and coasting between these events through inertia. This cannot be effective for microscale swimmers like bacteria: due to the large viscous damping, the inertial coasting time of a micron-sized object is on the order of 1 μs. The coasting distance of a microorganism moving at a typical speed is about 0.1 angstroms (Å). Purcell concluded that only forces that are exerted in the present moment on a microscale body contribute to its propulsion, so a constant energy conversion method is essential. Microorganisms have optimized their metabolism for continuous energy production, while purely artificial microswimmers (microrobots) must obtain energy from the environment, since their on-board storage capacity is very limited. As a further consequence of the continuous dissipation of energy, biological and artificial microswimmers do not obey the laws of equilibrium statistical physics, and need to be described by non-equilibrium dynamics. Mathematically, Purcell explored the implications of low Reynolds number by taking the Navier-Stokes equation and eliminating the inertial terms: formula_1 where formula_2 is the velocity of the fluid and formula_3 is the gradient of the pressure. As Purcell noted, the resulting equation — the Stokes equation — contains no explicit time dependence. This has some important consequences for how a suspended body (e.g., a bacterium) can swim through periodic mechanical motions or deformations (e.g., of a flagellum). First, the rate of motion is practically irrelevant for the motion of the microswimmer and of the surrounding fluid: changing the rate of motion will change the scale of the velocities of the fluid and of the microswimmer, but it will not change the pattern of fluid flow. Secondly, reversing the direction of mechanical motion will simply reverse all velocities in the system. These properties of the Stokes equation severely restrict the range of feasible swimming strategies. As a concrete illustration, consider a mathematical scallop that consists of two rigid pieces connected by a hinge. Can the "scallop" swim by periodically opening and closing the hinge? No: regardless of how the cycle of opening and closing depends on time, the scallop will always return to its starting point at the end of the cycle. Here originated the striking quote: "Fast or slow, it exactly retraces its trajectory and it's back where it started". In light of this scallop theorem, Purcell developed approaches concerning how artificial motion at the micro scale can be generated. This paper continues to inspire ongoing scientific discussion; for example, recent work by the Fischer group from the Max Planck Institute for Intelligent Systems experimentally confirmed that the scallop principle is only valid for Newtonian fluids. Physics of Microscale Swimmers. Derivations for the parallel and normal components of drag on simple geometries in creeping flow can be found in the literature and recorded media, notably for spheres: formula_4 and for spheroids with major and minor axes "a, b": formula_5 formula_6 Due to the linear nature of the governing fluid equations, the superposition principle may be used to model more complex geometries, such as corkscrews, following the analysis of Purcell and others. Here, we present the drag and torque on the helical coil. formula_7 formula_8 formula_9 formula_10 formula_11 where formula_12. 
It is important to note that while the Scallop theorem requires more than one degree of freedom, external forcing (e.g., magnetic) allows for the motion of a simple corkscrew. Types. Different types of microswimmers are powered and actuated in different ways. Swimming strategies for individual microswimmers  as well as swarms of microswimmers  have been examined down through the years. Typically, microswimmers rely either on external power sources, as it is the case for magnetic, optic, or acoustic control, or employ the fuel available in their surroundings, as is the case with biohybrid or catalytic microswimmers. Magnetic and acoustic actuation are typically compatible with "in vivo" microswimmer manipulation and catalytic microswimmers can be specifically engineered to employ "in vivo" fuels. The use of optical forces in biological fluids or "in vivo" is more challenging, but interesting examples have nevertheless been demonstrated. Often, researchers choose to take inspiration from nature, either for the entire microswimmer design, or for achieving a desired propulsion type. For example, one of the first bioinspired microswimmers consisted of human red blood cells modified with a flagellum-like artificial component made of filaments of magnetic particles bonded via biotin–streptavidin interactions. More recently, biomimetic swimming inspired by worm-like travelling wave features, shrimp locomotion, and bacterial run-and-tumble motion, was demonstrated by using shaped light. A different nature-inspired approach is the use of biohybrid microswimmers. These comprise a living component and a synthetic one. Biohybrids most often take advantage of the microscale motion of various biological systems and can also make use of other behaviours characterising the living component. For magnetic bioinspired and biohybrid microswimmers, typical model organisms are bacteria, sperm cells and magnetotactic cells. In addition to the use of magnetic forces, actuation of bioinspired microswimmers was also demonstrated using e.g., acoustic excitation  or optical forces. Another nature-inspired behavior related to optical forces is that of phototaxis, which can be exploited by e.g., cargo-carrying microorganisms, synthetic microswimmers  or biohybrid microswimmers. A number of recent review papers are focused on explaining or comparing existing propulsion and control strategies used in microswimmer actuation. Magnetic actuation is most often included for controlled "in vivo" guiding, even for microswimmers which rely on a different type of propulsion. In 2020, Koleoso et al. reviewed the use of magnetic small scale robots for biomedical applications and provide details about the various magnetic fields and actuation systems developed for such purposes. Strategies for the fabrication of microswimmers include two-photon polymerisation 3D printing, photolithography, template-assisted electrodeposition, or bonding of a living component to an inanimate one by exploiting different strategies. More recent approaches exploit 4D printing, which is the 3D printing of stimuli-responsive materials. Further functionalization is often required, either to enable a certain type of actuation, e.g., metal coating for magnetic control or thermoplasmonic responses, or as part of the application, if certain characteristics are required for e.g., sensing, cargo transport, controlled interactions with the environment, or biodegradation. 
Microswimmers can also be categorized by their propulsion methods, and two primary methods are used: self-propulsion and external-field propulsion. In self-propulsion, a chemical fuel coated on the robot reacts with the liquid environment to create bubbles that propel the robot. External-field propulsion offers more variety, using optical, magnetic, acoustic, or electric fields. External-field propulsion is better suited for biological applications because it does not need chemical fuels, which produce pollutants that may be harmful to the host the microswimmers are servicing unless the fuels, films, and chemicals involved are biocompatible. This propulsive method also provides higher spatial resolution and more controllability, with recent advancements enabling three-dimensional movement, enhancing the flexibility and functionality of microswimmers. Natural microswimmers. Motile systems have developed in the natural world over time and length scales spanning several orders of magnitude, and have evolved anatomically and physiologically to attain optimal strategies for self-propulsion and overcome the implications of high viscosity forces and Brownian motion, as shown in the diagram on the right. Some of the smallest known natural motile systems are motor proteins, i.e., proteins and protein complexes present in cells that carry out a variety of physiological functions by transducing chemical energy into mechanical energy. These motor proteins are classified as myosins, kinesins, or dyneins. Myosin motors are responsible for muscle contractions and the transport of cargo using actin filaments as tracks. Dynein motors and kinesin motors, on the other hand, use microtubules to transport vesicles across the cell. The mechanism these protein motors use to convert chemical energy into movement depends on ATP hydrolysis, which leads to a conformational change in the globular motor domain, leading to directed motion. Apart from motor proteins, enzymes, traditionally recognized for their catalytic functions in biochemical processes, can function as nanoscale machines that convert chemical energy into mechanical action at the molecular scale. Diffusion of various enzymes (e.g. urease and catalase), measured by fluorescence correlation spectroscopy (FCS), increases in a substrate-dependent manner. Moreover, when enzymes are membrane-bound, their catalytic actions can drive lipid vesicle movement. For instance, lipid vesicles integrated with enzymes such as transmembrane adenosine 5’-triphosphatase, membrane-bound acid phosphatase, or urease exhibit enhanced mobility correlating with the enzymatic turnover rate. Bacteria can be roughly divided into two fundamentally different groups, gram-positive and gram-negative bacteria, distinguished by the architecture of their cell envelope. In each case the cell envelope is a complex multi-layered structure that protects the cell from its environment. In gram-positive bacteria, the cytoplasmic membrane is only surrounded by a thick cell wall of peptidoglycan. By contrast, the envelope of gram-negative bacteria is more complex and consists (from inside to outside) of the cytoplasmic membrane, a thin layer of peptidoglycan, and an additional outer membrane, also called the lipopolysaccharide layer. Other bacterial cell surface structures range from disorganised slime layers to highly structured capsules. These are made from secreted slimy or sticky polysaccharides or proteins that provide protection for the cells and are in direct contact with the environment. 
They have other functions, including attachment to solid surfaces. Additionally, protein appendages can be present on the surface: fimbriae and pili can have different lengths and diameters and their functions include adhesion and twitching motility. Specifically, for microorganisms that live in aqueous environments, locomotion refers to swimming, and hence the world is full of different classes of swimming microorganisms, such as bacteria, spermatozoa, protozoa, and algae. Bacteria move due to rotation of hair-like filaments called flagella, which are anchored to a protein motor complex on the bacteria cell wall. The following table, based on Schwarz "et al.", 2017, lists some examples of natural or biological microswimmers. Synthetic microswimmers. <templatestyles src="Template:Quote_box/styles.css" /> "An artificial microswimmer is a cutting-edge technology with engineering and medical applications. A natural microswimmer, such as bacteria and sperm cells, also play important roles in wide varieties of engineering, medical and biological phenomena. Due to the small size of the microswimmer, the inertial effect of the surrounding flow field may be negligible. In such a case, reciprocal body deformation cannot induce migration of a swimmer, which is known as the scallop theorem. To overcome the implications of the scallop theorem, the microswimmer needs to undergo a nonreciprocal body deformation to achieve migration. The swimming strategy is thus completely different from macro-scale swimmers...". One of the current engineering challenges is to create miniaturized functional vehicles that can carry out complex tasks at a small scale that would be otherwise impractical, inefficient, or outright impossible by conventional means. These vehicles are termed nano/micromotors or nano/microrobots, and should be distinguished from even smaller molecular machines for energy, computing, or other applications on the one side and static microelectromechanical systems (MEMS) on the other side of this size scale. Rather than being electronic devices on a chip, micromotors are able to move freely through a liquid medium while being steered or directed externally or by intrinsic design, which can be achieved by various mechanisms, most importantly catalytic reactions, magnetic fields, or ultrasonic waves. There are a variety of sensing, actuating, or pickup-and-delivery applications that scientists are currently aiming for, with local drug targeting for cancer treatment being one of the more prominent examples. For applications like this, a micromotor needs to be able to move, i.e., to swim, freely in three dimensions efficiently controlled and directed with a reliable mechanism. It is a direct consequence of the small size scale of microswimmers that they have a low Reynolds number. This means the physics of how microswimmers swim is dominated by viscous drag forces, a problem which has been discussed extensively by physicists in the field. This kind of swimming has challenged engineers as it is not commonly experienced in everyday life, but can nonetheless be observed in nature for motile microorganisms like sperm or certain bacteria. Naturally, these microorganisms served as inspiration from the very beginning to create artificial micromotors, as they were able to tackle the challenges that an active, self-sufficient microswimmer vehicle has to face. 
With biomimetic approaches, researchers were able to imitate the flagella-based motion strategy of sperm and "Escherichia coli" bacteria by reproducing their respective flagellum shape and actuating it with magnetic fields. Synthetic microswimmers are designed in a wide variety of shapes depending on the applications that they are used for. Similar to natural microswimmers, there is an energy cost associated with the movement and control of a microswimmer. In nature, it is observed that micron-sized bacteria expend very little energy while larger microorganisms expend more energy. This principle can be translated to synthetic microswimmers, where the connection between the size and shape of a microswimmer and the energy spent has been studied by researchers. Piro et al. argue that needle-like microswimmers are more energy efficient than other shapes, while disk-like microswimmers are carried by flow gradients in the liquid and will be naturally predisposed to follow a time-optimal trajectory. Helical microswimmers have also gained interest as a geometry for microswimmers, being bioinspired by microstructures within many different types of plants that serve as water vessels. At the microscale, surface wettability is an important consideration in material selection because it affects the locomotion of the microswimmer. Hydrophobic surfaces produce a large contact angle with the liquid, which has the effect of exerting less frictional drag torque on the microswimmer body, resulting in a lower step-out frequency required for movement. Microorganisms have adapted their locomotion to the harsh environment of the low Reynolds number regime by invoking different swimming strategies. For example, "E. coli" moves by rotating its helical flagellum, while Chlamydomonas flagella have a breaststroke kind of motion. The African trypanosome has a helical flagellum attached to the cell body with a planar wave passing through it. Swimming of these kinds of natural swimmers has been investigated for the last half-century. As a result of these studies, artificial swimmers have also been proposed, like the Taylor sheet, Purcell's two-hinge swimmer, the three-linked-spheres swimmer, the elastic two-sphere swimmer, and the three-sphere swimmer with a passive elastic arm, which have further enhanced understanding of low Reynolds number swimmers. One of the challenges in proposing an artificial swimmer lies in the fact that the proposed movement stroke should not be reciprocal, otherwise it cannot propel itself, due to the Scallop theorem. In the Scallop theorem, Purcell argued that a swimmer with one hinge or one degree of freedom is bound to perform reciprocal motion and thus will not be able to swim in the Stokes regime. Purcell proposed two possible ways to evade the Scallop theorem: one is 'corkscrew' motion and the other is 'flexible oar' motion. Using the concept of the flexible oar, Dreyfus et al. reported a microswimmer that exploits the elastic properties of a slender filament made up of paramagnetic beads. To break the time inversion symmetry, a passive head was attached to the flexible arm. The passive head reduces the velocity of the flexible swimmer: the bigger the head, the higher the drag force experienced by the swimmer. The head is nevertheless essential for swimming because without it the tail performs a reciprocal motion and the velocity of the swimmer reduces to zero. In a study by Huang et al., 
microswimmers were placed in a sucrose solution to represent a viscosity that is similar to blood and tested different microswimmers and their ability to propel within the fluid using variants of the corkscrew and flexible oar techniques at different angles of alignment with an external magnetic field.  Due to the misalignment a helical motion was produced for the flexible oar and corkscrew case.  Under this test, the propulsion method that performed the fastest was the microswimmer with the tubular body and flexible planar tail due to taking advantage of the helical and corkscrew motion generated at an angle of 30-degree misalignment from the external magnetic field.  Microswimmers relying on the corkscrew motion had a reduced speed due to an increase in the drag experienced by the microswimmer due to the wobbling motion on the body.  However, when the microswimmer’s body is perpendicular to the external magnetic field the mobility of the flexible oar microswimmer was reduced due to a lack of the body’s helical motion while the wobbling effect on the corkscrew microswimmer was reduced so it was able to achieve better motion. The corkscrew and flexible oar motion of a synthetic microswimmer can be largely affected by the viscosity of the fluid.  An increase in viscosity decreases the motion of the microswimmer using either method, however, the decrease in movement at higher viscosity is larger for microswimmers using the flexible oar propulsion system.  This is due to the reduction in helical motion of the body of the microswimmer causing an increase in the drag experienced by the body.  Another effect experienced is a reduction in the bending of the tail which reduces the microswimmer’s ability to overcome the time-reversal symmetry. Another way microswimmers can propel is through catalytic reactions. Taking inspiration from Whitesides, who used the decomposition of hydrogen peroxide (H2O2) to propel cm/mm-scale objects on a water surface, Sen et al. (2004) fabricated catalytic motors in the micrometer range. These microswimmers were rod-shaped particles 370 nm in diameter and consisted of 1 μm long Pt and Au segments. They propelled via the decomposition of hydrogen peroxide in solution which would be catalyzed into water and oxygen. The Pt/Au rods were able to consistently reach speeds of up to 8 μm/s in a solution of 3.3% hydrogen peroxide. The decomposition of hydrogen peroxide in the Pt side produces oxygen, two protons and two electrons. The two protons and electrons will travel towards the Au, where they will be used to react with another hydrogen peroxide molecule, to produce two water molecules. The movements of the two protons and the two electrons through the rod drag the fluid towards the Au side, thus this fluid flow will propel the rod in the opposite direction. This self-electrophoresis mechanism is what powers the motion of these rods. Further analysis of the Pt/Au rods showed that they were capable of performing chemotaxis towards higher hydrogen peroxide concentrations, transport cargo, and exhibited steerable motion in an external magnetic field when inner Ni segments were added. Interest has been shown in using high-frequency sound waves for microswimmer navigation due to being cleared as safe for clinical studies by the U.S. Food and Drug Administration which would allow them to be used in biomedical applications.  
The microswimmer is designed to have a hydrophobic surface, resulting from being manufactured with a resin, and small cavities which produce an air bubble when immersed in a liquid. When high-frequency sound waves are applied to the microswimmer, the bubble oscillates and generates enough movement to propel the microswimmer in a controlled direction. An additional way that microswimmers can travel is through a response to a difference in temperature. Huang et al. designed a microswimmer to study the control of the shape of a 3D microswimmer. The microswimmer contains a rigid, non-expandable poly(ethylene glycol) diacrylate (PEGDA) combined with an N-isopropylacrylamide (NIPAAm) hydrogel layer that is thermally responsive. Within the hydrogel layer, there are magnetic nanoparticles that can control the folding axes. Through the alignment of the magnetic particles along the folding axes, a change in temperature can actively control the microswimmer shape to propel it in a fluid. Another example of thermal-based control is the use of a pNIPAM-AAc hydrogel embedded with iron oxide, which can be controlled through a magnetic field. Through the combination of magnetic fields and temperature-responsive materials, dynamic control can be achieved. Movement without external forces has been demonstrated for microswimmers. Bioinspired by microvelia beetles, which are capable of gliding on the water at high speeds, microswimmers are proposed to take advantage of the Marangoni effect, which is the mass transfer across a gradient of the surface tension of a fluid. Choi et al. demonstrated that photopatterned microswimmers without any mechanical actuation system or external force are capable of traversing a fluid through a polyvinyl alcohol (PVA) fuel source, which changes the surface tension of the water once it is dissolved. Responding to stimuli. Reconfigurable synthetic or artificial microswimmers need internal feedback. Self-propelling microparticles are often proposed as synthetic models for biological microswimmers, yet they lack the internally regulated adaptation of their biological counterparts. Conversely, adaptation can be encoded in larger-scale soft-robotic devices but remains elusive to transfer to the colloidal scale. The ubiquity and success of motile bacteria are strongly coupled to their ability to autonomously adapt to different environments as they can reconfigure their shape, metabolism, and motility via internal feedback mechanisms. Realizing artificial microswimmers with similar adaptation capabilities and autonomous behavior might substantially impact technologies ranging from optimal transport to sensing and microrobotics. Focusing on adaptation, existing approaches at the colloidal scale mostly rely on external feedback, either to regulate motility via the spatiotemporal modulation of the propulsion velocity and direction or to induce shape changes via the same magnetic or electric fields which are also driving the particles. On the contrary, endowing artificial microswimmers with an internal feedback mechanism, which regulates motility in response to stimuli that are decoupled from the source of propulsion, remains an elusive task. A promising route to achieve this goal is to exploit the coupling between particle shape and motility. 
Efficient switching between different propulsion states can, for instance, be reached by the spontaneous aggregation of symmetry-breaking active clusters of varying geometry, albeit this process does not have the desired deterministic control. Conversely, designing colloidal clusters with fixed shapes and compositions offers fine control on motility  but lacks adaptation. Although progress on reconfigurable robots at the sub-millimeter scale has been made, downscaling these concepts to the colloidal level demands alternative fabrication and design. Shape-shifting colloidal clusters reconfiguring along a predefined pathway in response to local stimuli  would combine both characteristics, with high potential toward the vision of realising adaptive artificial microswimmers. Biohybrid microswimmers. The so-called biohybrid microswimmer can be defined as a microswimmer that consist of both biological and artificial parts, for instance, one or several living microorganisms attached to one or various synthetic parts. The biohybrid approach directly employs living microorganisms to be a main component or modified base of a functional microswimmer. Initially microorganisms were used as the motor units for artificial devices, but in recent years this role has been extended and modified toward other functionalities that take advantage of the biological capabilities of these organisms considering their means of interacting with other cells and living matter, specifically for applications inside the human body like drug delivery or fertilisation. A distinct advantage of microorganisms is that they naturally integrate motility and various biological functions in a conveniently miniaturised package, coupled with autonomous sensing and decision-making capabilities. They are able to adapt and thrive in complex "in vivo" environments and are capable of self-repair and self-assembly upon interaction with their surroundings. In that sense, self-sufficient microorganisms naturally function very similar to what we envision for artificially created microrobots: They harvest chemical energy from their surroundings to power molecular motor proteins that serve as actuators, they employ ion channels and microtubular networks to act as intracellular wiring, they rely on RNA or DNA as memory for control algorithms, and they feature an array of various membrane proteins to sense and evaluate their surroundings. All these abilities act together to allow microbes to thrive and pursue their goal and function. In principle, these abilities also qualify them as biological microrobots for novel operations like theranostics, the combination of diagnosis and therapy, if we are able to impose such functions artificially, for example, by functionalisation with therapeutics. Further, artificial extensions may be used as handles for external control and supervision mechanisms or to enhance the microbe's performance to guide and tailor its functions for specific applications. In fact, the biohybrid approach can be conceived in a dualistic way, with respect to the three basic ingredients of an in vivo microrobot, which are motility, control, and functionality. Figure 1 illustrates how these three ingredients can be either realized biologically, i.e., by the microorganism, or artificially, i.e., by the synthetic component. For example, a hybrid biomicromotor based on a sperm cell can be driven by the flagellum of the sperm or by an attached artificial helical flagellum. 
It can orient itself autonomously via biological interactions with its surroundings and other cells, or be controlled and supervised externally via artificial sensors and actuators. Finally, it can carry out a biological function, like its inherent ability to fertilize an egg cell, or an artificially imposed function, like the delivery of synthetic drugs or DNA vectors. A biohybrid device may deploy any feasible combination of such biological and artificial components in order to carry out a specific application. Another example of the biohybrid approach is a microswimmer designed for studying skeletal muscle stimulation. The microswimmer is fabricated from iron oxide nanoparticles dip-coated with chlorella microalgae, allowing the microrobot to be used in different biological environments while still having a high degree of control due to the superparamagnetic nanoparticles. Guided by an external magnetic field, the microswimmer is capable of reaching its target. The microswimmer can be safely irradiated with a near-infrared (NIR) laser, causing the nanoparticles to be heated through a photothermal effect and to trigger a targeted contraction in the skeletal muscle. This technique demonstrates a safe and controllable method for movement within a biological environment. Navigation. Hydrodynamics can determine the optimal route for microswimmer navigation. Compared to the well-explored problem of how to steer a macroscopic agent, like an airplane or a moon lander, to optimally reach a target, optimal navigation strategies for microswimmers experiencing hydrodynamic interactions with walls and obstacles are far less understood. Furthermore, hydrodynamic interactions in suspensions of microswimmers produce complex behavior. The question of how to navigate or steer to optimally reach a target is important, e.g., for airplanes to save fuel while facing complex wind patterns on their way to a remote destination, or for the coordination of the motion of the parts of a space agent to safely land on the moon. These classical problems are well explored and are usually solved using optimal control theory. Likewise, navigation and search strategies are frequently encountered in a plethora of biological systems, including the foraging of animals for food, or of T cells searching for targets to mount an immune response. There is growing interest in optimal navigation problems and search strategies of microswimmers and "dry" active Brownian particles. The general problem regarding the optimal trajectory of a microswimmer which can freely steer but cannot control its speed toward a predefined target (point-to-point navigation) can be referred to as "the optimal microswimmer navigation problem". The characteristic differences between the optimal microswimmer navigation problem and conventional optimal control problems for macroagents like airplanes, cruise ships, or moon landers are rooted in the presence of a low-Reynolds-number solvent in the former problem only. They comprise (i) overdamped dynamics, (ii) thermal fluctuations, and (iii) long-ranged fluid-mediated hydrodynamic interactions with interfaces, walls, and obstacles, all of which are characteristic of microswimmers. In particular, the non-conservative hydrodynamic forces which microswimmers experience call for navigation strategies distinct from those suited to the conservative gravitational forces acting, e.g., on space vehicles. 
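As a toy illustration of ingredients (i) and (ii), the sketch below simulates point-to-point navigation of an overdamped active particle that can freely steer at fixed speed while being buffeted by rotational noise. Hydrodynamic interactions with walls and obstacles, ingredient (iii), are deliberately left out, the steering rule is a naive "always aim at the target" policy rather than an optimal one, and all parameter values are arbitrary.

```python
import numpy as np

# Overdamped active particle steering toward a target under rotational noise.
# Minimal, hypothetical sketch -- not the optimal policies studied in the
# literature cited here.

rng = np.random.default_rng(0)
v, dt, D_rot = 1.0, 1e-2, 0.5                  # swim speed, time step, rotational diffusion
pos    = np.array([0.0, 0.0])
target = np.array([5.0, 3.0])

for step in range(20000):
    to_target = target - pos
    if np.linalg.norm(to_target) < 0.1:
        print("reached target after", step, "steps")
        break
    phi = np.arctan2(to_target[1], to_target[0])             # freely chosen heading
    phi += np.sqrt(2 * D_rot * dt) * rng.standard_normal()   # thermal kick to the orientation
    pos = pos + v * dt * np.array([np.cos(phi), np.sin(phi)])  # overdamped: no inertia, no coasting
```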
Recent work has explored optimal navigation problems of dry active particles (and particles in external flow fields) accounting for (i) and partly also for (ii). Specifically, recent research has pioneered the use of reinforcement learning, such as determining optimal steering strategies of active particles to optimally navigate toward a target position or to exploit external flow fields to avoid getting trapped in certain flow structures by learning smart gravitaxis. Deep reinforcement learning has been used to explore microswimmer navigation problems in mazes and obstacle arrays, assuming global or only local knowledge of the environment. Analytical approaches to optimal active particle navigation complement these works and allow testing machine-learned results. An example of machine-learned locomotion used in navigation comes from Zou et al., who were inspired by the ability of microorganisms to naturally switch between locomotory gaits, such as a run-and-tumble or a roll-and-flick motion, depending on the need to navigate the environment. The artificial intelligence system allowed for the development of distinct gaits for steering, transition, and translational movement. Applications. As is the case for microtechnology and nanotechnology in general, the history of microswimmer applications arguably starts with Richard Feynman’s famous lecture "There's Plenty of Room at the Bottom". In the visionary speech, among other topics, Feynman addressed the idea of microscopic surgeons, saying: "...it would be interesting in surgery if you could swallow the surgeon. You put the mechanical surgeon inside the blood vessel and it goes into the heart and «looks» around (of course the information has to be fed out). It finds out which valve is the faulty one and takes a little knife and slices it out. Other small machines might be permanently incorporated in the body to assist some inadequately-functioning organ." The concept of the surgeon one could swallow was soon after presented in the science-fiction movie "Fantastic Voyage" and in Isaac Asimov’s writings. Only a few decades later, microswimmers aiming to become true microscale surgeons evolved from an intriguing science-fiction concept to a reality explored in many research laboratories around the world, as already highlighted by Metin Sitti in 2009. These active agents that can self-propel in a low Reynolds number environment might play a key role in the future of nanomedicine, as popularised in 2016 by Yuval Noah Harari in "Homo Deus". In particular, they might become useful for the targeted delivery of genes or drugs and other cargo to a certain target (e.g. a cancer cell) through our blood vessels, requiring them to find a good, or ideally optimal, path toward the target avoiding, e.g., obstacles and unfortunate flow field regions. Already in 2010, Nelson et al. reviewed the existing and envisioned applications of microrobots in minimally invasive medicine. Since then, the field has grown, and it has become clear that microswimmers have much potential for biomedical applications. Already, many interesting tasks can be performed "in vitro" using tailored microswimmers. Still, as of 2020, a number of challenges regarding "in vivo" control, biocompatibility and long-term biosafety need to be overcome before microswimmers can become a viable option for many clinical applications. A schematic representation of the classification of biomedical applications is shown in the diagram on the left below. 
This includes the use of microswimmers for cargo transport in drug delivery and other biomedical applications, as well as assisted fertilisation, sensing, micromanipulation and imaging. Some of the more complex microswimmers fit into multiple categories, as they are applied simultaneously for e.g., sensing and drug delivery. The design of an untethered microscopic mobile machine or microrobot to function "in vivo" with medical interventional capabilities should assume an integrated approach where the 3D body shape, material composition, manufacturing technique, deployment strategy, actuation and control methods, imaging modality, permeation of biological barriers, and the execution of the prescribed medical tasks need to be considered altogether, as illustrated in the diagram on the right above. Each of these essential aspects contains a special design consideration, which must be reflected in the physical design of the microrobot. Delivering therapeutic agents to precise locations in deep tissue remains a significant challenge since magnetic actuation becomes less effective as the magnetic flux density weakens farther away from the electromagnetic control platform. Biohybrid microswimmers have demonstrated promise in drug delivery, capable of precise drug delivery to deep tissue cancer tumors, and one such example is magnetotactic bacteria. Magnetotactic bacteria (MTB) were discovered in the 1970s, and since then, their mechanism and dynamics have been heavily studied. The microaerophilic alphaproteobacterium Magnetospirillum gryphiswaldense is one of the most well-characterized magnetotactic bacteria, and it contains a prominent amount of magnetite Fe3O4, which acts as an internal compass that guides the bacteria along the surrounding magnetic field. A recent study by Mirkhani et al. demonstrated drug delivery using rotating magnetic field (RMF)-controlled magnetotactic bacteria (MTB) on a mouse tumor model. RMF concentrates magnetic torque density within specific regions by inducing a magnetostatic selection field with a field-free point or field-free line. Experimental validation using the mouse tumor model has confirmed the efficacy of the RMF control in enhancing the translational velocity and the penetration of MTB into deep tissues. This strategy holds the potential for systemic drug delivery with heightened spatial selectivity. Another proposed application is for microswimmers to help improve the protection of the environment by reducing the amount of waste and pollutants in different parts of the environment. Examples of pollutants that can be reduced or removed include microplastics, oil-based chemicals, and other waste. A wide variety of microswimmers are being developed in research labs for this purpose, using actuation systems based on different physical effects, including light, magnetic fields, and chemical gradients. One example is the removal of Bisphenol A (BPA), which is a common waste product from factories producing plastics-based products. Dekanovsky et al. developed an MXene-based microswimmer for this purpose which is controlled by light. The propulsion system has two components: an MXene component grafted with nanoparticles and an iron oxide layer. The components within the propulsion system react chemically with BPA, and the resulting oxygen bubbles propel the microswimmer forward. 
Research is being conducted on designing synthetic microswimmers that react with other waste chemicals to reduce pollution in the environment. Many of the current synthetic microswimmers are being designed with multiphysics propulsion systems that combine a magnetic force with a chemical or optical-based system. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathrm{Re} = \\frac{\\rho u l}{\\mu}" }, { "math_id": 1, "text": " \\begin{align} \\mu \\nabla^2 \\mathbf{u} -\\boldsymbol{\\nabla}p &= \\boldsymbol{0} \\\\ \\end{align}" }, { "math_id": 2, "text": "\\mathbf{u}" }, { "math_id": 3, "text": "\\boldsymbol{\\nabla} p" }, { "math_id": 4, "text": "F_{sphere}= 6\\pi\\mu u r" }, { "math_id": 5, "text": "F_{parallel}= 6 \\pi \\left ( \\frac{4 + a/b}{5} \\right )" }, { "math_id": 6, "text": "F_{perpendicular}= 6 \\pi \\left ( \\frac{3 + 2 a/b}{5} \\right )" }, { "math_id": 7, "text": "\\begin{bmatrix} F \\\\ T \\end{bmatrix}=\\begin{bmatrix} a & b \\\\ b & c \\end{bmatrix} \\begin{bmatrix} u \\\\ \\omega \\end{bmatrix}" }, { "math_id": 8, "text": "a = 2 \\pi n \\sigma\\left ( \\frac{\\xi_{\\parallel}\\cos^2(\\theta)+\\xi_{\\perp}\\sin^2(\\theta)}{\\sin(\\theta)} \\right )" }, { "math_id": 9, "text": "b=2 \\pi n \\sigma^2 (\\xi_{\\parallel}-\\xi_{\\perp})\\cos(\\alpha)" }, { "math_id": 10, "text": "c = 2 \\pi n \\sigma^3 \\left ( \\frac{\\xi_{\\parallel}\\sin^2(\\theta)+\\xi_{\\perp}\\cos^2(\\theta)}{\\sin(\\theta)} \\right )" }, { "math_id": 11, "text": "\\xi_{\\parallel}=\\frac{2\\pi\\mu}{\\ln\\left ( \\frac{0.36\\pi\\sigma}{r\\sin(\\theta)} \\right )} ,\\xi_{\\perp}=\\frac{4\\pi\\mu}{\\ln\\left ( \\frac{0.36\\pi\\sigma}{r\\sin(\\theta)+0.5} \\right )}" }, { "math_id": 12, "text": "\\theta= \\arctan(\\frac{2\\pi\\sigma}{\\lambda})" } ]
https://en.wikipedia.org/wiki?curid=69133408
69138931
Longtermism
Philosophical view which prioritises the long-term future Longtermism is the ethical view that positively influencing the long-term future is a key moral priority of our time. It is an important concept in effective altruism and a primary motivation for efforts that aim to reduce existential risks to humanity. The key argument for longtermism has been summarized as follows: "future people matter morally just as much as people alive today;... there may well be more people alive in the future than there are in the present or have been in the past; and... we can positively affect future peoples' lives." These three ideas taken together suggest, to those advocating longtermism, that it is the responsibility of those living now to ensure that future generations get to survive and flourish. Definition. Philosopher William MacAskill defines "longtermism" as "the view that positively influencing the longterm future is a key moral priority of our time".4 He distinguishes it from "strong longtermism", "the view that positively influencing the longterm future is "the" key moral priority of our time". In his book "The Precipice", philosopher Toby Ord describes longtermism as follows: "longtermism... is especially concerned with the impacts of our actions upon the longterm future. It takes seriously the fact that our own generation is but one page in a much longer story, and that our most important role may be how we shape—or fail to shape—that story. Working to safeguard humanity's potential is one avenue for such a lasting impact and there may be others too." In addition, Ord notes that "longtermism is animated by a moral re-orientation toward the vast future that existential risks threaten to foreclose." Because it is generally infeasible to use traditional research techniques such as randomized controlled trials to analyze existential risks, researchers such as Nick Bostrom have used methods such as expert opinion elicitation to estimate their importance. Ord offered probability estimates for a number of existential risks in "The Precipice".167 History. The term "longtermism" was coined around 2017 by Oxford philosophers William MacAskill and Toby Ord. The view draws inspiration from the work of Nick Bostrom, Nick Beckstead, and others. While its coinage is relatively new, some aspects of longtermism have been thought about for centuries. The oral constitution of the Iroquois Confederacy, the Gayanashagowa, encourages all decision-making to “have always in view not only the present but also the coming generations”. This has been interpreted to mean that decisions should be made so as to be of benefit to the seventh generation in the future. These ideas have re-emerged in contemporary thought with thinkers such as Derek Parfit in his 1984 book "Reasons and Persons", and Jonathan Schell in his 1982 book "The Fate of the Earth". Community. Longtermist ideas have given rise to a community of individuals and organizations working to protect the interests of future generations. Organizations working on longtermist topics include Cambridge University's Centre for the Study of Existential Risk, the Future of Life Institute, the Global Priorities Institute, the Stanford Existential Risks Initiative, 80,000 Hours, Open Philanthropy, The Forethought Foundation, and Longview Philanthropy. Implications for action. 
Researchers studying longtermism believe that we can improve the long-term future in two ways: "by averting permanent catastrophes, thereby ensuring civilisation’s survival; or by changing civilisation’s trajectory to make it better while it lasts.Broadly, ensuring survival increases the quantity of future life; trajectory changes increase its quality". Existential risks. An existential risk is "a risk that threatens the destruction of humanity’s longterm potential",59 including risks which cause human extinction or permanent societal collapse. Examples of these risks include nuclear war, natural and engineered pandemics, climate change and civilizational collapse, stable global totalitarianism, and emerging technologies like artificial intelligence and nanotechnology. Reducing any of these risks may significantly improve the future over long timescales by increasing the number and quality of future lives. Consequently, advocates of longtermism argue that humanity is at a crucial moment in its history where the choices made this century may shape its entire future. Proponents of longtermism have pointed out that humanity spends less than 0.001% of the gross world product annually on longtermist causes (i.e., activities explicitly meant to positively influence the long-term future of humanity). This is less than 5% of the amount that is spent annually on ice cream in the U.S., leading Toby Ord to argue that humanity “start by spending more on protecting our future than we do on ice cream, and decide where to go from there”.58, 63 Trajectory changes. Existential risks are extreme examples of what researchers call a "trajectory change". However, there might be other ways to positively influence how the future will unfold. Economist Tyler Cowen argues that increasing the rate of economic growth is a top moral priority because it will make future generations wealthier. Other researchers think that improving institutions like national governments and international governance bodies could bring about positive trajectory changes. Another way to achieve a trajectory change is by changing societal values. William MacAskill argues that humanity should not expect positive value changes to happen by default. He uses the abolition of slavery as an example, which historians like Christopher Leslie Brown consider to be a historical contingency rather than an inevitable event. Brown has argued that a moral revolution made slavery unacceptable at a time when it was still hugely profitable. MacAskill suggests that abolition may be a turning point in the entirety of human history, with the practice unlikely to return. For this reason, bringing about positive value changes in society may be one way in which the present generation can positively influence the long-run future. Living at a pivotal time. Longtermists argue that we live at a pivotal moment in human history. Derek Parfit wrote that we "live during the hinge of history" and William MacAskill states that "the world’s long-run fate depends in part on the choices we make in our lifetimes"6 since "society has not yet settled down into a stable state, and we are able to influence which stable state we end up in".28 According to Fin Moorhouse, for most of human history, it was not clear how to positively influence the very long-run future. However, two relatively recent developments may have changed this. 
Developments in technology, such as nuclear weapons, have, for the first time, given humanity the power to annihilate itself, which would impact the long-term future by preventing the existence and flourishing of future generations. At the same time, progress made in the physical and social sciences has given humanity the ability to more accurately predict (at least some) of the long-term effects of the actions taken in the present. MacAskill also notes that our present time is highly unusual in that "we live in an era that involves an extraordinary amount of change"26—both relative to the past (where rates of economic and technological progress were very slow) and to the future (since current growth rates cannot continue for long before hitting physical limits). Theoretical considerations. Moral theory. Longtermism has been defended by appealing to various moral theories. Utilitarianism may motivate longtermism given the importance it places on pursuing the greatest good for the greatest number, with future generations expected to be the vast majority of all people to ever exist. Consequentialist moral theories such as utilitarianism may generally be sympathetic to longtermism since whatever the theory considers morally valuable, there is likely going to be much more of it in the future than in the present. However, other non-consequentialist moral frameworks may also inspire longtermism. For instance, Toby Ord considers the responsibility that the present generation has towards future generations as grounded in the hard work and sacrifices made by past generations. He writes:42 Because the arrow of time makes it so much easier to help people who come after you than people who come before, the best way of understanding the partnership of the generations may be asymmetrical, with duties all flowing forwards in time—paying it forwards. On this view, our duties to future generations may thus be grounded in the work our ancestors did for us when we were future generations. Evaluating effects on the future. In his book "What We Owe the Future", William MacAskill discusses how individuals can shape the course of history. He introduces a three-part framework for thinking about effects on the future, which states that the long-term value of an outcome we may bring about depends on its "significance", "persistence", and "contingency". He explains that significance "is the average value added by bringing about a certain state of affairs", persistence means "how long that state of affairs lasts, once it has been brought about", and contingency "refers to the extent to which the state of affairs depends on an individual’s action".32 Moreover, MacAskill acknowledges the pervasive uncertainty, both moral and empirical, that surrounds longtermism and offers four lessons to help guide attempts to improve the long-term future: taking robustly good actions, building up options, learning more, and avoiding causing harm. Population ethics. Population ethics plays an important part in longtermist thinking. Many advocates of longtermism accept the total view of population ethics, on which bringing more happy people into existence is good, all other things being equal. Accepting such a view makes the case for longtermism particularly strong because the fact that there could be huge numbers of future people means that improving their lives and, crucially, ensuring that those lives happen at all, has enormous value. Other sentient beings. 
Longtermism is often discussed in relation to the interests of future generations of humans. However, some proponents of longtermism also put high moral value on the interests of non-human beings. From this perspective, expanding humanity's moral circle to other sentient beings may be a particularly important longtermist cause area, notably because a moral norm of caring about the suffering of non-human life might persist for a very long time if it becomes widespread. Discount rate. Longtermism implies that we should use a relatively small social discount rate when considering the moral value of the far future. In the standard Ramsey model used in economics, the social discount rate formula_0 is given by: formula_1 where formula_2 is the elasticity of marginal utility of consumption, formula_3 is the growth rate, and formula_4 is a quantity combining the "catastrophe rate" (discounting for the risk that future benefits won't occur) and pure time preference (valuing future benefits intrinsically less than present ones). Ord argues that nonzero pure time preference is illegitimate, since future generations matter morally as much as the present generation. Furthermore, formula_2 only applies to monetary benefits, not moral benefits, since it is based on diminishing marginal utility of consumption. Thus, the only factor that should affect the discount rate is the catastrophe rate, or the background level of existential risk. In contrast, Andreas Mogensen argues that a positive rate of pure time preference formula_4 can be justified on the basis of kinship. That is, common-sense morality allows us to be partial to those more closely related to us, so "we can permissibly weight the welfare of each succeeding generation less than that of the generation preceding it."9 This view is called temporalism and states that "temporal proximity (...) strengthens certain moral duties, including the duty to save". Criticism. Unpredictability. One objection to longtermism is that it relies on predictions of the effects of our actions over very long time horizons, which is difficult at best and impossible at worst. In response to this challenge, researchers interested in longtermism have sought to identify "value lock in" events—events, such as human extinction, which we may influence in the near-term but that will have very long-lasting, predictable future effects. Deprioritization of immediate issues. Another concern is that longtermism may lead to deprioritizing more immediate issues. For example, some critics have argued that considering humanity's future in terms of the next 10,000 or 10 million years might lead to downplaying the nearer-term effects of climate change. They also worry that the most radical forms of strong longtermism could in theory justify atrocities in the name of attaining "astronomical" amounts of future value. Anthropologist Vincent Ialenti has argued that avoiding this will require societies to adopt a "more textured, multifaceted, multidimensional longtermism that defies insular information silos and disciplinary echo chambers". Advocates of longtermism reply that the kinds of actions that are good for the long-term future are often also good for the present. An example of this is pandemic preparedness. Preparing for the worst case pandemics—those which could threaten the survival of humanity—may also help to improve public health in the present. 
For example, funding research and innovation in antivirals, vaccines, and personal protective equipment, as well as lobbying governments to prepare for pandemics, may help prevent smaller scale health threats for people today. Reliance on small probabilities of large payoffs. A further objection to longtermism is that it relies on accepting low probability bets of extremely big payoffs rather than more certain bets of lower payoffs (provided that the expected value is higher). From a longtermist perspective, it seems that if the probability of some existential risk is very low, and the value of the future is very high, then working to reduce the risk, even by tiny amounts, has extremely high expected value. An illustration of this problem is Pascal’s mugging, which involves the exploitation of an expected value maximizer via their willingness to accept such low probability bets of large payoffs. Advocates of longtermism have adopted a variety of responses to this concern. Some argue that, while unintuitive, it is ethically correct to favor infinitesimal probabilities of arbitrarily high-impact outcomes over moderate probabilities with moderately impactful outcomes. Others argue that longtermism need not rely on tiny probabilities as the probabilities of existential risks are within the normal range of risks that people seek to mitigate against— for example, wearing a seatbelt in case of a car crash.
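To make the expected-value reasoning above concrete, here is a toy comparison with purely hypothetical numbers; it illustrates the structure of the objection rather than any estimate actually defended in the literature.

```python
# Hypothetical payoffs and probabilities, chosen only to illustrate how a tiny
# probability of an enormous payoff can dominate a near-certain, modest one.

p_longshot, value_longshot = 1e-10, 1e20   # e.g. a minuscule reduction in an existential risk
p_safe_bet, value_safe_bet = 0.99,  1e6    # e.g. a reliable near-term intervention

ev_longshot = p_longshot * value_longshot  # 1e10
ev_safe_bet = p_safe_bet * value_safe_bet  # ~1e6

print(ev_longshot > ev_safe_bet)           # True: the long shot wins on expected value alone
```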
[ { "math_id": 0, "text": "\\rho" }, { "math_id": 1, "text": "\\rho = \\eta g + \\delta," }, { "math_id": 2, "text": "\\eta" }, { "math_id": 3, "text": "g" }, { "math_id": 4, "text": "\\delta" } ]
https://en.wikipedia.org/wiki?curid=69138931
69140733
EC 4.1.1.102
In enzymology, a phenacrylate decarboxylase (EC 4.1.1.102) is an enzyme that catalyzes the chemical reaction 4-coumarate formula_0 4-vinylphenol + CO2 Hence, this enzyme has one substrate, 4-coumarate, and two products, 4-vinylphenol and carbon dioxide. This enzyme belongs to the family of lyases, specifically the carboxy-lyases, which cleave carbon-carbon bonds. The systematic name of this enzyme class is 3-phenylprop-2-enoate carboxy-lyase. Other names in common use include ferulic acid decarboxylase and phenolic acid decarboxylase. It employs a prenylated flavin cofactor. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=69140733
69159988
Eamonn O'Brien (mathematician)
New Zealand mathematician Eamonn Anthony O'Brien is a professor of mathematics at the University of Auckland, New Zealand, known for his work in computational group theory and p-groups. Education. O'Brien obtained his B.Sc. (Hons) from the National University of Ireland (Galway) in 1983. He completed his Ph.D. in 1988 at the Australian National University. His dissertation, "The Groups of Order Dividing 256", was supervised by Michael F. Newman. Research. O'Brien's early work concerned classification, up to isomorphism, of groups of order 256. He developed early computer software to complete the classification and to verify it, which corrected errors in earlier counts. This led to classifications of many further families of small order groups. In 2000, together with Bettina Eick and Hans Ulrich Besche, O'Brien classified all groups of order at most 2000, excluding those of order 1024. The groups of order 1024 were instead enumerated. This classification is known as the Small Groups Library. Later, with Michael F. Newman and Michael Vaughan-Lee, O'Brien extended the classifications to groups of order formula_0, formula_1, and formula_2. These classifications comprise the tables provided in the computer algebra systems SageMath, GAP, and Magma. For a 20-year span from the mid-1990s, O'Brien led the so-called Matrix Group Recognition Project, whose primary objective is to solve the following problem: given a list of invertible matrices over a finite field, determine the composition series of the group. Implementations of algorithms that realize the goals of this project form the bedrock of matrix group computations in the computer algebra system Magma. O'Brien's collaborations include the resolution of several conjectures, including the Ore conjecture, according to which all elements of non-abelian finite simple groups are commutators. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "p^6" }, { "math_id": 1, "text": "p^7" }, { "math_id": 2, "text": "p^8" } ]
https://en.wikipedia.org/wiki?curid=69159988
691634
Clubsuit
In mathematics, and particularly in axiomatic set theory, ♣"S" (clubsuit) is a family of combinatorial principles that are a weaker version of the corresponding ◊"S"; it was introduced in 1975 by Adam Ostaszewski. Definition. For a given cardinal number formula_0 and a stationary set formula_1, formula_2 is the statement that there is a sequence formula_3 such that every A_δ is a cofinal subset of formula_5, and for every unbounded subset formula_4 there is a formula_5 so that formula_6. formula_7 is usually written as just formula_8. ♣ and ◊. It is clear that ◊ ⇒ ♣, and it was shown in 1975 that ♣ + CH ⇒ ◊; however, Saharon Shelah gave a proof in 1980 that there exists a model of ♣ in which CH does not hold, so ♣ and ◊ are not equivalent (since ◊ ⇒ CH). References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\kappa" }, { "math_id": 1, "text": "S \\subseteq \\kappa" }, { "math_id": 2, "text": "\\clubsuit_{S}" }, { "math_id": 3, "text": "\\left\\langle A_\\delta: \\delta \\in S\\right\\rangle" }, { "math_id": 4, "text": " A \\subseteq \\kappa" }, { "math_id": 5, "text": "\\delta" }, { "math_id": 6, "text": "A_{\\delta} \\subseteq A" }, { "math_id": 7, "text": "\\clubsuit_{\\omega_1}" }, { "math_id": 8, "text": "\\clubsuit" } ]
https://en.wikipedia.org/wiki?curid=691634
69164293
Compliance constants
Elements of an inverted Hessian matrix Compliance constants are the elements of an inverted Hessian matrix. The calculation of compliance constants provides an alternative description of chemical bonds in comparison with the widely used force constants, explicitly ruling out any dependency on the coordinate system. They provide a unique description of the mechanical strength of covalent and non-covalent bonding. While force constants (as energy second derivatives) are usually given in aJ/Å2 or N/cm, compliance constants are given in Å2/aJ or Å/mdyn. History. Recent publications that challenged accepted chemical understanding by reporting the detection or isolation of novel compounds with intriguing bonding character can still be provocative at times. The stir in such discoveries arose partly from the lack of a universally accepted bond descriptor. While bond dissociation energies (BDE) and rigid force constants have been generally regarded as primary tools for such interpretation, they are prone to giving flawed descriptions of chemical bonds in certain scenarios, whether simple or controversial. Such reasons prompted the necessity to seek an alternative approach to describe covalent and non-covalent interactions more rigorously. Jörg Grunenberg, a German chemist at the TU Braunschweig, and his Ph.D. student at the time, Kai Brandhorst, developed the program COMPLIANCE (freely available to the public), which harnesses compliance constants for tackling the aforementioned tasks. The authors use an inverted matrix of force constants, "i.e.", the inverted Hessian matrix, originally introduced by W. T. Taylor and K. S. Pitzer. The insight behind choosing the inverted matrix comes from the realization that not all elements in the Hessian matrix are necessary for describing covalent and non-covalent interactions; some are thus redundant. Such redundancy is common for many molecules, and more importantly, it ushers in the dependence of the elements of the Hessian matrix on the choice of coordinate system. Therefore, the authors claimed that force constants, albeit more widely used, are not an appropriate bond descriptor, whereas non-redundant and coordinate-system-independent compliance constants are. Theory. Force constants. By Taylor series expansion, the potential energy, formula_0, of any molecule can be expressed as: formula_1 (eq. 1) where formula_2 is a column vector of arbitrary and fully determined displacement coordinates, and formula_3 and formula_4 are the corresponding gradient (first derivative of formula_0) and Hessian (second derivative of formula_0), respectively. The point of interest is the stationary point on a potential energy surface (PES), so formula_3 is treated as zero, and by considering the relative energy, formula_5 becomes zero as well. By assuming a harmonic potential and regarding the third- and higher-derivative terms as negligible, the potential energy formula then simply becomes: formula_6 (eq. 2) Transitioning from Cartesian coordinates formula_2 to internal coordinates formula_7, which are more commonly used for the description of molecular geometries, gives rise to equation 3: formula_8 (eq. 3) where formula_9 is the corresponding Hessian for internal coordinates (commonly referred to as force constants), and it is in principle determined by the frequencies of a sufficient set of isotopic molecules. 
Since the Hessian formula_9 is the second derivative of the energy with respect to the displacements, which is, up to sign, the first derivative of the force with respect to displacement, the evaluation of this property as shown in equation 4 is often used to describe chemical bonds. formula_10 (eq. 4) Nevertheless, there are several issues with this method, as explained by Grunenberg, including the dependence of the force constants on the choice of internal coordinates and the presence of a redundant Hessian, which has no physical meaning and consequently engenders an ill-defined description of bond strength. Compliance constants. An alternative approach, explained by Decius, is to write the potential energy of a molecule not in terms of internal displacement coordinates but as a quadratic form in the generalized displacement forces (the negative gradient) formula_11. formula_12 (eq. 5) This gradient formula_11 is the first derivative of the potential energy with respect to the displacement coordinates, which can be expressed as shown: formula_13 (eq. 6) By substituting the expression for formula_11 from equation 6 into equation 5, equation 7 is obtained. formula_14 (eq. 7) Thus, knowing that formula_9 is positive definite, the only possible value of formula_15, which is the compliance matrix, must be: formula_16 (eq. 8) Equation 7 offers a surrogate formulation of the potential energy which proves to be significantly advantageous in defining chemical bonds. Specifically, this formulation is independent of the choice of coordinates and also eliminates the issue of the redundant Hessian from which the common force constant method suffers. Moreover, compliance constants can be calculated regardless of the redundancy of the coordinates. Archetype of compliance constants calculation. Cyclobutane: force constants calculations. To illustrate how the choice of coordinate system for calculations on chemical bonds can strongly affect the results and consequently produce ill-defined bond descriptors, sample calculations for "n"-butane and cyclobutane are shown in this section. Note that it is known that all four equivalent C-C bonds in cyclobutane are weaker than either of the two distinct C-C bonds in "n"-butane; therefore, juxtaposing and evaluating the strengths of the C-C bonds in this C4 system can exemplify how force constants fail and how compliance constants do not. The tables immediately below are results calculated at the MP2/aug-cc-pvtz level of theory based on a typical force constants calculation. Tables 1 and 2 display a force constant in N/cm between each pair of carbon atoms (diagonal) as well as the coupling (off-diagonal). Considering natural internal coordinates on the left, the results make chemical sense. Firstly, the C-C bonds in "n"-butane are generally stronger than those in cyclobutane, which is in line with what is expected. Secondly, the C-C bonds in cyclobutane are equivalent, with force constant values of 4.173 N/cm. Lastly, there is little coupling between the force constants, as seen from the small coupling constants in the off-diagonal terms. However, when z-matrix coordinates are used, the results differ from those obtained with natural internal coordinates and become erroneous. The four C-C bonds all have distinct values in cyclobutane, and the coupling becomes much more pronounced. Significantly, the force constants of the C-C bonds in cyclobutane here are also larger than those of "n"-butane, which is in conflict with chemical intuition.
Clearly, for cyclobutane and numerous other molecules, using force constants gives rise to inaccurate bond descriptors because of their dependence on the coordinate system. Cyclobutane: compliance constants calculations. A more accurate approach, as claimed by Grunenberg, is to exploit compliance constants as a means of describing chemical bonds, as shown below. All the calculated compliance constants are given in N−1 units. For both "n"-butane and cyclobutane, the results are the same regardless of the choice of coordinate system. One aspect in which compliance constants prove more powerful than force constants for cyclobutane is the smaller degree of coupling. These compliance coupling constants are the off-diagonal elements of the inverted Hessian matrix, and together with the compliance constants they physically describe the relaxed distortion of a molecule along a minimum energy path. Moreover, the compliance constants yield the same value for all the C-C bonds, and the values are smaller than those obtained for "n"-butane. Compliance constants thus give results that are in accordance with what is generally known about the ring strain of cyclobutane. Applications to main group compounds. Diboryne. Diboryne, a compound with a boron-boron triple bond, was first isolated as an N-heterocyclic carbene supported complex (NHC-BB-NHC) in the Braunschweig group, and its peculiar bonding structure thereupon catalyzed new research to computationally assess the nature of this then-controversial triple bond. A few years later, Köppe and Schnöckel published an article arguing that the B-B bond should be defined as a 1.5 bond based on thermodynamic arguments and rigid force constant calculations. That same year, Grunenberg reassessed the B-B bond using generalized compliance constants, which he claimed are better suited as a bond strength descriptor. The calculated relaxed force constants show a clear trend as the bond order of the B-B bond increases, which supports the existence of the triple bond in Braunschweig's compound. Digallium bonds. Grunenberg and N. Goldberg probed the bond strength of a Ga-Ga triple bond by calculating the compliance constants of digallium complexes with a single bond, a double bond, or a triple bond. The results show that the Ga-Ga triple bond of a model Na2[H-GaGa-H] compound in "C2h" symmetry, with a compliance constant value of 0.870 aJ/Å2, is in fact weaker than a Ga-Ga double bond (1.201 aJ/Å2). Watson-Crick base pairs. Besides chemical bonds, compliance constants are also useful for characterizing non-covalent interactions, such as the hydrogen bonds in Watson-Crick base pairs. Grunenberg calculated the compliance constant for each of the donor-H⋯acceptor linkages in AT and CG base pairs and found that the central N-H⋯N bond in the CG base pair is the strongest one, with a compliance constant value of 2.284 Å/mdyn. (Note that the unit is the reciprocal of the usual force-constant unit.) In addition, one of the three hydrogen bonding interactions in an AT base pair shows an extremely large compliance constant of >20 Å/mdyn, indicative of a weak interaction. References. <templatestyles src="Reflist/styles.css" />
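The relationship in equation 8 can be reproduced numerically in a few lines: given a non-redundant force constant matrix in internal coordinates, the compliance matrix is simply its inverse, and the diagonal elements are the compliance constants of the corresponding internal coordinates. The following Python sketch illustrates this; the 2×2 force constant matrix used here is an arbitrary, made-up example rather than data from the calculations discussed above.

import numpy as np

# Illustrative force constant (Hessian) matrix H_q in internal coordinates,
# in N/cm; the off-diagonal element is a coupling force constant.
# The numbers are placeholders chosen only for demonstration.
H_q = np.array([[4.5, 0.1],
                [0.1, 4.2]])

# Compliance matrix C = H_q^-1 (eq. 8), in cm/N.
C = np.linalg.inv(H_q)

print("compliance constants:", np.diag(C))   # diagonal elements
print("coupling compliance: ", C[0, 1])      # off-diagonal element

A smaller compliance constant corresponds to a stiffer internal coordinate, and the reciprocal of a diagonal element of the compliance matrix is the corresponding relaxed force constant.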
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "V = V_0 + G^TZ + {1 \\over 2}Z^THZ + ..." }, { "math_id": 2, "text": "Z" }, { "math_id": 3, "text": "G" }, { "math_id": 4, "text": "H" }, { "math_id": 5, "text": "V_0" }, { "math_id": 6, "text": "V = {1 \\over 2}Z^THZ" }, { "math_id": 7, "text": "Q" }, { "math_id": 8, "text": "V = {1 \\over 2}Q^TH_qQ" }, { "math_id": 9, "text": "H_q" }, { "math_id": 10, "text": "H_q = \\biggl({\\partial^2V\\over\\partial Q_i\\partial Q_j}\\biggr)_0" }, { "math_id": 11, "text": "G_q" }, { "math_id": 12, "text": "V = {1 \\over 2}{G_q}^TCG_q\n" }, { "math_id": 13, "text": "G_q = H_qQ" }, { "math_id": 14, "text": "V = {1 \\over 2}Q^T{H_q}^TCH_qQ\n" }, { "math_id": 15, "text": "C" }, { "math_id": 16, "text": "C = {H_q}^{-1}" } ]
https://en.wikipedia.org/wiki?curid=69164293
69164461
Tauc–Lorentz model
The Tauc–Lorentz model is a mathematical formula for the frequency dependence of the complex-valued relative permittivity, sometimes referred to as the dielectric function. The model has been used to fit the complex refractive index of amorphous semiconductor materials at frequencies greater than their optical band gap. The dispersion relation bears the names of Jan Tauc and Hendrik Lorentz, whose previous works were combined by G. E. Jellison and F. A. Modine to create the model. The model was inspired, in part, by shortcomings of the Forouhi–Bloomer model, which is aphysical due to its incorrect asymptotic behavior and non-Hermitian character. Despite the inspiration, the Tauc–Lorentz model is itself aphysical due to being non-Hermitian and non-analytic in the upper half-plane. Further researchers have modified the model to address these shortcomings. Mathematical formulation. The general form of the model is given by formula_0 where formula_1 is the relative permittivity, formula_2 is the photon energy (related to the angular frequency by formula_3), formula_4 is the high-frequency relative permittivity, and formula_5 is the Tauc–Lorentz susceptibility. The imaginary component of formula_6 is formed as the product of the imaginary component of the Lorentz oscillator model and a model developed by Jan Tauc for the imaginary component of the relative permittivity near the bandgap of a material. The real component of formula_6 is obtained via the Kramers–Kronig transform of its imaginary component. Mathematically, they are given by formula_7 formula_8 where formula_9 is a prefactor describing the strength of the oscillator, formula_10 is a broadening parameter, formula_11 is the peak transition energy, and formula_12 is the optical band gap energy. Computing the Kramers–Kronig transform gives formula_13 formula_14 formula_15 formula_16 formula_17 formula_18 where formula_19, formula_20, formula_21, formula_22, and formula_23. References. <templatestyles src="Reflist/styles.css" />
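As an illustration of the expression for the imaginary part given above, the following Python sketch evaluates it on a grid of photon energies. The parameter values are arbitrary placeholders, not fitted values for any real material, and the function name is an ad hoc choice for this example.

import numpy as np

def im_chi_tl(E, A, E0, C, Eg):
    # Imaginary part of the Tauc-Lorentz susceptibility: zero below the band
    # gap Eg, and above it the Tauc factor (E - Eg)^2 / E times a Lorentz
    # oscillator line shape centered at E0 with broadening C and amplitude A.
    E = np.asarray(E, dtype=float)
    lorentz = A * E0 * C * (E - Eg) ** 2 / ((E ** 2 - E0 ** 2) ** 2 + C ** 2 * E ** 2)
    return np.where(E > Eg, lorentz / E, 0.0)

# Placeholder parameters in eV: amplitude, peak transition energy,
# broadening, and optical band gap.
E = np.linspace(0.5, 6.0, 12)
print(im_chi_tl(E, A=100.0, E0=3.5, C=0.5, Eg=1.5))

The real part can then be obtained either from the closed-form result quoted above or by evaluating the Kramers–Kronig integral numerically.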
[ { "math_id": 0, "text": "\\varepsilon(E) = \\varepsilon_{\\infty} + \\chi^{TL}(E)" }, { "math_id": 1, "text": "\\varepsilon" }, { "math_id": 2, "text": "E" }, { "math_id": 3, "text": "E=\\hbar\\omega" }, { "math_id": 4, "text": "\\varepsilon_{\\infty}" }, { "math_id": 5, "text": "\\chi^{TL}" }, { "math_id": 6, "text": "\\chi^{TL}(E)" }, { "math_id": 7, "text": " \\Im\\left( \\chi^{TL}(E) \\right) = \\begin{cases} \\frac{1}{E} \\frac{A E_{0} C (E - E_{g})^{2}}{(E^{2} - E_{0}^{2})^{2} + C^{2} E^{2}}, & \\text{if } E > E_{g} \\\\ 0, & \\text{if } E \\le E_{g} \\end{cases}" }, { "math_id": 8, "text": " \\Re\\left( \\chi^{TL}(E) \\right) = \\frac{2}{\\pi} \\int_{E_{g}}^{\\infty} \\frac{\\xi \\Im\\left( \\chi^{TL}(\\xi) \\right)}{\\xi^{2} - E^{2}} d\\xi " }, { "math_id": 9, "text": "A" }, { "math_id": 10, "text": "C" }, { "math_id": 11, "text": "E_{0}" }, { "math_id": 12, "text": "E_{g}" }, { "math_id": 13, "text": " \\Re\\left( \\chi^{TL}(E) \\right) \\,\\!" }, { "math_id": 14, "text": "= \\frac{A C}{\\pi \\zeta^{4}} \\frac{a_{\\mathrm{ln}}}{2 \\alpha E_{0}} \\ln{\\left( \\frac{E_{0}^{2} + E_{g}^{2} \n+ \\alpha E_{g}}{E_{0}^{2} + E_{g}^{2} - \\alpha E_{g}} \\right)} \\,\\!" }, { "math_id": 15, "text": "- \\frac{A}{\\pi \\zeta^{4}} \\frac{a_{\\mathrm{atan}}}{E_{0}} \\left[ \\pi - \\arctan{\\left( \\frac{\\alpha + 2 E_{g}}{C} \\right)} + \\arctan{\\left( \\frac{\\alpha - 2 E_{g}}{C} \\right)} \\right] \\,\\!" }, { "math_id": 16, "text": "+ 2 \\frac{A E_{0}}{\\pi \\zeta^{4} \\alpha} E_{g} \\left( E^{2} - \\gamma^{2} \\right) \\left[ \\pi + 2 \\arctan{\\left( 2 \\frac{\\gamma^{2} - E_{g}^{2}}{\\alpha C} \\right)} \\right] \\,\\!" }, { "math_id": 17, "text": " - \\frac{A E_{0} C}{\\pi \\zeta^{4}} \\frac{E^{2} + E_{g}^{2}}{E} \\ln{\\left( \\frac{\\left| E - E_{g} \\right|}{E+E_{g}} \\right)} \\,\\!" }, { "math_id": 18, "text": " + 2 \\frac{A E_{0} C}{\\pi \\zeta^{4}} E_{g} \\ln{\\left[ \\frac{\\left| E - E_{g}\\right| \\left( E + E_{g} \\right)}{\\sqrt{\\left( E_{0}^{2} - E_{g}^{2} \\right)^{2} + E_{g}^{2} C^{2} }} \\right]}\n" }, { "math_id": 19, "text": "a_{\\mathrm{ln}} = \\left( E_{g}^{2} - E_{0}^{2} \\right) E^{2} + E_{g}^{2} C^{2} - E_{0}^{2} \\left( E_{0}^{2} + 3 E_{g}^{2} \\right)" }, { "math_id": 20, "text": "a_{\\mathrm{atan}} = \\left( E^{2} - E_{0}^{2} \\right)\\left( E_{0}^{2} + E_{g}^{2} \\right) + E_{g}^{2} C^{2}" }, { "math_id": 21, "text": "\\alpha = \\sqrt{4 E_{0}^{2} - C^{2}}" }, { "math_id": 22, "text": "\\gamma = \\sqrt{E_{0}^{2} - C^{2}/2}" }, { "math_id": 23, "text": "\\zeta^{4} = \\left( E^{2} - \\gamma^{2} \\right)^{2} + \\frac{\\alpha^{2} C^{2}}{4}" } ]
https://en.wikipedia.org/wiki?curid=69164461
69164925
Tetraoxygen difluoride
<templatestyles src="Chembox/styles.css"/> Chemical compound Tetraoxygen difluoride is an inorganic chemical compound of oxygen, belonging to the family of oxygen fluorides. It consists of two O2F units bound together with a weak O-O bond, and is the dimer of the O2F radical. Preparation. Tetraoxygen difluoride can be prepared in two steps. In the first step, a photochemically generated fluorine atom reacts with oxygen to form the dioxygen fluoride radical. formula_0 This radical subsequently undergoes dimerization, entering an equilibrium with tetraoxygen difluoride at temperatures under −175 °C: formula_1 At the same time, the dioxygen fluoride radicals decompose into dioxygen difluoride and oxygen gas, which shifts the above equilibrium with O4F2 to the left. formula_2 Properties. Tetraoxygen difluoride is dark red-brown as a solid and has a melting point around −191 °C. It is a strong fluorinating and oxidizing agent, even stronger than dioxygen difluoride, so that it can, for example, oxidize Ag(II) to Ag(III) or Au(III) to Au(V). This process creates the corresponding anions AgF and AuF. With non-noble substances this oxidation can lead to explosions even at low temperatures. As an example, elemental sulfur reacts explosively to form sulfur hexafluoride even at −180 °C. Similar to [O2F]• or O2F2, tetraoxygen difluoride tends to form salts with the dioxygenyl cation O when it reacts with fluoride acceptors such as boron trifluoride (BF3). In the case of BF3, this leads to the formation of O2+•BF4−: O4F2 + 2BF3 -> 2O2+BF4− Similarly, for arsenic pentafluoride it reacts to create O2+AsF6−.
[ { "math_id": 0, "text": "\\mathrm{2 \\ O_2 + 2 \\ F^\\cdot \\longrightarrow 2 \\ [O_2F]^\\cdot }" }, { "math_id": 1, "text": "\\mathrm{2 \\ [O_2F]^\\cdot \\rightleftharpoons O_4F_2}" }, { "math_id": 2, "text": "\\mathrm{2 \\ [O_2F]^\\cdot \\longrightarrow O_2 + O_2F_2}" } ]
https://en.wikipedia.org/wiki?curid=69164925
6916708
Modal companion
In logic, a modal companion of a superintuitionistic (intermediate) logic "L" is a normal modal logic that interprets "L" by a certain canonical translation, described below. Modal companions share various properties of the original intermediate logic, which makes it possible to study intermediate logics using tools developed for modal logic. Gödel–McKinsey–Tarski translation. Let "A" be a propositional intuitionistic formula. A modal formula "T"("A") is defined by induction on the complexity of "A": formula_0 for any propositional variable formula_1, formula_2 formula_3 formula_4 formula_5 As negation is defined in intuitionistic logic by formula_6, we also have formula_7 "T" is called the Gödel translation or Gödel–McKinsey–Tarski translation. The translation is sometimes presented in slightly different ways: for example, one may insert formula_8 before every subformula. All such variants are provably equivalent in S4. Modal companions. For any normal modal logic "M" that extends S4, we define its si-fragment "ρM" as formula_9 The si-fragment of any normal extension of S4 is a superintuitionistic logic. A modal logic "M" is a modal companion of a superintuitionistic logic "L" if formula_10. Every superintuitionistic logic has modal companions. The smallest modal companion of "L" is formula_11 where formula_12 denotes normal closure. It can be shown that every superintuitionistic logic also has a largest modal companion, which is denoted by "σL". A modal logic "M" is a companion of "L" if and only if formula_13. For example, S4 itself is the smallest modal companion of intuitionistic logic (IPC). The largest modal companion of IPC is the Grzegorczyk logic Grz, axiomatized by the axiom formula_14 over K. The smallest modal companion of classical logic (CPC) is Lewis' S5, whereas its largest modal companion is the logic formula_15 More examples: Blok–Esakia isomorphism. The set of extensions of a superintuitionistic logic "L" ordered by inclusion forms a complete lattice, denoted Ext"L". Similarly, the set of normal extensions of a modal logic "M" is a complete lattice NExt"M". The companion operators "ρM", "τL", and "σL" can be considered as mappings between the lattices ExtIPC and NExtS4: formula_16 formula_17 It is easy to see that all three are monotone, and formula_18 is the identity function on ExtIPC. L. Maksimova and V. Rybakov have shown that "ρ", "τ", and "σ" are actually complete, join-complete and meet-complete lattice homomorphisms respectively. The cornerstone of the theory of modal companions is the Blok–Esakia theorem, proved independently by Wim Blok and Leo Esakia. It states "The mappings "ρ" and "σ" are mutually inverse lattice isomorphisms of" ExtIPC "and" NExtGrz. Accordingly, "σ" and the restriction of "ρ" to NExtGrz are called the Blok–Esakia isomorphism. An important corollary to the Blok–Esakia theorem is a simple syntactic description of largest modal companions: for every superintuitionistic logic "L", formula_19 Semantic description. The Gödel translation has a frame-theoretic counterpart. Let formula_20 be a transitive and reflexive modal general frame. The preorder "R" induces the equivalence relation formula_21 on "F", which identifies points belonging to the same cluster. Let formula_22 be the induced quotient partial order (i.e., "ρF" is the set of equivalence classes of formula_23), and put formula_24 Then formula_25 is an intuitionistic general frame, called the skeleton of F.
The point of the skeleton construction is that it preserves validity modulo Gödel translation: for any intuitionistic formula "A", "A" is valid in "ρ"F if and only if "T"("A") is valid in F. Therefore, the si-fragment of a modal logic "M" can be defined semantically: if "M" is complete with respect to a class "C" of transitive reflexive general frames, then "ρM" is complete with respect to the class formula_26. The largest modal companions also have a semantic description. For any intuitionistic general frame formula_27, let "σV" be the closure of "V" under Boolean operations (binary intersection and complement). It can be shown that "σV" is closed under formula_8, thus formula_28 is a general modal frame. The skeleton of "σ"F is isomorphic to F. If "L" is a superintuitionistic logic complete with respect to a class "C" of general frames, then its largest modal companion "σL" is complete with respect to formula_29. The skeleton of a Kripke frame is itself a Kripke frame. On the other hand, "σ"F is never a Kripke frame if F is a Kripke frame of infinite depth. Preservation theorems. The value of modal companions and the Blok–Esakia theorem as a tool for investigation of intermediate logics comes from the fact that many interesting properties of logics are preserved by some or all of the mappings "ρ", "σ", and "τ". For example, Other properties. Every intermediate logic "L" has an infinite number of modal companions, and moreover, the set formula_30 of modal companions of "L" contains an infinite descending chain. For example, formula_31 consists of S5, and the logics formula_32 for every positive integer "n", where formula_33 is the "n"-element cluster. The set of modal companions of any "L" is either countable, or it has the cardinality of the continuum. Rybakov has shown that the lattice Ext"L" can be embedded in formula_30; in particular, a logic has a continuum of modal companions if it has a continuum of extensions (this holds, for instance, for all intermediate logics below KC). It is unknown whether the converse is also true. The Gödel translation can be applied to rules as well as formulas: the translation of a rule formula_34 is the rule formula_35 A rule "R" is admissible in a logic "L" if the set of theorems of "L" is closed under "R". It is easy to see that "R" is admissible in a superintuitionistic logic "L" whenever "T"("R") is admissible in a modal companion of "L". The converse is not true in general, but it holds for the largest modal companion of "L".
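Because the Gödel–McKinsey–Tarski translation is defined by a simple structural recursion, it is straightforward to implement. The following Python sketch uses a minimal, hypothetical encoding of formulas as nested tuples (the tag names and the encoding are illustrative choices, not part of any standard library) and computes "T"("A") for a propositional intuitionistic formula "A".

# Formulas as nested tuples: ('var', 'p'), ('bot',), ('and', A, B),
# ('or', A, B), ('imp', A, B); negation of A is encoded as ('imp', A, ('bot',)).

def godel_translation(f):
    """Goedel-McKinsey-Tarski translation T of an intuitionistic formula."""
    tag = f[0]
    if tag == 'var':     # T(p) = box p
        return ('box', f)
    if tag == 'bot':     # T(bot) = bot
        return f
    if tag == 'and':     # T(A and B) = T(A) and T(B)
        return ('and', godel_translation(f[1]), godel_translation(f[2]))
    if tag == 'or':      # T(A or B) = T(A) or T(B)
        return ('or', godel_translation(f[1]), godel_translation(f[2]))
    if tag == 'imp':     # T(A -> B) = box(T(A) -> T(B))
        return ('box', ('imp', godel_translation(f[1]), godel_translation(f[2])))
    raise ValueError('unknown connective: %r' % (tag,))

# Example: T(p -> q) = box(box p -> box q)
print(godel_translation(('imp', ('var', 'p'), ('var', 'q'))))

Feeding the translation of an intuitionistic theorem to a prover for a modal companion such as S4 is exactly the sense in which the companion interprets the intermediate logic.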
[ { "math_id": 0, "text": "T(p)=\\Box p," }, { "math_id": 1, "text": "p" }, { "math_id": 2, "text": "T(\\bot)=\\bot," }, { "math_id": 3, "text": "T(A\\land B)=T(A)\\land T(B)," }, { "math_id": 4, "text": "T(A\\lor B)=T(A)\\lor T(B)," }, { "math_id": 5, "text": "T(A\\to B)=\\Box(T(A)\\to T(B))." }, { "math_id": 6, "text": "A\\to\\bot" }, { "math_id": 7, "text": "T(\\neg A)=\\Box\\neg T(A)." }, { "math_id": 8, "text": "\\Box" }, { "math_id": 9, "text": "\\rho M=\\{A\\mid M\\vdash T(A)\\}." }, { "math_id": 10, "text": "L=\\rho M" }, { "math_id": 11, "text": "\\tau L=\\mathbf{S4}\\oplus\\{T(A)\\mid L\\vdash A\\}," }, { "math_id": 12, "text": "\\oplus" }, { "math_id": 13, "text": "\\tau L\\subseteq M\\subseteq\\sigma L" }, { "math_id": 14, "text": "\\Box(\\Box(A\\to\\Box A)\\to A)\\to A" }, { "math_id": 15, "text": "\\mathbf{Triv}=\\mathbf K\\oplus(A\\leftrightarrow\\Box A)." }, { "math_id": 16, "text": "\\rho\\colon\\mathrm{NExt}\\,\\mathbf{S4}\\to\\mathrm{Ext}\\,\\mathbf{IPC}," }, { "math_id": 17, "text": "\\tau,\\sigma\\colon\\mathrm{Ext}\\,\\mathbf{IPC}\\to\\mathrm{NExt}\\,\\mathbf{S4}." }, { "math_id": 18, "text": "\\rho\\circ\\tau=\\rho\\circ\\sigma" }, { "math_id": 19, "text": "\\sigma L=\\tau L\\oplus\\mathbf{Grz}." }, { "math_id": 20, "text": "\\mathbf F=\\langle F,R,V\\rangle" }, { "math_id": 21, "text": "x\\sim y \\iff x\\,R\\,y \\land y\\,R\\,x" }, { "math_id": 22, "text": "\\langle\\rho F,\\le\\rangle=\\langle F,R\\rangle/{\\sim}" }, { "math_id": 23, "text": "\\sim" }, { "math_id": 24, "text": "\\rho V=\\{A/{\\sim}\\mid A\\in V,A=\\Box A\\}." }, { "math_id": 25, "text": "\\rho\\mathbf F=\\langle\\rho F,\\le,\\rho V\\rangle" }, { "math_id": 26, "text": "\\{\\rho\\mathbf F;\\,\\mathbf F\\in C\\}" }, { "math_id": 27, "text": "\\mathbf F=\\langle F,\\le,V\\rangle" }, { "math_id": 28, "text": "\\sigma\\mathbf F=\\langle F,\\le,\\sigma V\\rangle" }, { "math_id": 29, "text": "\\{\\sigma\\mathbf F;\\,\\mathbf F\\in C\\}" }, { "math_id": 30, "text": "\\rho^{-1}(L)" }, { "math_id": 31, "text": "\\rho^{-1}(\\mathbf{CPC})" }, { "math_id": 32, "text": "L(C_n)" }, { "math_id": 33, "text": "C_n" }, { "math_id": 34, "text": "R=\\frac{A_1,\\dots,A_n}{B}" }, { "math_id": 35, "text": "T(R)=\\frac{T(A_1),\\dots,T(A_n)}{T(B)}." } ]
https://en.wikipedia.org/wiki?curid=6916708
6917139
Band diagram
Diagram plotting electron energy levels In solid-state physics of semiconductors, a band diagram is a diagram plotting various key electron energy levels (Fermi level and nearby energy band edges) as a function of some spatial dimension, which is often denoted "x". These diagrams help to explain the operation of many kinds of semiconductor devices and to visualize how bands change with position (band bending). The bands may be coloured to distinguish level filling. A band diagram should not be confused with a band structure plot. In both a band diagram and a band structure plot, the vertical axis corresponds to the energy of an electron. The difference is that in a band structure plot the horizontal axis represents the wave vector of an electron in an infinitely large, homogeneous material (a crystal or vacuum), whereas in a band diagram the horizontal axis represents position in space, usually passing through multiple materials. Because a band diagram shows the "changes" in the band structure from place to place, the resolution of a band diagram is limited by the Heisenberg uncertainty principle: the band structure relies on momentum, which is only precisely defined for large length scales. For this reason, the band diagram can only accurately depict evolution of band structures over long length scales, and has difficulty in showing the microscopic picture of sharp, atomic scale interfaces between different materials (or between a material and vacuum). Typically, an interface must be depicted as a "black box", though its long-distance effects can be shown in the band diagram as asymptotic band bending. Anatomy. The vertical axis of the band diagram represents the energy of an electron, which includes both kinetic and potential energy. The horizontal axis represents position, often not being drawn to scale. Note that the Heisenberg uncertainty principle prevents the band diagram from being drawn with a high positional resolution, since the band diagram shows energy bands (as resulting from a momentum-dependent band structure). While a basic band diagram only shows electron energy levels, often a band diagram will be decorated with further features. It is common to see cartoon depictions of the motion in energy and position of an electron (or electron hole) as it drifts, is excited by a light source, or relaxes from an excited state. The band diagram may be shown connected to a circuit diagram showing how bias voltages are applied, how charges flow, etc. The bands may be colored to indicate filling of energy levels, or sometimes the band gaps will be colored instead. Energy levels. Depending on the material and the degree of detail desired, a variety of energy levels will be plotted against position: Band bending. When looking at a band diagram, the electron energy states (bands) in a material can curve up or down near a junction. This effect is known as band bending. It does not correspond to any physical (spatial) bending. Rather, band bending refers to the local changes in electronic structure, in the energy offset of a semiconductor's band structure near a junction, due to space charge effects. The primary principle underlying band bending inside a semiconductor is space charge: a local imbalance in charge neutrality. Poisson's equation gives a curvature to the bands wherever there is an imbalance in charge neutrality. 
The reason for the charge imbalance is that, although a homogeneous material is charge neutral everywhere (since it must be charge neutral on average), there is no such requirement for interfaces. Practically all types of interface develop a charge imbalance, though for different reasons: Knowing how bands will bend when two different types of materials are brought into contact is key to understanding whether the junction will be rectifying (Schottky) or ohmic. The degree of band bending depends on the relative Fermi levels and carrier concentrations of the materials forming the junction. In an n-type semiconductor the band bends upward, while in a p-type semiconductor the band bends downward. Note that band bending is due neither to a magnetic field nor to a temperature gradient. Rather, it arises only in conjunction with the force of the electric field. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "-e\\phi" }, { "math_id": 1, "text": "\\phi" } ]
https://en.wikipedia.org/wiki?curid=6917139
691736
Diamond principle
In mathematics, and particularly in axiomatic set theory, the diamond principle ◊ is a combinatorial principle introduced by Ronald Jensen in 1972 that holds in the constructible universe ("L") and that implies the continuum hypothesis. Jensen extracted the diamond principle from his proof that the axiom of constructibility ("V" = "L") implies the existence of a Suslin tree. Definitions. The diamond principle ◊ says that there exists a <templatestyles src="Template:Visible anchor/styles.css" />◊-sequence, a family of sets "Aα" ⊆ "α" for "α" < "ω"1 such that for any subset "A" of ω1 the set of "α" with "A" ∩ "α" = "Aα" is stationary in "ω"1. There are several equivalent forms of the diamond principle. One states that there is a countable collection A"α" of subsets of "α" for each countable ordinal "α" such that for any subset "A" of "ω"1 there is a stationary subset "C" of "ω"1 such that for all "α" in "C" we have "A" ∩ "α" ∈ A"α" and "C" ∩ "α" ∈ A"α". Another equivalent form states that there exist sets "A""α" ⊆ "α" for "α" < "ω"1 such that for any subset "A" of "ω"1 there is at least one infinite "α" with "A" ∩ "α" = "A""α". More generally, for a given cardinal number "κ" and a stationary set "S" ⊆ "κ", the statement ◊"S" (sometimes written ◊("S") or ◊"κ"("S")) is the statement that there is a sequence ⟨"Aα" : "α" ∈ "S"⟩ with "Aα" ⊆ "α" such that for any subset "A" of "κ" the set of "α" in "S" with "A" ∩ "α" = "Aα" is stationary in "κ". The principle ◊"ω"1 is the same as ◊. The diamond-plus principle ◊+ states that there exists a ◊+-sequence, in other words a countable collection A"α" of subsets of "α" for each countable ordinal α such that for any subset "A" of "ω"1 there is a closed unbounded subset "C" of "ω"1 such that for all "α" in "C" we have "A" ∩ "α" ∈ A"α" and "C" ∩ "α" ∈ A"α". Properties and use. Jensen showed that the diamond principle ◊ implies the existence of Suslin trees. He also showed that "V" = "L" implies the diamond-plus principle, which implies the diamond principle, which implies CH. In particular the diamond principle and the diamond-plus principle are both independent of the axioms of ZFC. Also ♣ + CH implies ◊, but Shelah gave models of ♣ + ¬ CH, so ◊ and ♣ are not equivalent (rather, ♣ is weaker than ◊). Matet proved the principle formula_0 equivalent to a property of partitions of formula_1 with diagonal intersection of initial segments of the partitions stationary in formula_1. The diamond principle ◊ does not imply the existence of a Kurepa tree, but the stronger ◊+ principle implies both the ◊ principle and the existence of a Kurepa tree. Akemann and Weaver used ◊ to construct a "C"*-algebra serving as a counterexample to Naimark's problem. For all cardinals "κ" and stationary subsets "S" ⊆ "κ"+, ◊"S" holds in the constructible universe. Shelah proved that for "κ" > ℵ0, ◊"κ"+("S") follows from 2"κ" = "κ"+ for stationary "S" that do not contain ordinals of cofinality "κ". Shelah showed that the diamond principle solves the Whitehead problem by implying that every Whitehead group is free. References. <templatestyles src="Refbegin/styles.css" /> Citations. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\diamondsuit_\\kappa" }, { "math_id": 1, "text": "\\kappa" } ]
https://en.wikipedia.org/wiki?curid=691736
691741
Character theory
Concept in mathematical group theory In mathematics, more specifically in group theory, the character of a group representation is a function on the group that associates to each group element the trace of the corresponding matrix. The character carries the essential information about the representation in a more condensed form. Georg Frobenius initially developed representation theory of finite groups entirely based on the characters, and without any explicit matrix realization of representations themselves. This is possible because a complex representation of a finite group is determined (up to isomorphism) by its character. The situation with representations over a field of positive characteristic, so-called "modular representations", is more delicate, but Richard Brauer developed a powerful theory of characters in this case as well. Many deep theorems on the structure of finite groups use characters of modular representations. Applications. Characters of irreducible representations encode many important properties of a group and can thus be used to study its structure. Character theory is an essential tool in the classification of finite simple groups. Close to half of the proof of the Feit–Thompson theorem involves intricate calculations with character values. Easier, but still essential, results that use character theory include Burnside's theorem (a purely group-theoretic proof of Burnside's theorem has since been found, but that proof came over half a century after Burnside's original proof), and a theorem of Richard Brauer and Michio Suzuki stating that a finite simple group cannot have a generalized quaternion group as its Sylow 2-subgroup. Definitions. Let V be a finite-dimensional vector space over a field F and let "ρ" : "G" → GL("V") be a representation of a group G on V. The character of ρ is the function "χρ" : "G" → "F" given by formula_0 where Tr is the trace. A character "χρ" is called irreducible or simple if ρ is an irreducible representation. The degree of the character χ is the dimension of ρ; in characteristic zero this is equal to the value "χ"(1). A character of degree 1 is called linear. When G is finite and F has characteristic zero, the kernel of the character "χρ" is the normal subgroup: formula_1 which is precisely the kernel of the representation ρ. However, the character is "not" a group homomorphism in general. Properties. Arithmetic properties. Let ρ and σ be representations of G. Then the following identities hold: formula_3 formula_4 formula_5 formula_6 formula_7 where "ρ"⊕"σ" is the direct sum, "ρ"⊗"σ" is the tensor product, "ρ"∗ denotes the conjugate transpose of ρ, and Alt2 is the alternating product Alt2"ρ" = "ρ" ∧ "ρ" and Sym2 is the symmetric square, which is determined by formula_8 Character tables. The irreducible complex characters of a finite group form a character table which encodes much useful information about the group G in a compact form. Each row is labelled by an irreducible representation and the entries in the row are the characters of the representation on the respective conjugacy class of G. The columns are labelled by (representatives of) the conjugacy classes of G. It is customary to label the first row by the character of the trivial representation, which is the trivial action of G on a 1-dimensional vector space by formula_9 for all formula_10. Each entry in the first row is therefore 1. Similarly, it is customary to label the first column by the identity. Therefore, the first column contains the degree of each irreducible character.
Here is the character table of formula_11 the cyclic group with three elements and generator "u":

       1      "u"     "u"2
χ1     1      1       1
χ2     1      ω       ω2
χ3     1      ω2      ω

where ω is a primitive third root of unity. The character table is always square, because the number of irreducible representations is equal to the number of conjugacy classes. Orthogonality relations. The space of complex-valued class functions of a finite group G has a natural inner product: formula_12 where the overline denotes the complex conjugate of "β"("g"). With respect to this inner product, the irreducible characters form an orthonormal basis for the space of class-functions, and this yields the orthogonality relation for the rows of the character table: formula_13 For "g", "h" in G, applying the same inner product to the columns of the character table yields: formula_14 where the sum is over all of the irreducible characters "χi" of G and the symbol |"CG"("g")| denotes the order of the centralizer of g. Note that since g and h are conjugate iff they are in the same column of the character table, this implies that the columns of the character table are orthogonal. The orthogonality relations can aid many computations including: Character table properties. Certain properties of the group G can be deduced from its character table: one example is the kernel of a character χ, the set of elements "g" in G with "χ"("g") = "χ"(1); this is a normal subgroup of G. Each normal subgroup of G is the intersection of the kernels of some of the irreducible characters of G. The character table does not in general determine the group up to isomorphism: for example, the quaternion group Q and the dihedral group of 8 elements, "D"4, have the same character table. Brauer asked whether the character table, together with the knowledge of how the powers of elements of its conjugacy classes are distributed, determines a finite group up to isomorphism. In 1964, this was answered in the negative by E. C. Dade. The linear representations of G are themselves a group under the tensor product, since the tensor product of 1-dimensional vector spaces is again 1-dimensional. That is, if formula_16 and formula_17 are linear representations, then formula_18 defines a new linear representation. This gives rise to a group of linear characters, called the character group under the operation formula_19. This group is connected to Dirichlet characters and Fourier analysis. Induced characters and Frobenius reciprocity. The characters discussed in this section are assumed to be complex-valued. Let H be a subgroup of the finite group G. Given a character χ of G, let "χH" denote its restriction to H. Let θ be a character of H. Ferdinand Georg Frobenius showed how to construct a character of G from θ, using what is now known as "Frobenius reciprocity". Since the irreducible characters of G form an orthonormal basis for the space of complex-valued class functions of G, there is a unique class function "θG" of G with the property that formula_20 for each irreducible character χ of G (the leftmost inner product is for class functions of G and the rightmost inner product is for class functions of H). Since the restriction of a character of G to the subgroup H is again a character of H, this definition makes it clear that "θG" is a non-negative integer combination of irreducible characters of G, so is indeed a character of G. It is known as "the character of" G "induced from" θ. The defining formula of Frobenius reciprocity can be extended to general complex-valued class functions.
Given a matrix representation ρ of H, Frobenius later gave an explicit way to construct a matrix representation of G, known as the representation induced from ρ, and written analogously as "ρG". This led to an alternative description of the induced character "θG". This induced character vanishes on all elements of G which are not conjugate to any element of H. Since the induced character is a class function of G, it is only now necessary to describe its values on elements of H. If one writes G as a disjoint union of right cosets of H, say formula_21 then, given an element h of H, we have: formula_22 Because θ is a class function of H, this value does not depend on the particular choice of coset representatives. This alternative description of the induced character sometimes allows explicit computation from relatively little information about the embedding of H in G, and is often useful for calculation of particular character tables. When θ is the trivial character of H, the induced character obtained is known as the permutation character of G (on the cosets of H). The general technique of character induction and later refinements found numerous applications in finite group theory and elsewhere in mathematics, in the hands of mathematicians such as Emil Artin, Richard Brauer, Walter Feit and Michio Suzuki, as well as Frobenius himself. Mackey decomposition. The Mackey decomposition was defined and explored by George Mackey in the context of Lie groups, but is a powerful tool in the character theory and representation theory of finite groups. Its basic form concerns the way a character (or module) induced from a subgroup H of a finite group G behaves on restriction back to a (possibly different) subgroup K of G, and makes use of the decomposition of G into ("H", "K")-double cosets. If formula_23 is a disjoint union, and θ is a complex class function of H, then Mackey's formula states that formula_24 where "θt" is the class function of "t"−1"Ht" defined by "θt"("t"−1"ht") = "θ"("h") for all h in H. There is a similar formula for the restriction of an induced module to a subgroup, which holds for representations over any ring, and has applications in a wide variety of algebraic and topological contexts. Mackey decomposition, in conjunction with Frobenius reciprocity, yields a well-known and useful formula for the inner product of two class functions θ and ψ induced from respective subgroups H and K, whose utility lies in the fact that it only depends on how conjugates of H and K intersect each other. The formula (with its derivation) is: formula_25 (where T is a full set of ("H", "K")-double coset representatives, as before). This formula is often used when θ and ψ are linear characters, in which case all the inner products appearing in the right hand sum are either 1 or 0, depending on whether or not the linear characters "θt" and ψ have the same restriction to "t"−1"Ht" ∩ "K". If θ and ψ are both trivial characters, then the inner product simplifies to |"T"|. "Twisted" dimension. One may interpret the character of a representation as the "twisted" dimension of a vector space. Treating the character as a function of the elements of the group "χ"("g"), its value at the identity is the dimension of the space, since "χ"(1) = Tr("ρ"(1)) = Tr("IV") = dim("V"). Accordingly, one can view the other values of the character as "twisted" dimensions. One can find analogs or generalizations of statements about dimensions to statements about characters or representations.
A sophisticated example of this occurs in the theory of monstrous moonshine: the j-invariant is the graded dimension of an infinite-dimensional graded representation of the Monster group, and replacing the dimension with the character gives the McKay–Thompson series for each element of the Monster group. Characters of Lie groups and Lie algebras. If formula_26 is a Lie group and formula_27 a finite-dimensional representation of formula_26, the character formula_28 of formula_27 is defined precisely as for any group as formula_29. Meanwhile, if formula_30 is a Lie algebra and formula_27 a finite-dimensional representation of formula_30, we can define the character formula_28 by formula_31. The character will satisfy formula_32 for all formula_33 in the associated Lie group formula_34 and all formula_35. If we have a Lie group representation and an associated Lie algebra representation, the character formula_28 of the Lie algebra representation is related to the character formula_36 of the group representation by the formula formula_37. Suppose now that formula_30 is a complex semisimple Lie algebra with Cartan subalgebra formula_38. The value of the character formula_28 of an irreducible representation formula_27 of formula_30 is determined by its values on formula_38. The restriction of the character to formula_38 can easily be computed in terms of the weight spaces, as follows: formula_39, where the sum is over all weights formula_40 of formula_27 and where formula_41 is the multiplicity of formula_40. The (restriction to formula_38 of the) character can be computed more explicitly by the Weyl character formula. References. <templatestyles src="Reflist/styles.css" /> <templatestyles src="Refbegin/styles.css" />
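The row and column orthogonality relations are easy to check directly on a small character table. The following Python sketch verifies them for the character table of the three-element cyclic group given above; since every conjugacy class of an abelian group has size one, the class-function inner product reduces to a plain average over the three group elements. This is a toy illustration, not general-purpose character-table code.

import cmath

omega = cmath.exp(2j * cmath.pi / 3)   # primitive third root of unity

# Character table of the cyclic group of order 3: rows are the irreducible
# characters, columns are the elements 1, u, u^2 (each a conjugacy class).
table = [
    [1, 1, 1],
    [1, omega, omega ** 2],
    [1, omega ** 2, omega],
]
order = 3   # |G|

def inner(a, b):
    # <a, b> = (1/|G|) sum over g of a(g) * conjugate(b(g))
    return sum(x * complex(y).conjugate() for x, y in zip(a, b)) / order

# Row orthogonality: <chi_i, chi_j> is 1 when i = j and 0 otherwise.
for i, chi_i in enumerate(table):
    for j, chi_j in enumerate(table):
        expected = 1 if i == j else 0
        assert abs(inner(chi_i, chi_j) - expected) < 1e-12

# Column orthogonality: sum over i of chi_i(g) * conjugate(chi_i(h)) equals
# |C_G(g)| (= 3 here) when g and h are conjugate, and 0 otherwise.
for g in range(3):
    for h in range(3):
        s = sum(complex(row[g]) * complex(row[h]).conjugate() for row in table)
        expected = order if g == h else 0
        assert abs(s - expected) < 1e-12

print("orthogonality relations verified for the cyclic group of order 3")

The same check works for any finite group once its character table and conjugacy class sizes are known; only the inner product needs to be weighted by the class sizes.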
[ { "math_id": 0, "text": "\\chi_{\\rho}(g) = \\operatorname{Tr}(\\rho(g))" }, { "math_id": 1, "text": "\\ker \\chi_\\rho := \\left \\lbrace g \\in G \\mid \\chi_{\\rho}(g) = \\chi_{\\rho}(1) \\right \\rbrace, " }, { "math_id": 2, "text": "[G:C_G(x)]\\frac{\\chi(x)}{\\chi(1)}" }, { "math_id": 3, "text": "\\chi_{\\rho \\oplus \\sigma} = \\chi_\\rho + \\chi_\\sigma" }, { "math_id": 4, "text": "\\chi_{\\rho \\otimes \\sigma} = \\chi_\\rho \\cdot \\chi_\\sigma" }, { "math_id": 5, "text": "\\chi_{\\rho^*} = \\overline {\\chi_\\rho}" }, { "math_id": 6, "text": "\\chi_{{\\scriptscriptstyle \\rm{Alt}^2} \\rho}(g) = \\tfrac{1}{2}\\! \\left[ \\left(\\chi_\\rho (g) \\right)^2 - \\chi_\\rho (g^2) \\right]" }, { "math_id": 7, "text": "\\chi_{{\\scriptscriptstyle \\rm{Sym}^2} \\rho}(g) = \\tfrac{1}{2}\\! \\left[ \\left(\\chi_\\rho (g) \\right)^2 + \\chi_\\rho (g^2) \\right]" }, { "math_id": 8, "text": "\\rho \\otimes \\rho = \\left(\\rho \\wedge \\rho \\right) \\oplus \\textrm{Sym}^2 \\rho." }, { "math_id": 9, "text": " \\rho(g)=1" }, { "math_id": 10, "text": " g\\in G " }, { "math_id": 11, "text": "C_3 = \\langle u \\mid u^{3} = 1 \\rangle," }, { "math_id": 12, "text": "\\left \\langle \\alpha, \\beta\\right \\rangle := \\frac{1}{|G|}\\sum_{g \\in G} \\alpha(g) \\overline{\\beta(g)}" }, { "math_id": 13, "text": "\\left \\langle \\chi_i, \\chi_j \\right \\rangle = \\begin{cases} 0 & \\mbox{ if } i \\ne j, \\\\ 1 & \\mbox{ if } i = j. \\end{cases}" }, { "math_id": 14, "text": "\\sum_{\\chi_i} \\chi_i(g) \\overline{\\chi_i(h)} = \\begin{cases} \\left | C_G(g) \\right |, & \\mbox{ if } g, h \\mbox{ are conjugate } \\\\ 0 & \\mbox{ otherwise.}\\end{cases}" }, { "math_id": 15, "text": "|G| \\!\\times\\! |G|" }, { "math_id": 16, "text": "\\rho_1:G\\to V_1" }, { "math_id": 17, "text": " \\rho_2:G\\to V_2" }, { "math_id": 18, "text": " \\rho_1\\otimes\\rho_2 (g)=(\\rho_1(g)\\otimes\\rho_2(g))" }, { "math_id": 19, "text": " [\\chi_1*\\chi_2](g)=\\chi_1(g)\\chi_2(g)" }, { "math_id": 20, "text": " \\langle \\theta^{G}, \\chi \\rangle_G = \\langle \\theta,\\chi_H \\rangle_H " }, { "math_id": 21, "text": "G = Ht_1 \\cup \\ldots \\cup Ht_n," }, { "math_id": 22, "text": " \\theta^G(h) = \\sum_{i \\ : \\ t_iht_i^{-1} \\in H} \\theta \\left (t_iht_i^{-1} \\right )." 
}, { "math_id": 23, "text": " G = \\bigcup_{t \\in T} HtK " }, { "math_id": 24, "text": "\\left( \\theta^{G}\\right)_K = \\sum_{ t \\in T} \\left(\\left [\\theta^{t} \\right ]_{t^{-1}Ht \\cap K}\\right)^{K}," }, { "math_id": 25, "text": "\\begin{align}\n\\left \\langle \\theta^{G},\\psi^{G} \\right \\rangle &= \\left \\langle \\left(\\theta^{G}\\right)_{K},\\psi \\right \\rangle \\\\\n&= \\sum_{ t \\in T} \\left \\langle \\left( \\left [\\theta^{t} \\right ]_{t^{-1}Ht \\cap K}\\right)^{K}, \\psi \\right \\rangle \\\\\n&= \\sum_{t \\in T} \\left \\langle \\left(\\theta^{t} \\right)_{t^{-1}Ht \\cap K},\\psi_{t^{-1}Ht \\cap K} \\right \\rangle, \n\\end{align}" }, { "math_id": 26, "text": "G" }, { "math_id": 27, "text": "\\rho" }, { "math_id": 28, "text": "\\chi_\\rho" }, { "math_id": 29, "text": "\\chi_\\rho(g)=\\operatorname{Tr}(\\rho(g))" }, { "math_id": 30, "text": "\\mathfrak g" }, { "math_id": 31, "text": "\\chi_\\rho(X)=\\operatorname{Tr}(e^{\\rho(X)})" }, { "math_id": 32, "text": "\\chi_\\rho(\\operatorname{Ad}_g(X))=\\chi_\\rho(X)" }, { "math_id": 33, "text": "g" }, { "math_id": 34, "text": " G" }, { "math_id": 35, "text": "X\\in\\mathfrak g" }, { "math_id": 36, "text": "\\Chi_\\rho" }, { "math_id": 37, "text": "\\chi_\\rho(X)=\\Chi_\\rho(e^X)" }, { "math_id": 38, "text": "\\mathfrak h" }, { "math_id": 39, "text": "\\chi_\\rho(H) = \\sum_\\lambda m_\\lambda e^{\\lambda(H)},\\quad H\\in\\mathfrak h" }, { "math_id": 40, "text": "\\lambda" }, { "math_id": 41, "text": "m_\\lambda" } ]
https://en.wikipedia.org/wiki?curid=691741
691790
List of statements independent of ZFC
The mathematical statements discussed below are provably independent of ZFC (the canonical axiomatic set theory of contemporary mathematics, consisting of the Zermelo–Fraenkel axioms plus the axiom of choice), assuming that ZFC is consistent. A statement is independent of ZFC (sometimes phrased "undecidable in ZFC") if it can neither be proven nor disproven from the axioms of ZFC. Axiomatic set theory. In 1931, Kurt Gödel proved his incompleteness theorems, establishing that many mathematical theories, including ZFC, cannot prove their own consistency. Assuming ω-consistency of such a theory, the consistency statement can also not be disproven, meaning it is independent. A few years later, other arithmetic statements were defined that are independent of any such theory, see for example Rosser's trick. The following set theoretic statements are independent of ZFC, among others: We have the following chains of implications: "V" = "L" → ◊ → CH, "V" = "L" → GCH → CH, CH → MA, and (see section on order theory): ◊ → ¬SH, MA + ¬CH → EATS → SH. Several statements related to the existence of large cardinals cannot be proven in ZFC (assuming ZFC is consistent). These are independent of ZFC provided that they are consistent with ZFC, which most working set theorists believe to be the case. These statements are strong enough to imply the consistency of ZFC. This has the consequence (via Gödel's second incompleteness theorem) that their consistency with ZFC cannot be proven in ZFC (assuming ZFC is consistent). The following statements belong to this class: The following statements can be proven to be independent of ZFC assuming the consistency of a suitable large cardinal: Set theory of the real line. There are many cardinal invariants of the real line, connected with measure theory and statements related to the Baire category theorem, whose exact values are independent of ZFC. While nontrivial relations can be proved between them, most cardinal invariants can be any regular cardinal between ℵ1 and 2ℵ0. This is a major area of study in the set theory of the real line (see Cichon diagram). MA has a tendency to set most interesting cardinal invariants equal to 2ℵ0. A subset "X" of the real line is a strong measure zero set if to every sequence ("εn") of positive reals there exists a sequence of intervals ("In") which covers "X" and such that "In" has length at most "εn". Borel's conjecture, that every strong measure zero set is countable, is independent of ZFC. A subset "X" of the real line is formula_0-dense if every open interval contains formula_0-many elements of "X". Whether all formula_0-dense sets are order-isomorphic is independent of ZFC. Order theory. Suslin's problem asks whether a specific short list of properties characterizes the ordered set of real numbers R. This is undecidable in ZFC. A "Suslin line" is an ordered set which satisfies this specific list of properties but is not order-isomorphic to R. The diamond principle ◊ proves the existence of a Suslin line, while MA + ¬CH implies EATS (every Aronszajn tree is special), which in turn implies (but is not equivalent to) the nonexistence of Suslin lines. Ronald Jensen proved that CH does not imply the existence of a Suslin line. Existence of Kurepa trees is independent of ZFC, assuming consistency of an inaccessible cardinal. 
Existence of a partition of the ordinal number formula_1 into two colors with no monochromatic uncountable sequentially closed subset is independent of ZFC, ZFC + CH, and ZFC + ¬CH, assuming consistency of a Mahlo cardinal. This theorem of Shelah answers a question of H. Friedman. Abstract algebra. In 1973, Saharon Shelah showed that the Whitehead problem ("is every abelian group "A" with Ext1(A, Z) = 0 a free abelian group?") is independent of ZFC. An abelian group with Ext1(A, Z) = 0 is called a Whitehead group; MA + ¬CH proves the existence of a non-free Whitehead group, while "V" = "L" proves that all Whitehead groups are free. In one of the earliest applications of proper forcing, Shelah constructed a model of ZFC + CH in which there is a non-free Whitehead group. Consider the ring "A" = R["x","y","z"] of polynomials in three variables over the real numbers and its field of fractions "M" = R("x","y","z"). The projective dimension of "M" as "A"-module is either 2 or 3, but it is independent of ZFC whether it is equal to 2; it is equal to 2 if and only if CH holds. A direct product of countably many fields has global dimension 2 if and only if the continuum hypothesis holds. Number theory. One can write down a concrete polynomial "p" ∈ Z["x"1, ..., "x"9] such that the statement "there are integers "m"1, ..., "m"9 with "p"("m"1, ..., "m"9) = 0" can neither be proven nor disproven in ZFC (assuming ZFC is consistent). This follows from Yuri Matiyasevich's resolution of Hilbert's tenth problem; the polynomial is constructed so that it has an integer root if and only if ZFC is inconsistent. Measure theory. A stronger version of Fubini's theorem for positive functions, where the function is no longer assumed to be measurable but merely that the two iterated integrals are well defined and exist, is independent of ZFC. On the one hand, CH implies that there exists a function on the unit square whose iterated integrals are not equal — the function is simply the indicator function of an ordering of [0, 1] equivalent to a well ordering of the cardinal ω1. A similar example can be constructed using MA. On the other hand, the consistency of the strong Fubini theorem was first shown by Friedman. It can also be deduced from a variant of Freiling's axiom of symmetry. Topology. The Normal Moore Space conjecture, namely that every normal Moore space is metrizable, can be disproven assuming the continuum hypothesis or assuming both Martin's axiom and the negation of the continuum hypothesis, and can be proven assuming a certain axiom which implies the existence of large cardinals. Thus, granted large cardinals, the Normal Moore Space conjecture is independent of ZFC. The existence of an S-space is independent of ZFC. In particular, it is implied by the existence of a Suslin line. Functional analysis. Garth Dales and Robert M. Solovay proved in 1976 that Kaplansky's conjecture, namely that every algebra homomorphism from the Banach algebra "C(X)" (where "X" is some compact Hausdorff space) into any other Banach algebra must be continuous, is independent of ZFC. CH implies that for any infinite "X" there exists a discontinuous homomorphism into any Banach algebra. Consider the algebra "B"("H") of bounded linear operators on the infinite-dimensional separable Hilbert space "H". The compact operators form a two-sided ideal in "B"("H"). The question of whether this ideal is the sum of two properly smaller ideals is independent of ZFC, as was proved by Andreas Blass and Saharon Shelah in 1987. 
Charles Akemann and Nik Weaver showed in 2003 that the statement "there exists a counterexample to Naimark's problem which is generated by ℵ1 elements" is independent of ZFC. Miroslav Bačák and Petr Hájek proved in 2008 that the statement "every Asplund space of density character ω1 has a renorming with the Mazur intersection property" is independent of ZFC. The result is shown using Martin's maximum axiom, while Mar Jiménez and José Pedro Moreno (1997) had presented a counterexample assuming CH. As shown by Ilijas Farah and N. Christopher Phillips and Nik Weaver, the existence of outer automorphisms of the Calkin algebra depends on set theoretic assumptions beyond ZFC. Wetzel's problem, which asks whether every set of analytic functions that takes at most countably many distinct values at every point is necessarily countable, has a positive answer if and only if the continuum hypothesis is false. Model theory. Chang's conjecture is independent of ZFC assuming the consistency of an Erdős cardinal. Computability theory. Marcia Groszek and Theodore Slaman gave examples of statements independent of ZFC concerning the structure of the Turing degrees. In particular, it is independent of ZFC whether there exists a maximally independent set of degrees of size less than the continuum.
[ { "math_id": 0, "text": "\\aleph_1" }, { "math_id": 1, "text": "\\omega_2" } ]
https://en.wikipedia.org/wiki?curid=691790
691803
Erdős–Faber–Lovász conjecture
In graph theory, the Erdős–Faber–Lovász conjecture is a problem about graph coloring, named after Paul Erdős, Vance Faber, and László Lovász, who formulated it in 1972. It says: If k complete graphs, each having exactly k vertices, have the property that every pair of complete graphs has at most one shared vertex, then the union of the graphs can be properly colored with k colors. The conjecture for all sufficiently large values of k was proved by Dong Yeap Kang, Tom Kelly, Daniela Kühn, Abhishek Methuku, and Deryk Osthus. Equivalent formulations. The problem can be introduced with a story about seating assignment in committees: suppose that, in a university department, there are k committees, each consisting of k faculty members, and that all committees meet in the same room, which has k chairs. Suppose also that at most one person belongs to the intersection of any two committees. Is it possible to assign the committee members to chairs in such a way that each member sits in the same chair for all the different committees to which he or she belongs? In this model of the problem, the faculty members correspond to graph vertices, committees correspond to complete graphs, and chairs correspond to vertex colors. A "linear hypergraph" (also known as partial linear space) is a hypergraph with the property that every two hyperedges have at most one vertex in common. A hypergraph is said to be uniform if all of its hyperedges have the same number of vertices as each other. The n cliques of size n in the Erdős–Faber–Lovász conjecture may be interpreted as the hyperedges of an n-uniform linear hypergraph that has the same vertices as the underlying graph. In this language, the Erdős–Faber–Lovász conjecture states that, given any n-uniform linear hypergraph with n hyperedges, one may n-color the vertices such that each hyperedge has one vertex of each color. A "simple hypergraph" is a hypergraph in which at most one hyperedge connects any pair of vertices and there are no hyperedges of size at most one. In the graph coloring formulation of the Erdős–Faber–Lovász conjecture, it is safe to remove vertices that belong to a single clique, as their coloring presents no difficulty; once this is done, the hypergraph that has a vertex for each clique, and a hyperedge for each graph vertex, forms a simple hypergraph. And, the hypergraph dual of vertex coloring is edge coloring. Thus, the Erdős–Faber–Lovász conjecture is equivalent to the statement that any simple hypergraph with n vertices has chromatic index (edge coloring number) at most n. The graph of the Erdős–Faber–Lovász conjecture may be represented as an intersection graph of sets: to each vertex of the graph corresponds the set of the cliques containing that vertex, and any two vertices are connected by an edge whenever their corresponding sets have a nonempty intersection. Using this description of the graph, the conjecture may be restated as follows: if some family of sets has n total elements, and any two sets intersect in at most one element, then the intersection graph of the sets may be n-colored. The intersection number of a graph G is the minimum number of elements in a family of sets whose intersection graph is G, or equivalently the minimum number of vertices in a hypergraph whose line graph is G. The linear intersection number of a graph is defined, similarly, to be the minimum number of vertices in a linear hypergraph whose line graph is G.
With this definition, the Erdős–Faber–Lovász conjecture is equivalent to the statement that the chromatic number of any graph is at most equal to its linear intersection number. Yet another equivalent formulation has been given in terms of the theory of clones. History, partial results, and eventual proof. Paul Erdős, Vance Faber, and László Lovász formulated the harmless-looking conjecture at a party in Boulder, Colorado, in September 1972. Its difficulty was realised only slowly. Paul Erdős originally offered US$50 for proving the conjecture in the affirmative, and later raised the reward to US$500. Indeed, Paul Erdős considered this to be one of his three favourite combinatorial problems. Early work showed that the chromatic number of the graphs in the conjecture is at most formula_0, a bound that was later improved to "k" + "o"("k"). In 2023, almost 50 years after the original conjecture was stated, it was resolved for all sufficiently large "n" by Dong Yeap Kang, Tom Kelly, Daniela Kühn, Abhishek Methuku, and Deryk Osthus. Related problems. It is also of interest to consider the chromatic number of graphs formed as the union of k cliques of k vertices each, without restricting how big the intersections of pairs of cliques can be. In this case, the chromatic number of their union is at most 1 + "k"√("k" − 1), and some graphs formed in this way require this many colors. A version of the conjecture that uses the fractional chromatic number in place of the chromatic number is known to be true. That is, if a graph G is formed as the union of k k-cliques that intersect pairwise in at most one vertex, then G can be fractionally k-colored. In the framework of edge coloring simple hypergraphs, Hindman defines a number L from a simple hypergraph as the number of hypergraph vertices that belong to a hyperedge of three or more vertices. He shows that, for any fixed value of L, a finite calculation suffices to verify that the conjecture is true for all simple hypergraphs with that value of L. Based on this idea, he shows that the conjecture is indeed true for all simple hypergraphs with "L" ≤ 10. In the formulation of coloring graphs formed by unions of cliques, Hindman's result shows that the conjecture is true whenever at most ten of the cliques contain a vertex that belongs to three or more cliques. In particular, it is true for "n" ≤ 10. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Refbegin/styles.css" />
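For small values of k, the statement of the conjecture can be checked directly by brute force. The following Python sketch is purely illustrative (the example instance and function names are not taken from any published verification): it builds k complete graphs of k vertices that pairwise share at most one vertex and searches exhaustively for a proper k-coloring of their union.

```python
from itertools import product

def proper_k_coloring_exists(vertices, edges, k):
    """Exhaustively test whether the graph (vertices, edges) admits a proper k-coloring."""
    vertices = list(vertices)
    for assignment in product(range(k), repeat=len(vertices)):
        color = dict(zip(vertices, assignment))
        if all(color[u] != color[v] for u, v in edges):
            return True
    return False

def union_of_cliques(cliques):
    """Vertex set and edge set of the union of the given cliques."""
    vertices = set().union(*cliques)
    edges = set()
    for clique in cliques:
        ordered = sorted(clique)
        edges.update((u, v) for i, u in enumerate(ordered) for v in ordered[i + 1:])
    return vertices, edges

# Example instance with k = 3: three triangles, pairwise sharing exactly one vertex.
k = 3
cliques = [{1, 2, 3}, {1, 4, 5}, {2, 4, 6}]
# Hypothesis of the conjecture: every pair of cliques shares at most one vertex.
assert all(len(a & b) <= 1 for a in cliques for b in cliques if a is not b)

vertices, edges = union_of_cliques(cliques)
print(proper_k_coloring_exists(vertices, edges, k))  # expected: True
```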
[ { "math_id": 0, "text": "\\tfrac{3}{2} k - 2" } ]
https://en.wikipedia.org/wiki?curid=691803
6918116
Welding helmet
Helmet that protects eyes during welding A welding helmet is a type of personal protective equipment used in performing certain types of welding to protect the eyes, face, and neck from flash burn, sparks, infrared and ultraviolet light, and intense heat. The modern welding helmet was first introduced in 1937 by Willson Products. Welding helmets are most commonly used in arc welding processes such as shielded metal arc welding, gas tungsten arc welding, and gas metal arc welding. They are necessary to prevent arc eye, a painful condition where the cornea is inflamed. Welding helmets can also prevent retina burns, which can lead to a loss of vision. Both conditions are caused by unprotected exposure to the highly concentrated infrared and ultraviolet rays emitted by the welding arc. Ultraviolet emissions from the welding arc can also damage uncovered skin, causing a sunburn-like condition in a relatively short period of welding. In addition to the radiation, gases or splashes can also be a hazard to the skin and the eyes. Most welding helmets include a window (visor) covered with a filter called a lens shade, through which the welder can see to work. The window may be made of tinted glass, tinted plastic, or a variable-density filter made from a pair of polarized lenses. Different lens shades are needed for different welding processes. For example, metal inert gas (MIG) and tungsten inert gas (TIG) welding are low-intensity processes, so a lighter lens shade will be preferred. United States OSHA requirements for welding helmets are derived from standards like ANSI Z49.1, "Safety in Welding and Cutting", section 7 ("Protection of Personnel") and ANSI Z89.1 ("Safety Requirements for Industrial Head Protection"). The shade of lens that is suitable depends on the current rating of the weld. In the United States, OSHA recommends minimum DIN shade numbers that depend on the welding process and the arc current. The 1967 edition of ANSI Z49.1.7.2.2.10 specifies that "all filter lenses and plates shall meet the test for transmission of radiant energy prescribed in paragraph 6.3.4.6 of the "Safety Code for Head, Eye and Respiratory Protection", USA Standard Z2.1-1959". As of 2023, OSHA's website provides standards for minimum protective shades under standard 1910.133 ("Eye and face protection"), section (a)(5), and says: "As a rule of thumb, start with a shade that is too dark to see the weld zone. Then go to a lighter shade which gives sufficient view of the weld zone without going below the minimum. In oxyfuel gas welding or cutting where the torch produces a high yellow light, it is desirable to use a filter lens that absorbs the yellow or sodium line in the visible light of the (spectrum) operation." Safety. All welding helmets are susceptible to damage such as cracks that can compromise the protection from ultraviolet and infrared rays. In addition to protecting the eyes, the helmet protects the face from hot metal sparks generated by the arc and from UV damage. For overhead welding, a leather skull cap and shoulder cover are used to prevent head and shoulder burns. Goggles. Welding goggles are protective eyewear with dark shading, meant to protect eyes from the bright light produced by oxyfuel welding and allied processes, and also from sparks and debris. Open electrical arcs (as created by arc welding and other processes) generate much higher amounts of light and UV radiation, requiring the whole face to be protected; most welding goggles do not have a dark enough shade for arc welding.
Auto-darkening filters. In 1981, Swedish manufacturer Hornell International (now owned by 3M) introduced an LCD electronic shutter that darkens automatically when sensors detect the bright welding arc, the Speedglas Auto-Darkening Filter. With such electronic auto-darkening helmets, the welder no longer has to get ready to weld and then nod their head to lower the helmet over their face. The advantage is that the welder does not need to adjust the position of the welding helmet manually, which not only saves time but also reduces the risk of exposure to the harmful light generated by the welding process. ANSI standards. In the United States, the industry standard for welding helmets is ANSI Z87.1+, which specifies performance of a wide variety of eye protection devices. The standard requires that auto-darkening helmets provide full protection against both UV and IR even when they are not in the darkened state. The standard is voluntary, so buyers should confirm that the helmet is ANSI Z87.1 compliant (indicated by appropriate labeling). Shades. Per ANSI Z87.1-2003, "shade numbers" are derived as follows: Shade Number, formula_0, is related to luminous transmittance formula_1 (expressed as a fraction, not as a percent) by the equation: formula_2 formula_1 is defined with respect to CIE Illuminant A (i.e., a reference point for typical domestic incandescent lighting) and the CIE 1931 Standard Colorimetric Observer. The actual ANSI-specified shades are not specific numbers, but ranges; each has a designated maximum, minimum, and nominal transmittance value. Moreover, acceptable transmittance values for far ultraviolet are far lower than those for the Illuminant A light ("shall be less than one tenth of the minimum allowable luminous transmittance"). Transmittance values. While ANSI shades are ranges based on a specific illuminant, and do not directly convert into other measurements of transmittance, they can be roughly approximated in terms of neutral density filter numbers and f-stops. Notes. <templatestyles src="Reflist/styles.css" />
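The shade relation quoted above is easy to evaluate directly. The following Python sketch is illustrative only; the function names are ours and are not part of the ANSI standard, and the nominal shade values it produces are approximations rather than the standard's tabulated ranges.

```python
import math

def shade_number(luminous_transmittance):
    """Shade number S from luminous transmittance T_L given as a fraction (0 < T_L <= 1)."""
    return (7.0 / 3.0) * math.log10(1.0 / luminous_transmittance) + 1.0

def nominal_transmittance(shade):
    """Inverse relation: nominal luminous transmittance for a given shade number."""
    return 10.0 ** (-3.0 * (shade - 1.0) / 7.0)

# Darker shades transmit dramatically less light: shade 10 passes only ~0.014% of Illuminant A.
for s in (3, 5, 10, 13):
    print(s, nominal_transmittance(s))
```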
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "T_{L}" }, { "math_id": 2, "text": "S = \\dfrac{7}{3} log_{10}\\dfrac{1}{T_{L}} + 1" } ]
https://en.wikipedia.org/wiki?curid=6918116
691838
Unruh effect
Kinematic prediction of quantum field theory for an accelerating observer The Unruh effect (also known as the Fulling–Davies–Unruh effect) is a theoretical prediction in quantum field theory that an observer who is uniformly accelerating through empty space will perceive a thermal bath. This means that even in the absence of any external heat sources, an accelerating observer will detect particles and experience a temperature. In contrast, an inertial observer in the same region of spacetime would observe no temperature. In other words, the background appears to be warm from an accelerating reference frame. In layman's terms, an accelerating thermometer in empty space (like one being waved around), without any other contribution to its temperature, will record a non-zero temperature, just from its acceleration. Heuristically, for a uniformly accelerating observer, the ground state of an inertial observer is seen as a mixed state in thermodynamic equilibrium with a non-zero temperature bath. The Unruh effect was first described by Stephen Fulling in 1973, Paul Davies in 1975, and W. G. Unruh in 1976. It is currently not clear whether the Unruh effect has actually been observed, since the claimed observations are disputed. There is also some doubt about whether the Unruh effect implies the existence of Unruh radiation. Temperature equation. The Unruh temperature, sometimes called the Davies–Unruh temperature, was derived separately by Paul Davies and William Unruh and is the effective temperature experienced by a uniformly accelerating detector in a vacuum field. It is given by formula_0 where ħ is the reduced Planck constant, a is the proper uniform acceleration, c is the speed of light, and "k"B is the Boltzmann constant. Thus, for example, a proper acceleration of about 2.47×10²⁰ m/s² corresponds approximately to a temperature of 1 K. Conversely, an acceleration of 1 m/s² corresponds to a temperature of about 4.06×10⁻²¹ K. The Unruh temperature has the same form as the Hawking temperature "T"H = "ħg"/(2π"ck"B), with "g" denoting the surface gravity of a black hole, which was derived by Stephen Hawking in 1974. In the light of the equivalence principle, it is, therefore, sometimes called the Hawking–Unruh temperature. Solving the Unruh temperature for the uniform acceleration, it can be expressed as formula_1, where formula_2 is the Planck acceleration and formula_3 is the Planck temperature. Explanation. Unruh demonstrated theoretically that the notion of vacuum depends on the path of the observer through spacetime. From the viewpoint of the accelerating observer, the vacuum of the inertial observer will look like a state containing many particles in thermal equilibrium: a warm gas. The Unruh effect would only appear to an accelerating observer. And although the Unruh effect would initially be perceived as counter-intuitive, it makes sense if the word "vacuum" is interpreted in the following specific way. In quantum field theory, the concept of "vacuum" is not the same as "empty space": Space is filled with the quantized fields that make up the universe. Vacuum is simply the lowest "possible" energy state of these fields. The energy states of any quantized field are defined by the Hamiltonian, based on local conditions, including the time coordinate. According to special relativity, two observers moving relative to each other must use different time coordinates. If those observers are accelerating, there may be no shared coordinate system. Hence, the observers will see different quantum states and thus different vacua.
In some cases, the vacuum of one observer is not even in the space of quantum states of the other. In technical terms, this comes about because the two vacua lead to unitarily inequivalent representations of the quantum field canonical commutation relations. This is because two mutually accelerating observers may not be able to find a globally defined coordinate transformation relating their coordinate choices. An accelerating observer will perceive an apparent event horizon forming (see Rindler spacetime). The existence of Unruh radiation could be linked to this apparent event horizon, putting it in the same conceptual framework as Hawking radiation. On the other hand, the theory of the Unruh effect explains that the definition of what constitutes a "particle" depends on the state of motion of the observer. The free field needs to be decomposed into positive and negative frequency components before defining the creation and annihilation operators. This can only be done in spacetimes with a timelike Killing vector field. This decomposition happens to be different in Cartesian and Rindler coordinates (although the two are related by a Bogoliubov transformation). This explains why the "particle numbers", which are defined in terms of the creation and annihilation operators, are different in both coordinates. The Rindler spacetime has a horizon, and locally any non-extremal black hole horizon is Rindler. So the Rindler spacetime gives the local properties of black holes and cosmological horizons. It is possible to rearrange the metric restricted to these regions to obtain the Rindler metric. The Unruh effect would then be the near-horizon form of Hawking radiation. The Unruh effect is also expected to be present in de Sitter space. It is worth stressing that the Unruh effect only says that, according to uniformly-accelerated observers, the vacuum state is a thermal state specified by its temperature, and one should resist reading too much into the thermal state or bath. Different thermal states or baths at the same temperature need not be equal, for they depend on the Hamiltonian describing the system. In particular, the thermal bath seen by accelerated observers in the vacuum state of a quantum field is not the same as a thermal state of the same field at the same temperature according to inertial observers. Furthermore, uniformly accelerated observers, static with respect to each other, can have different proper accelerations a (depending on their separation), which is a direct consequence of relativistic red-shift effects. This makes the Unruh temperature spatially inhomogeneous across the uniformly accelerated frame. Calculations. In special relativity, an observer moving with uniform proper acceleration a through Minkowski spacetime is conveniently described with Rindler coordinates, which are related to the standard (Cartesian) Minkowski coordinates by formula_4 The line element in Rindler coordinates, i.e. Rindler space, is formula_5 where "ρ" = 1/"a", and where σ is related to the observer's proper time τ by "σ" = "aτ" (here "c" = 1). An observer moving with fixed ρ traces out a hyperbola in Minkowski space; therefore this type of motion is called hyperbolic motion. The coordinate formula_6 is related to the Schwarzschild spherical coordinate formula_7 by the relation formula_8 An observer moving along a path of constant ρ is uniformly accelerating, and is coupled to field modes which have a definite steady frequency as a function of σ.
These modes are constantly Doppler shifted relative to ordinary Minkowski time as the detector accelerates, and they change in frequency by enormous factors, even after only a short proper time. Translation in σ is a symmetry of Minkowski space: it can be shown that it corresponds to a boost in the "x", "t" coordinates around the origin. Any time translation in quantum mechanics is generated by the Hamiltonian operator. For a detector coupled to modes with a definite frequency in σ, we can treat σ as "time" and the boost operator is then the corresponding Hamiltonian. In Euclidean field theory, where the minus sign in front of the time in the Rindler metric is changed to a plus sign by multiplying the Rindler time by formula_9, i.e. performing a Wick rotation to imaginary time, the Rindler metric is turned into a polar-coordinate-like metric. Therefore any rotations must close themselves after 2π in a Euclidean metric to avoid being singular. So formula_10 A path integral with real time coordinate is dual to a thermal partition function, related by a Wick rotation. The periodicity formula_11 of imaginary time corresponds to a temperature of formula_12 in thermal quantum field theory. Note that the path integral for this Hamiltonian is closed with period 2π. This means that the H modes are thermally occupied with temperature 1/(2π). This is not an actual temperature, because H is dimensionless. It is conjugate to the timelike polar angle σ, which is also dimensionless. To restore the length dimension, note that a mode of fixed frequency f in σ at position ρ has a frequency which is determined by the square root of the (absolute value of the) metric at ρ, the redshift factor. This can be seen by transforming the time coordinate of a Rindler observer at fixed ρ to an inertial, co-moving observer observing a proper time. From the Rindler line element given above, this is just ρ. The actual inverse temperature at this point is therefore formula_13 It can be shown that the acceleration of a trajectory at constant ρ in Rindler coordinates is equal to 1/ρ, so the actual inverse temperature observed is formula_14 Restoring units yields formula_15 The temperature of the vacuum, seen by an isolated observer accelerating at the Earth's gravitational acceleration of g = 9.81 m/s², is only about 4×10⁻²⁰ K. For an experimental test of the Unruh effect it is planned to use accelerations up to about 10²⁶ m/s², which would give a temperature of about 400,000 K. The Rindler derivation of the Unruh effect is unsatisfactory to some, since the detector's path is super-deterministic. Unruh later developed the Unruh–DeWitt particle detector model to circumvent this objection. Other implications. The Unruh effect would also cause the decay rate of accelerating particles to differ from that of inertial particles. Stable particles like the electron could have nonzero transition rates to higher mass states when accelerating at a high enough rate. Unruh radiation. Although Unruh's prediction that an accelerating detector would see a thermal bath is not controversial, the interpretation of the transitions in the detector in the non-accelerating frame is. It is widely, although not universally, believed that each transition in the detector is accompanied by the emission of a particle, and that this particle will propagate to infinity and be seen as Unruh radiation. The existence of Unruh radiation is not universally accepted. Smolyaninov claims that it has already been observed, while O'Connell and Ford claim that it is not emitted at all.
While these skeptics accept that an accelerating object thermalizes at the Unruh temperature, they do not believe that this leads to the emission of photons, arguing that the emission and absorption rates of the accelerating particle are balanced. Experimental observation. Researchers claim experiments that successfully detected the Sokolov–Ternov effect may also detect the Unruh effect under certain conditions. Theoretical work in 2011 suggests that accelerating detectors could be used for the direct detection of the Unruh effect with current technology. The Unruh effect may have been observed for the first time in 2019 in the high energy channeling radiation explored by the NA63 experiment at CERN. References. <templatestyles src="Reflist/styles.css" />
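As a numerical illustration of the temperature equation given in the section above, the following Python sketch evaluates the Unruh temperature for a given proper acceleration and the acceleration required for a given temperature. The constants are rounded CODATA-style values and the function names are illustrative only.

```python
import math

HBAR = 1.054_571_817e-34   # J*s, reduced Planck constant
C = 2.997_924_58e8         # m/s, speed of light
K_B = 1.380_649e-23        # J/K, Boltzmann constant

def unruh_temperature(acceleration):
    """Unruh temperature T = hbar * a / (2 * pi * c * k_B) for proper acceleration a in m/s^2."""
    return HBAR * acceleration / (2.0 * math.pi * C * K_B)

def acceleration_for_temperature(temperature):
    """Inverse relation: proper acceleration giving a specified Unruh temperature."""
    return 2.0 * math.pi * C * K_B * temperature / HBAR

print(unruh_temperature(9.81))            # ~4e-20 K for Earth-surface acceleration
print(acceleration_for_temperature(1.0))  # ~2.5e20 m/s^2 needed to reach 1 K
```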
[ { "math_id": 0, "text": "T = \\frac{\\hbar a}{2\\pi c k_\\mathrm{B}}\\approx 4.06\\times 10^{-21}\\,\\mathrm{K{\\cdot}s^2{\\cdot}m^{-1}}\\times a," }, { "math_id": 1, "text": "a = \\frac{2\\pi c k_\\mathrm{B}}{\\hbar}T = 2\\pi a_\\mathrm{P} \\frac{T}{T_\\mathrm{P}}" }, { "math_id": 2, "text": "a_\\mathrm{P}" }, { "math_id": 3, "text": "T_\\mathrm{P}" }, { "math_id": 4, "text": "\\begin{align}\n x &= \\rho \\cosh(\\sigma) \\\\\n t &= \\rho \\sinh(\\sigma).\n\\end{align}" }, { "math_id": 5, "text": "\\mathrm{d}s^2 = -\\rho^2\\, \\mathrm{d}\\sigma^2 + \\mathrm{d}\\rho^2," }, { "math_id": 6, "text": "\\rho" }, { "math_id": 7, "text": "r_S" }, { "math_id": 8, "text": " \\rho = \\int^r_{r_S}\\frac{dr^\\prime}{\\sqrt{1-r_S/r^\\prime}}." }, { "math_id": 9, "text": "i" }, { "math_id": 10, "text": "e^{2\\pi i H} = Id." }, { "math_id": 11, "text": "\\beta" }, { "math_id": 12, "text": "\\beta = 1/T" }, { "math_id": 13, "text": "\\beta = 2\\pi \\rho." }, { "math_id": 14, "text": "\\beta = \\frac{2\\pi}{a}." }, { "math_id": 15, "text": "k_\\text{B}T = \\frac{\\hbar a}{2\\pi c}." } ]
https://en.wikipedia.org/wiki?curid=691838
691839
W. G. Unruh
Canadian physicist William George Unruh (; born August 28, 1945) is a Canadian physicist at the University of British Columbia, Vancouver who described the hypothetical Unruh effect in 1976. Early life and education. Unruh was born into a Mennonite family in Winnipeg, Manitoba. His parents were Benjamin Unruh, a refugee from Russia, and Anna Janzen, who was born in Canada. He obtained his B.Sc. from the University of Manitoba in 1967, followed by an M.A. (1969) and Ph.D. (1971) from Princeton University, New Jersey, under the direction of John Archibald Wheeler. Areas of research. Unruh has made seminal contributions to our understanding of gravity, black holes, cosmology, and quantum fields in curved spaces, including the discovery of what is now known as the Unruh effect. Unruh has contributed to the foundations of quantum mechanics in areas such as decoherence and the question of time in quantum mechanics. He has helped to clarify the meaning of nonlocality in a quantum context, in particular that quantum nonlocality does not follow from Bell's theorem and that ultimately quantum mechanics is a local theory. Unruh is also one of the main critics of the Afshar experiment. Unruh is also interested in music and teaches the Physics of Music. Unruh effect. The Unruh effect, described by Unruh in 1976, is the prediction that an accelerating observer will observe black-body radiation where an inertial observer would observe none. In other words, the accelerating observer will find itself in a warm background, the temperature of which is proportional to the acceleration. The same quantum state of a field, which is taken to be the ground state for observers in inertial systems, is seen as a thermal state for the uniformly accelerated observer. The Unruh effect therefore means that the very notion of the quantum vacuum depends on the path of the observer through spacetime. The Unruh effect can be expressed in a simple equation giving the equivalent energy "kT" of a uniformly accelerating particle (with "a" being the constant acceleration), as: formula_0 References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " kT = \\frac{\\hbar a}{2\\pi c}" } ]
https://en.wikipedia.org/wiki?curid=691839
691927
Invariant subspace problem
Partially unsolved problem in mathematics In the field of mathematics known as functional analysis, the invariant subspace problem is a partially unresolved problem asking whether every bounded operator on a complex Banach space sends some non-trivial closed subspace to itself. Many variants of the problem have been solved, by restricting the class of bounded operators considered or by specifying a particular class of Banach spaces. The problem is still open for separable Hilbert spaces (in other words, every example found so far of an operator with no non-trivial invariant subspaces acts on a Banach space that is not isomorphic to a separable Hilbert space). History. The problem seems to have been stated in the mid-20th century after work by Beurling and von Neumann, who found (but never published) a positive solution for the case of compact operators. It was then posed by Paul Halmos for the case of operators formula_1 such that formula_2 is compact. This was resolved affirmatively, for the more general class of polynomially compact operators (operators formula_1 such that formula_3 is a compact operator for a suitably chosen non-zero polynomial formula_4), by Allen R. Bernstein and Abraham Robinson in 1966. For Banach spaces, the first example of an operator without an invariant subspace was constructed by Per Enflo. He proposed a counterexample to the invariant subspace problem in 1975, publishing an outline in 1976. Enflo submitted the full article in 1981, and the article's complexity and length delayed its publication until 1987. Enflo's long manuscript "had a world-wide circulation among mathematicians", and some of its ideas were described in publications besides Enflo (1976). Enflo's work inspired similar constructions of operators without an invariant subspace, for example by Beauzamy, who acknowledged Enflo's ideas. In the 1990s, Enflo developed a "constructive" approach to the invariant subspace problem on Hilbert spaces. In May 2023, a preprint by Enflo appeared on arXiv, which, if correct, solves the problem for Hilbert spaces and completes the picture. In July 2023, a second and independent preprint by Neville appeared on arXiv, claiming the solution of the problem for separable Hilbert spaces. Precise statement. Formally, the invariant subspace problem for a complex Banach space formula_5 of dimension > 1 is the question whether every bounded linear operator formula_6 has a non-trivial closed formula_1-invariant subspace: a closed linear subspace formula_7 of formula_5, which is different from formula_8 and from formula_5, such that formula_9. A negative answer to the problem is closely related to properties of the orbits of formula_1. If formula_0 is an element of the Banach space formula_5, the orbit of formula_0 under the action of formula_1, denoted by formula_10, is the subspace generated by the sequence formula_11. This is also called the formula_1-cyclic subspace generated by formula_0. From the definition it follows that formula_10 is a formula_1-invariant subspace. Moreover, it is the "minimal" formula_1-invariant subspace containing formula_0: if formula_7 is another invariant subspace containing formula_0, then necessarily formula_12 for all formula_13 (since formula_7 is formula_1-invariant), and so formula_14.
If formula_0 is non-zero, then formula_10 is not equal to formula_8, so its closure is either the whole space formula_5 (in which case formula_0 is said to be a cyclic vector for formula_1) or it is a non-trivial formula_1-invariant subspace. Therefore, a counterexample to the invariant subspace problem would be a Banach space formula_5 and a bounded operator formula_6 for which every non-zero vector formula_15 is a cyclic vector for formula_1. (Here a "cyclic vector" formula_0 for an operator formula_1 on a Banach space formula_5 means one for which the orbit formula_10 of formula_0 is dense in formula_5.) Known special cases. While the case of the invariant subspace problem for separable Hilbert spaces is still open, several other cases have been settled for particular classes of operators and of topological vector spaces over the field of complex numbers. Notes. <templatestyles src="Reflist/styles.css" /> References.
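The definitions of orbit and cyclic vector can be illustrated in finite dimensions, where the problem itself does not arise (every operator on a complex space of dimension greater than one has an eigenvector and hence a non-trivial invariant subspace). The following numpy sketch, with illustrative names and matrices chosen only as an example, computes the dimension of the cyclic subspace span{x, Tx, T²x, ...} and thereby tests whether x is a cyclic vector for T.

```python
import numpy as np

def cyclic_subspace_dimension(T, x, tol=1e-10):
    """Dimension of span{x, Tx, T^2 x, ...} for a square matrix T and a vector x.

    By the Cayley-Hamilton theorem, powers up to n-1 suffice in dimension n.
    """
    n = T.shape[0]
    krylov = np.column_stack([np.linalg.matrix_power(T, k) @ x for k in range(n)])
    return np.linalg.matrix_rank(krylov, tol=tol)

# Cyclic permutation of the three coordinates.
T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

print(cyclic_subspace_dimension(T, np.array([1.0, 0.0, 0.0])))  # 3: a cyclic vector
print(cyclic_subspace_dimension(T, np.ones(3)))                 # 1: spans a non-trivial invariant subspace
```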
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "T" }, { "math_id": 2, "text": "T^2" }, { "math_id": 3, "text": "p(T)" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": "H" }, { "math_id": 6, "text": "T: H \\to H " }, { "math_id": 7, "text": "W" }, { "math_id": 8, "text": "\\{0\\}" }, { "math_id": 9, "text": " T(W)\\subset W " }, { "math_id": 10, "text": "[x]" }, { "math_id": 11, "text": "\\{ T^{n}(x)\\,:\\, n \\ge 0\\}" }, { "math_id": 12, "text": "T^n(x) \\in W" }, { "math_id": 13, "text": "n \\ge 0" }, { "math_id": 14, "text": "[x]\\subset W" }, { "math_id": 15, "text": "x\\in H" }, { "math_id": 16, "text": "S" }, { "math_id": 17, "text": "l_1" } ]
https://en.wikipedia.org/wiki?curid=691927
69196020
1105 (number)
Natural number 1105 (eleven hundred [and] five, or one thousand one hundred [and] five) is the natural number following 1104 and preceding 1106. 1105 is the smallest positive integer that is a sum of two positive squares in exactly four different ways, a property that can be connected (via the sum of two squares theorem) to its factorization 5 × 13 × 17 as the product of the three smallest prime numbers that are congruent to 1 modulo 4. It is also the smallest member of a cluster of three consecutive integers (1105, 1106, 1107) each having eight divisors, and the second-smallest Carmichael number, after 561, one of the first four Carmichael numbers identified by R. D. Carmichael in his 1910 paper introducing this concept. Its binary representation 10001010001 and its base-4 representation 101101 are both palindromes, and (because the binary representation has nonzeros only in even positions and its base-4 representation uses only the digits 0 and 1) it is a member of the Moser–de Bruijn sequence of sums of distinct powers of four. As a number of the form formula_0 for formula_1 13, 1105 is the magic constant for 13 × 13 magic squares, and as a difference of two consecutive fourth powers (1105 = 7⁴ − 6⁴) it is a rhombic dodecahedral number (a type of figurate number), and a magic number for body-centered cubic crystals. These properties are closely related: the difference of two consecutive fourth powers is always a magic constant for an odd magic square whose size is the sum of the two consecutive numbers (here 7 + 6 = 13). References. <templatestyles src="Reflist/styles.css" />
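The stated properties are elementary to verify by direct computation. The short Python script below is illustrative only (it is not drawn from any reference) and checks the sum-of-two-squares representations, the divisor counts, the Carmichael property, and the two formulas giving 1105.

```python
def two_square_representations(n):
    """Unordered pairs (a, b) with 0 < a <= b and a*a + b*b == n."""
    reps = []
    a = 1
    while 2 * a * a <= n:
        b2 = n - a * a
        b = int(b2 ** 0.5)
        if b * b == b2:
            reps.append((a, b))
        a += 1
    return reps

def divisor_count(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def is_carmichael(n):
    """Composite n with b**n congruent to b (mod n) for every integer b (brute-force Fermat test)."""
    if all(n % p for p in range(2, int(n ** 0.5) + 1)):
        return False  # primes are excluded by definition
    return all(pow(b, n, n) == b % n for b in range(n))

print(two_square_representations(1105))              # [(4, 33), (9, 32), (12, 31), (23, 24)]
print([divisor_count(m) for m in (1105, 1106, 1107)])  # [8, 8, 8]
print(is_carmichael(1105))                            # True
print(13 * (13 ** 2 + 1) // 2, 7 ** 4 - 6 ** 4)       # 1105 1105
```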
[ { "math_id": 0, "text": "\\tfrac{n(n^2+1)}{2}" }, { "math_id": 1, "text": "n={}" } ]
https://en.wikipedia.org/wiki?curid=69196020
69199096
Intrinsic bond orbitals
Intrinsic bond orbitals (IBO) are localized molecular orbitals giving exact and non-empirical representations of wave functions. They are obtained by unitary transformation and form an orthogonal set of orbitals localized on a minimal number of atoms. IBOs present an intuitive and unbiased interpretation of chemical bonding with naturally arising Lewis structures. For this reason IBOs have been successfully employed for the elucidation of molecular structures and electron flow along the intrinsic reaction coordinate (IRC). IBOs have also found application as Wannier functions in the study of solids. Theory. In the IBO method, molecular wavefunctions calculated using self-consistent field (SCF) methods such as Kohn-Sham density functional theory (DFT) are expressed as linear combinations of localized molecular orbitals. In order to arrive at IBOs, intrinsic atomic orbitals (IAOs) are first calculated as representations of a molecular wave function for which each IAO can be assigned to a specific atom. This allows for a chemically intuitive orbital picture, as opposed to the commonly used large and diffuse basis sets for the construction of more complex molecular wavefunctions. IAOs are constructed from tabulated free-atom AOs of standard basis sets under consideration of the molecular environment. This yields polarized atomic orbitals that resemble the free-atom AOs as much as possible, before orthonormalization of the polarized AOs results in the set of IAOs. IAOs are thus a minimal basis for a given molecule in which atomic contributions can be distinctly assigned. Together, the IAOs exactly span the molecular orbitals, which renders them an exact representation of the wavefunction. Since IAOs are associated with a specific atom, they can provide atom-specific properties such as the partial charge. Compared to other charges, such as the Mulliken charge, the IAO charges are independent of the employed basis set. IBOs are constructed as linear combinations of IAOs under the condition that the number of atoms over which the orbital charge is spread is minimized. Each IBO can thereby be divided into the contributions of the atoms as the electronic occupation formula_0 of orbital formula_1 on atom formula_2. The localization is performed in the spirit of the Pipek-Mezey localization scheme, maximizing a localization functional formula_3: formula_4 with formula_5 or formula_6. While the choice of the exponent formula_7 does not affect the resulting IBOs in most cases, the choice of formula_5 localizes the orbitals in aromatic systems, unlike formula_8. The process of IBO construction is performed by unitary transformation of canonical MOs, which ensures that the IBOs remain an exact and physically accurate representation of the molecular wavefunction due to the invariance of Slater determinant wavefunctions towards unitary rotations. formula_9 The unitary matrix formula_10, which produces the localized IBOs upon matrix multiplication with the set of occupied MOs formula_11, is thereby chosen to effectively minimize the spread of the IBOs over the atoms of a molecule. The product is a set of localized IBOs, closely resembling the chemically intuitive shapes of molecular orbitals, allowing for distinction of bond types, atomic contributions and polarization. Application in structure and bonding. 
In his original paper introducing IBOs, Knizia showed the versatility of his method for describing not only classical bonding situations, such as the σ and π bond, but also aromatic systems and non-trivial bonds. The differentiation of σ and π bonds in acrylic acid is possible based on IBO geometries, as are the identification of the IBOs corresponding to the oxygen lone pairs. Benzene provided an example of a delocalized aromatic system to test the IBO method. Apart from the C-C and C-H σ-bonds, the six electron π-system is expressed as three delocalized IBOs. Representation of non-Lewis bonding was demonstrated on diborane B2H6, with one IBO stretching over B-H-B, corresponding to the 3-center-2-electron bond. Transition metal compounds. IBO analysis was used to explain the stability of electron rich gold-carbene complexes, mimicking reactive intermediates in gold catalysis. While these complexes are sometimes depicted with a Au-C double bond, representing the sigma donation of the carbene and π backbonding of Au, IBO analysis points towards a minimal amount of π-backbonding with the respective orbital mainly localized on Au. The σ-donating carbene orbital is likewise strongly polarized towards C. Stabilization of the compound thus occurs through strong donation of the aromatic carbene substituents into the carbene carbon p-orbitals, outcompeting the Au-π-backdonation. IBO analysis was thus able to negate the double bond character of the gold-carbene complexes and provided deep insight into the electronic structure of Cy3P-Au-C(4-OMe-C6H4)2) (Cy = cyclohexyl). The π-backbonding character was again evaluated for gold-vinylidene complexes, as another common type of gold catalysis intermediates. IBO analysis revealed significantly stronger π-backbonding for the gold-vinylidenes compared to the gold-carbenes. This was attributed to the geometric inability of aromatic vinylidene substituents to compete with Au for π-interactions since the respective orbitals are perpendicular to each other. Knizia and Klein similarly employed IBO for the analysis of [Fe(CO)3NO]–. The even polarization of IBOs between Fe and N points towards a covalently bonded NO ligand. The double bond occurs via two d-p π-interactions and results in a formal Fe0 center. Confirmed by further calculations, IBO proved as a fast and straightforward method to interpret bonding in this case. Making use of the low computational cost, a Cloke-Wilson rearrangement catalyzed by [Fe(CO)3NO]– was investigated by constructing the IBOs for every stationary point along the IRC. It was found that one of the Fe-NO π bonds takes active part in catalysis by electron transfer to and from the substrate, explaining the unique catalytic activity of [Fe(CO)3NO]– compared to the isoelectronic [Fe(CO)4]2–. Apart from the above mentioned compounds, the IBO method has been employed to investigate various other transition metal complexes, such as gold-diarylallenylidenes or diplatinum diboranyl complexes, proving as a valuable tool to gain insight into the extent and nature of bonding. Main group compounds. IBO analysis has been employed in main group chemistry to elucidate oftentimes non-trivial electronic structure. The bonding of phosphaaluminirenes was, for example, investigated showing a 3-center-2π-electron bond of the AlCP cycle. Further application was found for confirming the distonic nature of a phosphorus containing radical cation reported by Chen et al. (see figure). 
While the IAO charge analysis yielded a positive charge on the chelated P, IBOs showed the localization of the unpaired electron on the other P atom, confirming the spatial separation of radical site and charge. Another example is the elucidation of the electronic structure of the hexamethylbenzene dication. Three π bonding IBOs were found between the basal C5Me5 plane and the apical C, reminiscent of Cp* coordination complexes. The three π bonds are thereby polarized towards the apical C, which in turn coordinates to a CH3+ cation with its lone pair. IBO analysis therefore revealed the Lewis-acidic and Lewis-basic character of the apical C. Applications of IBO for cluster compounds have included zirconium-doped boron clusters. IBO analysis showed that the unusual stability of the neutral ZrB12 cluster stems from several multicenter σ bonds. The B-B σ bonding orbitals extend to the central Zr atom, forming the multicenter bonds. This example displays the method's aptitude for analyzing cluster compounds and multicenter bonding. Valence virtual intrinsic bond orbitals. Although IBOs typically describe occupied orbitals, the description of unoccupied orbitals can likewise be of value for interpreting chemical interactions. Valence virtual IBOs (vvIBOs) were introduced with the investigation of high-valent formal Ni(IV) complexes. The bonding and antibonding manifolds of the compound were described using IBOs and vvIBOs, respectively. Compared to the widely used HOMO/LUMOs, which are often spread over the whole molecule and can be difficult to interpret, vvIBOs allow for more direct interpretation of chemical interactions with unoccupied orbitals. Electron flow along the IRC. In 2015, Knizia and Klein introduced the analysis of electron flow in reactions with IBO as a non-empirical and straightforward method of evaluating curly arrow mechanisms. Since IBOs are exact representations of Kohn-Sham wavefunctions, they can provide physical confirmation of curly arrow mechanisms based on first principles. Since IBOs usually represent chemical bonds and lone pairs, this method allows for elucidation of bond rearrangements in terms of the elementary steps and their sequence. By calculating the root mean square deviations of the partial charge distributions compared to the initial charge distribution, IBOs taking active part in a reaction can be distinguished from those that remain unchanged along the IRC. Knizia and Klein demonstrate the versatility of this method in their original report, first presenting a simple SN2-type self-exchange reaction of H3CCl and Cl–, followed by the migration of π bonds in a substitution reaction and π- to σ-bond transformations in a Claisen rearrangement. Electron flow can be easily followed by observing the migration of an IBO, and bond types are easily distinguished based on the geometries of the IBOs. The value of IBO analysis along the IRC is especially apparent for complex reactions, such as a cyclopropanation reaction with only one transition state and without intermediates, reported by Haven et al. Calculations by Knizia and Klein yielded a precise curly arrow mechanism for this reaction. Closed-shell systems. Examples of IBO analysis along the IRC include the investigation of C-H bond activation by gold-vinylidene complexes. Through this method, it is possible to distinguish between concerted and stepwise reactions.
The C-H activation, previously thought to be a single step, was in this case revealed to consist of three distinct phases: i) hydride transfer, ii) C-C bond formation and iii) sigma to pi rearrangement of the lone pair coordinated to Au. Other reports of IBO analysis along the IRC include the elucidation and confirmation of a previously proposed mechanism for a [3,3]-sigmatropic rearrangement of a Au(I)-vinyl species, and the epoxidation of alkenes by peracids. For the latter, the textbook mechanism with four curly arrows was found to be physically inaccurate. Instead, seven changing IBOs were found, yielding an ideal mechanism featuring seven curly arrows. The combination of IBO analysis with other computational methods, such as natural bond orbital (NBO) analysis for a Ti-catalyzed pyrrole synthesis or natural localized molecular orbital (NLMO) analysis for an intramolecular cycloaddition of a phosphaalkene to an arene, has likewise led to insightful results regarding the specifics of the reaction mechanisms. Open-shell systems. Klein and Knizia furthermore introduced the first examples of IBOs used for analysis of open-shell systems during proton-coupled electron transfer (PCET) and hydrogen atom transfer (HAT). The differentiation between PCET, a separate but concerted electron and proton transfer, and HAT, the transfer of a hydrogen atom, was shown for two well-studied model systems of enzymatic Fe-oxo active sites. IBOs along the IRC were calculated for the alpha and beta spin manifolds, respectively. While the IBO of the alpha spin electron travelled together with the proton to take part in the formation of a new H-O bond in the case of HAT, the electron was transferred to the Fe-center separately from the transferred proton in the case of PCET. The successful application of the IBO method for these two examples of open-shell systems was suggested to pave the way for broader applications to similar problems.
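To make the localization criterion from the Theory section concrete, the following Python sketch evaluates the functional L = Σ_i Σ_A [n_A(i)]^p for a toy matrix of orbital-on-atom occupations. The array layout, numbers and function names are purely illustrative assumptions and are not taken from any IBO implementation; they only show that more localized orbitals yield a larger value of the functional, which is what the unitary localization step maximizes.

```python
import numpy as np

def localization_functional(occupations, p=4):
    """L = sum over orbitals i and atoms A of [n_A(i)]**p, for an (orbitals x atoms) array."""
    return float(np.sum(occupations ** p))

# Two toy orbitals distributed over three atoms (rows: orbitals, columns: atoms).
delocalized = np.array([[0.5, 0.5, 0.0],   # orbital spread evenly over two atoms
                        [0.0, 0.5, 0.5]])
localized   = np.array([[1.0, 0.0, 0.0],   # orbital sitting entirely on one atom
                        [0.0, 0.0, 1.0]])

print(localization_functional(delocalized))  # 0.25
print(localization_functional(localized))    # 2.0  (larger value: more localized)
```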
[ { "math_id": 0, "text": "n_A(i')" }, { "math_id": 1, "text": "i'" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "L" }, { "math_id": 4, "text": "L = \\sum_{i}^{occ}\\sum_{A}^{atom}[n_A(i')]^p " }, { "math_id": 5, "text": "p=4" }, { "math_id": 6, "text": "2" }, { "math_id": 7, "text": "p " }, { "math_id": 8, "text": "p=2" }, { "math_id": 9, "text": "|i'\\rangle = \\sum_{i}^{occ} |i\\rangle U_{ii'}" }, { "math_id": 10, "text": "U_{ii'}" }, { "math_id": 11, "text": "|i\\rangle" } ]
https://en.wikipedia.org/wiki?curid=69199096
6920
Column
Structural element that transmits weight from above to below A column or pillar in architecture and structural engineering is a structural element that transmits, through compression, the weight of the structure above to other structural elements below. In other words, a column is a compression member. The term "column" applies especially to a large round support (the shaft of the column) with a capital and a base or pedestal, which is made of stone or appears to be so. A small wooden or metal support is typically called a "post". Supports with a rectangular or other non-round section are usually called "piers". For the purpose of wind or earthquake engineering, columns may be designed to resist lateral forces. Other compression members are often termed "columns" because of the similar stress conditions. Columns are frequently used to support beams or arches on which the upper parts of walls or ceilings rest. In architecture, "column" refers to such a structural element that also has certain proportional and decorative features. A column might also be a decorative element not needed for structural purposes; many columns are engaged, that is to say, they form part of a wall. A long sequence of columns joined by an entablature is known as a colonnade. History. Antiquity. All significant Iron Age civilizations of the Near East and Mediterranean made some use of columns. Egyptian. In ancient Egyptian architecture as early as 2600 BC, the architect Imhotep made use of stone columns whose surface was carved to reflect the organic form of bundled reeds, like papyrus, lotus and palm. In later Egyptian architecture faceted cylinders were also common. Their form is thought to derive from archaic reed-built shrines. Carved from stone, the columns were highly decorated with carved and painted hieroglyphs, texts, ritual imagery and natural motifs. Egyptian columns are famously present in the Great Hypostyle Hall of Karnak (c. 1224 BC), where 134 columns are lined up in sixteen rows, with some columns reaching heights of 24 metres. One of the most important types is the papyriform column. The origin of these columns goes back to the 5th Dynasty. They are composed of lotus (papyrus) stems which are drawn together into a bundle decorated with bands: the capital, instead of opening out into the shape of a bellflower, swells out and then narrows again like a flower in bud. The base, which tapers to take the shape of a half-sphere like the stem of the lotus, has a continuously recurring decoration of stipules. Greek and Roman. The Minoans used whole tree-trunks, usually turned upside down in order to prevent re-growth, stood on a base set in the stylobate (floor base) and topped by a simple round capital. These were then painted, as in the most famous Minoan palace of Knossos. The Minoans employed columns to create large open-plan spaces and light-wells, and as a focal point for religious rituals. These traditions were continued by the later Mycenaean civilization, particularly in the megaron or hall at the heart of their palaces. The importance of columns and their reference to palaces and therefore authority is evidenced in their use in heraldic motifs such as the famous lion-gate of Mycenae, where two lions stand on each side of a column.
Being made of wood these early columns have not survived, but their stone bases have and through these we may see their use and arrangement in these palace buildings. The Egyptians, Persians and other civilizations mostly used columns for the practical purpose of holding up the roof inside a building, preferring outside walls to be decorated with reliefs or painting, but the Ancient Greeks, followed by the Romans, loved to use them on the outside as well, and the extensive use of columns on the interior and exterior of buildings is one of the most characteristic features of classical architecture, in buildings like the Parthenon. The Greeks developed the classical orders of architecture, which are most easily distinguished by the form of the column and its various elements. Their Doric, Ionic, and Corinthian orders were expanded by the Romans to include the Tuscan and Composite orders. Persian. Some of the most elaborate columns in the ancient world were those of the Persians, especially the massive stone columns erected in Persepolis. They included double-bull structures in their capitals. The Hall of Hundred Columns at Persepolis, measuring 70 × 70 metres, was built by the Achaemenid king Darius I (524–486 BC). Many of the ancient Persian columns are standing, some being more than 30 metres tall. Tall columns with bull's head capitals were used for porticoes and to support the roofs of the hypostylehall, partly inspired by the ancient Egyptian precedent. Since the columns carried timber beams rather than stone, they could be taller, slimmer and more widely spaced than Egyptian ones. Middle Ages. Columns, or at least large structural exterior ones, became much less significant in the architecture of the Middle Ages. The classical forms were abandoned in both Byzantine and Romanesque architecture in favour of more flexible forms, with capitals often using various types of foliage decoration, and in the West scenes with figures carved in relief. During the Romanesque period, builders continued to reuse and imitate ancient Roman columns wherever possible; where new, the emphasis was on elegance and beauty, as illustrated by twisted columns. Often they were decorated with mosaics. Renaissance and later styles. Renaissance architecture was keen to revive the classical vocabulary and styles, and the informed use and variation of the classical orders remained fundamental to the training of architects throughout Baroque, Rococo and Neo-classical architecture. Structure. Early columns were constructed of stone, some out of a single piece of stone. Monolithic columns are among the heaviest stones used in architecture. Other stone columns are created out of multiple sections of stone, mortared or dry-fit together. In many classical sites, sectioned columns were carved with a centre hole or depression so that they could be pegged together, using stone or metal pins. The design of most classical columns incorporates entasis (the inclusion of a slight outward curve in the sides) plus a reduction in diameter along the height of the column, so that the top is as little as 83% of the bottom diameter. This reduction mimics the parallax effects which the eye expects to see, and tends to make columns look taller and straighter than they are while entasis adds to that effect. There are flutes and fillets that run up the shaft of columns. The flute is the part of the column that is indented in with a semi circular shape. 
The fillet of the column is the part between each of the flutes on the Ionic order columns. The flute width changes on all tapered columns as it goes up the shaft and stays the same on all non tapered columns. This was done to the columns to add visual interest to them. The Ionic and the Corinthian are the only orders that have fillets and flutes. The Doric style has flutes but not fillets. Doric flutes are connected at a sharp point where the fillets are located on Ionic and Corinthian order columns. Nomenclature. Most classical columns arise from a basis, or base, that rests on the stylobate, or foundation, except for those of the Doric order, which usually rest directly on the stylobate. The basis may consist of several elements, beginning with a wide, square slab known as a plinth. The simplest bases consist of the plinth alone, sometimes separated from the column by a convex circular cushion known as a torus. More elaborate bases include two toruses, separated by a concave section or channel known as a scotia or trochilus. Scotiae could also occur in pairs, separated by a convex section called an astragal, or bead, narrower than a torus. Sometimes these sections were accompanied by still narrower convex sections, known as annulets or fillets. At the top of the shaft is a capital, upon which the roof or other architectural elements rest. In the case of Doric columns, the capital usually consists of a round, tapering cushion, or echinus, supporting a square slab, known as an abax or abacus. Ionic capitals feature a pair of volutes, or scrolls, while Corinthian capitals are decorated with reliefs in the form of acanthus leaves. Either type of capital could be accompanied by the same moldings as the base. In the case of free-standing columns, the decorative elements atop the shaft are known as a finial. Modern columns may be constructed out of steel, poured or precast concrete, or brick, left bare or clad in an architectural covering, or veneer. Used to support an arch, an impost, or pier, is the topmost member of a column. The bottom-most part of the arch, called the springing, rests on the impost. Equilibrium, instability, and loads. As the axial load on a perfectly straight slender column with elastic material properties is increased in magnitude, this ideal column passes through three states: stable equilibrium, neutral equilibrium, and instability. The straight column under load is in stable equilibrium if a lateral force, applied between the two ends of the column, produces a small lateral deflection which disappears and the column returns to its straight form when the lateral force is removed. If the column load is gradually increased, a condition is reached in which the straight form of equilibrium becomes so-called neutral equilibrium, and a small lateral force will produce a deflection that does not disappear and the column remains in this slightly bent form when the lateral force is removed. The load at which neutral equilibrium of a column is reached is called the critical or buckling load. The state of instability is reached when a slight increase of the column load causes uncontrollably growing lateral deflections leading to complete collapse. For an axially loaded straight column with any end support conditions, the equation of static equilibrium, in the form of a differential equation, can be solved for the deflected shape and critical load of the column. 
With hinged, fixed or free end support conditions the deflected shape in neutral equilibrium of an initially straight column with uniform cross section throughout its length always follows a partial or composite sinusoidal curve shape, and the critical load is given by formula_0 where "E" = elastic modulus of the material, "Imin" = the minimal moment of inertia of the cross section, and "L" = actual length of the column between its two end supports. A variant of (1) is given by formula_1 where "r" = radius of gyration of column cross-section which is equal to the square root of (I/A), "K" = ratio of the longest half sine wave to the actual column length, "E""t" = tangent modulus at the stress "F"cr, and "KL" = effective length (length of an equivalent hinged-hinged column). From Equation (2) it can be noted that the buckling strength of a column is inversely proportional to the square of its length. When the critical stress, "F"cr ("F"cr ="P"cr/"A", where "A" = cross-sectional area of the column), is greater than the proportional limit of the material, the column is experiencing inelastic buckling. Since at this stress the slope of the material's stress-strain curve, "E""t" (called the tangent modulus), is smaller than that below the proportional limit, the critical load at inelastic buckling is reduced. More complex formulas and procedures apply for such cases, but in its simplest form the critical buckling load formula is given as Equation (3), formula_2 A column with a cross section that lacks symmetry may suffer torsional buckling (sudden twisting) before, or in combination with, lateral buckling. The presence of the twisting deformations renders both theoretical analyses and practical designs rather complex. Eccentricity of the load, or imperfections such as initial crookedness, decreases column strength. If the axial load on the column is not concentric, that is, its line of action is not precisely coincident with the centroidal axis of the column, the column is characterized as eccentrically loaded. The eccentricity of the load, or an initial curvature, subjects the column to immediate bending. The increased stresses due to the combined axial-plus-flexural stresses result in a reduced load-carrying ability. Column elements are considered to be massive if their smallest side dimension is equal to or more than 400 mm. Massive columns have the ability to increase in carrying strength over long time periods (even during periods of heavy load). Taking into account the fact, that possible structural loads may increase over time as well (and also the threat of progressive failure), massive columns have an advantage compared to non-massive ones. Extensions. When a column is too long to be built or transported in one piece, it has to be extended or spliced at the construction site. A reinforced concrete column is extended by having the steel reinforcing bars protrude a few inches or feet above the top of the concrete, then placing the next level of reinforcing bars to overlap, and pouring the concrete of the next level. A steel column is extended by welding or bolting splice plates on the flanges and webs or walls of the columns to provide a few inches or feet of load transfer from the upper to the lower column section. A timber column is usually extended by the use of a steel tube or wrapped-around sheet-metal plate bolted onto the two connecting timber sections. Foundations. 
A column that carries the load down to a foundation must have means to transfer the load without overstressing the foundation material. Reinforced concrete and masonry columns are generally built directly on top of concrete foundations. When seated on a concrete foundation, a steel column must have a base plate to spread the load over a larger area, and thereby reduce the bearing pressure. The base plate is a thick, rectangular steel plate usually welded to the bottom end of the column. Orders. The Roman author Vitruvius, relying on the writings (now lost) of Greek authors, tells us that the ancient Greeks believed that their Doric order developed from techniques for building in wood. The earlier smoothed tree-trunk was replaced by a stone cylinder. Doric order. The Doric order is the oldest and simplest of the classical orders. It is composed of a vertical cylinder that is wider at the bottom. It generally has neither a base nor a detailed capital. It is instead often topped with an inverted frustum of a shallow cone or a cylindrical band of carvings. It is often referred to as the masculine order because it is represented in the bottom level of the Colosseum and the Parthenon, and was therefore considered to be able to hold more weight. The height-to-thickness ratio is about 8:1. The shaft of a Doric column is almost always fluted. The Greek Doric, developed in the western Dorian region of Greece, is the heaviest and most massive of the orders. It rises from the stylobate without any base; it is from four to six times as tall as its diameter; it has twenty broad flutes; the capital consists simply of a banded necking swelling out into a smooth echinus, which carries a flat square abacus; the Doric entablature is also the heaviest, being about one-fourth the height of the column. The Greek Doric order was not used after c. 100 B.C. until its “rediscovery” in the mid-eighteenth century. Tuscan order. The Tuscan order, also known as Roman Doric, is also a simple design, the base and capital both being series of cylindrical disks of alternating diameter. The shaft is almost never fluted. The proportions vary, but are generally similar to Doric columns. Height to width ratio is about 7:1. Ionic order. The Ionic column is considerably more complex than the Doric or Tuscan. It usually has a base and the shaft is often fluted (it has grooves carved up its length). The capital features a volute, an ornament shaped like a scroll, at the four corners. The height-to-thickness ratio is around 9:1. Due to the more refined proportions and scroll capitals, the Ionic column is sometimes associated with academic buildings. Ionic-style columns were used on the second level of the Colosseum. Corinthian order. The Corinthian order is named for the Greek city-state of Corinth, to which it was connected in the period. However, according to the architectural historian Vitruvius, the column was created by the sculptor Callimachus, probably an Athenian, who drew acanthus leaves growing around a votive basket. In fact, the oldest known Corinthian capital was found in Bassae, dated at 427 BC. It is sometimes called the feminine order because it is on the top level of the Colosseum, holding up the least weight, and also has the slenderest ratio of thickness to height. Height to width ratio is about 10:1. Composite order. The Composite order draws its name from the capital being a composite of the Ionic and Corinthian capitals.
The acanthus of the Corinthian column already has a scroll-like element, so the distinction is sometimes subtle. Generally the Composite is similar to the Corinthian in proportion and employment, often in the upper tiers of colonnades. Height to width ratio is about 11:1 or 12:1. Solomonic. A Solomonic column, sometimes called "barley sugar", begins on a base and ends in a capital, which may be of any order, but the shaft twists in a tight spiral, producing a dramatic, serpentine effect of movement. Solomonic columns were developed in the ancient world, but remained rare there. A famous marble set, probably 2nd century, was brought to Old St. Peter's Basilica by Constantine I, and placed round the saint's shrine, and was thus familiar throughout the Middle Ages, by which time they were thought to have been removed from the Temple of Jerusalem. The style was used in bronze by Bernini for his spectacular St. Peter's baldachin, actually a ciborium (which displaced Constantine's columns), and thereafter became very popular with Baroque and Rococo church architects, above all in Latin America, where they were very often used, especially on a small scale, as they are easy to produce in wood by turning on a lathe (hence also the style's popularity for spindles on furniture and stairs). Caryatid. A Caryatid is a sculpted female figure serving as an architectural support taking the place of a column or a pillar supporting an entablature on her head. The Greek term literally means "maidens of Karyai", an ancient town of Peloponnese. Engaged columns. In architecture, an engaged column is a column embedded in a wall and partly projecting from the surface of the wall, sometimes defined as semi or three-quarter detached. Engaged columns are rarely found in classical Greek architecture, and then only in exceptional cases, but in Roman architecture they exist in abundance, most commonly embedded in the cella walls of pseudoperipteral buildings. Pillar tombs. Pillar tombs are monumental graves, which typically feature a single, prominent pillar or column, often made of stone. A number of world cultures incorporated pillars into tomb structures. In the ancient Greek colony of Lycia in Anatolia, one of these edifices is located at the tomb of Xanthos. In the town of Hannassa in southern Somalia, ruins of houses with archways and courtyards have also been found along with other pillar tombs, including a rare octagonal tomb. See also. <templatestyles src="Div col/styles.css"/> References. Chisholm, Hugh, ed. (1911). "Engaged Column". Encyclopædia Britannica. 9 (11th ed.). Cambridge University Press. pp. 404–405. Stierlin, Henri The Roman Empire: From the Etruscans to the Decline of the Roman Empire, TASCHEN, 2002 Alderman, Liz (7 July 2014). "Acropolis Maidens Glow Anew". The New York Times. Retrieved 9 July 2014. Stokstad, Marilyn; Cothren, Michael (2014). Art History (Volume 1 ed.). New Jersey: Pearson Education, Inc. p. 110. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "f_{cr}\\equiv\\frac{\\pi^2\\textit{E}I_{min}}{{L}^2}\\qquad (1)" }, { "math_id": 1, "text": "f_{cr}\\equiv\\frac{\\pi^{2}E_T}{(\\frac{KL}{r})^{2}}\\qquad (2)" }, { "math_id": 2, "text": "f_{cr}\\equiv{F_y}-\\frac{F^{2}_{y}}{4\\pi^{2}E}\\left(\\frac{KL}{r^2}\\right)\\qquad (3)" } ]
https://en.wikipedia.org/wiki?curid=6920
692003
Kibble balance
Electromechanical weight measuring instrument A Kibble balance (also formerly known as a watt balance) is an electromechanical measuring instrument that measures the weight of a test object very precisely by the electric current and voltage needed to produce a compensating force. It is a metrological instrument that can realize the definition of the kilogram unit of mass based on fundamental constants. It was originally known as a watt balance because the weight of the test mass is proportional to the product of current and voltage, which is measured in watts. In June 2016, two months after the death of its inventor, Bryan Kibble, metrologists of the Consultative Committee for Units of the International Committee for Weights and Measures agreed to rename the device in his honor. Prior to 2019, the definition of the kilogram was based on a physical object known as the International Prototype of the Kilogram (IPK). After considering alternatives, in 2013 the General Conference on Weights and Measures (CGPM) agreed on accuracy criteria for replacing this definition with one based on the use of a Kibble balance. After these criteria had been achieved, the CGPM voted unanimously on November 16, 2018, to change the definition of the kilogram and several other units, effective May 20, 2019, to coincide with World Metrology Day. There is also a method called the joule balance. All methods that use the fixed numerical value of the Planck constant are sometimes called the Planck balance. Design. The Kibble balance is a more accurate version of the ampere balance, an early current measuring instrument in which the force between two current-carrying coils of wire is measured and then used to calculate the magnitude of the current. The Kibble balance operates in the opposite sense; the current in the coils set very precisely by the Planck constant, and the force between the coils is used to measure the weight of a test kilogram mass. Then the mass is calculated from the weight by accurately measuring the local Earth's gravity (the net acceleration combining gravitational and centrifugal effects) with a gravimeter. Thus the mass of the object is defined in terms of a current and a voltage— allowing the device to "measure mass without recourse to the IPK (International Prototype Kilogram) or any physical object". Origin. The principle that is used in the Kibble balance was proposed by Bryan Kibble of the UK National Physical Laboratory (NPL) in 1975 for measurement of the gyromagnetic ratio. In 1978 the Mark I watt balance was built at the NPL with Ian Robinson and Ray Smith. It operated until 1988. The main weakness of the ampere balance method is that the result depends on the accuracy with which the dimensions of the coils are measured. The Kibble balance uses an extra calibration step to cancel the effect of the geometry of the coils, removing the main source of uncertainty. This extra step involves moving the force coil through a known magnetic flux at a known speed. This was possible by setting of the conventional values of the von Klitzing constant and Josephson constant, which are used throughout the world for voltage and resistance calibration. Using these principles, in 1990 Bryan Kibble and Ian Robinson invented the Kibble Mark II balance, which uses a circular coil and operates in vacuum conditions . Bryan Kibble worked with Ian Robinson and Janet Belliss to build this Mark Two version of the balance. 
This design allowed for measurements accurate enough for use in the redefinition of the SI unit of mass: the kilogram. The Kibble balance originating from the National Physical Laboratory was transferred to the National Research Council of Canada (NRC) in 2009, where scientists from the two labs continued to refine the instrument. In 2014, NRC researchers published the most accurate measurement of the Planck constant at that time, with a relative uncertainty of 1.8×10-8. A final paper by NRC researchers was published in May 2017, presenting a measurement of the Planck constant with an uncertainty of only 9.1 parts per billion, the measurement with the least uncertainty to that date. Other Kibble balance experiments are conducted in the US National Institute of Standards and Technology (NIST), the Swiss Federal Office of Metrology (METAS) in Berne, the International Bureau of Weights and Measures (BIPM) near Paris and Laboratoire national de métrologie et d’essais (LNE) in Trappes, France. Principle. A conducting wire of length formula_0 that carries an electric current formula_1 perpendicular to a magnetic field of strength formula_2 experiences a Lorentz force equal to the product of these variables. In the Kibble balance, the current is varied so that this force counteracts the weight formula_3 of a mass formula_4 to be measured. This principle is derived from the ampere balance. formula_3 is given by the mass formula_4 multiplied by the local gravitational acceleration formula_5. Thus, formula_6 The Kibble balance avoids the problems of measuring formula_2 and formula_0 in a second calibration step. The same wire (in practice, a coil) is moved through the same magnetic field at a known speed formula_7. By Faraday's law of induction, a potential difference formula_8 is generated across the ends of the wire, which equals formula_9. Thus formula_10 The unknown product formula_11 can be eliminated from the equations to give formula_12 formula_13 With formula_8, formula_1, formula_5, and formula_7 accurately measured, this gives an accurate value for formula_4. Both sides of the equation have the dimensions of power, measured in watts in the International System of Units; hence the original name "watt balance". The product formula_11, also called the geometric factor, is not trivially equal in both calibration steps. The geometric factor is only constant under certain stability conditions on the coil. Implementation. The Kibble balance is constructed so that the mass to be measured and the wire coil are suspended from one side of a balance scale, with a counterbalance mass on the other side. The system operates by alternating between two modes: "weighing" and "moving". The entire mechanical subsystem operates in a vacuum chamber to remove the effects of air buoyancy. While "weighing", the system measures both formula_1 and formula_7. The system controls the current in the coil to pull the coil through a magnetic field at a constant velocity formula_7. Coil position and velocity measurement circuitry uses an interferometer together with a precision clock input to determine the velocity and control the current needed to maintain it. The required current is measured, using an ammeter comprising a Josephson junction voltage standard and an integrating voltmeter. While "moving", the system measures formula_8. The system ceases to provide current to the coil. 
This allows the counterbalance to pull the coil (and mass) upward through the magnetic field, which causes a voltage difference across the coil. The velocity measurement circuitry measures the speed of movement of the coil. This voltage is measured, using the same voltage standard and integrating voltmeter. A typical Kibble balance measures formula_8, formula_1, and formula_7, but does not measure the local gravitational acceleration formula_5, because formula_5 does not vary rapidly with time. Instead, formula_5 is measured in the same laboratory using a highly accurate and precise gravimeter. In addition, the balance depends on a highly accurate and precise frequency reference such as an atomic clock to compute voltage and current. Thus, the precision and accuracy of the mass measurement depends on the Kibble balance, the gravimeter, and the clock. Like the early atomic clocks, the early Kibble balances were one-of-a-kind experimental devices and were large, expensive, and delicate. As of 2019, work is underway to produce standardized devices at prices that permit use in any metrology laboratory that requires high-precision measurement of mass. As well as large Kibble balances, microfabricated or MEMS watt balances (now called Kibble balances) have been demonstrated since around 2003. These are fabricated on single silicon dies similar to those used in microelectronics and accelerometers, and are capable of measuring small forces in the nanonewton to micronewton range traceably to the SI-defined physical constants via electrical and optical measurements. Due to their small scale, MEMS Kibble balances typically use electrostatic rather than the inductive forces used in larger instruments. Lateral and torsional variants have also been demonstrated, with the main application (as of 2019) being in the calibration of the atomic force microscope. Accurate measurements by several teams will enable their results to be averaged and so reduce the experimental error. Measurements. Accurate measurements of electric current and potential difference are made in conventional electrical units (rather than SI units), which are based on fixed "conventional values" of the Josephson constant and the von Klitzing constant, formula_14 and formula_15 respectively. The current Kibble balance experiments are equivalent to measuring the value of the conventional watt in SI units. From the definition of the conventional watt, this is equivalent to measuring the value of the product formula_16 in SI units instead of its fixed value in conventional electrical units: formula_17 The importance of such measurements is that they are also a direct measurement of the Planck constant formula_18: formula_19 The principle of the electronic kilogram relies on the value of the Planck constant, which is as of 2019 an exact value. This is similar to the metre being defined by the speed of light. With the constant defined exactly, the Kibble balance is not an instrument to measure the Planck constant, but is instead an instrument to measure mass: formula_20 Effect of gravity. Gravity and the nature of the Kibble balance, which oscillates test masses up and down against the local gravitational acceleration "g", are exploited so that mechanical power is compared against electrical power, which is the square of voltage divided by electrical resistance. However, "g" varies significantly—by nearly 1%—depending on where on the Earth's surface the measurement is made (see "Earth's gravity"). 
There are also slight seasonal variations in "g" at a location due to changes in underground water tables, and larger semimonthly and diurnal changes due to tidal distortions in the Earth's shape caused by the Moon and the Sun. Although "g" is not a term in the "definition" of the kilogram, it is crucial in the process of measurement of the kilogram when relating energy to power in a kibble balance. Accordingly, "g" must be measured with at least as much precision and accuracy as are the other terms, so measurements of "g" must also be traceable to fundamental constants of nature. For the most precise work in mass metrology, "g" is measured using dropping-mass absolute gravimeters that contain an iodine-stabilised helium–neon laser interferometer. The fringe-signal, frequency-sweep output from the interferometer is measured with a rubidium atomic clock. Since this type of dropping-mass gravimeter derives its accuracy and stability from the constancy of the speed of light as well as the innate properties of helium, neon, and rubidium atoms, the 'gravity' term in the delineation of an all-electronic kilogram is also measured in terms of invariants of nature—and with very high precision. For instance, in the basement of the NIST's Gaithersburg facility in 2009, when measuring the gravity acting upon Pt‑10Ir test masses (which are denser, smaller, and have a slightly lower center of gravity inside the Kibble balance than stainless steel masses), the measured value was typically within 8 ppb of . References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "L" }, { "math_id": 1, "text": "I" }, { "math_id": 2, "text": "B" }, { "math_id": 3, "text": "w" }, { "math_id": 4, "text": "m" }, { "math_id": 5, "text": "g" }, { "math_id": 6, "text": "w = mg = BLI." }, { "math_id": 7, "text": "v" }, { "math_id": 8, "text": "U" }, { "math_id": 9, "text": "BLv" }, { "math_id": 10, "text": "U = BLv." }, { "math_id": 11, "text": "BL" }, { "math_id": 12, "text": "UI = mgv" }, { "math_id": 13, "text": "m = UI/gv." }, { "math_id": 14, "text": "K_\\text{J-90}" }, { "math_id": 15, "text": "R_\\text{K-90}" }, { "math_id": 16, "text": "K_\\text{J}^2 R_\\text{K}" }, { "math_id": 17, "text": "\\frac{1}{K_\\text{J}^2 R_\\text{K}} = \\frac{1}{K_\\text{J-90}^2 R_\\text{K-90}} \\frac{\\{mgv\\}_\\text{W}}{ \\{UI \\}_{W_{90}}}." }, { "math_id": 18, "text": "h" }, { "math_id": 19, "text": "h = \\frac{4}{K_\\text{J}^2 R_\\text{K}}." }, { "math_id": 20, "text": "m = \\frac{UI}{gv}." } ]
https://en.wikipedia.org/wiki?curid=692003
69201444
Anne Boutet de Monvel
French applied mathematician and mathematical physicist Anne-Marie Boutet de Monvel (née Berthier, born 1948, also published as Anne-Marie Berthier and Anne-Marie Boutet de Monvel-Berthier) is a French applied mathematician and mathematical physicist, and a professor emerita in the University of Paris, affiliated with the Institut de mathématiques de Jussieu – Paris Rive Gauche. Books. Boutet de Monvel is the author of "Spectral theory and wave operators for the Schrödinger equation" (Pitman, 1982). With Werner Amrein and Vladimir Georgescu she is the co-author of "Hardy type inequalities for abstract differential operators" (American Mathematical Society, 1987) and "formula_0-groups, commutator methods and spectral theory of formula_1-body Hamiltonians" (Birkhäuser, 1996). For many years she was co-editor-in-chief of the book series "Progress in Mathematical Physics", following its relaunch by Birkhäuser in 1999. Recognition. She was named a Fellow of the American Mathematical Society, in the 2022 class of fellows, "for contributions to mathematical physics, particularly Schroedinger operator theory, and to the theory of integrable systems". Personal life. Boutet de Monvel was married to Louis Boutet de Monvel (1941–2014), also a mathematician. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "C_0" }, { "math_id": 1, "text": "N" } ]
https://en.wikipedia.org/wiki?curid=69201444
69201841
Natural resonance theory
In computational chemistry, natural resonance theory (NRT) is an iterative, variational functional embedded into the natural bond orbital (NBO) program, commonly run in Gaussian, GAMESS, ORCA, Ampac and other software packages. NRT was developed in 1997 by Frank A. Weinhold and Eric D. Glendening, chemistry professors at University of Wisconsin-Madison and Indiana State University, respectively. Given a list of NBOs for an idealized natural Lewis structure, the NRT functional creates a list of Lewis resonance structures and calculates the resonance weights of each contributing resonance structure. Structural and chemical properties, such as bond order, valency, and bond polarity, may be calculated from resonance weights. Specifically, bond orders may be divided into their covalent and ionic contributions, while valency is the sum of bond orders of a given atom. This aims to provide quantitative results that agree with qualitative notions of chemical resonance. In contrast to the "wavefunction resonance theory" (i.e., the superposition of wavefunctions), NRT uses the density matrix resonance theory, performing a superposition of density matrices to realize resonance. NRT has applications in "ab initio" calculations, including calculating the bond orders of intra- and intermolecular interactions and the resonance weights of radical isomers. History. During the 1930s, Professor Linus Pauling and postdoctoral researcher George Wheland applied quantum-mechanical formalism to calculate the resonance energy of organic molecules. To do this, they estimated the structure and properties of molecules described by more than one Lewis structure as a linear combination of all Lewis structures: formula_0 where aiκ and Ψaκ denote the weight and single-electron eigenfunction from the wavefunction for a Lewis structure κ, respectively. Their formalism assumes that localized valence bond wavefunctions are mutually orthogonal. formula_1 While this assumption ensures that the sum of the weights of the resonance structures describing the molecule is one, it creates difficulties in computing aiκ. The Pauling-Wheland formalism also assumes that cross-terms from density matrix multiplication may be neglected. This facilitates the averaging of chemical properties, but, like the first assumption, is not true for actual wavefunctions. Additionally, in the case of polar bonding, these assumptions necessitate the generation of ionic resonance structures that often overlap with covalent structures. In other words, superfluous resonance structures are calculated for polar molecules. Overall, the Pauling-Wheland formulation of resonance theory was unsuitable for quantitative purposes. Glendening and Weinhold sought to create a new formalism, within their "ab initio" NBO program, that would provide an accurate quantitative measure of resonance theory, matching chemical intuition. To do this, instead of evaluating a linear combination of wavefunctions, they express a linear combination of density operators, Γ, (i.e., matrices) for localized structures, where the sum of all weights, ωα, is one. formula_2 where formula_3 and formula_4 In the context of NBO, the true density operator Γ represents the NBOs of an idealized natural Lewis structure. Once NRT has generated a set of density operators, Γα, for localized resonance structures, α, a least-squares variational functional is employed to quantify the resonance weights of each structure. 
It does this by measuring the variational error, δw, of the linear combination of resonance structures to the true density operator Γ. formula_5 To evaluate a single resonance structure, δref, the absolute difference between a single term expansion and the true density operator, approximated as the leading reference structure, can be taken. Now, the extent to which each reference structure represents the true structure may be evaluated as the "fractional improvement", fw. formula_6 formula_7 From this equation, it is evident that as fw approaches one and δw approaches zero, δref becomes a better representation of the true structure. Updates. In 2019, Glendening, Wright and Weinhold introduced a quadratic programming (QP) strategy for variational minimization in NRT. This new feature is integrated into NBO 7.0 version of their program. In this program, the matrix root-mean square deviation (Frobenius norm) of the resonance weights is calculated. formula_8 The mean-squared density matrices, representing deviation from the true density matrix, may be rewritten as a Gram matrix, and an iterative algorithm is used to minimize the Gram matrix and solve the QP. Theory. Generation of resonance structures and their density matrices. From a given wavefunction, Ψ, a list of optimal NBOs for a Lewis-type wavefunction are generated along with a list of non-Lewis NBOs (e.g., incorporating some antibonding interactions). When these latter orbitals have nonzero value, there is "delocalization" (i.e., deviation from the ideal Lewis-type wavefunction). From this, NRT generates a "delocalization list" from deviation from the parent structure and describes a series of alternative structures reflecting the delocalization. A threshold for the number of generated resonance structures can be set by controlling the desired energetic maximum (NRTTHR threshold). The NBOs for a resonance structure formula can then be, subsequently, calculated from the CHOOSE option. Operationally, there are three ways in which alternative resonance structures may be generated: (1) from the LEWIS option, considering the Wiberg bond indices; (2) from the delocalization list; (3) specified by the user. Below is an example of how NRT may generate a list of resonance structures. (1) Given an input wavefunction, NRT creates a list of reference Lewis structures. The LEWIS option tests each structure and rejects those that do not conform to the Lewis bonding theory (i.e., those that do not fulfill the octet rule, pose unreasonable formal charges, etc.). (2) The PARENT and CHOOSE operations determine the optimal set of NBOs corresponding to a specific resonance structure. Additionally, CHOOSE is able to eliminate identical resonance structures. (3) A user may then call SELECT to select the structure that best matches to the true molecular structure. This option may also show other structures within a defined energy threshold NRTTHR, deviating from optimal Lewis density. (4) Two other operations, CONDNS and KEKULE, are ran to remove redundant ionic structures and append structures related by bond shifts, respectively. (5) Lastly, SECRES is called to calculate the NBOs and density matrices of each resonance structure. Generation of resonance weights. To compute the variational error, δw, NRT offers the following optimization methods: the steepest descent algorithms BFGS and POWELL and a "simulated annealing method" ANNEAL and MULTI. 
Most commonly, the NRT program computes an initial guess of the resonance weights by the following relation: formula_9 where the weight is proportional to the exponential of the non-Lewis density, ρ, of structure α. Then the BFGS and POWELL steepest descent methods optimize for the nearest local minimum in energy. In contrast, the ANNEAL option finds the global maximum of the fractional improvement, fw, and performs a controlled, iterative random walk across the fw surface. This method is more computationally expensive than the BFGS and POWELL steepest descent methods. After optimization, SUPPL evaluates the weight of each resonance structure and modifies the list of resonance structures by either retaining or adding resonance structures of high weight and deleting or excluding those of low weight. It continues this process until either convergence is achieved or oscillation occurs. Updates. In NBO version 7.0, the $NRTSTR function does not need to be called to generate a list of representative resonance structures, and the $CHOOSE algorithm has been adapted to be "essentially "identical" to the NLS [natural Lewis structure] algorithm", increasing the overall optimization of each resonance structure by reducing the amount to which the parent Lewis structure contributes to the resonance structure. Applications. Main group chemistry. Bond order of the pnictogen bond. In 2015, Liu "et al"., conducted "ab initio" MP2/aug-cc-pvDZ calculations and used NRT in NBO version 5.0 to determine the natural bond order (i.e., a measure of electron density) of noncovalent weak "pnicogen bond" interactions—analogous to the hydrogen bond—between various compounds. Their results are summarized in the following table. These results indicate that the ionic bond order of the O· · · P pnictogen bond is the greatest contribution to the total bond order. Therefore, this weak, noncovalent interaction is primarily electrostatic. Bond order of Ge2M compounds. In 2018 Minh "et al"., used NRT in the NBO 5.G program, with density obtained from the B3P86/6-311+G(d) level of theory, to calculate the bond orders in a series of Ge2M compounds, where M is a first-row transition metal. The results are found in the following table. These results show that the Ge–Ge bond order ranges from 1.5 to 2.4, while the Ge–M bond order ranges from 0.3 to 1.7. Furthermore, the Ge–Ge bond is primarily covalent, whereas the Ge–M bond usually has an equal mix of covalent and ionic nature. Exceptions to this are Cr, Mn, and Cu, where the ionic component is dominant because of smaller overlap with the 4s orbital of the M atom, leading to less stability. Interactions with M = Cr, Mn, and Cu are described as an electron transfer from the 4s atomic orbital on the M atom to a pi molecular orbital of the Ge2 fragment. Interactions with the other M atoms are described by two electron transfers: firstly, an electron transfer from the Ge2 fragment into an empty 3d atomic orbital on M and secondly, an electron transfer from the 3d atomic orbital on M into an antibonding orbital on Ge2. Resonance structures and bond order of regium bonds. In 2019, Zheng "et al"., used NRT at the wB97XD level in the GENBO 6.0W program to generate natural Lewis resonance structures and calculate the bond orders of regium bond interactions between phosphonates and metal halides MX (M = Cu, Ag, Au; X = F, Cl, Br). In a regium bond interaction, electron donors participate in a charge transfer to the metal species. 
Results of this analysis are shown in the following figures and tables. In the case of H3PO:· · · MX complexes, these results indicate that ωI is “the best natural Lewis structure” and the lone pair of electrons on the oxygen atom interact with a MX sigma antibonding orbital. Zheng "et al"., also analyzed MX interactions with trans- and cis-phosphinuous acid to compare the electron donating abilities of phosphorus and oxygen atoms. The results above demonstrate that when phosphorus acts as the electron donor the weights of ωI and ωII are similar. This is indicative of 3-center 4-electron bonding models. Despite greater mixing, ωII is determined to be the best natural Lewis structure for both the trans- and cis- complexes, with CuBr and AgBr as the only exceptions. Researchers explain that this result is consistent with analyses showing the preference for phosphorus to form covalent interactions. Overall, "the degree of covalency for P–M bonds decreases in the order of F> Cl > Br, Au > Cu > Ag, while the degree of noncovalent for O–M bonds, there is an increase according to F < Cl < Br, Au < Cu < Ag in the entire family." Weight of resonance structures of arsenic radicals. In 2015, Viana "et al"., used NRT to determine the weight of resonance structures of the arsenic radical isomers of AsCO, AsSiO and AsGeO, which are of interest in the fields of astrochemistry and astrobiology. The results are shown in the following figures and table. According to Viana "et al"., “for most of the isomers, the percentage weight of the secondary resonance structure is negligible. In cyclic structures, the resonance weights lead to very similar percentage values.” Limitations. Calculating chemical and physical properties by using linear combinations of density matrices, rather than wavefunctions, may result in negative, and therefore erroneous, resonance weights because it is mathematically impossible to expand the density matrix without introducing negative values. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\Psi{_A{}_i}=\\sum_{\\kappa} a{_i{}_\\kappa}\\Psi{_a{}_\\kappa}" }, { "math_id": 1, "text": "\\langle\\Psi_\\alpha|\\Psi_\\beta\\rangle=\\delta_{\\alpha_\\beta}" }, { "math_id": 2, "text": "\\Gamma=\\sum_{\\alpha}\\omega_\\alpha\\Gamma_\\alpha" }, { "math_id": 3, "text": "\\omega_\\alpha\\geq0 " }, { "math_id": 4, "text": "\\sum_{\\alpha}\\omega_\\alpha=1" }, { "math_id": 5, "text": "\\delta_W=\\underset{\\{\\omega_\\alpha\\}}{min}\\|\\Gamma-\\sum_{\\alpha}\\omega_\\alpha\\Gamma_\\alpha\\|" }, { "math_id": 6, "text": "\\delta_{r_{}e_{}f}=\\|\\Gamma-\\Gamma_{r_{}e_{}f}\\|" }, { "math_id": 7, "text": "f_W= \\frac{\\delta_{r_{}e_{}f}-\\delta_W}{\\delta_{r_{}e_{}f}} " }, { "math_id": 8, "text": "\\Delta \\omega=\\underset{\\{\\omega_\\alpha\\}}{min}\\left ( \\frac{\\|\\Gamma_{Q_{}C}-\\sum_{\\alpha}\\omega_\\alpha\\Gamma_\\alpha\\|^2}{n_b} \\right )^\\frac{1}{2}" }, { "math_id": 9, "text": "\\overset{\\backsim}{\\underset{\\alpha}{\\omega}}\\propto e^{-3\\rho_\\alpha}" } ]
https://en.wikipedia.org/wiki?curid=69201841
6920233
Interspecific competition
Form of competition Interspecific competition, in ecology, is a form of competition in which individuals of "different" species compete for the same resources in an ecosystem (e.g. food or living space). This can be contrasted with mutualism, a type of symbiosis. Competition between members of the same species is called intraspecific competition. If a tree species in a dense forest grows taller than surrounding tree species, it is able to absorb more of the incoming sunlight. However, less sunlight is then available for the trees that are shaded by the taller tree, thus interspecific competition. Leopards and lions can also be in interspecific competition, since both species feed on the same prey, and can be negatively impacted by the presence of the other because they will have less food. Competition is only one of many interacting biotic and abiotic factors that affect community structure. Moreover, competition is not always a straightforward, direct, interaction. Interspecific competition may occur when individuals of two separate species share a limiting resource in the same area. If the resource cannot support both populations, then lowered fecundity, growth, or survival may result in at least one species. Interspecific competition has the potential to alter populations, communities and the evolution of interacting species. On an individual organism level, competition can occur as interference or exploitative competition. Types. All of the types described here can also apply to intraspecific competition, that is, competition among individuals within a species. Also, any specific example of interspecific competition can be described in terms of both a mechanism (e.g., resource or interference) and an outcome (symmetric or asymmetric). Based on mechanism. Exploitative competition, also referred to as resource competition, is a form of competition in which one species consumes and either reduces or more efficiently uses a shared limiting resource and therefore depletes the availability of the resource for the other species. Thus, it is an indirect interaction because the competing species interact via a shared resource. Interference competition is a form of competition in which individuals of one species interacts directly with individuals of another species via antagonistic displays or more aggressive behavior. In a review and synthesis of experimental evidence regarding interspecific competition, Schoener described six specific types of mechanisms by which competition occurs, including consumptive, preemptive, overgrowth, chemical, territorial, and encounter. Consumption competition is always resource competition, but the others cannot always be regarded as exclusively exploitative or interference. Separating the effect of resource use from that of interference is not easy. A good example of exploitative competition is found in aphid species competing over the sap in plant phloem. Each aphid species that feeds on host plant sap uses some of the resource, leaving less for competing species. In one study, "Fordinae geoica" was observed to out-compete "F. formicaria" to the extent that the latter species exhibited a reduction in survival by 84%. Another example is the one of competition for calling space in amphibians, where the calling activity of a species prevents the other one from calling in an area as wide as it would in allopatry. 
A last example is driving of bisexual rock lizards of genus "Darevskia" from their natural habitats by a daughter unisexual form; interference competition can be ruled out in this case, because parthenogenetic forms of the lizards never demonstrate aggressive behavior. This type of competition can also be observed in forests where large trees dominate the canopy and thus allow little light to reach smaller competitors living below. These interactions have important implications for the population dynamics and distribution of both species. Based on outcome. Scramble and contest competition refer to the relative success of competitors. Scramble competition is said to occur when each competitor is equal suppressed, either through reduction in survival or birth rates. Contest competition is said to occur when one or a few competitors are unaffected by competition, but all others suffer greatly, either through reduction in survival or birth rates. Sometimes these types of competition are referred to as symmetric (scramble) vs. asymmetric (contest) competition. Scramble and contest competition are two ends of a spectrum, of completely equal or completely unequal effects. Apparent competition. Apparent competition is actually an example of predation that alters the relative abundances of prey on the same trophic level. It occurs when two or more species in a habitat affect shared natural enemies in a higher trophic level. If two species share a common predator, for example, apparent competition can exist between the two prey items in which the presence of each prey species increases the abundance of the shared enemy, and thereby suppresses one or both prey species. This mechanism gets its name from experiments in which one prey species is removed and the second prey species increases in abundance. Investigators sometimes mistakenly attribute the increase in abundance in the second species as evidence for resource competition between prey species. It is "apparently" competition, but is in fact due to a shared predator, parasitoid, parasite, or pathogen. Notably, species competing for resources may often also share predators in nature. Interactions via resource competition and shared predation may thus often influence one another, thus making it difficult to study and predict their outcome by only studying one of them. Consequences. Many studies, including those cited previously, have shown major impacts on both individuals and populations from interspecific competition. Documentation of these impacts has been found in species from every major branch of organism. The effects of interspecific competition can also reach communities and can even influence the evolution of species as they adapt to avoid competition. This evolution may result in the exclusion of a species in the habitat, niche separation, and local extinction. The changes of these species over time can also change communities as other species must adapt. Competitive exclusion. The competitive exclusion principle, also called "Gause's law" which arose from mathematical analysis and simple competition models states that two species that use the same limiting resource in the same way in the same space and time cannot coexist and must diverge from each other over time in order for the two species to coexist. One species will often exhibit an advantage in resource use. This superior competitor will out-compete the other with more efficient use of the limiting resource. 
As a result, the inferior competitor will suffer a decline in population over time. It will be excluded from the area and replaced by the superior competitor. A well-documented example of competitive exclusion was observed to occur between Dolly Varden charr (Trout)("Salvelinus malma") and white spotted char (Trout)("S. leucomaenis") in Japan. Both of these species were morphologically similar but the former species was found primarily at higher elevations than the latter. Although there was a zone of overlap, each species excluded the other from its dominant region by becoming better adapted to its habitat over time. In some such cases, each species gets displaced into an exclusive segment of the original habitat. Because each species suffers from competition, natural selection favors the avoidance of competition in such a way. Niche differentiation. Niche differentiation is a process by which competitive exclusion leads to differences in resource use. In the previous example, niche differentiation resulted in spatial displacement. In other cases it may result in other changes that also avoid competition. If competition avoidance is achievable, each species will occupy an edge of the niche and will become more specialized to that area thus minimizing competition. This phenomenon often results in the separation of species over time as they become more specialized to their edge of the niche, called niche differentiation. The species do not have to be in separate habitats however to avoid niche overlap. Some species adapt regionally to utilizing different resources than they ordinarily would in order to avoid competition. There have been several well-documented cases in birds where species that are very similar change their habitat use where they overlap. For example, they may consume different food resources or use different nesting habitat or materials. On the Galapagos Islands, finch species have been observed to change dietary specializations in just a few generations in order to utilize limited resources and minimize competition. In some cases, third party species interfere to the detriment or benefit of the competing species. In a laboratory study, coexistence between two competing bacterial species was mediated by phage parasites. This type of interaction actually helped to maintain diversity in bacterial communities and has far reaching implications in medical research as well as ecology. Similar effects have been documented for many communities as a result of the action of a keystone predator that preys on a competitively superior species. Local extinction. Although local extinction of one or more competitors has been less documented than niche separation or competitive exclusion, it does occur. In an experiment involving zooplankton in artificial rock pools, local extinction rates were significantly higher in areas of interspecific competition. In these cases, therefore, the negative effects are not only at the population level but also species richness of communities. Impacts on communities. As mentioned previously, interspecific competition has great impact on community composition and structure. Niche separation of species, local extinction and competitive exclusion are only some of the possible effects. In addition to these, interspecific competition can be the source of a cascade of effects that build on each other. An example of such an effect is the introduction of an invasive species to the United States, purple-loosestrife. 
This plant when introduced to wetland communities often outcompetes much of the native flora and decreases species richness, food and shelter to many other species at higher trophic levels. In this way, one species can influence the populations of many other species as well as through a myriad of other interactions. Because of the complicated web of interactions that make up every ecosystem and habitat, the results of interspecific competition are complex and site-specific. Competitive Lotka–Volterra model. The impacts of interspecific competition on populations have been formalized in a mathematical model called the Competitive Lotka–Volterra equations, which creates a theoretical prediction of interactions. It combines the effects of each species on the other. These effects are calculated separately for the first and second population respectively: formula_0 formula_1 In these formulae, N is the population size, t is time, K is the carrying capacity, r is the intrinsic rate of increase and α and β are the relative competition coefficients. The results show the effect that the other species has on the species being calculated. The results can be graphed to show a trend and possible prediction for the future of the species. One problem with this model is that certain assumptions must be made for the calculation to work. These include the lack of migration and constancy of the carrying capacities and competition coefficients of both species. The complex nature of ecology determines that these assumptions are rarely true in the field but the model provides a basis for improved understanding of these important concepts. An equivalent formulation of these models is: formula_2 formula_3 In these formulae, formula_4 is the effect that an individual of species 1 has on its own population growth rate. Similarly, formula_5 is the effect that an individual of species 2 has on the population growth rate of species 1. One can also read this as the effect on species 1 of species 2. In comparing this formulation to the one above, we note that formula_6, and formula_7. Coexistence between competitors occurs when formula_8 and formula_9. We can translate this as coexistence occurs when the effect of each species on itself is greater the effect of the competitor. There are other mathematical representations that model species competition, such as using non-polynomial functions. Interspecific competition in macroevolution. Interspecific competition is a major factor in macroevolution. Darwin assumed that interspecific competition limits the number of species on Earth, as formulated in his wedge metaphor: "Nature may be compared to a surface covered with ten-thousand sharp wedges ... representing different species, all packed closely together and driven in by incessant blows, . . . sometimes a wedge of one form and sometimes another being struck; the one driven deeply in forcing out others; with the jar and shock often transmitted very far to other wedges in many lines of direction." (From "Natural Selection" - the "big book" from which Darwin abstracted the "Origin"). The question whether interspecific competition limits global biodiversity is disputed today, but analytical studies of the global Phanerozoic fossil record are in accordance with the existence of global (although not constant) carrying capacities for marine biodiversity. 
Interspecific competition is also the basis for Van Valen's Red Queen hypothesis, and it may underlie the positive correlation between origination and extinction rates that is seen in almost all major taxa. In the previous examples, the macroevolutionary role of interspecific competition is that of a limiting factor of biodiversity, but interspecific competition also promotes niche differentiation and thus speciation and diversification. The impact of interspecific competition may therefore change during phases of diversity build-up, from an initial phase where positive feedback mechanisms dominate to a later phase when niche-peremption limits further increase in the number of species; a possible example for this situation is the re-diversification of marine faunas after the end-Permian mass extinction event. References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "{dN_1\\over dt}=r_1 N_1 {K_1-N_1-\\alpha N_2\\over K_1}" }, { "math_id": 1, "text": "{dN_2\\over dt}=r_2 N_2 {K_2-N_2-\\beta N_1\\over K_2}" }, { "math_id": 2, "text": "{dN_1\\over dt}=r_1 N_1 \\left(1-\\alpha_{11}N_1-\\alpha_{12} N_2\\right)" }, { "math_id": 3, "text": "{dN_2\\over dt}=r_2 N_2 \\left(1-\\alpha_{21}N_1-\\alpha_{22} N_2\\right)" }, { "math_id": 4, "text": "\\alpha_{11}" }, { "math_id": 5, "text": "\\alpha_{12}" }, { "math_id": 6, "text": "\\alpha_{11} = 1/K_1,~ \\alpha_{22}=1/K_2" }, { "math_id": 7, "text": "\\alpha_{12}=\\alpha/K_1" }, { "math_id": 8, "text": "\\alpha_{11} > \\alpha_{12}" }, { "math_id": 9, "text": "\\alpha_{22} > \\alpha_{21}" } ]
https://en.wikipedia.org/wiki?curid=6920233
69209212
Forouhi–Bloomer model
Popular optical dispersion relation The Forouhi–Bloomer model is a mathematical formula for the frequency dependence of the complex-valued refractive index. The model can be used to fit the refractive index of amorphous and crystalline semiconductor and dielectric materials at energies near and greater than their optical band gap. The dispersion relation bears the names of Rahim Forouhi and Iris Bloomer, who created the model and interpreted the physical significance of its parameters. The model is aphysical due to its incorrect asymptotic behavior and non-Hermitian character. These shortcomings inspired modified versions of the model as well as development of the Tauc–Lorentz model. Mathematical formulation. The complex refractive index is given by formula_0 where formula_1 is the real component of the refractive index, formula_2 is the extinction coefficient, and formula_3 is the photon energy (related to the angular frequency by formula_4). The real and imaginary components of the refractive index are related to one another through the Kramers-Kronig relations. Forouhi and Bloomer derived a formula for formula_5 for amorphous materials. The formula and complementary Kramers–Kronig integral are given by formula_6 formula_7 where formula_8 is the band gap energy, formula_9, formula_10, and formula_11 are fitting parameters related to the electronic structure of the material, formula_12 is the value of the refractive index in the limit of infinite photon energy, formula_13 denotes the Cauchy principal value, and formula_14. The parameters formula_9, formula_10, and formula_11 are subject to the constraints formula_15, formula_16, formula_17, and formula_18. Evaluating the Kramers-Kronig integral yields formula_19 where formula_20, formula_21, and formula_22. The Forouhi–Bloomer model for crystalline materials is similar to that of amorphous materials. The formulas for formula_23 and formula_5 are given by formula_24 formula_25 where all variables are defined as in the amorphous case, but take unique values for each value of the summation index formula_26. Thus, the model for amorphous materials is a special case of the model for crystalline materials in which the sum runs over a single term only. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " \\tilde{n}(E) = n(E) + i \\kappa(E)" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "\\kappa" }, { "math_id": 3, "text": "E" }, { "math_id": 4, "text": "E=\\hbar\\omega" }, { "math_id": 5, "text": "\\kappa(E)" }, { "math_id": 6, "text": " \\kappa(E) = \\frac{A \\left( E - E_{g} \\right)^{2}}{E^{2} - B E + C} " }, { "math_id": 7, "text": " n(E) = n_{\\infty} + \\frac{1}{\\pi} \\mathcal{P} \\int_{-\\infty}^{\\infty} \\frac{\\kappa(\\xi) - \\kappa_{\\infty}}{\\xi - E} d\\xi " }, { "math_id": 8, "text": "E_{g}" }, { "math_id": 9, "text": "A" }, { "math_id": 10, "text": "B" }, { "math_id": 11, "text": "C" }, { "math_id": 12, "text": "n_{\\infty}" }, { "math_id": 13, "text": "\\mathcal{P}" }, { "math_id": 14, "text": "\\kappa_{\\infty} = \\lim_{E \\rightarrow \\infty} \\kappa(E) = A" }, { "math_id": 15, "text": "A>0" }, { "math_id": 16, "text": "B>0" }, { "math_id": 17, "text": "C>0" }, { "math_id": 18, "text": "4C - B^{2} > 0" }, { "math_id": 19, "text": " n(E) = n_{\\infty} + \\frac{B_{0} E + C_{0}}{E^{2} - B E + C} " }, { "math_id": 20, "text": " Q = \\frac{1}{2} \\sqrt{4 C - B^{2}} " }, { "math_id": 21, "text": " B_{0} = \\frac{A}{Q} \\left( - \\frac{1}{2} B^{2} + E_{g} B - E_{g}^{2} + C \\right) " }, { "math_id": 22, "text": " C_{0} = \\frac{A}{Q} \\left( \\frac{1}{2} B \\left( E_{g}^{2} + C \\right) - 2 E_{g} C \\right) " }, { "math_id": 23, "text": "n(E)" }, { "math_id": 24, "text": " n(E) = n_{\\infty} + \\sum_{j} \\frac{B_{0,j} E + C_{0,j}}{E^{2} - B_{j} E + C_{j}} " }, { "math_id": 25, "text": " \\kappa(E) = \\left( E - E_{g} \\right)^{2} \\sum_{j} \\frac{A_{j}}{E^{2} - B_{j} E + C_{j}} " }, { "math_id": 26, "text": "j" } ]
https://en.wikipedia.org/wiki?curid=69209212
692099
Body surface area
Drug calculating formula In physiology and medicine, the body surface area (BSA) is the measured or calculated surface area of a human body. For many clinical purposes, BSA is a better indicator of metabolic mass than body weight because it is less affected by abnormal adipose mass. Nevertheless, there have been several important critiques of the use of BSA in determining the dosage of medications with a narrow therapeutic index, such as chemotherapy. Typically there is a 4–10 fold variation in drug clearance between individuals due to differing the activity of drug elimination processes related to genetic and environmental factors. This can lead to significant overdosing and underdosing (and increased risk of disease recurrence). It is also thought to be a distorting factor in Phase I and II trials that may result in potentially helpful medications being prematurely rejected. The trend to personalized medicine is one approach to counter this weakness. Uses. Examples of uses of the BSA: There is some evidence that BSA values are less accurate at extremes of height and weight, where Body Mass Index may be a better estimate (for hemodynamic parameters). Calculation. Various calculations have been published to arrive at the BSA without direct measurement. In the following formulae, BSA is expressed in m2, weight (or, more properly, mass) W in kg, and height H in cm. The most widely used is the Du Bois formula, which has been shown to be equally as effective in estimating body fat in obese and non-obese patients, something the Body mass index fails to do. formula_0 The Mosteller formula is also commonly used, and is mathematically simpler: formula_1 Other formulas for BSA in m2 include: For any formula, the units should match. Mosteller pointed out that his formula holds only if the density is treated as a constant for all humans. Lipscombe, following Mosteller's reasoning, observed that the formulas obtained by Fujimoto, Shuter and Aslani, Takahira, and Lipscombe are suggestive of formula_2, which is dimensionally correct for the case of constant density. It equals formula_3. A weight-based formula that does not include a square root (making it easier to use) was proposed by Costeff and recently validated for the pediatric age group. It is [4W (kg) + 7]/[90 + W (kg)]. Average values. Average BSA for children of various ages, for men, and for women, can be estimated using statistical survey data and a BSA formula: The estimations in the above tables are based weight and height data from the U.S. NCHS National Health and Nutrition Examination Survey (2011-2014). There was an average BSA of 1.73 m2 for 3,000 cancer patients from 1990 to 1998 in a European Organisation for Research and Treatment of Cancer (EORTC) database. During 2005 there was an average BSA of 1.79 m2 for 3,613 adult cancer patients in the UK. Among them the average BSA for men was 1.91 m2 and for women was 1.71 m2. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "{BSA}=0.007184 \\times W^{0.425} \\times H^{0.725} " }, { "math_id": 1, "text": "{BSA }= \\frac{\\sqrt{W \\times H}}{60} \n= 0.016667 \\times W^{0.5} \\times H^{0.5} " }, { "math_id": 2, "text": " {8/900} \\times W^{4/9} \\times H^{2/3} " }, { "math_id": 3, "text": " (2^3/3^2) \\times (W^{2/3} H) ^ {2/3} /100 " } ]
https://en.wikipedia.org/wiki?curid=692099
69210595
1 Samuel 7
First Book of Samuel chapter 1 Samuel 7 is the seventh chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter records a victory of Israel under the leadership of Samuel against the Philistines as part of the "Ark Narrative" (1 Samuel 4:1–7:1) within a section concerning the life of Samuel (1 Samuel 1:1–7:17), and also as part of a section comprising 1 Samuel 7–15 which records the rise of the monarchy in Israel and the account of the first years of King Saul. Text. This chapter was originally written in the Hebrew language. It is divided into 17 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, including 4Q51 (4QSama; 100–50 BCE) with extant verse 1. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. This chapter provides the background leading to the rise of the monarchy in chapters 8–12 by indicating the threat to Israel, here from the Philistines (cf. 9:16), and later also from other nations (11:1–15), as well as showing that theocracy, based on Israel's faithfulness to the covenant with God, brought success against the enemies, but later when Israel became unfaithful to God, a monarchy became a necessity. The Ark at Kiriath Jearim (7:1–2). At the request of the people of Beth-shemesh (1 Samuel 6:21), men of Kiriath Jearim moved the Ark of the Covenant from Beth-shemesh to their city and installed it at the house of Abinadab. The people set aside Eleazar, son of Abinadab, as the guard (Hebrew: "shmr") to the Ark, a term which may mean a priestly task of liturgical services to take care of it as a sacred object or the actual task of keeping people away from it (preventing curious peeking as in Beth-shemesh, that caused plagues there). Both the names Abinadab and Eleazar often appear in levitical lists. Eleazar seemed to perform his duties well as there was no reported casualties during the twenty years of the Ark being there. The Ark stayed in Kiriath Jearim until David moved it to Jerusalem (2 Samuel 6). "Then the people of Kiriath Jearim came and took the ark of the Lord; they brought it to the house of Abinadab located on the hill. They consecrated Eleazar his son to guard the ark of the Lord." "And it came to pass, while the ark abode in Kirjathjearim, that the time was long; for it was twenty years: and all the house of Israel lamented after the Lord." Rematch with the Philistines (7:3–14). Although the Philistines had been forced to return the ark, they were still a threat, so Samuel surfaced to lead his people to fight, first by addressing the issue in verse 3, then by assembling the army in verse 5. The battle in verses 7–11 'bears the marks of the holy war tradition', such as in Joshua 10: These elements emphasize the basic claim that 'victory belongs to YHWH alone'. 
Using a formula similar to those in the book of Judges (cf. Judges 4:23–24), the section concludes by stating that the Philistines were completely subjugated, with Israel repossessing towns and territories formerly lost to the Philistines (near Ekron and Gath), restoring their position to what it was before the earlier battle at Ebenezer (chapter 4), a place with significant meaning, 'Stone of Help', reminding Israel that 'thus far the LORD has helped us'. The Israelites even made peace with the Amorites. "And Samuel said to all the house of Israel, "If you are returning to the LORD with all your heart, then put away the foreign gods and the Ashtaroth from among you and direct your heart to the LORD and serve him only, and he will deliver you out of the hand of the Philistines."" Verse 3. Samuel's address contains Deuteronomistic phrases, such as 'returning to the LORD with all your heart', and many expressions found in the book of Judges (cf. Judges 10:6–16 for 'remove foreign gods', 'serve him only', 'the Baals and Astartes'). "And Samuel said, "Gather all Israel to Mizpah, and I will pray to the Lord for you."" "Then Samuel took a stone and set it between Mizpah and Shen. And he called its name Ebenezer saying, "Thus far the Lord has helped us."" "So the Philistines were subdued, and they came no more into the coast of Israel: and the hand of the Lord was against the Philistines all the days of Samuel." Verse 13. The contrast displays the ineffectiveness of Saul's reign against the Philistines, but moreover shows that the people of Israel demanded a king at a time of military dominance over the Philistines under Samuel, so there was no valid reason to replace theocracy with monarchy. Samuel judges Israel (7:15–17). Samuel the prophet led Israel in the style of the preceding "judges", who saved the people from their enemies (Judges 2:18), while also fulfilling a narrower judicial role (verses 15–17). By affirming the effectiveness of a charismatic, non-royal leadership, the account establishes the inappropriateness of Israel's wish, in subsequent events, to have a king. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" /> Sources. Commentaries on Samuel. <templatestyles src="Refbegin/styles.css" /> General. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=69210595
69215219
Approximate Membership Query Filter
Approximate Membership Query Filters (hereafter, AMQ filters) comprise a group of space-efficient probabilistic data structures that support approximate membership queries. An approximate membership query answers whether an element is in a set or not, with a false positive rate of formula_0. Bloom filters are the best-known AMQ filters, but there are other AMQ filters that support additional operations or have different space requirements. AMQ filters have numerous applications, mainly in distributed systems and databases. There, they are often used to avoid network requests or I/O operations that result from requesting elements that do not exist. Approximate membership query problem. The approximate membership query problem is to store information about a set of elements S in a space-efficient way. The goal is to answer queries about whether an element x is in the set S or not, while constraining false positives to a maximal probability formula_0. All AMQ filters support this lookup operation. Dynamic AMQ filters allow insertions at any time, whereas static AMQ filters must be rebuilt after inserting additional elements. Some AMQ filters support additional operations such as deleting elements or merging two filters. Lookup. An AMQ filter lookup will determine whether an element is definitely not in the set or probably in the set. In other words, if the filter represents a set S and we are interested in a value s, then the lookup function applied to s behaves as follows: A false positive is a lookup of an element that is not part of the set, but where the lookup returns true. The probability of this happening is the false positive rate formula_0. False negatives (the lookup returns false although the element is part of the set) are not allowed for AMQ filters. Insertion. After an element is inserted, the lookup for this element must return true. Dynamic AMQ filters support inserting elements one at a time without rebuilding the data structure. Other AMQ filters have to be rebuilt after each insertion. Those are called static AMQ filters. False positive rate vs. space. There is a tradeoff between storage size and the false positive rate formula_0. Increasing the storage space reduces the false positive rate. The theoretical lower bound is formula_4 bits for each element. Dynamic AMQ filters cannot reach this lower bound; they need at least formula_5 bits for formula_6 insertions. Different AMQ filters have different ranges of false positive rates and space requirements, so choosing the best AMQ filter depends on the application. Data structures. There are different ways to solve the approximate membership query problem. The best-known data structure is the Bloom filter, but there are other data structures that perform better for some false positive rates and space requirements, support additional operations, or have other insertion and lookup times. Below we describe some well-known AMQ filters. Bloom filter. A Bloom filter is a bit array of formula_7 bits with formula_8 hash functions. Each hash function maps an element to one of the formula_7 positions in the array. In the beginning, all bits of the array are set to zero. To insert an element, all hash functions are calculated and all corresponding bits in the array are set to one. To look up an element, all formula_8 hash functions are calculated; if all corresponding bits are set, codice_0 is returned. To reduce the false positive rate, the number of hash functions and formula_7 can be increased.
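The following is a minimal, illustrative sketch of the insert and lookup logic just described; the bit-array size, the number of hash functions, and the use of salted SHA-256 hashes are choices made only for this example and do not correspond to any particular library implementation.

<syntaxhighlight lang="python">
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array with k salted hash functions."""

    def __init__(self, m=1024, k=3):
        self.m = m
        self.k = k
        self.bits = [False] * m

    def _positions(self, item):
        # Derive k array positions by hashing the item with k different salts.
        for salt in range(self.k):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def insert(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def lookup(self, item):
        # True  -> the item is *probably* in the set (false positives possible).
        # False -> the item is *definitely not* in the set (no false negatives).
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.insert("apple")
print(bf.lookup("apple"))   # True
print(bf.lookup("banana"))  # almost certainly False at this low load
</syntaxhighlight>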
Quotient filter. The idea of quotient filters is to hash an element and to split its fingerprint into the formula_9 least significant bits, called the remainder formula_10, and the most significant bits, called the quotient formula_11. The quotient determines where in the hash table the remainder is stored. Three additional bits for every slot in the hash table are used to resolve soft collisions (same quotient but different remainders). The space used by quotient filters is comparable to that of Bloom filters, but quotient filters can be merged without affecting their false positive rate. Cuckoo filter. Cuckoo filters are based on cuckoo hashing, but only fingerprints of the elements are stored in the hash table. Each element has two possible locations. The second location is calculated from the first location and the fingerprint of the element. This is necessary to enable moving already inserted elements if both possible slots for an element are full. After reaching a load threshold, the insertion speed of a cuckoo filter degrades; it is possible that an insertion fails, and the table must be rehashed. Bloom filters, in contrast, always have constant insertion time, but as the load factor increases, their false positive rate increases as well. A cuckoo filter supports deleting elements in the case where we know for certain that the element was in fact previously inserted. This is an advantage over Bloom filters and quotient filters which do not support this operation. Xor filter. Xor filters are static AMQ filters that are based on a Bloomier filter and use the idea of perfect hash tables. Similar to cuckoo filters, they store fingerprints of the elements in a hash table. The idea is that a query for an element formula_12 returns true if the xor of the three table entries at the positions given by the hash functions formula_13 equals the fingerprint of formula_12. While building the hash table, each element is assigned one of its three slots in such a way that no other element is assigned to this slot. After all elements are assigned, the value of each element's slot is set to the xor of the element's fingerprint and the values of its two other (unassigned) slots. This construction algorithm can fail, and no dynamic insertions are possible without rebuilding the hash table. The hash table can be constructed using only formula_14 bits per element. The disadvantage of this filter is that the data structure has to be rebuilt if additional elements are added. Xor filters are therefore used in applications where no elements have to be added afterwards and space is of importance.
Elements are inserted into the in-memory component until it reaches its maximal size; then the in-memory component is merged with the disk components. To speed up lookups, many LSM trees implement AMQ filters such as Bloom filters or quotient filters. These filters approximate, for each component, which elements are stored in it. LSM trees are used in databases such as Apache AsterixDB, Bigtable, HBase, LevelDB, and SQLite4. Networks offer many applications for AMQ filters. They are used to approximate a set of data that is located on different servers. In many cases those AMQ filters can be seen as immutable: even if the set on the remote server changes, the AMQ filter is often not updated right away, and some false positives are tolerated instead. One example of this application is web cache sharing. If a proxy has a cache miss, it wants to determine whether another proxy has the requested data; therefore, the proxy must know, or at least approximate, whether another proxy holds the requested web page. This can be achieved by periodically broadcasting a static AMQ filter of the URLs of the web pages a proxy has cached, instead of broadcasting URL lists. In this setting, false negatives can occur if the cache changed between periodic updates. The same concept can be applied to P2P networks: AMQ filters can be used to approximate what is stored at each node of the network. The filter can be filled with IDs or keywords of the actual documents of the nodes. False positives only lead to some unnecessary requests. AMQ filters have further applications in P2P networks, for example finding the difference or intersection between sets stored on different nodes. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\epsilon" }, { "math_id": 1, "text": "s \\in S" }, { "math_id": 2, "text": "s \\notin S" }, { "math_id": 3, "text": "1-\\epsilon" }, { "math_id": 4, "text": "log_2 (1/\\epsilon)" }, { "math_id": 5, "text": "n \\log_2 (1/\\epsilon)(1+ o(1))" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "m" }, { "math_id": 8, "text": "k" }, { "math_id": 9, "text": "r" }, { "math_id": 10, "text": "d_R" }, { "math_id": 11, "text": "d_Q" }, { "math_id": 12, "text": "x" }, { "math_id": 13, "text": "h_0, h_1, h_2" }, { "math_id": 14, "text": "1.23 \\log_2 (1/\\epsilon)" } ]
https://en.wikipedia.org/wiki?curid=69215219
69215392
ITU-R BT.1886
ITU-R BT.1886 is the reference EOTF of SDR-TV. It is a gamma 2.4 transfer function (a power law with an exponent of 2.4), considered a satisfactory approximation of the response characteristic of a CRT to its electrical input signal. It was standardized by the ITU in March 2011. It is used for Rec. 709 (HD-TV) and Rec. 2020 (UHD-TV). Definition. The BT.1886 EOTF is as follows: formula_0 where formula_1 is the screen luminance in cd/m2, formula_2 is the input video signal level (normalized to the range formula_3), formula_4 is the exponent, equal to 2.4, and formula_5 and formula_6, with formula_7 the screen luminance for white and formula_8 the screen luminance for black. According to ITU, for a better match, formula_8 can be set to 0.1 for moderate black level settings (e.g. 0.1 cd/m2) or to 0 for lower black levels (e.g. 0.01 cd/m2). An alternative EOTF has also been provided by ITU for cases in which a more precise match of CRT characteristics is required. References. <templatestyles src="Reflist/styles.css" />
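A minimal sketch of this EOTF follows; the exponent 2.4 is taken from the definition above, while the particular white and black luminance settings used as defaults (100 cd/m2 and 0.1 cd/m2) are illustrative example values rather than values mandated by the standard.

<syntaxhighlight lang="python">
def bt1886_eotf(V, L_W=100.0, L_B=0.1, gamma=2.4):
    """Map a normalized video signal V in [0, 1] to screen luminance in cd/m^2."""
    a = (L_W ** (1 / gamma) - L_B ** (1 / gamma)) ** gamma
    b = L_B ** (1 / gamma) / (L_W ** (1 / gamma) - L_B ** (1 / gamma))
    return a * max(V + b, 0.0) ** gamma

print(bt1886_eotf(0.0))  # ~0.1, the black level L_B
print(bt1886_eotf(1.0))  # ~100, the white level L_W
print(bt1886_eotf(0.5))  # mid-grey luminance under this example calibration
</syntaxhighlight>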
[ { "math_id": 0, "text": "L = a (\\max[(V+b),0])^{\\gamma}\n" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "V" }, { "math_id": 3, "text": "\\left[0, 1 \\right]" }, { "math_id": 4, "text": "\\gamma" }, { "math_id": 5, "text": "a = ({L_W}^{1/\\gamma} - {L_B}^{1/\\gamma})^{\\gamma}" }, { "math_id": 6, "text": "b = \\frac {{L_B}^{1/\\gamma}} {{L_W}^{1/\\gamma} - {L_B}^{1/\\gamma}}" }, { "math_id": 7, "text": "L_W" }, { "math_id": 8, "text": "L_B" } ]
https://en.wikipedia.org/wiki?curid=69215392
69216431
Gödel Lecture
Award in mathematical logic The Gödel Lecture is an honor in mathematical logic given by the Association for Symbolic Logic, associated with an annual lecture at the association's general meeting. The award is named after Kurt Gödel and has been given annually since 1990. Award winners. The list of award winners and lecture titles is maintained online by the Association for Symbolic Logic. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\Omega" }, { "math_id": 1, "text": "\\mathbb{R}_{an, exp}" } ]
https://en.wikipedia.org/wiki?curid=69216431
6921893
Tight closure
In mathematics, in the area of commutative algebra, tight closure is an operation defined on ideals in positive characteristic. It was introduced by Melvin Hochster and Craig Huneke (1988, 1990). Let formula_0 be a commutative noetherian ring containing a field of characteristic formula_1. Hence formula_2 is a prime number. Let formula_3 be an ideal of formula_0. The tight closure of formula_3, denoted by formula_4, is another ideal of formula_0 containing formula_3. The ideal formula_4 is defined as follows. formula_5 if and only if there exists a formula_6, where formula_7 is not contained in any minimal prime ideal of formula_0, such that formula_8 for all formula_9. If formula_0 is reduced, then one can instead consider all formula_10. Here formula_11 is used to denote the ideal of formula_0 generated by the formula_12'th powers of elements of formula_3, called the formula_13th Frobenius power of formula_3. An ideal is called tightly closed if formula_14. A ring in which all ideals are tightly closed is called weakly formula_15-regular (for Frobenius regular). A major open question about tight closure was whether the operation of tight closure commutes with localization; this motivated the additional notion of formula_15-regular, which says that all ideals of the ring are still tightly closed in localizations of the ring. A counterexample to the localization property of tight closure was eventually found. However, it remains an open question whether every weakly formula_15-regular ring is formula_15-regular. That is, if every ideal in a ring is tightly closed, is it true that every ideal in every localization of that ring is also tightly closed?
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "p > 0" }, { "math_id": 2, "text": "p" }, { "math_id": 3, "text": "I" }, { "math_id": 4, "text": "I^*" }, { "math_id": 5, "text": "z \\in I^*" }, { "math_id": 6, "text": "c \\in R" }, { "math_id": 7, "text": "c" }, { "math_id": 8, "text": "c z^{p^e} \\in I^{[p^e]}" }, { "math_id": 9, "text": "e \\gg 0" }, { "math_id": 10, "text": "e > 0" }, { "math_id": 11, "text": "I^{[p^e]}" }, { "math_id": 12, "text": "p^e" }, { "math_id": 13, "text": "e" }, { "math_id": 14, "text": "I = I^*" }, { "math_id": 15, "text": "F" } ]
https://en.wikipedia.org/wiki?curid=6921893
69226638
Fair allocation of items and money
Fair allocation of items and money is a class of fair item allocation problems in which, during the allocation process, it is possible to give or take money from some of the participants. Without money, it may be impossible to allocate indivisible items fairly. For example, if there is one item and two people, and the item must be given entirely to one of them, the allocation will be unfair towards the other one. Monetary payments make it possible to attain fairness, as explained below. Two agents and one item. With two agents and one item, it is possible to attain fairness using the following simple algorithm (which is a variant of cut and choose): The algorithm always yields an envy-free allocation. If the agents have quasilinear utilities, that is, their utility is the value of items plus the amount of money that they have, then the allocation is also proportional. If George thinks that Alice's price is low (he is willing to pay more than "p"), then he takes the item and pays "p", and his utility is positive, so he does not envy Alice. Alice, too, does not envy George since his utility - in her eyes - is 0. Similarly, if George thinks that Alice's price is high (he is willing to pay at most "p"), then he leaves the item to Alice and does not envy, since Alice's utility in his eyes is negative. The paid money "p" can later be divided equally between the players, since an equal monetary transfer does not affect the relative utilities. Then, effectively, the buying agent pays "p"/2 to the selling agent. The total utility of each agent is at least 1/2 of his/her utility for the item. If the agents have different entitlements, then the paid money "p" should be divided between the partners in proportion to their entitlements. A short illustrative sketch of this procedure appears below, after the general setting is described. There are various works extending this simple idea to more than two players and more complex settings. The main fairness criterion in these works is envy-freeness. In addition, some works consider a setting in which a benevolent third party is willing to subsidize the allocation, but wants to "minimize" the amount of subsidy subject to envy-freeness. This problem is called the minimum-subsidy envy-free allocation. Unit-demand agents. Unit-demand agents are interested in at most a single item. Rental harmony. A special case of this setting is the division of rooms in an apartment between tenants. It is characterized by three requirements: (a) the number of agents equals the number of items, (b) each agent must get exactly one item (room), (c) the total amount of money paid by the agents must equal a fixed constant, which represents the total apartment rent. This is known as the Rental Harmony problem. More general settings. In general, in the economics literature, it is common to assume that each agent has a utility function on bundles (a bundle is a pair of an object and a certain amount of money). The utility function should be continuous and increasing in money. It does not have to be linear in money, but does have to be "Archimedean", i.e., there exists some value "V" such that, for every two objects "j" and "k", the utility of object "j" plus "V" should be larger than the utility of object "k" (alternatively, the utility of getting object "j" for free is larger than the utility of getting object "k" and paying "V"). Quasilinear utility is a special case of Archimedean utility, in which "V" is the largest value-difference (for the same agent) between two objects.
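Returning to the two-agent, one-item procedure above: its step list is not reproduced in this text, so the following minimal sketch reconstructs it from the surrounding description, under the additional illustrative assumptions that utilities are quasilinear and that Alice's named price equals her own value for the item.

<syntaxhighlight lang="python">
def divide_one_item(value_alice, value_george):
    """Two agents, one item, quasilinear utilities (illustrative sketch).

    Alice names a price p (assumed here to be her true value). George takes
    the item if he is willing to pay more than p; otherwise Alice keeps it at
    price p. The payment p is then split equally, so the buyer effectively
    pays p/2 to the other agent.
    Returns (owner, utility of Alice, utility of George).
    """
    p = value_alice
    if value_george > p:
        # George buys the item at price p; p is then split equally.
        return "George", p / 2, value_george - p / 2
    else:
        # Alice keeps the item at price p; p is then split equally.
        return "Alice", value_alice - p / 2, p / 2

# Each agent ends up with at least half of the value they assign to the item,
# so the outcome is proportional, and neither agent prefers the other's bundle.
print(divide_one_item(60, 100))  # ('George', 30.0, 70.0)
print(divide_one_item(80, 40))   # ('Alice', 40.0, 40.0)
</syntaxhighlight>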
Svensson first proved that, when all agents are Archimedean, an envy-free allocation exists and is Pareto-optimal. Demange, Gale and Sotomayor showed a natural ascending auction that achieves an envy-free allocation using monetary payments for unit demand agents. Maskin proved the existence of a Pareto-optimal envy-free allocation when the total money endowment is more than ("n-1")"V." The proofs use competitive equilibrium. Note that a subsidy of ("n"-1)"V" may be required: if all agents value a single object at "V" and the other objects at 0, then envy-freeness requires a subsidy of "V" for each agent who does not receive the object. Tadenuma and Thomson study several consistency properties of envy-free allocation rules. Aragones characterizes the minimum amount of subsidy required for envy-freeness. The allocation that attains this minimum subsidy is almost unique: there is only one way to combine objects with agents, and all agents are indifferent among all minimum-subsidy allocations. It coincides with the solution called the "money-Rawlsian solution" of Alkan, Demange and Gale. It can be found in polynomial time, by finding a maximum-weight matching and then finding shortest paths in a certain induced graph. Klijn presents another polynomial-time algorithm for the same setting. His algorithm uses the polytope of side-payments that make a given allocation envy-free: this polytope is nonempty iff the original allocation is Pareto-efficient. Connectivity of the undirected envy graph characterizes the extreme points of this polytope. This implies a method for finding extreme envy-free allocations. Additive agents. Additive agents may receive several objects, so the allocation problem becomes more complex - there are many more possible allocations. Knaster's auction. The first procedure for fair allocation of items and money was invented by Bronislaw Knaster and published by Hugo Steinhaus. This auction works as follows, for each item separately: The utility of every agent is at least 1/"n" of the value he attributes to the entire set of objects, so the allocation is proportional. Moreover, the allocation maximizes the sum of utilities, so it is Pareto efficient. Knaster's auction is not strategyproof. Some researchers analysed its performance when agents play strategically: Knaster's auction has been adapted to fair allocation of wireless channels. Raith's auction. Matthias G. Raith presented a variant on Knaster's auction, which he called "Adjusted Knaster". As in Knaster's auction, each item is given to the highest bidder. However, the payments are different. The payments are determined as follows: To illustrate the difference between Knaster's auction and Raith's auction, consider a setting with two items and two agents with the following values: In both auctions, George wins both items, but the payments are different: In experiments with human subjects, it was found that participants prefer the Raith's auction (Adjusted Knaster) to Divide-and-Choose and to Proportional Knaster (a variant in which each winner pays 1/n of the winning to each loser; in the above example, George pays 90 to Alice, and the net utilities are 90, 90). Compensation procedure. Haake, Raith and Su present the Compensation Procedure. Their procedure allows arbitrary constraints on bundles of items, as long as they are anonymous (do not differentiate between partners based on their identity). 
For example, there can be no constraint at all, or a constraint such as "each partner must receive at least a certain number of items", or "some items must be bundled together" (e.g. because they are land-plots that must remain connected), etc. The "items" can have either positive or negative utilities. There is a "qualification requirement" for a partner: the sum of his bids must be at least the total cost. The procedure works in the following steps. When there are many items and complex constraints, the initial step - finding a maxsum allocation - may be difficult to calculate without a computer. In this case, the Compensation Procedure may start with an arbitrary allocation. In that case, the procedure might conclude with an allocation that contains "envy-cycles". These cycles can be removed by moving bundles along the cycle, as in the envy-graph procedure. This strictly increases the total sum of utilities. Hence, after a bounded number of iterations, a maxsum allocation will be found, and the procedure can continue as above to create an envy-free allocation. The Compensation Procedure might charge some partners a negative payment (i.e., give the partners a positive amount of money). The authors say: "we do not preclude the possibility that an individual may end up being paid by the others to take a bundle of goods. In the context of fair division, we do not find this problematic at all. Indeed, if a group does not wish to exclude any of its members, then there is no reason why the group should not subsidize a member for receiving an undesired bundle. Moreover, the qualification requirement guarantees that subsidization is never a consequence of a player's insufficient valuation of the complete set of objects to be distributed". Minimum subsidy procedures. Some works assume that a benevolent third party is willing to subsidize the allocation, but wants to "minimize" the amount of subsidy subject to envy-freeness. This problem is called the minimum-subsidy envy-free allocation. Halpern and Shah study subsidy minimization in the general item-allocation setting. They consider two cases: Brustle, Dippel, Narayan, Suzuki and Vetta improve the upper bounds on the required subsidy: Caragiannis and Ioannidis study the computational problem of minimizing the subsidy: Note that an envy-free allocation with subsidy remains envy-free if a fixed amount is taken from every agent. Therefore, similar methods can be used to find allocations that are not subsidized: Additional procedures. Alkan, Demange and Gale showed that an envy-free allocation always exists when the amount of money is sufficiently large. This is true even when items may have negative valuations. Meertens, Potters and Reijnierse prove the existence of envy-free and Pareto-optimal allocations under very mild assumptions on the valuations (not necessarily quasilinear). Cavallo generalizes the traditional binary criteria of envy-freeness, proportionality, and efficiency (welfare) to measures of degree that range between 0 and 1. In the canonical fair division settings, under any allocatively-efficient mechanism the worst-case welfare rate is 0 and the worst-case disproportionality rate is 1; in other words, the worst-case results are as bad as possible. He looks for a mechanism that achieves high welfare, low envy, and low disproportionality in expectation across a spectrum of fair division settings. The VCG mechanism is not a satisfactory candidate, but the redistribution mechanism of Bailey and Cavallo is. Related problems. Envy-free pricing.
When selling objects to buyers, the sum of payments is not fixed in advance, and the goal is to "maximize" either the seller's revenue or the social welfare, subject to envy-freeness. Additionally, the number of objects may be different from the number of agents, and some objects may be discarded. This is known as the Envy-free Pricing problem. Multi-dimensional objectives. Often, some other objectives have to be attained besides fairness. For example, when assigning tasks to agents, it is required both to avoid envy and to minimize the makespan (the completion time of the last agent). Mu'alem presents a general framework for optimization problems with an envy-freeness guarantee that naturally extends fair item allocations using monetary payments. Aziz aims to attain, using monetary transfers, an allocation that is both envy-free and equitable. He studies not only additive positive utilities, but also arbitrary superadditive utilities, whether positive or negative: References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "n-1" }, { "math_id": 1, "text": "OPT + \\varepsilon\\cdot S" }, { "math_id": 2, "text": "O((m/\\varepsilon)^{n^2+1})" }, { "math_id": 3, "text": "OPT + (n-1)\\cdot S" }, { "math_id": 4, "text": "OPT + 3\\cdot 10^{-4}\\cdot S" } ]
https://en.wikipedia.org/wiki?curid=69226638
69232707
Scaled particle theory
Equilibrium theory of hard-sphere fluids The Scaled Particle Theory (SPT) is an equilibrium theory of hard-sphere fluids which gives an approximate expression for the equation of state of hard-sphere mixtures and for their thermodynamic properties such as the surface tension. One-component case. Consider the one-component homogeneous hard-sphere fluid with molecule radius formula_0. To obtain its equation of state in the form formula_1 (where formula_2 is the pressure, formula_3 is the density of the fluid and formula_4 is the temperature) one can find the expression for the chemical potential formula_5 and then use the Gibbs–Duhem equation to express formula_2 as a function of formula_3. The chemical potential of the fluid can be written as a sum of an ideal-gas contribution and an excess part: formula_6. The excess chemical potential is equivalent to the reversible work of inserting an additional molecule into the fluid. Note that inserting a spherical particle of radius formula_7 is equivalent to creating a cavity of radius formula_8 in the hard-sphere fluid. The SPT theory gives an approximate expression for this work formula_9. In case of inserting a molecule formula_10 it is formula_11, where formula_12 is the packing fraction, formula_13 is the Boltzmann constant. This leads to the equation of state formula_14 which is equivalent to the compressibility equation of state of the Percus-Yevick theory. References. <templatestyles src="Reflist/styles.css" />
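For concreteness, the two expressions above (the excess chemical potential for inserting a full-size sphere and the resulting compressibility factor) can be evaluated numerically; the packing fractions in the sketch below are arbitrary example values.

<syntaxhighlight lang="python">
import math

def spt_compressibility_factor(eta):
    """Compressibility factor Z = p / (k T rho) of the hard-sphere fluid from SPT,
    identical to the Percus-Yevick compressibility equation of state."""
    return (1 + eta + eta**2) / (1 - eta)**3

def spt_excess_chemical_potential(eta):
    """Excess chemical potential mu_ex / (k T) for inserting one more hard sphere."""
    return (-math.log(1 - eta)
            + 6 * eta / (1 - eta)
            + 9 * eta**2 / (2 * (1 - eta)**2)
            + eta * spt_compressibility_factor(eta))

for eta in (0.1, 0.3, 0.5):
    print(eta, spt_compressibility_factor(eta), spt_excess_chemical_potential(eta))
</syntaxhighlight>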
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "p=p(\\rho,T)" }, { "math_id": 2, "text": "p" }, { "math_id": 3, "text": "\\rho" }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "\\mu" }, { "math_id": 6, "text": "\\mu=\\mu_{id}+\\mu_{ex}" }, { "math_id": 7, "text": "R_0" }, { "math_id": 8, "text": "R_0+R" }, { "math_id": 9, "text": "W(R_0)" }, { "math_id": 10, "text": "(R_0=R)" }, { "math_id": 11, "text": "\\frac{\\mu_{ex}}{kT}=\\frac{W(R)}{kT}=-\\ln(1-\\eta)+\\frac{6\\eta}{1-\\eta}+\\frac{9\\eta^2}{2(1-\\eta)^2}+\\frac{p\\eta}{kT\\rho}" }, { "math_id": 12, "text": "\\eta\\equiv\\frac{4}{3}\\pi R^3\\rho" }, { "math_id": 13, "text": "k" }, { "math_id": 14, "text": "\\frac{p}{kT\\rho}=\\frac{1+\\eta+\\eta^2}{(1-\\eta)^3}" } ]
https://en.wikipedia.org/wiki?curid=69232707
692369
Fractional coloring
Graph coloring where graph elements are assigned sets of colors Fractional coloring is a topic in a young branch of graph theory known as fractional graph theory. It is a generalization of ordinary graph coloring. In a traditional graph coloring, each vertex in a graph is assigned some color, and adjacent vertices — those connected by edges — must be assigned different colors. In a fractional coloring, however, a "set" of colors is assigned to each vertex of a graph. The requirement about adjacent vertices still holds, so if two vertices are joined by an edge, they must have no colors in common. Fractional graph coloring can be viewed as the linear programming relaxation of traditional graph coloring. Indeed, fractional coloring problems are much more amenable to a linear programming approach than traditional coloring problems. Definitions. A "b"-fold coloring of a graph "G" is an assignment of sets of size "b" to vertices of a graph such that adjacent vertices receive disjoint sets. An "a":"b"-coloring is a "b"-fold coloring out of "a" available colors. Equivalently, it can be defined as a homomorphism to the Kneser graph "KG""a","b". The "b"-fold chromatic number formula_0 is the least "a" such that an "a":"b"-coloring exists. The fractional chromatic number formula_1 is defined to be: formula_2 Note that the limit exists because formula_0 is "subadditive", meaning: formula_3 The fractional chromatic number can equivalently be defined in probabilistic terms. formula_1 is the smallest "k" for which there exists a probability distribution over the independent sets of "G" such that for each vertex "v", given an independent set "S" drawn from the distribution: formula_4 Properties. We have: formula_5 with equality for vertex-transitive graphs, where "n"("G") is the order of "G" and α("G") is the independence number. Moreover: formula_6 where ω("G") is the clique number and formula_7 is the chromatic number. Furthermore, the fractional chromatic number approximates the chromatic number within a logarithmic factor; in fact: formula_8 Kneser graphs give examples where formula_9 is arbitrarily large, since formula_10 while formula_11 Linear programming (LP) formulation. The fractional chromatic number formula_1 of a graph "G" can be obtained as a solution to a linear program. Let formula_12 be the set of all independent sets of "G", and let formula_13 be the set of all those independent sets which include vertex "x". For each independent set "I", define a nonnegative real variable "xI". Then formula_1 is the minimum value of: formula_14 subject to: formula_15 for each vertex formula_16. The dual of this linear program computes the "fractional clique number", a relaxation to the rationals of the integer concept of clique number. That is, it seeks a weighting of the vertices of "G", of maximum total weight, such that the total weight assigned to any independent set is at most "1". The strong duality theorem of linear programming guarantees that the optimal solutions to both linear programs have the same value. Note however that each linear program may have size that is exponential in the number of vertices of "G", and that computing the fractional chromatic number of a graph is NP-hard. This stands in contrast to the problem of fractionally coloring the edges of a graph, which can be solved in polynomial time. This is a straightforward consequence of Edmonds' matching polytope theorem. Applications. Applications of fractional graph coloring include "activity scheduling".
In this case, the graph "G" is a "conflict graph": an edge in "G" between the nodes "u" and "v" denotes that "u" and "v" cannot be active simultaneously. Put otherwise, the set of nodes that are active simultaneously must be an independent set in graph "G". An optimal fractional graph coloring in "G" then provides a shortest possible schedule, such that each node is active for (at least) 1 time unit in total, and at any point in time the set of active nodes is an independent set. If we have a solution "x" to the above linear program, we simply traverse all independent sets "I" in an arbitrary order. For each "I", we let the nodes in "I" be active for formula_17 time units; meanwhile, each node not in "I" is inactive. In more concrete terms, each node of "G" might represent a "radio transmission" in a wireless communication network; the edges of "G" represent "interference" between radio transmissions. Each radio transmission needs to be active for 1 time unit in total; an optimal fractional graph coloring provides a minimum-length schedule (or, equivalently, a maximum-bandwidth schedule) that is conflict-free. Comparison with traditional graph coloring. If one further required that each node must be active "continuously" for 1 time unit (without switching it off and on every now and then), then traditional graph vertex coloring would provide an optimal schedule: first the nodes of color 1 are active for 1 time unit, then the nodes of color 2 are active for 1 time unit, and so on. Again, at any point in time, the set of active nodes is an independent set. In general, fractional graph coloring provides a shorter schedule than non-fractional graph coloring; there is an integrality gap. It may be possible to find a shorter schedule, at the cost of switching devices (such as radio transmitters) on and off more than once.
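For small graphs, the linear program above can be solved directly by enumerating all independent sets. The sketch below does this for the 5-cycle with SciPy; the brute-force enumeration and the choice of solver are made only for this example, and the LP has exponentially many variables in general. The optimal value, 5/2, is the fractional chromatic number of the 5-cycle (a vertex-transitive graph with 5 vertices and independence number 2).

<syntaxhighlight lang="python">
from itertools import combinations
from scipy.optimize import linprog

# 5-cycle C5; its fractional chromatic number is 5/2.
n = 5
edges = {(i, (i + 1) % n) for i in range(n)}

def is_independent(S):
    return all((u, v) not in edges and (v, u) not in edges
               for u, v in combinations(S, 2))

# Enumerate every nonempty independent set (exponential in general).
ind_sets = [S for r in range(1, n + 1)
            for S in combinations(range(n), r) if is_independent(S)]

# minimize sum_I x_I  subject to  sum over {I : v in I} of x_I >= 1  for each vertex v
c = [1.0] * len(ind_sets)
A_ub = [[-1.0 if v in S else 0.0 for S in ind_sets] for v in range(n)]
b_ub = [-1.0] * n
result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
print(result.fun)  # 2.5
</syntaxhighlight>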
[ { "math_id": 0, "text": "\\chi_b(G)" }, { "math_id": 1, "text": "\\chi_f(G)" }, { "math_id": 2, "text": "\\chi_{f}(G) = \\lim_{b \\to \\infty}\\frac{\\chi_{b}(G)}{b} = \\inf_{b}\\frac{\\chi_{b}(G)}{b}" }, { "math_id": 3, "text": "\\chi_{a+b}(G) \\le \\chi_a(G) + \\chi_b(G)." }, { "math_id": 4, "text": "\\Pr(v\\in S) \\geq \\frac{1}{k}." }, { "math_id": 5, "text": "\\chi_f(G)\\ge \\frac{n(G)}{\\alpha(G)}," }, { "math_id": 6, "text": "\\omega(G) \\le \\chi_f(G) \\le \\chi(G)," }, { "math_id": 7, "text": "\\chi(G)" }, { "math_id": 8, "text": "\\frac{\\chi(G)}{1+\\ln \\alpha(G)} \\le \\chi_f(G) \\le \\frac{\\chi_b(G)}{b} \\le \\chi(G)." }, { "math_id": 9, "text": "\\chi(G) /\\chi_f(G)" }, { "math_id": 10, "text": "\\chi(KG_{m,n}) =m-2n+2," }, { "math_id": 11, "text": "\\chi_f(KG_{m,n}) = \\tfrac{m}{n}." }, { "math_id": 12, "text": "\\mathcal{I}(G)" }, { "math_id": 13, "text": "\\mathcal{I}(G,x)" }, { "math_id": 14, "text": "\\sum_{I\\in\\mathcal{I}(G)} x_I," }, { "math_id": 15, "text": "\\sum_{I\\in\\mathcal{I}(G,x)} x_I \\ge 1" }, { "math_id": 16, "text": "x" }, { "math_id": 17, "text": "x_I" } ]
https://en.wikipedia.org/wiki?curid=692369
6924337
Ebullioscopic constant
Chemical and physical constant of materials In thermodynamics, the ebullioscopic constant "K"b relates molality b to boiling point elevation. It is the ratio of the latter to the former: formula_0 A formula to compute the ebullioscopic constant is: formula_1 Through the procedure called ebullioscopy, a known constant can be used to calculate an unknown molar mass. The term "ebullioscopy" means "boiling measurement" in Latin. This is related to cryoscopy, which determines the same value from the cryoscopic constant (of freezing point depression). This property of elevation of boiling point is a colligative property. It means that the property, in this case Δ"T", depends on the number of particles dissolved into the solvent and not the nature of those particles. References. <templatestyles src="Reflist/styles.css" />
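As a short worked example of the formula above, the ebullioscopic constant of water can be estimated from round textbook values; the numerical constants below are illustrative assumptions rather than authoritative data.

<syntaxhighlight lang="python">
R = 8.314         # J/(mol*K), molar gas constant
M = 18.015        # g/mol, molar mass of the solvent (water)
T_b = 373.15      # K, normal boiling point of water
dH_vap = 40660.0  # J/mol, enthalpy of vaporization of water at its boiling point

K_b = R * M * T_b**2 / (1000 * dH_vap)
print(K_b)  # ~0.51 K*kg/mol, close to the accepted ebullioscopic constant of water

# Boiling point elevation of an ideal 0.5 mol/kg NaCl solution (van 't Hoff factor i = 2):
print(2 * K_b * 0.5)  # ~0.51 K
</syntaxhighlight>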
[ { "math_id": 0, "text": "\\Delta T_\\text{b} = iK_\\text{b} b" }, { "math_id": 1, "text": "K_\\text{b} = \\frac{RMT_\\text{b}^2}{1000\\Delta H_\\text{vap}}" } ]
https://en.wikipedia.org/wiki?curid=6924337
692458
Transformation matrix
Central object in linear algebra; mapping vectors to vectors In linear algebra, linear transformations can be represented by matrices. If formula_0 is a linear transformation mapping formula_1 to formula_2 and formula_3 is a column vector with formula_4 entries, then formula_5 for some formula_6 matrix formula_7, called the transformation matrix of formula_0. Note that formula_7 has formula_8 rows and formula_4 columns, whereas the transformation formula_0 is from formula_1 to formula_2. There are alternative expressions of transformation matrices involving row vectors that are preferred by some authors. Uses. Matrices allow arbitrary linear transformations to be displayed in a consistent format, suitable for computation. This also allows transformations to be composed easily (by multiplying their matrices). Linear transformations are not the only ones that can be represented by matrices. Some transformations that are non-linear on an n-dimensional Euclidean space R"n" can be represented as linear transformations on the "n"+1-dimensional space R"n"+1. These include both affine transformations (such as translation) and projective transformations. For this reason, 4×4 transformation matrices are widely used in 3D computer graphics. These "n"+1-dimensional transformation matrices are called, depending on their application, "affine transformation matrices", "projective transformation matrices", or more generally "non-linear transformation matrices". With respect to an "n"-dimensional matrix, an "n"+1-dimensional matrix can be described as an augmented matrix. In the physical sciences, an active transformation is one which actually changes the physical position of a system, and makes sense even in the absence of a coordinate system whereas a passive transformation is a change in the coordinate description of the physical system (change of basis). The distinction between active and passive transformations is important. By default, by "transformation", mathematicians usually mean active transformations, while physicists could mean either. Put differently, a "passive" transformation refers to description of the "same" object as viewed from two different coordinate frames. Finding the matrix of a transformation. If one has a linear transformation formula_9 in functional form, it is easy to determine the transformation matrix "A" by transforming each of the vectors of the standard basis by "T", then inserting the result into the columns of a matrix. In other words, formula_10 For example, the function formula_11 is a linear transformation. Applying the above process (suppose that "n" = 2 in this case) reveals that formula_12 The matrix representation of vectors and operators depends on the chosen basis; a similar matrix will result from an alternate basis. Nevertheless, the method to find the components remains the same. To elaborate, vector formula_13 can be represented in basis vectors, formula_14 with coordinates formula_15: formula_16 Now, express the result of the transformation matrix "A" upon formula_13, in the given basis: formula_17 The formula_18 elements of matrix "A" are determined for a given basis "E" by applying "A" to every formula_19, and observing the response vector formula_20 This equation defines the wanted elements, formula_18, of "j"-th column of the matrix "A". Eigenbasis and diagonal matrix. Yet, there is a special basis for an operator in which the components form a diagonal matrix and, thus, multiplication complexity reduces to n. 
Being diagonal means that all coefficients formula_21 except formula_22 are zeros leaving only one term in the sum formula_23 above. The surviving diagonal elements, formula_22, are known as eigenvalues and designated with formula_24 in the defining equation, which reduces to formula_25. The resulting equation is known as eigenvalue equation. The eigenvectors and eigenvalues are derived from it via the characteristic polynomial. With diagonalization, it is often possible to translate to and from eigenbases. Examples in 2 dimensions. Most common geometric transformations that keep the origin fixed are linear, including rotation, scaling, shearing, reflection, and orthogonal projection; if an affine transformation is not a pure translation it keeps some point fixed, and that point can be chosen as origin to make the transformation linear. In two dimensions, linear transformations can be represented using a 2×2 transformation matrix. Stretching. A stretch in the "xy"-plane is a linear transformation which enlarges all distances in a particular direction by a constant factor but does not affect distances in the perpendicular direction. We only consider stretches along the x-axis and y-axis. A stretch along the x-axis has the form x' = kx; y' = y for some positive constant k. (Note that if k > 1, then this really is a "stretch"; if k < 1, it is technically a "compression", but we still call it a stretch. Also, if k = 1, then the transformation is an identity, i.e. it has no effect.) The matrix associated with a stretch by a factor k along the x-axis is given by: formula_26 Similarly, a stretch by a factor k along the y-axis has the form x' = x; y' = ky, so the matrix associated with this transformation is formula_27 Squeezing. If the two stretches above are combined with reciprocal values, then the transformation matrix represents a squeeze mapping: formula_28 A square with sides parallel to the axes is transformed to a rectangle that has the same area as the square. The reciprocal stretch and compression leave the area invariant. Rotation. For rotation by an angle θ counterclockwise (positive direction) about the origin the functional form is formula_29 and formula_30. Written in matrix form, this becomes: formula_31 Similarly, for a rotation clockwise (negative direction) about the origin, the functional form is formula_32 and formula_33 the matrix form is: formula_34 These formulae assume that the "x" axis points right and the "y" axis points up. Shearing. For shear mapping (visually similar to slanting), there are two possibilities. A shear parallel to the "x" axis has formula_35 and formula_36. Written in matrix form, this becomes: formula_37 A shear parallel to the "y" axis has formula_38 and formula_39, which has matrix form: formula_40 Reflection. For reflection about a line that goes through the origin, let formula_41 be a vector in the direction of the line. Then use the transformation matrix: formula_42 Orthogonal projection. To project a vector orthogonally onto a line that goes through the origin, let formula_43 be a vector in the direction of the line. Then use the transformation matrix: formula_44 As with reflections, the orthogonal projection onto a line that does not pass through the origin is an affine, not linear, transformation. Parallel projections are also linear transformations and can be represented simply by a matrix. However, perspective projections are not, and to represent these with a matrix, homogeneous coordinates can be used. Examples in 3D computer graphics. 
Rotation. The matrix to rotate an angle "θ" about any axis defined by unit vector ("x","y","z") is formula_45 Reflection. To reflect a point through a plane formula_46 (which goes through the origin), one can use formula_47, where formula_48 is the 3×3 identity matrix and formula_49 is the three-dimensional unit vector for the vector normal of the plane. If the "L"2 norm of formula_50, formula_51, and formula_52 is unity, the transformation matrix can be expressed as: formula_53 Note that these are particular cases of a Householder reflection in two and three dimensions. A reflection about a line or plane that does not go through the origin is not a linear transformation — it is an affine transformation — as a 4×4 affine transformation matrix, it can be expressed as follows (assuming the normal is a unit vector): formula_54 where formula_55 for some point formula_56 on the plane, or equivalently, formula_57. If the 4th component of the vector is 0 instead of 1, then only the vector's direction is reflected and its magnitude remains unchanged, as if it were mirrored through a parallel plane that passes through the origin. This is a useful property as it allows the transformation of both positional vectors and normal vectors with the same matrix. See homogeneous coordinates and affine transformations below for further explanation. Composing and inverting transformations. One of the main motivations for using matrices to represent linear transformations is that transformations can then be easily composed and inverted. Composition is accomplished by matrix multiplication. Row and column vectors are operated upon by matrices, rows on the left and columns on the right. Since text reads from left to right, column vectors are preferred when transformation matrices are composed: If A and B are the matrices of two linear transformations, then the effect of first applying A and then B to a column vector formula_58 is given by: formula_59 In other words, the matrix of the combined transformation A followed by B is simply the product of the individual matrices. When A is an invertible matrix there is a matrix A−1 that represents a transformation that "undoes" A since its composition with A is the identity matrix. In some practical applications, inversion can be computed using general inversion algorithms or by performing inverse operations (that have obvious geometric interpretation, like rotating in opposite direction) and then composing them in reverse order. Reflection matrices are a special case because they are their own inverses and don't need to be separately calculated. Other kinds of transformations. Affine transformations. To represent affine transformations with matrices, we can use homogeneous coordinates. This means representing a 2-vector ("x", "y") as a 3-vector ("x", "y", 1), and similarly for higher dimensions. Using this system, translation can be expressed with matrix multiplication. The functional form formula_60 becomes: formula_61 All ordinary linear transformations are included in the set of affine transformations, and can be described as a simplified form of affine transformations. Therefore, any linear transformation can also be represented by a general transformation matrix. The latter is obtained by expanding the corresponding linear transformation matrix by one row and column, filling the extra space with zeros except for the lower-right corner, which must be set to 1. 
For example, "the counter-clockwise rotation matrix from above" becomes: formula_62 Using transformation matrices containing homogeneous coordinates, translations become linear, and thus can be seamlessly intermixed with all other types of transformations. The reason is that the real plane is mapped to the "w" = 1 plane in real projective space, and so translation in real Euclidean space can be represented as a shear in real projective space. Although a translation is a non-linear transformation in a 2-D or 3-D Euclidean space described by Cartesian coordinates (i.e. it can't be combined with other transformations while preserving commutativity and other properties), it becomes, in a 3-D or 4-D projective space described by homogeneous coordinates, a simple linear transformation (a shear). More affine transformations can be obtained by composition of two or more affine transformations. For example, given a translation T' with vector formula_63 a rotation R by an angle θ counter-clockwise, a scaling S with factors formula_64 and a translation T of vector formula_65 the result M of T'RST is: formula_66 When using affine transformations, the homogeneous component of a coordinate vector (normally called "w") will never be altered. One can therefore safely assume that it is always 1 and ignore it. However, this is not true when using perspective projections. Perspective projection. Another type of transformation, of importance in 3D computer graphics, is the perspective projection. Whereas parallel projections are used to project points onto the image plane along parallel lines, the perspective projection projects points onto the image plane along lines that emanate from a single point, called the center of projection. This means that an object has a smaller projection when it is far away from the center of projection and a larger projection when it is closer (see also reciprocal function). The simplest perspective projection uses the origin as the center of projection, and the plane at formula_67 as the image plane. The functional form of this transformation is then formula_68; formula_69. We can express this in homogeneous coordinates as: formula_70 After carrying out the matrix multiplication, the homogeneous component formula_71 will be equal to the value of formula_72 and the other three will not change. Therefore, to map back into the real plane we must perform the homogeneous divide or perspective divide by dividing each component by formula_71: formula_73 More complicated perspective projections can be composed by combining this one with rotations, scales, translations, and shears to move the image plane and center of projection wherever they are desired. References. <templatestyles src="Reflist/styles.css" />
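As a brief numerical illustration of the ideas above, the sketch below (using NumPy; the particular angles and translation vector are arbitrary example values) builds a counter-clockwise 2D rotation matrix, composes two transformations by matrix multiplication, and applies a translation written as a 3×3 matrix acting on homogeneous coordinates.

<syntaxhighlight lang="python">
import numpy as np

def rotation(theta):
    """Counter-clockwise rotation by theta about the origin (2x2 matrix)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

def translation(tx, ty):
    """Translation by (tx, ty) as a 3x3 matrix acting on homogeneous coordinates (x, y, 1)."""
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

# Rotating (1, 0) by 90 degrees gives approximately (0, 1).
print(rotation(np.pi / 2) @ np.array([1.0, 0.0]))

# Composition: applying A first and then B is the single matrix B @ A.
A, B = rotation(np.pi / 6), rotation(np.pi / 3)
x = np.array([1.0, 0.0])
print(np.allclose(B @ (A @ x), (B @ A) @ x))  # True

# With homogeneous coordinates, translation becomes a matrix multiplication.
p = np.array([2.0, 3.0, 1.0])      # the point (2, 3)
print(translation(5.0, -1.0) @ p)  # [7. 2. 1.]
</syntaxhighlight>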
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "\\mathbb{R}^n" }, { "math_id": 2, "text": "\\mathbb{R}^m" }, { "math_id": 3, "text": "\\mathbf x" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "T( \\mathbf x ) = A \\mathbf x" }, { "math_id": 6, "text": "m \\times n" }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": "m" }, { "math_id": 9, "text": "T(x)" }, { "math_id": 10, "text": "A = \\begin{bmatrix} T( \\mathbf e_1 ) & T( \\mathbf e_2 ) & \\cdots & T( \\mathbf e_n ) \\end{bmatrix}" }, { "math_id": 11, "text": "T(x) = 5x" }, { "math_id": 12, "text": "T( \\mathbf{x} ) = 5 \\mathbf{x} = 5I\\mathbf{x} = \\begin{bmatrix} 5 & 0 \\\\ 0 & 5 \\end{bmatrix} \\mathbf{x}" }, { "math_id": 13, "text": "\\mathbf v" }, { "math_id": 14, "text": "E = \\begin{bmatrix}\\mathbf e_1 & \\mathbf e_2 & \\cdots & \\mathbf e_n\\end{bmatrix}" }, { "math_id": 15, "text": " [\\mathbf v]_E = \\begin{bmatrix} v_1 & v_2 & \\cdots & v_n \\end{bmatrix}^\\mathrm{T}" }, { "math_id": 16, "text": "\\mathbf v = v_1 \\mathbf e_1 + v_2 \\mathbf e_2 + \\cdots + v_n \\mathbf e_n = \\sum_i v_i \\mathbf e_i = E [\\mathbf v]_E" }, { "math_id": 17, "text": "\\begin{align}\n A(\\mathbf v)\n &= A \\left(\\sum_i v_i \\mathbf e_i \\right)\n = \\sum_i {v_i A(\\mathbf e_i)} \\\\\n &= \\begin{bmatrix}A(\\mathbf e_1) & A(\\mathbf e_2) & \\cdots & A(\\mathbf e_n)\\end{bmatrix} [\\mathbf v]_E\n = A \\cdot [\\mathbf v]_E \\\\[3pt]\n &= \\begin{bmatrix}\\mathbf e_1 & \\mathbf e_2 & \\cdots & \\mathbf e_n \\end{bmatrix}\n \\begin{bmatrix}\n a_{1,1} & a_{1,2} & \\cdots & a_{1,n} \\\\\n a_{2,1} & a_{2,2} & \\cdots & a_{2,n} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{n,1} & a_{n,2} & \\cdots & a_{n,n} \\\\\n \\end{bmatrix}\n \\begin{bmatrix} v_1 \\\\ v_2 \\\\ \\vdots \\\\ v_n\\end{bmatrix} \n\\end{align}" }, { "math_id": 18, "text": "a_{i,j}" }, { "math_id": 19, "text": "\\mathbf e_j = \\begin{bmatrix} 0 & 0 & \\cdots & (v_j=1) & \\cdots & 0 \\end{bmatrix}^\\mathrm{T}" }, { "math_id": 20, "text": "A \\mathbf e_j = a_{1,j} \\mathbf e_1 + a_{2,j} \\mathbf e_2 + \\cdots + a_{n,j} \\mathbf e_n = \\sum_i a_{i,j} \\mathbf e_i." }, { "math_id": 21, "text": "a_{i,j} " }, { "math_id": 22, "text": "a_{i,i}" }, { "math_id": 23, "text": "\\sum a_{i,j} \\mathbf e_i" }, { "math_id": 24, "text": "\\lambda_i" }, { "math_id": 25, "text": "A \\mathbf e_i = \\lambda_i \\mathbf e_i" }, { "math_id": 26, "text": "\\begin{bmatrix} k & 0 \\\\ 0 & 1 \\end{bmatrix} " }, { "math_id": 27, "text": "\\begin{bmatrix} 1 & 0 \\\\ 0 & k \\end{bmatrix} " }, { "math_id": 28, "text": "\\begin{bmatrix} k & 0 \\\\ 0 & 1/k \\end{bmatrix} ." 
}, { "math_id": 29, "text": "x' = x \\cos \\theta - y \\sin \\theta" }, { "math_id": 30, "text": "y' = x \\sin \\theta + y \\cos \\theta" }, { "math_id": 31, "text": "\\begin{bmatrix} x' \\\\ y' \\end{bmatrix} = \\begin{bmatrix} \\cos \\theta & -\\sin\\theta \\\\ \\sin \\theta & \\cos \\theta \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix}" }, { "math_id": 32, "text": "x' = x \\cos \\theta + y \\sin \\theta" }, { "math_id": 33, "text": "y' = -x \\sin \\theta + y \\cos \\theta" }, { "math_id": 34, "text": "\\begin{bmatrix} x' \\\\ y' \\end{bmatrix} = \\begin{bmatrix} \\cos \\theta & \\sin\\theta \\\\ -\\sin \\theta & \\cos \\theta \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix}" }, { "math_id": 35, "text": "x' = x + ky" }, { "math_id": 36, "text": "y' = y" }, { "math_id": 37, "text": "\\begin{bmatrix} x' \\\\ y' \\end{bmatrix} = \\begin{bmatrix} 1 & k \\\\ 0 & 1 \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix}" }, { "math_id": 38, "text": "x' = x" }, { "math_id": 39, "text": "y' = y + kx" }, { "math_id": 40, "text": "\n\\begin{bmatrix} x' \\\\ y' \\end{bmatrix} = \\begin{bmatrix} 1 & 0 \\\\ k & 1 \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix}\n" }, { "math_id": 41, "text": "\\mathbf{l} = (l_x, l_y)" }, { "math_id": 42, "text": "\\mathbf{A} = \\frac{1}{\\lVert\\mathbf{l}\\rVert^2} \\begin{bmatrix} l_x^2 - l_y^2 & 2 l_x l_y \\\\ 2 l_x l_y & l_y^2 - l_x^2 \\end{bmatrix}" }, { "math_id": 43, "text": "\\mathbf{u} = (u_x, u_y)" }, { "math_id": 44, "text": "\\mathbf{A} = \\frac{1}{\\lVert\\mathbf{u}\\rVert^2} \\begin{bmatrix} u_x^2 & u_x u_y \\\\ u_x u_y & u_y^2 \\end{bmatrix}" }, { "math_id": 45, "text": "\\begin{bmatrix}\nxx(1-\\cos \\theta)+\\cos\\theta & yx(1-\\cos\\theta)-z\\sin\\theta & zx(1-\\cos\\theta)+y\\sin\\theta\\\\\nxy(1-\\cos\\theta)+z\\sin\\theta & yy(1-\\cos\\theta)+\\cos\\theta & zy(1-\\cos\\theta)-x\\sin\\theta \\\\\nxz(1-\\cos\\theta)-y\\sin\\theta & yz(1-\\cos\\theta)+x\\sin\\theta & zz(1-\\cos\\theta)+\\cos\\theta\n\\end{bmatrix}." }, { "math_id": 46, "text": "ax + by + cz = 0" }, { "math_id": 47, "text": "\\mathbf{A} = \\mathbf{I} - 2\\mathbf{NN}^\\mathrm{T} " }, { "math_id": 48, "text": "\\mathbf{I}" }, { "math_id": 49, "text": "\\mathbf{N}" }, { "math_id": 50, "text": "a" }, { "math_id": 51, "text": "b" }, { "math_id": 52, "text": "c" }, { "math_id": 53, "text": "\\mathbf{A} = \\begin{bmatrix} 1 - 2 a^2 & - 2 a b & - 2 a c \\\\ - 2 a b & 1 - 2 b^2 & - 2 b c \\\\ - 2 a c & - 2 b c & 1 - 2c^2 \\end{bmatrix}" }, { "math_id": 54, "text": "\\begin{bmatrix} x' \\\\ y' \\\\ z' \\\\ 1 \\end{bmatrix} = \\begin{bmatrix} 1 - 2 a^2 & - 2 a b & - 2 a c & - 2 a d \\\\ - 2 a b & 1 - 2 b^2 & - 2 b c & - 2 b d \\\\ - 2 a c & - 2 b c & 1 - 2c^2 & - 2 c d \\\\ 0 & 0 & 0 & 1 \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\\\ z \\\\ 1 \\end{bmatrix} " }, { "math_id": 55, "text": "d = -\\mathbf{p} \\cdot \\mathbf{N}" }, { "math_id": 56, "text": "\\mathbf{p}" }, { "math_id": 57, "text": "ax + by + cz + d = 0" }, { "math_id": 58, "text": "\\mathbf{x}" }, { "math_id": 59, "text": "\\mathbf{B}(\\mathbf{A} \\mathbf x) = (\\mathbf{BA}) \\mathbf x." }, { "math_id": 60, "text": "x' = x + t_x; y' = y + t_y" }, { "math_id": 61, "text": "\\begin{bmatrix} x' \\\\ y' \\\\ 1 \\end{bmatrix} = \\begin{bmatrix} 1 & 0 & t_x \\\\ 0 & 1 & t_y \\\\ 0 & 0 & 1 \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\\\ 1 \\end{bmatrix}." 
}, { "math_id": 62, "text": "\\begin{bmatrix} \\cos \\theta & -\\sin \\theta & 0 \\\\ \\sin \\theta & \\cos \\theta & 0 \\\\ 0 & 0 & 1 \\end{bmatrix}" }, { "math_id": 63, "text": "(t'_x, t'_y)," }, { "math_id": 64, "text": "(s_x, s_y)" }, { "math_id": 65, "text": "(t_x, t_y)," }, { "math_id": 66, "text": "\\begin{bmatrix}\ns_x \\cos \\theta & - s_y \\sin \\theta & t_x s_x \\cos \\theta - t_y s_y \\sin \\theta + t'_x \\\\\ns_x \\sin \\theta & s_y \\cos \\theta & t_x s_x \\sin \\theta + t_y s_y \\cos \\theta + t'_y \\\\\n0 & 0 & 1\n\\end{bmatrix}" }, { "math_id": 67, "text": "z = 1" }, { "math_id": 68, "text": "x' = x / z" }, { "math_id": 69, "text": "y' = y / z" }, { "math_id": 70, "text": "\\begin{bmatrix} x_c \\\\ y_c \\\\ z_c \\\\ w_c \\end{bmatrix} = \\begin{bmatrix} 1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & 1 & 0 \\\\ 0 & 0 & 1 & 0 \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\\\ z \\\\ 1 \\end{bmatrix}=\\begin{bmatrix} x \\\\ y \\\\ z \\\\ z \\end{bmatrix}\n" }, { "math_id": 71, "text": "w_c" }, { "math_id": 72, "text": "z" }, { "math_id": 73, "text": "\\begin{bmatrix} x' \\\\ y' \\\\ z' \\\\ 1 \\end{bmatrix} = \\frac{1}{w_c} \\begin{bmatrix} x_c \\\\ y_c \\\\ z_c \\\\ w_c \\end{bmatrix}=\\begin{bmatrix} x / z \\\\ y / z \\\\ 1 \\\\ 1 \\end{bmatrix}" } ]
https://en.wikipedia.org/wiki?curid=692458
692463
Rotation operator (quantum mechanics)
Quantum operator This article concerns the rotation operator, as it appears in quantum mechanics. Quantum mechanical rotations. With every physical rotation formula_0, we postulate a quantum mechanical rotation operator formula_1 which rotates quantum mechanical states. formula_2 In terms of the generators of rotation, formula_3 where formula_4 is the rotation axis, formula_5 is angular momentum, and formula_6 is the reduced Planck constant. The translation operator. The rotation operator formula_7, with the first argument formula_8 indicating the rotation axis and the second formula_9 the rotation angle, can operate through the translation operator formula_10 for infinitesimal rotations, as explained below. For this reason, it is first shown how the translation operator acts on a particle at position x (the particle is then in the state formula_11 according to quantum mechanics). Translation of the particle at position formula_12 to position formula_13: formula_14 Because a translation of 0 does not change the position of the particle, we have (with 1 meaning the identity operator, which does nothing): formula_15 formula_16 A Taylor expansion gives: formula_17 with formula_18 From that follows: formula_19 This is a differential equation with the solution formula_20 Additionally, suppose a Hamiltonian formula_21 is independent of the position formula_12. Because the translation operator can be written in terms of formula_22, and formula_23, we know that formula_24 This result means that linear momentum for the system is conserved. In relation to the orbital angular momentum. Classically, we have for the angular momentum formula_25 This is the same in quantum mechanics, considering formula_26 and formula_27 as operators. Classically, an infinitesimal rotation formula_28 of the vector formula_29 about the formula_8-axis to formula_30 leaving formula_8 unchanged can be expressed by the following infinitesimal translations (using a Taylor approximation): formula_31 From that follows for states: formula_32 And consequently: formula_33 Using formula_34 from above with formula_35 and a Taylor expansion, we get: formula_36 with formula_37 the formula_8-component of the angular momentum according to the classical cross product. To get a rotation for the angle formula_38, we construct the following differential equation using the condition formula_39: formula_40 Similarly to the translation operator, if we are given a Hamiltonian formula_21 which is rotationally symmetric about the formula_8-axis, formula_41 implies formula_42. This result means that angular momentum is conserved. For the spin angular momentum about, for example, the formula_43-axis, we simply replace formula_44 with formula_45 (where formula_46 is the Pauli Y matrix) and we get the spin rotation operator formula_47 Effect on the spin operator and quantum states. Operators can be represented by matrices. From linear algebra one knows that a certain matrix formula_48 can be represented in another basis through the transformation formula_49 where formula_50 is the basis transformation matrix. If the vectors formula_51 and formula_52 are the z-axes of one basis and of the other, respectively, then they are both perpendicular to the y-axis, with a certain angle formula_38 between them.
The spin operator formula_53 in the first basis can then be transformed into the spin operator formula_54 of the other basis through the following transformation: formula_55 From standard quantum mechanics, we have the known results formula_56 and formula_57, where formula_58 and formula_59 are the spin-up eigenstates in their corresponding bases. So we have: formula_60 formula_61 Comparison with formula_56 yields formula_62. This means that if the state formula_59 is rotated about the formula_43-axis by an angle formula_38, it becomes the state formula_58, a result that can be generalized to arbitrary axes.
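As an illustration (not part of the original article), the following sketch checks this relation numerically for spin-1/2 using NumPy. The explicit form of the rotation operator follows from Euler's formula applied to the Pauli matrix, and the angle value is an arbitrary choice.

```python
import numpy as np

theta = 0.7                                   # arbitrary rotation angle about the y-axis
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Spin rotation operator D(y, theta) = exp(-i*theta/2 * sigma_y), written out via Euler's formula.
D = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sy

# Conjugating S_z (in units of hbar/2) gives the spin operator along the rotated axis.
S_rotated = D @ sz @ D.conj().T
assert np.allclose(S_rotated, np.cos(theta) * sz + np.sin(theta) * sx)

# The rotated spin-up state D|+z> is the +1 eigenstate of the rotated operator.
up_z = np.array([1, 0], dtype=complex)
up_rotated = D @ up_z
assert np.allclose(S_rotated @ up_rotated, up_rotated)
print("spin rotation checks passed")
```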
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "D(R)" }, { "math_id": 2, "text": "| \\alpha \\rangle_R = D(R) |\\alpha \\rangle" }, { "math_id": 3, "text": "D (\\mathbf{\\hat n},\\phi) = \\exp \\left( -i \\phi \\frac{\\mathbf{\\hat n} \\cdot \\mathbf J }{ \\hbar} \\right)," }, { "math_id": 4, "text": "\\mathbf{\\hat n}" }, { "math_id": 5, "text": " \\mathbf{J} " }, { "math_id": 6, "text": "\\hbar" }, { "math_id": 7, "text": "\\operatorname{R}(z, \\theta)" }, { "math_id": 8, "text": "z" }, { "math_id": 9, "text": "\\theta" }, { "math_id": 10, "text": "\\operatorname{T}(a)" }, { "math_id": 11, "text": "|x\\rangle" }, { "math_id": 12, "text": "x" }, { "math_id": 13, "text": "x + a" }, { "math_id": 14, "text": "\\operatorname{T}(a)|x\\rangle = |x + a\\rangle" }, { "math_id": 15, "text": "\\operatorname{T}(0) = 1" }, { "math_id": 16, "text": "\\operatorname{T}(a) \\operatorname{T}(da)|x\\rangle = \\operatorname{T}(a)|x + da\\rangle = |x + a + da\\rangle = \\operatorname{T}(a + da)|x\\rangle \\Rightarrow \\operatorname{T}(a) \\operatorname{T}(da) = \\operatorname{T}(a + da)" }, { "math_id": 17, "text": "\\operatorname{T}(da) = \\operatorname{T}(0) + \\frac{d\\operatorname{T}(0)}{da} da + \\cdots = 1 - \\frac{i}{\\hbar} p_x da" }, { "math_id": 18, "text": "p_x = i \\hbar \\frac{d\\operatorname{T}(0)}{da}" }, { "math_id": 19, "text": "\\operatorname{T}(a + da) = \\operatorname{T}(a) \\operatorname{T}(da) = \\operatorname{T}(a)\\left(1 - \\frac{i}{\\hbar} p_x da\\right) \\Rightarrow \\frac{\\operatorname{T}(a + da) - \\operatorname{T}(a)}{da} = \\frac{d\\operatorname{T}}{da} = - \\frac{i}{\\hbar} p_x \\operatorname{T}(a)" }, { "math_id": 20, "text": "\\operatorname{T}(a) = \\exp\\left(- \\frac{i}{\\hbar} p_x a\\right)." }, { "math_id": 21, "text": "H" }, { "math_id": 22, "text": "p_x" }, { "math_id": 23, "text": "[p_x,H] = 0" }, { "math_id": 24, "text": "[H, \\operatorname{T}(a)]=0." }, { "math_id": 25, "text": "\\mathbf L = \\mathbf r \\times \\mathbf p." 
}, { "math_id": 26, "text": "\\mathbf r" }, { "math_id": 27, "text": "\\mathbf p" }, { "math_id": 28, "text": "dt" }, { "math_id": 29, "text": "\\mathbf r = (x,y,z)" }, { "math_id": 30, "text": "\\mathbf r' = (x',y',z)" }, { "math_id": 31, "text": "\\begin{align}\nx' &= r \\cos(t + dt) = x - y \\, dt + \\cdots \\\\\ny' &= r \\sin(t + dt) = y + x \\, dt + \\cdots\n\\end{align}" }, { "math_id": 32, "text": "\\operatorname{R}(z, dt)|r\\rangle = \\operatorname{R}(z, dt)|x, y, z\\rangle = |x - y \\, dt, y + x \\, dt, z\\rangle = \\operatorname{T}_x(-y \\, dt) \\operatorname{T}_y(x \\, dt)|x, y, z\\rangle = \\operatorname{T}_x(-y \\, dt) \\operatorname{T}_y(x \\, dt) |r\\rangle" }, { "math_id": 33, "text": "\\operatorname{R}(z, dt) = \\operatorname{T}_x (-y \\, dt) \\operatorname{T}_y(x \\, dt)" }, { "math_id": 34, "text": "T_k(a) = \\exp\\left(- \\frac{i}{\\hbar} p_k a\\right)" }, { "math_id": 35, "text": "k = x,y" }, { "math_id": 36, "text": "\\operatorname{R}(z,dt)=\\exp\\left[-\\frac{i}{\\hbar} \\left(x p_y - y p_x\\right) dt\\right] = \\exp\\left(-\\frac{i}{\\hbar} L_z dt\\right) = 1-\\frac{i}{\\hbar}L_z dt + \\cdots" }, { "math_id": 37, "text": "L_z = x p_y - y p_x" }, { "math_id": 38, "text": "t" }, { "math_id": 39, "text": "\\operatorname{R}(z, 0) = 1 " }, { "math_id": 40, "text": "\\begin{align}\n&\\operatorname{R}(z, t + dt) = \\operatorname{R}(z, t) \\operatorname{R}(z, dt) \\\\[1.1ex]\n\\Rightarrow {} & \\frac{d\\operatorname{R}}{dt} = \\frac{\\operatorname{R}(z, t + dt) - \\operatorname{R}(z, t)}{dt} = \\operatorname{R}(z, t) \\frac{\\operatorname{R}(z, dt) - 1}{dt} = - \\frac{i}{\\hbar} L_z \\operatorname{R}(z, t) \\\\[1.1ex]\n\\Rightarrow {}& \\operatorname{R}(z, t) = \\exp\\left(- \\frac{i}{\\hbar}\\, t \\, L_z\\right)\n\\end{align}" }, { "math_id": 41, "text": "[L_z,H]=0" }, { "math_id": 42, "text": "[\\operatorname{R}(z,t),H]=0" }, { "math_id": 43, "text": "y" }, { "math_id": 44, "text": "L_z" }, { "math_id": 45, "text": "S_y = \\frac{\\hbar}{2} \\sigma_y" }, { "math_id": 46, "text": "\\sigma_y" }, { "math_id": 47, "text": "\\operatorname{D}(y, t) = \\exp\\left(- i \\frac{t}{2} \\sigma_y\\right)." }, { "math_id": 48, "text": "A" }, { "math_id": 49, "text": "A' = P A P^{-1}" }, { "math_id": 50, "text": "P" }, { "math_id": 51, "text": "b" }, { "math_id": 52, "text": "c" }, { "math_id": 53, "text": "S_b" }, { "math_id": 54, "text": "S_c" }, { "math_id": 55, "text": "S_c = \\operatorname{D}(y, t) S_b \\operatorname{D}^{-1}(y, t)" }, { "math_id": 56, "text": "S_b |b+\\rangle = \\frac{\\hbar}{2} |b+\\rangle" }, { "math_id": 57, "text": "S_c |c+\\rangle = \\frac{\\hbar}{2} |c+\\rangle" }, { "math_id": 58, "text": "|b+\\rangle" }, { "math_id": 59, "text": "|c+\\rangle" }, { "math_id": 60, "text": "\\frac{\\hbar}{2} |c+\\rangle = S_c |c+\\rangle = \\operatorname{D}(y, t) S_b \\operatorname{D}^{-1}(y, t) |c+\\rangle \\Rightarrow" }, { "math_id": 61, "text": "S_b \\operatorname{D}^{-1}(y, t) |c+\\rangle = \\frac{\\hbar}{2} \\operatorname{D}^{-1}(y, t) |c+\\rangle" }, { "math_id": 62, "text": "|b+\\rangle = D^{-1}(y, t) |c+\\rangle" } ]
https://en.wikipedia.org/wiki?curid=692463
69262097
Invariant decomposition
Concept in group theory (mathematics) The invariant decomposition is a decomposition of the elements of pin groups formula_0 into orthogonal commuting elements. It is also valid in their subgroups, e.g. orthogonal, pseudo-Euclidean, conformal, and classical groups. Because the elements of Pin groups are the composition of formula_1 oriented reflections, the invariant decomposition theorem reads: Every formula_1-reflection can be decomposed into formula_2 commuting factors. It is named the invariant decomposition because these factors are the invariants of the formula_1-reflection formula_3. A well-known special case is Chasles' theorem, which states that any rigid body motion in formula_4 can be decomposed into a rotation around, followed or preceded by a translation along, a single line. Both the rotation and the translation leave two lines invariant: the axis of rotation and the orthogonal axis of translation. Since both rotations and translations are bireflections, a more abstract statement of the theorem reads "Every quadreflection can be decomposed into commuting bireflections". In this form the statement is also valid for e.g. the spacetime algebra formula_5, where any Lorentz transformation can be decomposed into a commuting rotation and boost. Bivector decomposition. Any bivector formula_6 in the geometric algebra formula_7 of total dimension formula_8 can be decomposed into formula_9 orthogonal commuting simple bivectors that satisfy formula_10 Defining formula_11, their properties can be summarized as formula_12 (no sum). The formula_13 are then found as solutions to the characteristic polynomial formula_14 Defining formula_15 and formula_16, the solutions are given by formula_17 The values of formula_18 are subsequently found by squaring this expression and rearranging, which yields the polynomial formula_19 By allowing complex values for formula_18, the counterexample of Marcel Riesz can in fact be solved. This closed form solution for the invariant decomposition is only valid for eigenvalues formula_18 with algebraic multiplicity of 1. For degenerate formula_18 the invariant decomposition still exists, but cannot be found using the closed form solution. Exponential map. A formula_20-reflection formula_21 can be written as formula_22 where formula_23 is a bivector, and thus permits a factorization formula_24 The invariant decomposition therefore gives a closed form formula for exponentials, since each formula_13 squares to a scalar and thus follows Euler's formula: formula_25 Carefully evaluating the limit formula_26 gives formula_27 and thus translations are also included. Rotor factorization. Given a formula_20-reflection formula_21, we would like to find the factorization into formula_28. Define the simple bivectors formula_29, where formula_30. These bivectors can be found directly using the above solution for bivectors by substituting formula_31 where formula_32 selects the grade formula_33 part of formula_34. After the bivectors formula_35 have been found, formula_36 is found straightforwardly as formula_37 Principal logarithm. After the decomposition of formula_21 into formula_28 has been found, the principal logarithm of each simple rotor is given by formula_38 and thus the logarithm of formula_34 is given by formula_39 General Pin group elements. So far we have only considered elements of formula_40, which are formula_20-reflections.
To extend the invariant decomposition to formula_41-reflections formula_42, we use that the vector part formula_43 is a reflection which already commutes with, and is orthogonal to, the formula_20-reflection formula_44. The problem then reduces to finding the decomposition of formula_34 using the method described above. Invariant bivectors. The bivectors formula_13 are invariants of the corresponding formula_21 since they commute with it, and thus under group conjugation formula_45 Going back to the example of Chasles' theorem as given in the introduction, the screw motion in 3D leaves invariant the two lines formula_46 and formula_47, which correspond to the axis of rotation and the orthogonal axis of translation on the horizon. While the entire space undergoes a screw motion, these two axes remain unchanged by it. History. The invariant decomposition finds its roots in a statement made by Marcel Riesz about bivectors: Can any bivector formula_6 be decomposed into the direct sum of mutually orthogonal simple bivectors? Mathematically, this would mean that for a given bivector formula_6 in an formula_48-dimensional geometric algebra, it should be possible to find a maximum of formula_49 bivectors formula_13, such that formula_50, where the formula_13 satisfy formula_51 and should square to a scalar formula_52. Marcel Riesz gave some examples which led to this conjecture, but also one (seeming) counterexample. A first more general solution to the conjecture in geometric algebras formula_53 was given by David Hestenes and Garret Sobczyk. However, this solution was limited to purely Euclidean spaces. In 2011, the solution in formula_54 (3DCGA) was published by Leo Dorst and Robert Jan Valkenburg, and was the first solution in a Lorentzian signature. Also in 2011, Charles Gunn was the first to give a solution in the degenerate metric formula_55. This offered a first glimpse that the principle might be metric independent. Then, in 2021, the full metric- and dimension-independent closed form solution was given by Martin Roelfs in his PhD thesis. Because bivectors in a geometric algebra formula_7 form the Lie algebra formula_56, the thesis was also the first to use this to decompose elements of formula_40 groups into orthogonal commuting factors which each follow Euler's formula, and to present closed form exponential and logarithmic functions for these groups. Subsequently, in a paper by Martin Roelfs and Steven De Keninck, the invariant decomposition was extended to include elements of formula_0, not just formula_40, and the direct decomposition of elements of formula_40 without having to pass through formula_56 was found. References.
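To make the bivector decomposition concrete, the following sketch (not part of the original article) works in the matrix picture, where a bivector in a 4-dimensional Euclidean space corresponds to an antisymmetric generator in so(4). It assumes non-degenerate eigenvalues, mirroring the multiplicity-1 caveat above, and all variable names are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Matrix analogue of a bivector in 4D Euclidean space: an antisymmetric generator in so(4).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
F = A - A.T

# F @ F is symmetric, with eigenvalues -theta_1^2, -theta_1^2, -theta_2^2, -theta_2^2;
# its two eigenplanes are invariant under F (assumed non-degenerate here).
_, V = np.linalg.eigh(F @ F)
P1 = V[:, :2] @ V[:, :2].T          # orthogonal projector onto the first invariant plane
P2 = V[:, 2:] @ V[:, 2:].T          # orthogonal projector onto the second invariant plane

F1, F2 = P1 @ F @ P1, P2 @ F @ P2   # simple, mutually commuting components

assert np.allclose(F, F1 + F2)                     # F decomposes as F_1 + F_2
assert np.allclose(F1 @ F2, np.zeros((4, 4)))      # orthogonal: F_1 F_2 = 0
assert np.allclose(expm(F), expm(F1) @ expm(F2))   # the exponential factorises
print("invariant decomposition checks passed")
```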
[ { "math_id": 0, "text": "\\text{Pin}(p,q,r)" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "\\lceil k/2 \\rceil" }, { "math_id": 3, "text": "R \\in \\text{Pin}(p,q,r)" }, { "math_id": 4, "text": "\\text{SE}(3)" }, { "math_id": 5, "text": "\\text{SO}(3,1)" }, { "math_id": 6, "text": "F" }, { "math_id": 7, "text": "\\mathbb{R}_{p,q,r}" }, { "math_id": 8, "text": "n = p+q+r" }, { "math_id": 9, "text": "k = \\lfloor n / 2 \\rfloor" }, { "math_id": 10, "text": "F = F_1 + F_2 \\ldots + F_{k}." }, { "math_id": 11, "text": "\\lambda_i := F_i^2 \\in \\mathbb{C}" }, { "math_id": 12, "text": "F_i F_j = \\delta_{ij} \\lambda_i + F_i \\wedge F_j" }, { "math_id": 13, "text": "F_i" }, { "math_id": 14, "text": "\n0 = (F_1 - F_i) (F_2 - F_i) \\cdots (F_k - F_i).\n" }, { "math_id": 15, "text": "W_{m} = \\frac{1}{m!}\\langle F^{m}\\rangle_{2 m} = \\frac{1}{m!}\\, \\underbrace{F \\wedge F \\wedge \\ldots \\wedge F}_{m\\ \\text{times}}\n" }, { "math_id": 16, "text": "r = \\lfloor k/2 \\rfloor" }, { "math_id": 17, "text": "F_i\n= \\begin{cases}\n\\dfrac{\\lambda_i^{r} W_0 + \\lambda_i^{r-1} W_2 + \\ldots + W_k}{\\lambda_i^{r-1} W_1 + \\lambda_i^{r-2} W_3 + \\ldots + W_{k-1}} \\quad & k \\text{ even}, \\\\[10mu]\n\\dfrac{\\lambda_i^{r} W_1 + \\lambda_i^{r-1} W_3 + \\ldots + W_k}{\\lambda_i^{r} W_0 + \\lambda_i^{r-1} W_2 + \\ldots + W_{k-1}} & k \\text{ odd}.\n\\end{cases}" }, { "math_id": 18, "text": "\\lambda_i" }, { "math_id": 19, "text": "\\begin{aligned}\n0 &= \\sum_{m=0}^{k} \\langle W_{m}^2 \\rangle_0 (- \\lambda_i)^{k-m} \\\\[5mu]\n&= (F_1^2 - \\lambda_i) (F_2^2 - \\lambda_i) \\cdots (F_k^2 - \\lambda_i).\n\\end{aligned}" }, { "math_id": 20, "text": "2k" }, { "math_id": 21, "text": "R \\in \\text{Spin}(p,q,r)" }, { "math_id": 22, "text": "R = \\exp(F)" }, { "math_id": 23, "text": "F \\in \\mathfrak{spin}(p,q,r)" }, { "math_id": 24, "text": "\nR = e^F = e^{F_1} e^{F_2} \\cdots e^{F_k}.\n" }, { "math_id": 25, "text": "\nR_i = e^{F_i} =\n{\\cosh}\\bigl(\\sqrt{\\lambda_i}\\bigr)\n + \\frac{{\\sinh}\\bigl(\\sqrt{\\lambda_i}\\bigr)}{\\sqrt{\\lambda_i}} F_i.\n" }, { "math_id": 26, "text": "\\lambda_i \\to 0" }, { "math_id": 27, "text": "R_i = e^{F_i} = 1 + F_i," }, { "math_id": 28, "text": "R_i = \\exp(F_i)" }, { "math_id": 29, "text": "\nt(F_i) := \\frac{{\\tanh}\\bigl(\\sqrt{\\lambda_i}\\bigr)}{\\sqrt{\\lambda_i}} F_i,\n" }, { "math_id": 30, "text": "\\lambda_i = F_i^2" }, { "math_id": 31, "text": "W_m = \\langle R \\rangle_{2m} \\big/ \\langle R \\rangle_0" }, { "math_id": 32, "text": "\\langle R \\rangle_{2m}" }, { "math_id": 33, "text": "2m" }, { "math_id": 34, "text": "R" }, { "math_id": 35, "text": "t(F_i)" }, { "math_id": 36, "text": "R_i" }, { "math_id": 37, "text": "R_i = \\frac{1 + t(F_i)}{\\sqrt{1 - t(F_i)^2}}." }, { "math_id": 38, "text": "F_i = \\text{Log}(R_i) = \\begin{cases}\n\\dfrac{\\langle R_i \\rangle_2 }{\\textstyle \\sqrt{\\langle R_i \\rangle\\vphantom)_2^2}} \\;\\text{arccosh}(\\langle R_i \\rangle) \\quad & \\lambda_i^2 \\neq 0, \\\\[5mu]\n\\langle R_i \\rangle_2 & \\lambda_i^2 = 0.\n\\end{cases}" }, { "math_id": 39, "text": "\\text{Log}(R) = \\sum_{i=1}^k \\text{Log}(R_i)." }, { "math_id": 40, "text": "\\text{Spin}(p,q,r)" }, { "math_id": 41, "text": "(2k+1)" }, { "math_id": 42, "text": "P \\in \\text{Pin}(p,q,r)" }, { "math_id": 43, "text": "r = \\langle P \\rangle_1" }, { "math_id": 44, "text": "R = r^{-1} P = P r^{-1}" }, { "math_id": 45, "text": "R F_i R^{-1} = F_i." 
}, { "math_id": 46, "text": "F_1" }, { "math_id": 47, "text": "F_2" }, { "math_id": 48, "text": "n" }, { "math_id": 49, "text": "k = \\lfloor n/2 \\rfloor" }, { "math_id": 50, "text": "F = \\sum_{i=1}^{\\lfloor n/2 \\rfloor} F_i" }, { "math_id": 51, "text": "F_i \\cdot F_j = [F_i, F_j] = 0" }, { "math_id": 52, "text": "\\lambda_i := F_i^2 \\in \\mathbb{R}" }, { "math_id": 53, "text": "\\mathbb{R}_{n,0,0}" }, { "math_id": 54, "text": "\\mathbb{R}_{4,1,0}" }, { "math_id": 55, "text": "\\mathbb{R}_{3,0,1}" }, { "math_id": 56, "text": "\\mathfrak{spin}(p,q,r)" } ]
https://en.wikipedia.org/wiki?curid=69262097
69267805
Phenotypic response surfaces
Phenotypic response surfaces (PRS) is an artificial intelligence-guided personalized medicine platform that relies on combinatorial optimization principles to quantify drug interactions and efficacies in order to develop optimized combination therapies for a broad spectrum of illnesses. Phenotypic response surfaces fit a parabolic surface to a set of drug doses and biomarker values, based on the understanding that the relationship between drugs, their interactions, and their effect on the measured biomarker can be modeled by a quadric surface. The resulting surface allows for the omission of both in-vitro and in-silico screening of multi-drug combinations based on a patient's unique phenotypic response. This provides a method, independent of the disease or drug mechanism, for using small data sets to create time-critical personalized therapies. The adaptable nature of the platform allows it to tackle a wide range of applications, from isolating novel combination therapies to predicting daily drug regimen adjustments to support in-patient treatments. History. Modern medical practice since its inception in the early 19th to 20th centuries has been seen as "a science of uncertainty and art of probability", as mused by one of its founders, Sir William Osler. The lack of a concrete mechanism for the relationship between drug dosing and its efficacy led largely to the use of population averages as a metric for determining optimal doses for patients. This issue is further compounded by the introduction of combination therapies, as there is an exponential growth in the number of possible combinations and outcomes as the number of drugs increases. Combination therapies provide significant benefits over monotherapy alternatives, including greater efficacy and lower rates of side effects and fatalities, making them ideal candidates for optimization. In 2011 the PRS methodology was developed by a team led by Dr. Ibrahim Al-Shyoukh and Dr. Chih Ming Ho of the University of California Los Angeles to provide a platform that would allow for a comparatively small number of calibration tests to optimize multi-drug combination therapies based on measurement of cellular biomarkers. Since its inception the PRS platform has been applied to a broad range of disease areas including organ transplants, oncology, and infectiology. The PRS platform has since become the basis for a commercial optimization platform marketed by Singapore-based Kyan Therapeutics in partnership with Kite Pharma and the National University of Singapore to provide personalized combination therapies for oncological applications. Methodology. The PRS platform utilizes a neural network to fit data sets to a regression function, resulting in a parabolic surface that provides a direct quantitative relationship between drug dose and efficacy. The governing function for the PRS platform is given as the following: formula_0 where E(C,t) is the predicted phenotypic response (the measured biomarker value) for the dose combination C at time t, C_i is the dose of the i-th drug, x_0 is the baseline response, x_i and y_ii are the first- and second-order single-drug coefficients, z_ij are the pairwise drug–drug interaction coefficients, and M is the number of drugs in the combination. The parabolic nature of the relationship allows a minimal number of calibration tests to be used to apply the PRS regression to a search space of N^M combinations, where N is the number of dosing regimens and M is the number of drugs in the combination. Applications. The mechanism-independent nature of the PRS platform makes it applicable to the treatment of a broad spectrum of conditions, including cancers, infectious diseases, and organ transplants. Oncology. Optimization of combination therapies is of particular importance in oncology.
Conventional cancer treatments often rely on the sequential use of chemotherapy drugs, with each new drug starting as soon as the previous agent loses efficacy. This methodology allows cancerous cells, due to their rapid rate of mutation, to develop resistance to chemotherapy drugs in instances where a drug fails to be effective. Combination therapies are therefore vital to preventing the development of drug-resistant tumors and thereby decreasing the likelihood of relapse among cancer patients. The PRS platform alleviates the principal difficulty in developing combination therapies to treat cancer, as it omits the currently employed in-vitro high-throughput screening used to determine the most effective regimen. PRS-based therapy has been used to successfully derive an optimized three-drug combination to treat multiple myeloma and overcome drug resistance. The PRS-derived CURATE.AI platform has also been used to optimize a two-drug combination of a bromodomain inhibitor and enzalutamide to successfully treat and prevent the progression of prostate cancer. Infectious disease. Drug resistance is a particular challenge when attempting to treat infectious diseases, as monotherapy solutions carry the risk of increasing drug resistance, while combination therapy demonstrates lower mortality rates. Highly contagious infectious diseases like tuberculosis have become the leading cause of death by infectious disease worldwide. Tuberculosis treatment requires the sustained use of antibiotics over an extended period of time, with high rates of noncompliance among patients, which increases the risk of development of drug-resistant forms of tuberculosis. The PRS platform has been successfully used to develop combination regimens that reduce tuberculosis treatment time by 75% and can be employed on both drug-sensitive and drug-resistant variants of the disease. The PRS-derived IDENTIF.AI platform has been used in Singapore to identify viable SARS-CoV-2 delta variant treatments on behalf of the Singapore Ministry of Health. The platform identified the metabolite EIDD-1931 as having strong antiviral properties that can be used in combination with other commercial antiviral agents to create an effective therapy for the treatment of the SARS-CoV-2 delta variant. Organ transplant. The PRS-derived phenotypic personalized dosing platform developed in 2016 has been used to provide personalized tacrolimus and prednisone dosing for liver transplant procedures and post-transplant care to prevent transplant rejection events. This methodology is able to use the minimal number of calibration tests and, as a result, provides physicians with a rolling window in which the daily optimized drug dose can be predicted. The platform is recalibrated daily to take into consideration the patient's changing physiological responses to the drug regimen, providing physicians with accessible personalized treatment tools and eliminating the need for population-average-based dosing. The platform is actively being considered for other transplant uses including kidney and heart transplants.
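The following sketch is illustrative only: the data, coefficient values, and dose ranges are synthetic rather than from any published PRS calibration. It shows the core computation: fit the quadratic surface above to a small set of dose–response measurements by least squares, then search the full dose grid in silico for the best predicted combination.

```python
import numpy as np
from itertools import combinations, product

def design_matrix(doses):
    """Quadratic PRS features for each dose vector: 1, C_i, C_i^2, and C_i*C_j pairs."""
    doses = np.asarray(doses, dtype=float)
    n, m = doses.shape
    cols = [np.ones(n)]
    cols += [doses[:, i] for i in range(m)]
    cols += [doses[:, i] ** 2 for i in range(m)]
    cols += [doses[:, i] * doses[:, j] for i, j in combinations(range(m), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
calib_doses = rng.uniform(0.0, 1.0, size=(12, 2))      # 12 calibration tests, 2 drugs
true_surface = lambda c: (0.2 + 1.5 * c[:, 0] + 1.0 * c[:, 1]
                          - 1.2 * c[:, 0] ** 2 - 0.8 * c[:, 1] ** 2
                          + 0.9 * c[:, 0] * c[:, 1])
response = true_surface(calib_doses) + rng.normal(0.0, 0.02, 12)   # noisy biomarker readout

# Fit the coefficients (x_0, x_i, y_ii, z_ij) by ordinary least squares.
coeffs, *_ = np.linalg.lstsq(design_matrix(calib_doses), response, rcond=None)

# Search the N^M grid of dosing regimens (N = 11 levels, M = 2 drugs) for the best prediction.
levels = np.linspace(0.0, 1.0, 11)
grid = np.array(list(product(levels, repeat=2)))
predicted = design_matrix(grid) @ coeffs
best = grid[np.argmax(predicted)]
print("best predicted regimen:", best, "predicted response:", predicted.max())
```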
[ { "math_id": 0, "text": "E(C,t) = x_0 + \\sum_{i=1}^Mx_iC_i +\\sum_{i=1}^My_{ii}C_i^2 +\\sum_{i=1}^{M-1}\\sum_{j=i+1}^Mz_{ij}C_iC_j" } ]
https://en.wikipedia.org/wiki?curid=69267805
6928351
Sigma-ring
Family of sets closed under countable unions In mathematics, a nonempty collection of sets is called a 𝜎-ring (pronounced "sigma-ring") if it is closed under countable union and relative complementation. Formal definition. Let formula_0 be a nonempty collection of sets. Then formula_0 is a 𝜎-ring if: (1) it is closed under countable unions, i.e. formula_1 whenever formula_2 for all formula_3; and (2) it is closed under relative complementation, i.e. formula_4 whenever formula_5. Properties. These two properties imply: formula_6 whenever formula_7 are elements of formula_8 This is because formula_9 Every 𝜎-ring is a δ-ring, but there exist δ-rings that are not 𝜎-rings. Similar concepts. If the first property is weakened to closure under finite union (that is, formula_10 whenever formula_5) but not countable union, then formula_0 is a ring but not a 𝜎-ring. Uses. 𝜎-rings can be used instead of 𝜎-fields (𝜎-algebras) in the development of measure and integration theory, if one does not wish to require that the universal set be measurable. Every 𝜎-field is also a 𝜎-ring, but a 𝜎-ring need not be a 𝜎-field. A 𝜎-ring formula_0 that is a collection of subsets of formula_11 induces a 𝜎-field for formula_12 Define formula_13 Then formula_14 is a 𝜎-field over the set formula_11: to check closure under countable union, recall that a formula_15-ring is closed under countable intersections. In fact, formula_14 is the minimal 𝜎-field containing formula_0, since it must be contained in every 𝜎-field containing formula_8 References.
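As a concrete illustration (not part of the original article; the sets chosen are arbitrary), the following sketch brute-force checks the two defining properties for a small finite collection and builds the induced 𝜎-field described above. For finite collections, countable unions reduce to finite ones, so the check is exhaustive.

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

X = frozenset({1, 2, 3})
R = set(powerset({1, 2}))        # every subset of {1, 2}; note X itself is not a member

# The two defining properties, checked exhaustively.
closed_under_union = all(a | b in R for a in R for b in R)
closed_under_relative_complement = all(a - b in R for a in R for b in R)
is_sigma_field_over_X = X in R and all(X - a in R for a in R)
print(closed_under_union, closed_under_relative_complement, is_sigma_field_over_X)  # True True False

# The induced sigma-field: sets E with E in R or X \ E in R (here it is the full power set of X).
A = {e for e in powerset(X) if e in R or (X - e) in R}
print(sorted(tuple(sorted(s)) for s in A))
```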
[ { "math_id": 0, "text": "\\mathcal{R}" }, { "math_id": 1, "text": "\\bigcup_{n=1}^{\\infty} A_{n} \\in \\mathcal{R}" }, { "math_id": 2, "text": "A_{n} \\in \\mathcal{R}" }, { "math_id": 3, "text": "n \\in \\N" }, { "math_id": 4, "text": "A \\setminus B \\in \\mathcal{R}" }, { "math_id": 5, "text": "A, B \\in \\mathcal{R}" }, { "math_id": 6, "text": "\\bigcap_{n=1}^{\\infty} A_n \\in \\mathcal{R}" }, { "math_id": 7, "text": "A_1, A_2, \\ldots" }, { "math_id": 8, "text": "\\mathcal{R}." }, { "math_id": 9, "text": "\\bigcap_{n=1}^\\infty A_n = A_1 \\setminus \\bigcup_{n=2}^{\\infty}\\left(A_1 \\setminus A_n\\right)." }, { "math_id": 10, "text": "A \\cup B \\in \\mathcal{R}" }, { "math_id": 11, "text": "X" }, { "math_id": 12, "text": "X." }, { "math_id": 13, "text": "\\mathcal{A} = \\{ E \\subseteq X : E \\in \\mathcal{R} \\ \\text{or} \\ E^c \\in \\mathcal{R} \\}." }, { "math_id": 14, "text": "\\mathcal{A}" }, { "math_id": 15, "text": "\\sigma" } ]
https://en.wikipedia.org/wiki?curid=6928351
69283722
Māori wards and constituencies
Electoral unit representing Māori Māori wards and constituencies refer to wards and constituencies on urban, district, and regional councils in New Zealand in which local constituents registered on the Māori parliamentary electoral roll vote. Like Māori electorates within the New Zealand Parliament, the purpose of Māori wards and constituencies is to ensure that Māori are represented in local government decision-making. Māori wards and constituencies were first introduced by the Bay of Plenty Regional Council in 2001. Efforts to introduce them to other local and regional government bodies in New Zealand were complicated by a poll provision allowing referendums on the issue of introducing Māori wards and constituencies. Consequently, attempts to introduce Māori wards and constituencies were defeated at several polls in New Plymouth, Palmerston North, the Western Bay of Plenty, Whakatāne, Manawatu, and Kaikōura. In late February 2021, the Sixth Labour Government passed the Local Electoral (Māori Wards and Māori Constituencies) Amendment Act 2021, which eliminated the poll provision for establishing Māori wards and constituencies. As a result, at the 2022 local elections, six of the eleven regional councils (54.5%) had Māori constituencies and 29 of the 67 territorial authorities (43.3%) had Māori wards. In late November 2023, the Sixth National Government pledged to "restore the right of local referendum on the establishment or ongoing use of Māori wards." On 30 July 2024, the Sixth National Government passed a bill that reinstated the previous provisions requiring local referenda on the establishment or ongoing use of Māori wards. Councils that have already established Māori wards without a referendum are now required to hold a binding poll alongside the 2025 local elections or to disestablish them. Background. Although in 2006 Māori formed 14.6% of New Zealand's population, a Department of Internal Affairs (DIA) survey found that 12% of candidates not elected at the October 2007 local elections were Māori and only 8% of winning candidates were Māori. By contrast, Europeans, who made up 66% of the population, accounted for 84% of losing candidates and 90% of winning candidates. The inequality was marginally smaller in 2016, with 89.8% of elected members being European and 10.1% Māori. A feature of New Zealand's parliamentary representation arrangements is the system of Māori electorates, which are for electors of Māori descent who choose to be registered on the Māori electoral roll and are intended to give Māori a more direct say in Parliament. Equivalent provisions for local government are set out in section 19Z (and following) of the Local Electoral Act 2001. These provisions are opt-in and allow territorial authorities and regional councils to introduce Māori wards (in cities and districts) or constituencies (in regions) for electoral purposes. The number of members elected to a council through its Māori wards or constituencies is determined after the total number of councillors for the city, district, or region has been set, in proportion to the number of members elected to the council through its general wards and constituencies, such that: formula_0 The number of members excludes the mayor, who is elected separately. The total electoral population includes all electors in the city, district or region, regardless of whether they are on the general electoral roll or the Māori electoral roll. The number of Māori members is rounded to the nearest whole number.
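For example (a minimal sketch with hypothetical electoral populations, not figures for any actual council), the allocation formula can be computed directly:

```python
def maori_ward_members(maori_electoral_pop, total_electoral_pop, total_members):
    """Seats allocated to Māori wards/constituencies: the Māori share of the
    electoral population times the number of members (excluding the mayor),
    rounded to the nearest whole number."""
    return round(total_members * maori_electoral_pop / total_electoral_pop)

# Hypothetical district: 12 councillors, 18,000 electors, 3,100 of them on the Māori roll.
print(maori_ward_members(3_100, 18_000, 12))   # -> 2 Māori-ward seats, leaving 10 general seats
```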
If the calculated number of Māori members is zero, the council must resolve against having separate Māori and general wards. Until the passage of the Local Electoral (Māori Wards and Māori Constituencies) Amendment Act 2021 into law, Māori wards and constituencies could be established by decision of the council, or through a local referendum (called, under the Act, a "poll"). If a council resolved to establish Māori wards or constituencies, it had to notify its residents of their right to demand a poll on the establishment of the wards and constituencies (the "poll provision"). If a petition signed by 5 percent of the electors of the city, district or region was presented to the council, the poll must be held within 89 days. All electors (not specifically electors of Māori descent or those on the Māori electoral roll) could demand and vote in a binding poll on Māori wards and constituencies. The result of the poll was binding for two local body elections, after which the council could choose to retain the status quo or adopt another change. History. Introduction. Māori wards and constituencies have proved contentious, as the poll provision (outlined above) has frequently overturned councils' decisions. While a general establishment provision has been available since 2002, the first Māori constituencies were established for the Bay of Plenty Regional Council in 2001 under a unique piece of legislation. Population ratios were such that the council was able to establish three Māori constituencies. The introduction of Māori wards and constituencies was supported by the Labour, Alliance, and Green parties, it was opposed by the conservative National Party, the populist New Zealand First Party, and the libertarian ACT Party. (While he supported the 2002 amendment to the Local Electoral Act, the Green Party co-leader, Rod Donald, though not his Party, had opposed the Bay of Plenty legislation due to its compulsory nature and preferring Single Transferable Votes.) In 2006, the National Party MP for Bay of Plenty, Tony Ryall (who had been Minister of Local Government for six months in 1998–1999), moved a private member's bill seeking the repeal of both pieces of Māori ward legislation, arguing that, since the opt-in provisions in the Local Electoral Act 2001 had not been used in four years, the wards were "unused... antiquated... not necessary [and] divisive." The motion failed. Initial expansion and resistance. In 2010, Māori Party MP Te Ururoa Flavell sought a law change to make it compulsory for all councils to have Māori seats. At that time, Bay of Plenty Regional Council was still the only local authority to have Māori representation. Flavell's proposal failed, but not before it was declared to be inconsistent with the New Zealand Bill of Rights Act 1990 due to using a different Māori representation formula that the Attorney-General Chris Finlayson stated would "lead to disparity in representation between Māori wards... and general wards." The difference was that the formula used the number of people of Māori descent rather than the number of people on the Māori electoral roll. In October 2011, the Waikato Regional Council voted 14–2 to establish two Māori seats in preparation for the 2013 local body elections. A poll was not demanded and the constituencies were established. In late October 2017, the Waikato Regional Council voted by a margin of 7–3 to retain both Maori constituencies Nga Hau e Wha and Nga Tai ki Uta. 
In 2014, the Mayor of New Plymouth Andrew Judd proposed introducing a Māori ward in the New Plymouth District Council. The council resolved to do so, but was defeated in a 2015 referendum by a margin of 83% to 17%. The backlash Judd experienced was an influence on his decision not to run for a second term during the 2016 local body elections. In April 2016, Flavell, now a Māori Party co-leader, presented a petition to the New Zealand Parliament on behalf of Judd that advocated (as Flavell had done previously) the establishment of mandatory Māori wards on every district council in New Zealand. In June 2017, a private members' bill in the name of Marama Davidson sought to remove the poll provision, but was defeated during its first reading. A poll on establishing Māori wards at Wairoa District Council was held alongside that council's October 2016 triennial election and was successful; elections for three Māori seats at that council were held in October 2019. Following this result, five territorial authorities (Palmerston North City Council, Kaikōura District Council, Whakatāne District Council, Manawatu District Council, and Western Bay of Plenty District Council) approved, in separate decisions over late 2017, to introduce Māori wards for the 2019 local elections. In response, the lobby group Hobson's Pledge (fronted by former National Party and ACT New Zealand leader Don Brash) organised several petitions calling for local referendums on the matter of introducing Māori wards and constituencies, taking advantage of the poll provision. These polls were granted and held in early 2018. Each poll failed; Māori wards were rejected by voters in Palmerston North (68.8%), Western Bay of Plenty (78.2%), Whakatāne (56.4%), Manawatu (77%), and Kaikōura (55%) on 19 May 2018. The average voter turnout in those polls was about 40%. The rejection of Māori wards was welcomed by Brash and conservative broadcaster Mike Hosking. By contrast, the referendum results were met with dismay by Whakatāne Mayor Tony Bonne and several Māori leaders including Labour MPs Willie Jackson and Tāmati Coffey, former Māori Party co-leader Te Ururoa Flavell, Bay of Plenty resident and activist Toni Boynton, and left-wing advocacy group ActionStation national director Laura O'Connell Rapira. In response, ActionStation organised a petition calling on the Minister of Local Government, Nanaia Mahuta, to change the law so that establishing a Māori ward uses the same process as establishing a general ward (general wards are not subject to the poll provision, but have a different appeals process through the Local Government Commission). The Labour Party has supported changes to the laws regarding Māori wards and constituencies. Two bills were introduced by backbench Labour MP Rino Tirikatene in 2019 (the first a local bill seeking permanent representation for Ngāi Tahu on the Canterbury Regional Council; the second a member's bill to ensure that the repeal of legislation establishing Māori seats in Parliament must be subject to a 75% supermajority of Parliament), but both failed. 2021–22 legislative reform. Nine local authorities determined to establish Māori wards ahead of the 2022 New Zealand local elections (Whangarei District Council, Kaipara District Council, Northland Regional Council, Tauranga City Council, Gisborne District Council, Ruapehu District Council, Taupō District Council, New Plymouth District Council, and South Taranaki District Council). 
While polls for some of those districts were signalled, Minister of Local Government Nanaia Mahuta stated in November 2020 that removing the poll provision was "on her list" for the Sixth Labour Government's second term. On 1 February 2021, Mahuta announced that the Government would establish a new law upholding local council decisions to establish Māori wards and abolishing the existing law allowing local referendums to veto decisions by councils to establish Māori wards. This law would come into effect before the scheduled 2022 local body elections. On 25 February, Mahuta's Local Electoral (Māori Wards and Māori Constituencies) Amendment Act 2021, which eliminates mechanisms for holding referendums on the establishment of Māori wards and constituencies on local bodies, passed its third reading in Parliament with the support of the Labour, Green and Māori parties. The bill was unsuccessfully opposed by the National and ACT parties, with the former mounting a twelve-hour filibuster challenging all of the Bill's ten clauses. Councils were also given a fresh opportunity to make decisions about establishing Māori wards after the law change; as a result, at the 2022 local elections, six of the eleven regional councils (54.5%) have Māori constituencies and 29 of the 67 territorial authorities (43.3%) had Māori wards. Until this point, only Bay of Plenty Regional Council, Waikato Regional Council and Wairoa District Council had had elections with Māori wards or constituencies. Between August and November 2023, Māori wards or constituencies were agreed to be introduced at a further group of councils for the 2025 and 2028 local elections, including Western Bay of Plenty District Council, Hauraki District Council, Whanganui District Council, Thames-Coromandel District Council, Greater Wellington Regional Council, and Upper Hutt City Council. The introduction of Māori wards was lost at Auckland Council in an 11-9 vote that followed two months of consultation. While 68% of non-Māori respondents opposed the Māori wards proposal, 54% of the 1,300 Māori respondents supported them. 87% of the 17 Māori organisations consulted including Te Whānau o Waipareira, supported the proposal. In addition, right-wing lobby group Hobson's Pledge sent councillors 1,200 emails opposing Māori wards. Rotorua Lakes Council local bill. In mid-November 2021, the Rotorua Lakes Council voted to establish a Māori ward. The Māori partnership organisation Te Tatau o Te Arawa expressed disappointment with the Council's decision, claiming that it did not provide Māori with adequate representation. While Rotorua District councillors had preferred a governing arrangement consisting of three Māori ward seats, three general seats, and four at-large seats, that model was not lawful under the Local Electoral Act 2001. In April 2022, Labour Member of Parliament Tāmati Coffey introduced a local bill on behalf of the council, the Rotorua District Council (Representation Arrangements) Bill, seeking an exemption from the Local Electoral Act's requirements preventing the Council's preferred 3-3-4 governing arrangement. The Rotorua bill passed its first reading on 6 April 2022 and was referred to the Māori Affairs Committee. The Labour, Green and Māori parties (77 votes) supported the bill while the National and ACT parties opposed the bill. Following complaints about the short two-weeks timeframe for submission, the bill's submission period was extended until 4 May 2022. 
In late April 2022, the Attorney General David Parker released a report expressing concern that the proposed Rotorua electoral bill breached the New Zealand Bill of Rights Act 1990 since it discriminated against general roll voters by allocating more seats to Māori ward voters. Rotorua's general roll had 55,600 voters while its Māori roll had 21,700 voters. In response, Māori Development Minister Willie Jackson and Deputy Prime Minister Grant Robertson stated that they would not support the bill in its current form. The National Party's justice spokesperson Paul Goldsmith claimed that the bill breached the principle of "equal suffrage" by giving Maori electoral roll votes 2.5 times the value of general roll votes. Māori Party co-leader Rawiri Waititi defended Coffey's Rotorua Bill, claiming that it gave equal representation to Māori. On 28 April 2022, Coffey and the Rotorua Lakes Council agreed to "pause" the bill's select committee process in order to address the legal issues raised by the Attorney General. Following the 2022 New Zealand local elections, the Rotorua Lakes Council with the exception of Māori ward councillors Rawiri Waru and Trevor Maxwell voted to withdraw its support for the bill in February 2023. Canterbury Regional Council local bill. In early December 2021, Rino Tirikatene's local bill on behalf of the Canterbury Regional Council (Environment Canterbury), the Canterbury Regional Council (Ngāi Tahu Representation) Bill, passed its first reading in the New Zealand Parliament by a margin of 77 to 43 votes. While the Labour, Green and Māori parties supported the legislation, it was opposed by the opposition National and ACT parties. The bill proposes adding two seats for Māori tribal Ngāi Tahu representatives to the Environment Canterbury, boosting the body's membership to 16 members. The proposed legislation was supported by Environment Canterbury and the Ngāi Tahu sub-groups Papatipu Rūnanga and Te Rūnanga o Ngāi Tahu. By 9 February, the Māori Affairs Select Committee had received almost 1,700 submissions regarding the proposed bill. Federated Farmers' South Canterbury chairman Greg Anderson stated that the Ngāi Tahu representatives should be elected either by the tribe or the general Canterbury population. On 3 July, the Ngāi Tahu Representation Bill passed its third and final reading. The bill's passage was welcomed by the Labour Party and Ngāi Tahu representatives including Tipene O'Regan as a means of ensuring Māori representation at the local government level and upholding the partnership aspects of the Treaty of Waitangi. The opposition National Party vowed to repeal the bill on the grounds that it did not uphold electoral equality for all New Zealanders and did not provide electoral accountability. 2024 law change. In late November 2023, the Sixth National Government pledged to "restore the right of local referendum on the establishment or ongoing use of Māori wards." In early April 2024, the Local Government Minister Simeon Brown announced that local and regional councils which introduced Māori wards without polling residents would have to hold referendums during the 2025 local elections or dissolve the wards they had established prior to the 2025 local elections. Brown also announced that the government would introduce legislation restoring the right to referenda on Māori wards by the end of July 2024. 
The Wairoa District, Waikato Region and Bay of Plenty Regional Councils are unaffected by the Government declaration since they introduced Māori wards before the removal of poll requirements. The Opotiki District's Māori wards are not affected by the ruling since they held a poll during the 2022 New Zealand local elections that found majority support for wards. The Tauranga City Council, which has been under the management of commissioners since 2020, is scheduled to hold elections in July 2024. Tauranga has the option of reversing its decision to establish Māori wards or holding a poll during the 2024-2028 term, with the outcome taking effect after the 2028 local elections. Councils affected by the Government's new polling requirement include the Stratford District Council, where councillors had voted to introduce a Māori ward in 2021 in time for the 2022 local body elections. However, Stratford District Council had not conducted a referendum among residents. In mid-May 2024, 54 mayors and regional council chairpersons including Local Government New Zealand President and Mayor of Selwyn Sam Broughton, Mayor of Palmerston North Grant Smith, Mayor of Central Otago Tim Cadogan, Mayor of Wellington Tory Whanau and Mayor of Dunedin Jules Radich issued a joint letter criticising the Government's proposed law change requiring local councils to hold referenda on having Māori wards and constituencies, describing it as "an overreach on local decision-making." In response, Brown along with New Zealand First leader Winston Peters and ACT Party David Seymour defended the proposed legislation as a restoration of democracy and said that New Zealanders had voted for change during the 2023 New Zealand general election. In early June 2024, the Human Rights Commission (HRC) issued a statement criticising the Government's proposed legislation to reinstate polls on Māori wards as discriminatory and said it would undermine local decision-making. The human rights watchdog said that Māori wards ensured a Māori voice on many local councils and that the Government's legislation would conflict with local councils' Treaty of Waitangi obligations and international human rights standards. The HRC also said that Māori wards were held to a higher procedural standard than general or rural wards, and that polls would heighten racism and bigotry against Māori. On 30 July 2024, the Government passed the Local Government (Māori Wards) Amendment Act 2024, which "restored the right of local referendum on the establishment or ongoing use of Māori wards." While National, ACT and NZ First supported the bill as part of their coalition agreements, it was opposed by the Labour, Green, and Māori parties. During the third reading, the Local Government Minister Brown said that the Government was supporting local democracy by giving local communities the right to decide whether to establish Māori wards in their communities. By contrast, Labour leader Chris Hipkins accused the Government of discriminating against Māori and promoting division. Similarly, Te Pāti Māori MP Mariameno Kapa-Kingi described the law change as an attack on the Treaty of Waitangi and an attempt to silence Māori. Under the law change, local councils have until 6 September 2024 to decide whether to drop their Māori wards or hold a binding referendum on them at the 2025 New Zealand local elections. Local government responses to 2024 law change. 
On 7 August 2024, the Kaipara District Council became the first local council to vote to disestablish its Māori ward under the Government's 2024 legislation. Prior to the 2022 New Zealand local elections, the Council had voted to establish its Te Moananui o Kaipara Māori ward, which was won by Councillor Pera Paniora. Kaipara councillors including Mayor Craig Jepson voted by a margin of 6 to 3 in favour of disestablishing the Te Moananui o Kaipara Māori ward, with one abstention. During the vote, 150 protesters demonstrated outside the Kaipara Council's Mangawhai office. Paniora will continue serving as Councillor until the end of her three-year term in October 2025. While the Kaipara council's decision was welcomed by Democracy Northland spokesperson Frank Newman, Te Runanga o Ngāti Whātua (the tribal representative body for Ngāti Whātua) trustee Deb Nathan announced that they would file legal proceedings against the council for failing to consult local Māori over its removal. On 7 August, the New Plymouth District Council voted by a majority to retain its Te Purutanga Mauri Pūmanawa Māori Ward subject to a binding poll during the 2025 local elections. Cr Murray Chong, who had opposed Māori wards, abstained during the vote and claimed that his ute had been fired upon due to his outspoken views. Mayor Neil Holdom and Māori ward councillor Te Waka McLeod condemned the shooting as unacceptable while Police confirmed they were investigating the incident. McLeod expressed hope that voters would support the retention of New Plymouth's Māori ward but called for a communications strategy to combat disinformation. On 7 August, the Upper Hutt City Council rescinded its 2023 decision to establish at least one Māori ward for the 2025 and 2028 local elections. On 8 August, the Gisborne District Council voted by a majority to retain its Māori wards subject to a binding poll during the 2025 local elections. The sole opponent was Cr Rob Telfer, who said that he "did not want to put people in boxes." That same day, the Palmerston North City Council voted to retain its Te Pūao Māori Ward and also passed an amendment to seek information on the implications of avoiding a binding poll, which it described as a "race-based" decision. The Council's decision was criticised by ACT leader David Seymour and ACT local government spokesperson Cameron Luxton, who said that voters should have the right to decide whether to retain or abolish their Māori ward. On 13 August, the Stratford District Council voted unanimously to retain its Māori ward subject to a binding poll during the 2025 local elections. The Ruapehu, South Taranaki and Gisborne District Councils sought an exemption from the Government's requirement to hold polls on their Māori wards since they had introduced Māori wards between October and November 2020, prior to the Labour Government's law change. On 20 August, Local Government Minister Simeon Brown declined their petition, stating that the three-month notification period was insufficient for local communities to have a say in a petition. During the annual Local Government New Zealand (LGNZ) conference held on 21 August, the Palmerston North City Council (PNCC) submitted a remit challenging the Government's poll requirement for Māori wards. The PNCC's remit was supported by the Far North District Council. In addition, the Northland Regional Council introduced a remit calling for a 75% majority vote for any changes to the Local Electoral Act affecting Māori wards.
According to the "Otago Daily Times", 83.5% of local councils at the LGNZ conference sponsored the remit opposing the poll requirement for Māori wards and constituencies. In response, Minister Brown defended the Government's Māori ward polls requirement. On 27 August, the Waipa District Council voted to retain their Māori ward subject to a binding referendum. On 28 August, the Hawke's Bay Regional Council and Rotorua Lakes and Taupō District Councils voted to retain their Māori wards subject to binding referenda. The Hawke's Bay Regional Council attracted criticism from Minister Brown after patched gang members attended the Māori ward voting meeting despite a ban on gang insignia. On 29 August, the Hamilton City Council, Porirua City Council, Rangitīkei District Council, and Matamata-Piako District Council, Whangarei District Council voted to retain their Māori wards subject to binding referenda. By 30 August, more than half of the 45 city, district and regional councils had voted to retain their Māori wards subject to binding referenda. Councils with Māori wards or constituencies. Regional councils. Note: this table excludes the unitary authorities in Auckland, Gisborne, Nelson, Tasman and Marlborough. Public opinion. In mid June 2024, a New Zealand Taxpayers' Union–Curia poll found that a majority of New Zealanders (58%) believed that local voters rather than local mayors and councillors should decide on the introduction or disestablishment of Māori wards. By contrast, 23% believed that the decision should be left to local mayors and councillors while 19% were undecided. Notes and references. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathsf{\\text{Number of Māori members}=\\frac\\text{Māori electoral population}\\text{Total electoral population}\\times \\text{Number of members}}" } ]
https://en.wikipedia.org/wiki?curid=69283722
69286368
Alekseev–Gröbner formula
Formula for expressing the global error of a perturbation The Alekseev–Gröbner formula, or nonlinear variation-of-constants formula, is a generalization of the linear variation-of-constants formula. It was proven independently by Wolfgang Gröbner in 1960 and Vladimir Mikhailovich Alekseev in 1961. It expresses the global error of a perturbation in terms of the local error and has many applications for studying perturbations of ordinary differential equations. Formulation. Let formula_0 be a natural number, let formula_1 be a positive real number, and let formula_2 be a function which is continuous on the time interval formula_3 and continuously differentiable on the formula_4-dimensional space formula_5. Let formula_6, formula_7 be a continuous solution of the integral equation formula_8 Furthermore, let formula_9 be continuously differentiable. We view formula_11 as the unperturbed function and formula_10 as the perturbed function. Then it holds that formula_12 The Alekseev–Gröbner formula allows the global error formula_13 to be expressed in terms of the local error formula_14. The Itô–Alekseev–Gröbner formula. The Itô–Alekseev–Gröbner formula is a generalization of the Alekseev–Gröbner formula which states, in the deterministic case, that for a continuously differentiable function formula_15 it holds that formula_16 References.
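As a quick numerical check (not from the article), the identity can be verified for a simple linear vector field, where the flow and its spatial derivative are known in closed form; the choices μ(t, x) = λx, λ = 0.5, T = 2 and the perturbed curve Y_t = cos(t) are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

# Unperturbed vector field mu(t, x) = lam * x, whose flow is X_{s,t}^x = exp(lam*(t-s)) * x,
# with spatial derivative d/dx X_{s,t}^x = exp(lam*(t-s)).
lam, T = 0.5, 2.0
Y = lambda t: np.cos(t)            # an arbitrary smooth "perturbed" curve
dY = lambda t: -np.sin(t)

lhs = np.exp(lam * T) * Y(0.0) - Y(T)                 # global error X_{0,T}^{Y_0} - Y_T
rhs, _ = quad(lambda r: np.exp(lam * (T - r)) * (lam * Y(r) - dY(r)), 0.0, T)

print(lhs, rhs)                    # the two sides agree up to quadrature error
assert abs(lhs - rhs) < 1e-10
```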
[ { "math_id": 0, "text": "d \\in \\mathbb N" }, { "math_id": 1, "text": "T \\in (0, \\infty)" }, { "math_id": 2, "text": "\\mu \\colon [0, T] \\times \\mathbb{R}^{d} \\to \\mathbb{R}^{d} \\in C^{0, 1}([0, T] \\times \\mathbb{R}^{d})" }, { "math_id": 3, "text": "[0, T]" }, { "math_id": 4, "text": "d" }, { "math_id": 5, "text": "\\mathbb{R}^{d}" }, { "math_id": 6, "text": "X \\colon [0, T]^{2} \\times \\mathbb{R}^{d} \\to \\mathbb{R}^{d}" }, { "math_id": 7, "text": " (s, t, x) \\mapsto X_{s, t}^{x}" }, { "math_id": 8, "text": "X_{s, t}^{x} = x + \\int_{s}^{t} \\mu(r, X_{s, r}^{x}) dr." }, { "math_id": 9, "text": "Y \\in C^{1}([0, T], \\mathbb{R}^{d})" }, { "math_id": 10, "text": "Y" }, { "math_id": 11, "text": "X" }, { "math_id": 12, "text": "\nX_{0, T}^{Y_{0}} - Y_{T} = \\int_{0}^{T} \\left( \\frac{\\partial}{\\partial x} X_{r, T}^{Y_{s}} \\right) \\left( \\mu(r, Y_{r}) - \\frac{d}{dr} Y_{r} \\right) dr.\n" }, { "math_id": 13, "text": "X_{0, T}^{Y_{0}} - Y_{T}" }, { "math_id": 14, "text": "( \\mu(r, Y_{r}) - \\tfrac{d}{dr} Y_{r}) " }, { "math_id": 15, "text": "f \\in C^{1}(\\mathbb R^{k}, \\mathbb R^{d})" }, { "math_id": 16, "text": "\nf(X_{0, T}^{Y_{0}}) - f(Y_{T}) =\n\\int_{0}^{T} f'\\left( \\frac{\\partial}{\\partial x} X_{r, T}^{Y_{s}} \\right) \\frac{\\partial}{\\partial x} X_{s, T}^{Y_{s}}\\left( \\mu(r, Y_{r}) - \\frac{d}{dr} Y_{r} \\right) dr.\n" } ]
https://en.wikipedia.org/wiki?curid=69286368
6928954
Luma (video)
Brightness in an image or video In video, luma (formula_0) represents the brightness in an image (the "black-and-white" or achromatic portion of the image). Luma is typically paired with chrominance. Luma represents the achromatic image, while the chroma components represent the color information. Converting R′G′B′ sources (such as the output of a three-CCD camera) into luma and chroma allows for chroma subsampling: because human vision has finer spatial sensitivity to luminance ("black and white") differences than chromatic differences, video systems can store and transmit chromatic information at lower resolution, optimizing perceived detail at a particular bandwidth. Luma versus relative luminance. Luma is the weighted sum of gamma-compressed R′G′B′ components of a color video—the "prime symbols" ′ denote gamma compression. The word was proposed to prevent confusion between luma as implemented in video engineering and relative luminance as used in color science (i.e. as defined by CIE). Relative luminance is formed as a weighted sum of "linear" RGB components, not gamma-compressed ones. Even so, luma is sometimes erroneously called luminance. SMPTE EG 28 recommends the symbol formula_0 to denote luma and the symbol formula_1 to denote relative luminance. Use of relative luminance. While luma is more often encountered, relative luminance is sometimes used in video engineering when referring to the brightness of a monitor. The formula used to calculate relative luminance uses coefficients based on the CIE color matching functions and the relevant standard chromaticities of red, green, and blue (e.g., the original NTSC primaries, SMPTE C, or Rec. 709). For the Rec. 709 (and sRGB) primaries, the linear combination, based on pure colorimetric considerations and the definition of relative luminance is: formula_2 The formula used to calculate luma in the Rec. 709 spec arbitrarily also uses these same coefficients, but with gamma-compressed components: formula_3 where the prime symbol ′ denotes gamma compression. Rec. 601 luma versus Rec. 709 luma coefficients. For digital formats following CCIR 601 (i.e. most digital standard definition formats), luma is calculated with this formula: formula_4 Formats following ITU-R Recommendation BT. 709 (i.e. most digital high definition formats) use a different formula: formula_5 Modern HDTV systems use the 709 coefficients, while transitional 1035i HDTV (MUSE) formats may use the SMPTE 240M coefficients: formula_6 These coefficients correspond to the SMPTE RP 145 primaries (also known as "SMPTE C") in use at the time the standard was created. The change in the luma coefficients is to provide the "theoretically correct" coefficients that reflect the corresponding standard chromaticities ('colors') of the primaries red, green, and blue. However, there is some controversy regarding this decision. The difference in luma coefficients requires that component signals must be converted between Rec. 601 and Rec. 709 to provide accurate colors. In consumer equipment, the matrix required to perform this conversion may be omitted (to reduce cost), resulting in inaccurate color. Luma and luminance errors. As well, the Rec. 709 luma coefficients may not necessarily provide better performance. Because of the difference between luma and relative luminance, luma does not exactly represent the luminance in an image. As a result, errors in chroma can affect luminance. 
Luma alone does not perfectly represent luminance; accurate luminance requires both accurate luma and chroma. Hence, errors in chroma "bleed" into the luminance of an image. Because chroma subsampling is so widespread, errors in chroma typically arise when chroma is reduced in resolution or bandwidth. This lowered bandwidth, coupled with high-frequency chroma components, can cause visible errors in luminance. An example of a high-frequency chroma component is the line between the green and magenta bars of the SMPTE color bars test pattern; the error in luminance can be seen as a dark band in this area. References. <templatestyles src="Reflist/styles.css" />
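As a small illustration of the coefficient difference described above (the sample R′G′B′ triple is arbitrary and chosen only to make the gap visible), the snippet below computes luma from the same gamma-compressed components with the Rec. 601 and Rec. 709 weights.

```python
def luma_rec601(r, g, b):
    """Luma Y' for gamma-compressed R'G'B' components in [0, 1], Rec. 601 weights."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_rec709(r, g, b):
    """Luma Y' for gamma-compressed R'G'B' components in [0, 1], Rec. 709 weights."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# An arbitrary fairly saturated colour: the two standards weight the primaries
# differently, so decoding with the wrong matrix shifts the reproduced lightness.
rgb = (0.9, 0.2, 0.6)
print(luma_rec601(*rgb))  # ~0.4549
print(luma_rec709(*rgb))  # ~0.3777
```

Decoding Rec. 709 material with the Rec. 601 matrix (or vice versa) would therefore shift the lightness of saturated colours, which is the inaccuracy mentioned above when the conversion matrix is omitted in consumer equipment.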
[ { "math_id": 0, "text": "Y'" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "Y = 0.2126 R + 0.7152 G + 0.0722 B" }, { "math_id": 3, "text": "Y' = 0.2126 R' + 0.7152 G' + 0.0722 B'," }, { "math_id": 4, "text": " Y'_\\text{601} = 0.299 R' + 0.587 G' + 0.114 B'" }, { "math_id": 5, "text": " Y'_\\text{709} = 0.2126 R' + 0.7152 G' + 0.0722 B'" }, { "math_id": 6, "text": " Y'_\\text{240} = 0.212 R' + 0.701 G' + 0.087 B' = Y'_\\text{145}" } ]
https://en.wikipedia.org/wiki?curid=6928954
69299670
Kunerth's algorithm
Kunerth's algorithm is an algorithm for computing the modular square root of a given number. The algorithm does not require the factorization of the modulus, and relies on modular operations that are often easy when the given number is prime. Algorithm. To find "y" from a given value formula_0, the algorithm takes the following steps: Example. To obtain formula_18 first obtain formula_19. Then expand the polynomial: formula_20 into formula_21 Since, in this case, the "C" term is a square, we take formula_22 and compute formula_23 (in general, formula_24). Solve for formula_12 and formula_13 the following equation formula_25 getting the solution formula_26 and formula_27. (There may be other pairs of solutions to this equation.) Then factor the following polynomial: formula_28 obtaining formula_16 Then obtain the modular square root via formula_29 Verify that formula_30 In the case that formula_31 has no solution, formula_32 can be used instead. References. <templatestyles src="Reflist/styles.css" />
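The worked example can be retraced mechanically. The sketch below only verifies the numbers in the example above (it is not a general implementation of Kunerth's algorithm); the variable names follow the example's notation, and the modular inverse uses the three-argument pow available in Python 3.8+.

```python
N, B = 856, 41            # we seek y with y**2 ≡ 41 (mod 856)

# From the example: r = 13 satisfies r**2 ≡ -N (mod B), i.e. B divides r**2 + N.
r = 13
assert (r * r + N) % B == 0

# Expanding ((B*z + r)**2 + N) / B at z = 0 gives the constant term 25 = 5**2,
# so the C term is a perfect square and we take w = 5, v = 13.
assert (r * r + N) // B == 25
w, v = 5, r

# alpha and beta solve alpha == w * (v + w * beta); the example uses (15, -2).
alpha, beta = 15, -2
assert alpha == w * (v + w * beta)

# The quadratic alpha^2 x^2 + (2*alpha*beta - N) x + (beta^2 - 41)
# factors as (-37 + 9x)(1 + 25x); compare coefficients of both forms.
assert (alpha**2, 2 * alpha * beta - N, beta**2 - B) == (9 * 25, 9 * 1 + 25 * (-37), -37 * 1)

# Finally, y ≡ alpha * (37 * 9^{-1}) + beta (mod N), as in the example.
X = (37 * pow(9, -1, N)) % N
y = (alpha * X + beta) % N
assert y == 345 and (y * y) % N == B
print(y)                  # 345, a square root of 41 modulo 856
```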
[ { "math_id": 0, "text": "B = y^{2} \\bmod{N}," }, { "math_id": 1, "text": "r\\equiv \\sqrt{\\pm N} \\pmod{B}" }, { "math_id": 2, "text": "B" }, { "math_id": 3, "text": "w^{2}=A\\cdot z^{2}+B\\cdot z+C" }, { "math_id": 4, "text": "((B\\cdot z + r)^2 \\pm N)/B = w^{2}." }, { "math_id": 5, "text": "((B\\cdot z + r)^2 +(B\\cdot F+N))/B = w^{2}" }, { "math_id": 6, "text": "A\\cdot z^{2}+D\\cdot z+C+F" }, { "math_id": 7, "text": "C+F" }, { "math_id": 8, "text": "\\sqrt{67}\\bmod{67F+RSA260}" }, { "math_id": 9, "text": "((67 z+r)^2+X\\cdot RSA260)/(67y)" }, { "math_id": 10, "text": "(r^{2}+X\\cdot RSA260)/(67y)" }, { "math_id": 11, "text": "\\sqrt{67y}\\bmod{RSA260}" }, { "math_id": 12, "text": "\\alpha" }, { "math_id": 13, "text": "\\beta" }, { "math_id": 14, "text": "\\alpha = w (v + w \\beta )," }, { "math_id": 15, "text": "\\alpha^{2} \\cdot x^{2} + (2 \\alpha \\beta - N) x +(\\beta^{2} - (y^{2}\\bmod{N}))" }, { "math_id": 16, "text": "(-37 + 9 x) (1 + 25 x)" }, { "math_id": 17, "text": "y\\equiv \\alpha X + \\beta \\pmod{N}." }, { "math_id": 18, "text": "\\sqrt{41}\\bmod{856}," }, { "math_id": 19, "text": "\\sqrt{-856}\\equiv 13\\pmod{41}" }, { "math_id": 20, "text": "((41 z + 13)^2 + 856)/41 = w^2" }, { "math_id": 21, "text": "25 + 26 z + 41 z^{2}" }, { "math_id": 22, "text": "w=5" }, { "math_id": 23, "text": "v=13" }, { "math_id": 24, "text": "v=41\\cdot z+13" }, { "math_id": 25, "text": "\\alpha == w (v + w\\beta)" }, { "math_id": 26, "text": "\\alpha=15" }, { "math_id": 27, "text": "\\beta = -2" }, { "math_id": 28, "text": "\\alpha^2 x^2 + (2 \\alpha \\beta - 856) x + (\\beta^{2} - 41)" }, { "math_id": 29, "text": "15 \\cdot (37\\cdot 9^{-1})+(-2) \\equiv 345\\pmod{856}." }, { "math_id": 30, "text": "345^{2}\\equiv 41\\pmod{856}." }, { "math_id": 31, "text": "\\sqrt{-856}\\bmod{41}" }, { "math_id": 32, "text": "r\\equiv \\sqrt{-856}\\pmod{b\\cdot 856+41}" } ]
https://en.wikipedia.org/wiki?curid=69299670
693002
Löb's theorem
Provability logic In mathematical logic, Löb's theorem states that in Peano arithmetic (PA) (or any formal system including PA), for any formula "P", if it is provable in PA that "if "P" is provable in PA then "P" is true", then "P" is provable in PA. If Prov("P") means that the formula "P" is provable, we may express this more formally as If formula_0 then formula_1. An immediate corollary (the contrapositive) of Löb's theorem is that, if "P" is not provable in PA, then "if "P" is provable in PA, then "P" is true" is not provable in PA. For example, "If formula_2 is provable in PA, then formula_2" is not provable in PA. Löb's theorem is named for Martin Hugo Löb, who formulated it in 1955. It is related to Curry's paradox. Löb's theorem in provability logic. Provability logic abstracts away from the details of encodings used in Gödel's incompleteness theorems by expressing the provability of formula_3 in the given system in the language of modal logic, by means of the modality formula_4. That is, when formula_3 is a logical formula, another formula can be formed by placing a box in front of formula_3, and is intended to mean that formula_3 is provable. Then we can formalize Löb's theorem by the axiom formula_5 known as axiom GL, for Gödel–Löb. This is sometimes formalized by means of the inference rule: If formula_6 then formula_7. The provability logic GL that results from taking the modal logic K4 (or K, since the axiom schema 4, formula_8, then becomes redundant) and adding the above axiom GL is the most intensely investigated system in provability logic. Modal proof of Löb's theorem. Löb's theorem can be proved within modal logic using only some basic rules about the provability operator (the K4 system) plus the existence of modal fixed points. Modal formulas. We will assume the following grammar for formulas: A modal sentence is a formula in this syntax that contains no propositional variables. The notation formula_19 is used to mean that formula_11 is a theorem. Modal fixed points. If formula_20 is a modal formula with only one propositional variable formula_9, then a modal fixed point of formula_20 is a sentence formula_21 such that formula_22 We will assume the existence of such fixed points for every modal formula with one free variable. This is of course not an obvious thing to assume, but if we interpret formula_23 as provability in Peano Arithmetic, then the existence of modal fixed points follows from the diagonal lemma. Modal rules of inference. In addition to the existence of modal fixed points, we assume the following rules of inference for the provability operator formula_23, known as Hilbert–Bernays provability conditions: Proof of Löb's theorem. Much of the proof does not make use of the assumption formula_27, so for ease of understanding, the proof below is subdivided to leave the parts depending on formula_27 until the end. Let formula_28 be any modal sentence. More informally, we can sketch out the proof as follows. Examples. An immediate corollary of Löb's theorem is that, if "P" is not provable in PA, then "if "P" is provable in PA, then "P" is true" is not provable in PA. Given that we know PA is consistent (but PA does not know PA is consistent), here are some simple examples: In doxastic logic, Löb's theorem shows that any system classified as a "reflexive" "type 4" reasoner must also be "modest": such a reasoner can never believe "my belief in P would imply that P is true" without also believing that P is true. 
Gödel's second incompleteness theorem follows from Löb's theorem by substituting the false statement formula_52 for "P". Converse: Löb's theorem implies the existence of modal fixed points. Not only does the existence of modal fixed points imply Löb's theorem, but the converse is valid, too. When Löb's theorem is given as an axiom (schema), the existence of a fixed point (up to provable equivalence) formula_53 for any formula "A"("p") modalized in "p" can be derived. Thus in normal modal logic, Löb's axiom is equivalent to the conjunction of the axiom schema 4, formula_54, and the existence of modal fixed points. Notes. <templatestyles src="Reflist/styles.css" />
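As an aside not drawn from the article: axiom GL also has a standard Kripke-frame semantics (it is valid on finite transitive, irreflexive frames), which gives a quick way to experiment with the axiom. The sketch below brute-forces one such frame and every valuation of a single propositional variable P; the particular four-world chain is an arbitrary choice.

```python
from itertools import chain, combinations

# A small transitive, irreflexive Kripke frame (a finite strict order 0 < 1 < 2 < 3).
worlds = [0, 1, 2, 3]
R = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)}

def box(S):
    """Worlds at which 'box phi' holds, given the set S of worlds where phi holds."""
    return {w for w in worlds if all(v in S for (u, v) in R if u == w)}

def implies(S, T):
    """Worlds at which 'phi -> psi' holds, given the truth sets S and T."""
    return {w for w in worlds if w not in S or w in T}

def powerset(xs):
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

for P in map(set, powerset(worlds)):                    # every valuation of P
    gl = implies(box(implies(box(P), P)), box(P))       # box(box P -> P) -> box P
    assert gl == set(worlds)                            # the GL axiom holds everywhere
print("GL holds at every world for every valuation of P on this frame")
```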
[ { "math_id": 0, "text": "\\mathit{PA} \\vdash {\\mathrm{Prov}(P) \\rightarrow P}" }, { "math_id": 1, "text": "\\mathit{PA} \\vdash P" }, { "math_id": 2, "text": "1+1=3" }, { "math_id": 3, "text": "\\phi" }, { "math_id": 4, "text": "\\Box \\phi" }, { "math_id": 5, "text": "\\Box(\\Box P\\rightarrow P)\\rightarrow \\Box P," }, { "math_id": 6, "text": "\\vdash \\Box P \\rightarrow P" }, { "math_id": 7, "text": "\\vdash P" }, { "math_id": 8, "text": "\\Box A\\rightarrow\\Box\\Box A" }, { "math_id": 9, "text": "X" }, { "math_id": 10, "text": "K" }, { "math_id": 11, "text": "A" }, { "math_id": 12, "text": "\\Box A" }, { "math_id": 13, "text": "B" }, { "math_id": 14, "text": "\\neg A" }, { "math_id": 15, "text": "A \\rightarrow B" }, { "math_id": 16, "text": "A \\wedge B" }, { "math_id": 17, "text": "A \\vee B" }, { "math_id": 18, "text": "A \\leftrightarrow B" }, { "math_id": 19, "text": "\\vdash A" }, { "math_id": 20, "text": "F(X)" }, { "math_id": 21, "text": "\\Psi" }, { "math_id": 22, "text": "\\vdash \\Psi \\leftrightarrow F(\\Box \\Psi)" }, { "math_id": 23, "text": "\\Box" }, { "math_id": 24, "text": "\\vdash \\Box A" }, { "math_id": 25, "text": "\\vdash \\Box A \\rightarrow \\Box \\Box A" }, { "math_id": 26, "text": "\\vdash \\Box (A \\rightarrow B) \\rightarrow (\\Box A \\rightarrow \\Box B)" }, { "math_id": 27, "text": "\\Box P \\to P" }, { "math_id": 28, "text": "P" }, { "math_id": 29, "text": "F(X) = X \\rightarrow P" }, { "math_id": 30, "text": "\\vdash \\Psi \\leftrightarrow (\\Box \\Psi \\rightarrow P)" }, { "math_id": 31, "text": "\\vdash \\Psi \\rightarrow (\\Box \\Psi \\rightarrow P)" }, { "math_id": 32, "text": "\\vdash \\Box(\\Psi \\rightarrow (\\Box \\Psi \\rightarrow P))" }, { "math_id": 33, "text": "\\vdash \\Box\\Psi \\rightarrow \\Box(\\Box \\Psi \\rightarrow P)" }, { "math_id": 34, "text": "\\vdash \\Box(\\Box \\Psi \\rightarrow P) \\rightarrow (\\Box\\Box\\Psi \\rightarrow \\Box P)" }, { "math_id": 35, "text": " A = \\Box \\Psi " }, { "math_id": 36, "text": " B= P" }, { "math_id": 37, "text": "\\vdash \\Box \\Psi \\rightarrow (\\Box\\Box\\Psi \\rightarrow \\Box P)" }, { "math_id": 38, "text": "\\vdash \\Box \\Psi \\rightarrow \\Box \\Box \\Psi" }, { "math_id": 39, "text": "\\vdash \\Box \\Psi \\rightarrow \\Box P" }, { "math_id": 40, "text": "\\vdash \\Box \\Psi \\rightarrow P" }, { "math_id": 41, "text": "\\vdash (\\Box \\Psi \\rightarrow P) \\rightarrow \\Psi" }, { "math_id": 42, "text": "\\vdash \\Psi" }, { "math_id": 43, "text": "\\vdash \\Box \\Psi" }, { "math_id": 44, "text": "\\mathit{PA} \\vdash {\\mathrm{Prov}_{PA}(P) \\rightarrow P}" }, { "math_id": 45, "text": "\\mathit{PA} \\vdash {\\neg P \\rightarrow \\neg \\mathrm{Prov}_{PA}(P)} " }, { "math_id": 46, "text": "\\{ \\mathit{PA}, \\neg P\\} \\vdash { \\neg \\mathrm{Prov}_{PA} (P)} " }, { "math_id": 47, "text": "\\{ \\mathit{PA}, \\neg P\\} " }, { "math_id": 48, "text": "\\neg P \\to \\bot{} " }, { "math_id": 49, "text": "P " }, { "math_id": 50, "text": "\\neg \\mathrm{Prov}_{PA} (P) " }, { "math_id": 51, "text": "1+1=2" }, { "math_id": 52, "text": "\\bot" }, { "math_id": 53, "text": "p\\leftrightarrow A(p)" }, { "math_id": 54, "text": "(\\Box A\\rightarrow \\Box\\Box A)" } ]
https://en.wikipedia.org/wiki?curid=693002
69310432
Quantification (machine learning)
Machine learning practice of supervised learning In machine learning and data mining, quantification (variously called "learning to quantify", or "supervised prevalence estimation", or "class prior estimation") is the task of using supervised learning in order to train models ("quantifiers") that estimate the relative frequencies (also known as prevalence "values") of the classes of interest in a sample of unlabelled data items. For instance, in a sample of 100,000 unlabelled tweets known to express opinions about a certain political candidate, a quantifier may be used to estimate the percentage of these 100,000 tweets which belong to class `Positive' (i.e., which manifest a positive stance towards this candidate), and to do the same for classes `Neutral' and `Negative'. Quantification may also be viewed as the task of training predictors that estimate a (discrete) probability distribution, i.e., that generate a predicted distribution that approximates the unknown true distribution of the items across the classes of interest. Quantification is different from classification, since the goal of classification is to predict the class labels of individual data items, while the goal of quantification is to predict the class prevalence values of sets of data items. Quantification is also different from regression, since in regression the training data items have real-valued labels, while in quantification the training data items have class labels. It has been shown in multiple research works that performing quantification by classifying all unlabelled instances and then counting the instances that have been attributed to each class (the 'classify and count' method) usually leads to suboptimal quantification accuracy. This suboptimality may be seen as a direct consequence of 'Vapnik's principle', which states: "If you possess a restricted amount of information for solving some problem, try to solve the problem directly and never solve a more general problem as an intermediate step. It is possible that the available information is sufficient for a direct solution but is insufficient for solving a more general intermediate problem." In our case, the problem to be solved directly is quantification, while the more general intermediate problem is classification. As a result of the suboptimality of the 'classify and count' method, quantification has evolved as a task in its own right, different (in goals, methods, techniques, and evaluation measures) from classification. Quantification tasks. The main variants of quantification, according to the characteristics of the set of classes used, are: Most known quantification methods address the binary case or the single-label multiclass case, and only a few of them address the ordinal case or the "regression" case. Binary-only methods include the "Mixture Model" (MM) method, the HDy method, SVM(KLD), and SVM(Q). Methods that can deal with both the binary case and the single-label multiclass case include "probabilistic classify and count" (PCC), "adjusted classify and count" (ACC), "probabilistic adjusted classify and count" (PACC), and the Saerens-Latinne-Decaestecker EM-based method (SLD). Methods for the ordinal case include "Ordinal Quantification Tree" (OQT), and ordinal versions of the above-mentioned ACC, PACC, and SLD methods. A number of methods that address regression quantification have also been proposed. Evaluation measures for quantification. 
Several evaluation measures can be used for evaluating the error of a quantification method. Since quantification consists of generating a predicted probability distribution that estimates a true probability distribution, these evaluation measures are ones that compare two probability distributions. Most evaluation measures for quantification belong to the class of divergences. Evaluation measures for binary quantification and single-label multiclass quantification are Evaluation measures for ordinal quantification are Applications. Quantification is of special interest in fields such as the social sciences, epidemiology, market research, and ecological modelling, since these fields are inherently concerned with aggregate data. However, quantification is also useful as a building block for solving other downstream tasks, such as measuring classifier bias, performing word sense disambiguation, allocating resources, and improving the accuracy of classifiers. References. <templatestyles src="Reflist/styles.css" />
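To make the contrast with 'classify and count' concrete, here is a minimal sketch of binary CC and of the 'adjusted classify and count' (ACC) correction mentioned above, which rescales the CC estimate using the classifier's true-positive and false-positive rates estimated on held-out labelled data. The data, the thresholding "classifier", and all numbers are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify(scores, threshold=0.5):
    """A stand-in classifier: predict positive when the score exceeds a threshold."""
    return (scores > threshold).astype(int)

def cc(pred_labels):
    """Classify and count: predicted prevalence of the positive class."""
    return pred_labels.mean()

def acc(pred_labels, tpr, fpr):
    """Adjusted classify and count: correct the CC estimate with validation tpr/fpr."""
    p_cc = cc(pred_labels)
    if tpr == fpr:                     # degenerate classifier: no correction possible
        return float(p_cc)
    return float(np.clip((p_cc - fpr) / (tpr - fpr), 0.0, 1.0))

def sample(n_pos, n_neg):
    """Synthetic scores: positives score high on average, negatives low."""
    scores = np.concatenate([rng.normal(0.65, 0.2, n_pos), rng.normal(0.35, 0.2, n_neg)])
    labels = np.concatenate([np.ones(n_pos, int), np.zeros(n_neg, int)])
    return scores, labels

# Held-out labelled data -> estimate the classifier's tpr and fpr.
val_scores, val_labels = sample(500, 500)
val_pred = classify(val_scores)
tpr = val_pred[val_labels == 1].mean()
fpr = val_pred[val_labels == 0].mean()

# Unlabelled test sample whose true positive prevalence is 0.8 (prior shift).
test_scores, _ = sample(800, 200)
test_pred = classify(test_scores)

print("true prevalence 0.80, CC:", round(cc(test_pred), 3), "ACC:", round(acc(test_pred, tpr, fpr), 3))
```

On this synthetic prior-shifted sample, CC underestimates the positive prevalence while the adjusted estimate lands close to the true value of 0.8, which is the point the 'classify and count' discussion above is making.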
[ { "math_id": 0, "text": "n=2" }, { "math_id": 1, "text": "n>2" } ]
https://en.wikipedia.org/wiki?curid=69310432
69314250
Retinomorphic sensor
Optical sensor Retinomorphic sensors are a type of event-driven optical sensor which produce a signal in response to "changes" in light intensity, rather than to light intensity itself. This is in contrast to conventional optical sensors such as charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) based sensors, which output a signal that increases with increasing light intensity. Because they respond to movement only, retinomorphic sensors are hoped to enable faster tracking of moving objects than conventional image sensors, and have potential applications in autonomous vehicles, robotics, and neuromorphic engineering. Naming and history. The first so-called "artificial retinas" were reported in the late 1980s by Carver Mead and his doctoral students Misha Mahowald and Tobias Delbrück. These silicon-based sensors were based on small circuits involving differential amplifiers, capacitors, and resistors. The sensors produced a spike and subsequent decay in output voltage in response to a step-change in illumination intensity. This response is analogous to that of animal retinal cells, which in the 1920s were observed to fire more frequently when the intensity of light was changed than when it was constant. The name silicon retina has hence been used to describe these sensors. The term "retinomorphic" was first used in a conference paper by Lex Akers in 1990. The term received wider use by Stanford Professor of Engineering Kwabena Boahen, and has since been applied to a wide range of event-driven sensing strategies. The word is analogous to neuromorphic, which is applied to hardware elements (such as processors) designed to replicate the way the brain processes information. Operating principles. There are several retinomorphic sensor designs which yield a similar response. The first designs employed a differential amplifier which compared the input signal from a conventional sensor (e.g. a phototransistor) to a filtered version of the output, resulting in a gradual decay if the input was constant. Since the 1980s these sensors have evolved into much more complex and robust circuits. A more compact design of retinomorphic sensor consists of just a photosensitive capacitor and a resistor in series. The output voltage of these retinomorphic sensors, formula_0, is defined as the voltage dropped across the resistor. The photosensitive capacitor is designed to have a capacitance which is a function of incident light intensity. If a constant voltage formula_1 is applied across this RC circuit it will act as a passive high-pass filter and all voltage will be dropped across the capacitor (i.e. formula_2). After a sufficient amount of time, the plates of the capacitor will be fully charged with a charge formula_3 on each plate, where formula_4 is the capacitance in the dark. Since formula_2 under constant illumination, this can be simplified to formula_5. If light is then applied to the capacitor it will change capacitance to a new value: formula_6. The charge that the plates can accommodate will therefore change to formula_7, leaving a surplus or deficit of charge on each plate. The excess charge will be forced to leave the plates, flowing either to ground or the input voltage terminal. The rate of charge flow is determined by the resistance of the resistor formula_8 and the capacitance of the capacitor. This charge flow will lead to a non-zero voltage being dropped across the resistor and hence a non-zero formula_0. 
After the charge stops flowing the system returns to steady-state, all the voltage is once again dropped across the capacitor, and formula_2 again. For a capacitor to change its capacitance under illumination, the dielectric constant of the insulator between the plates, or the effective dimensions of the capacitor, must be illumination-dependent. The effective dimensions can be changed by using a bilayer material between the plates, consisting of an insulator and a semiconductor. Under appropriate illumination conditions the semiconductor will increase its conductivity when exposed to light, emulating the process of moving the plates of the capacitor closer together, and therefore increasing capacitance. For this to be possible, the semiconductor must have a low electrical conductivity in the dark, and have an appropriate band gap to enable charge generation under illumination. The device must also allow optical access to the semiconductor, through a transparent plate (e.g. using a transparent conducting oxide). Applications. Conventional cameras capture every part of an image, regardless of whether it is relevant to the task. Because every pixel is measured, conventional image sensors are only able to sample the visual field at relatively low frame rates, typically 30 to 240 frames per second. Even in professional high-speed cameras used for motion pictures, the frame rate is limited to a few tens of thousands of frames per second for a full-resolution image. This limitation could represent a performance bottleneck in the identification of high-speed moving objects. This is particularly important in applications where rapid identification of movement is critical, such as in autonomous vehicles. By contrast, retinomorphic sensors identify movement by design. This means that they do not have a frame rate and instead are event-driven, responding only when needed. For this reason, retinomorphic sensors are hoped to enable identification of moving objects much more quickly than conventional real-time image analysis strategies. Retinomorphic sensors are therefore hoped to have applications in autonomous vehicles, robotics, and neuromorphic engineering. Theory. Retinomorphic sensor operation can be quantified using similar techniques to simple RC circuits, the only difference being that capacitance is not constant as a function of time in a retinomorphic sensor. If the input voltage is defined as formula_1, the voltage dropped across the resistor as formula_0, and the voltage dropped across the capacitor as formula_9, we can use Kirchhoff's Voltage Law to state: formula_10 Defining the current flowing through the resistor as formula_11, we can use Ohm's Law to write: formula_12 From the definition of current, we can then write this in terms of charge, formula_13, flowing off the bottom plate: formula_14 where formula_15 is time. Charge on the capacitor plates is defined by the product of capacitance, formula_16, and the voltage across the capacitor, formula_9; we can hence say: formula_17 Because capacitance in retinomorphic sensors is a function of time, formula_16 cannot be taken out of the derivative as a constant. Using the product rule, we get the following general equation of retinomorphic sensor response: formula_18 or, in terms of the output voltage: formula_19 Response to a step-change in intensity. While the equation above is valid for any form of formula_20, it cannot be solved analytically unless the input form of the optical stimulus is known. 
The simplest form of optical stimulus would be a step function going from zero to some finite optical power density formula_21 at a time formula_22. While real-world applications of retinomorphic sensors are unlikely to be accurately described by such events, it is a useful way to understand and benchmark the performance of retinomorphic sensors. In particular, we are primarily concerned with the maximum height of formula_0 immediately after the light has been turned on. In this case the capacitance could be described by: formula_23 The capacitance under illumination will depend on formula_21. Semiconductors are known to have a conductance, formula_24, which increases with a power-law dependence on incident optical power density: formula_25, where formula_26 is a dimensionless exponent. Since formula_24 is linearly proportional to charge density, and capacitance is linearly proportional to charges on the plates for a given voltage, the capacitance of a retinomorphic sensor also has a power-law dependence on formula_21. The capacitance as a function of time in response to a step function can therefore be written as: formula_27 where formula_28 is the capacitance prefactor. For a step function we can re-write our differential equation for formula_1 as a difference equation: formula_29 where formula_30 is the change in voltage dropped across the capacitor as a result of turning on the light, formula_31 is the change in capacitance as a result of turning on the light, and formula_32 is the time taken for the light to turn on. The variables formula_9 and formula_16 are defined as the voltage dropped across the capacitor and the capacitance, respectively, immediately after the light has been turned on. That is, formula_9 is henceforth shorthand for formula_33, and formula_16 is henceforth shorthand for formula_34. Assuming the sensor has been held in the dark for sufficiently long before the light is turned on, the change in formula_9 can hence be written as: formula_35 Similarly, the change in formula_16 can be written as formula_36 Putting these into the difference equation for formula_1: formula_37 Multiplying this out: formula_41 Since we are assuming the light turns on very quickly we can approximate formula_42. This leads to the following: formula_43 Using the relationship formula_10, this can then be written in terms of the output voltage: formula_44 where we have defined the peak height as formula_45, since the peak occurs immediately after the light has been turned on. The "retinomorphic figure of merit", formula_38, is defined as the ratio of the capacitance prefactor and the capacitance of the retinomorphic sensor in the dark: formula_46 With this parameter, the inverse ratio of peak height to input voltage can be written as follows: formula_47 The value of formula_26 will depend on the nature of recombination in the semiconductor, but if band-to-band recombination dominates and the charge densities of electrons and holes are equal, formula_48. For systems where this is approximately true the following simplification to the above equation can be made: formula_49 This equation provides a simple method for evaluating the retinomorphic figure of merit from experimental data. This can be carried out by measuring the peak height, formula_50, of a retinomorphic sensor in response to a step change in light intensity from 0 to formula_21, for a range of values of formula_21. Plotting formula_39 as a function of formula_51 should yield a straight line with a gradient of formula_40. 
This approach assumes that formula_50 is linearly proportional to formula_1.
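The final relation above suggests a simple numerical experiment. In the sketch below (synthetic numbers only; the chosen figure of merit, bias voltage and power densities are arbitrary), peak heights are generated from V_in/V_max = 2 + 1/(Λ·P^(1/2)) and the retinomorphic figure of merit is then recovered from the gradient of V_in/V_max plotted against P^(-1/2), exactly as described in the preceding paragraph.

```python
import numpy as np

V_in = 1.0           # applied bias (arbitrary units)
Lambda_true = 0.8    # retinomorphic figure of merit used to generate the data

# Optical power densities and the corresponding peak heights V_max,
# generated from V_in / V_max = 2 + 1 / (Lambda * sqrt(P)).
P = np.linspace(0.05, 2.0, 25)
V_max = V_in / (2.0 + 1.0 / (Lambda_true * np.sqrt(P)))

# Recover Lambda: fit V_in/V_max against P**(-1/2); the gradient is 1/Lambda
# and the intercept should be close to 2.
x = P ** -0.5
y = V_in / V_max
slope, intercept = np.polyfit(x, y, 1)
print("gradient:", slope, "intercept:", intercept)   # expect ~1.25 and ~2
print("recovered figure of merit:", 1.0 / slope)     # expect ~0.8
```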
[ { "math_id": 0, "text": "V_{out}" }, { "math_id": 1, "text": "V_{in}" }, { "math_id": 2, "text": "V_{out}=0" }, { "math_id": 3, "text": "Q=\\pm C_{dark}(V_{in}-V_{out})" }, { "math_id": 4, "text": "C_{dark}" }, { "math_id": 5, "text": "Q=\\pm C_{dark}V_{in}" }, { "math_id": 6, "text": "C_{light}" }, { "math_id": 7, "text": "Q=\\pm C_{light}(V_{in}-V_{out})" }, { "math_id": 8, "text": "R" }, { "math_id": 9, "text": "V_{C}" }, { "math_id": 10, "text": "V_{in}=V_{C}+V_{out}" }, { "math_id": 11, "text": "I" }, { "math_id": 12, "text": "V_{in}=V_{C}+IR" }, { "math_id": 13, "text": "Q" }, { "math_id": 14, "text": "V_{in}=V_{C}+R\\frac{dQ}{dt}" }, { "math_id": 15, "text": "t" }, { "math_id": 16, "text": "C" }, { "math_id": 17, "text": "V_{in}=V_{C}+R\\frac{d}{dt}(CV_{C})" }, { "math_id": 18, "text": "V_{in}=V_{C}+R\\bigg[C\\frac{dV_{C}}{dt}+V_{C}\\frac{dC}{dt}\\bigg]" }, { "math_id": 19, "text": "V_{out}=R\\bigg[C\\frac{dV_{C}}{dt}+V_{C}\\frac{dC}{dt}\\bigg]" }, { "math_id": 20, "text": "C(t)" }, { "math_id": 21, "text": "P" }, { "math_id": 22, "text": "t=t_{0}" }, { "math_id": 23, "text": "C(t) = \\begin{cases} C_{dark}, & \\text{for }t<t_{0} \\\\ C_{light}, & \\text{for }t\\geq t_{0} \\end{cases}" }, { "math_id": 24, "text": "G" }, { "math_id": 25, "text": "G \\propto P^{\\gamma}" }, { "math_id": 26, "text": "\\gamma" }, { "math_id": 27, "text": "C(t) = \\begin{cases} C_{dark}, & \\text{for }t<t_{0} \\\\ C_{dark}+\\alpha_{0} P^{\\gamma}, & \\text{for }t\\geq t_{0} \\end{cases}" }, { "math_id": 28, "text": "\\alpha_{0}" }, { "math_id": 29, "text": "V_{in}=V_{C}+R\\bigg[C\\frac{\\Delta V_{C}}{\\Delta t}+V_{C}\\frac{\\Delta C}{\\Delta t}\\bigg]" }, { "math_id": 30, "text": "\\Delta V_{C}" }, { "math_id": 31, "text": "\\Delta C" }, { "math_id": 32, "text": "\\Delta t " }, { "math_id": 33, "text": "V_{C}(t_{0})" }, { "math_id": 34, "text": "C(t_{0})" }, { "math_id": 35, "text": "\\Delta V_{C}=V_{C}-V_{in}" }, { "math_id": 36, "text": "\\Delta C = \\alpha_{0} P^{\\gamma}" }, { "math_id": 37, "text": "V_{in}=V_{C}+R\\bigg[C\\frac{V_{C}-V_{in}}{\\Delta t}+V_{C}\\frac{\\alpha_{0} P^{\\gamma}}{\\Delta t}\\bigg]" }, { "math_id": 38, "text": "\\Lambda" }, { "math_id": 39, "text": "V_{in}/V_{max}" }, { "math_id": 40, "text": "1/\\Lambda" }, { "math_id": 41, "text": "V_{in}\\Delta t=V_{C}\\Delta t+R\\bigg[C(V_{C}-V_{in})+V_{C}\\alpha_{0} P^{\\gamma}\\bigg]" }, { "math_id": 42, "text": "\\Delta t\\approx0" }, { "math_id": 43, "text": "V_{C}=V_{in}\\frac{C_{dark}+\\alpha_{0} P^{\\gamma}}{C_{dark}+2\\alpha_{0} P^{\\gamma}}" }, { "math_id": 44, "text": "V_{max}=V_{in}\\frac{\\alpha_{0} P^{\\gamma}}{C_{dark}+2\\alpha_{0} P^{\\gamma}}" }, { "math_id": 45, "text": "V_{max}=V_{out}(t_{0})" }, { "math_id": 46, "text": "\\Lambda=\\frac{\\alpha_{0}}{C_{dark}}" }, { "math_id": 47, "text": "\\frac{V_{in}}{V_{max}}=2+\\frac{1}{\\Lambda P^{\\gamma}}" }, { "math_id": 48, "text": "\\gamma=0.5" }, { "math_id": 49, "text": "\\frac{V_{in}}{V_{max}}=2+\\frac{1}{\\Lambda P^{1/2}}" }, { "math_id": 50, "text": "V_{max}" }, { "math_id": 51, "text": "P^{-1/2}" } ]
https://en.wikipedia.org/wiki?curid=69314250
693185
Monotonicity of entailment
Property of many systems of logic Monotonicity of entailment is a property of many logical systems such that if a sentence follows deductively from a given set of sentences then it also follows deductively from any superset of those sentences. A corollary is that if a given argument is deductively valid, it cannot become invalid by the addition of extra premises. Logical systems with this property are called monotonic logics in order to differentiate them from non-monotonic logics. Classical logic and intuitionistic logic are examples of monotonic logics. Weakening rule. Monotonicity may be stated formally as a rule called weakening, or sometimes thinning. A system is monotonic if and only if the rule is admissible. The weakening rule may be expressed as a natural deduction sequent: formula_0 This can be read as saying that if, on the basis of a set of assumptions formula_1, one can prove C, then by adding an assumption A, one can still prove C. Example. The following argument is valid: "All men are mortal. Socrates is a man. Therefore Socrates is mortal." This can be weakened by adding a premise: "All men are mortal. Socrates is a man. Cows produce milk. Therefore Socrates is mortal." By the property of monotonicity, the argument remains valid with the additional premise, even though the premise is irrelevant to the conclusion. Non-monotonic logics. In most logics, weakening is either an inference rule or a metatheorem if the logic doesn't have an explicit rule. Notable exceptions are: Notes. <templatestyles src="Reflist/styles.css" />
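The property is easy to check mechanically in a classical propositional setting. The sketch below is a propositional analogue of the Socrates example above (with p standing in for "Socrates is a man", q for "Socrates is mortal", and r for the irrelevant extra premise); it brute-forces all truth assignments and confirms that adding a premise cannot break entailment.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Classical semantic entailment, checked by brute force over truth assignments."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

atoms = ["p", "q", "r"]
p      = lambda v: v["p"]                   # "Socrates is a man"
p_to_q = lambda v: (not v["p"]) or v["q"]   # "if Socrates is a man, Socrates is mortal"
q      = lambda v: v["q"]                   # "Socrates is mortal"
r      = lambda v: v["r"]                   # irrelevant extra premise ("cows produce milk")

print(entails([p, p_to_q], q, atoms))       # True: the original argument is valid
print(entails([p, p_to_q, r], q, atoms))    # True: still valid with the extra premise
```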
[ { "math_id": 0, "text": "\\frac{\\Gamma \\vdash C}{\\Gamma, A \\vdash C } " }, { "math_id": 1, "text": "\\Gamma" } ]
https://en.wikipedia.org/wiki?curid=693185
693197
Μ operator
Concept in computability theory In computability theory, the μ-operator, minimization operator, or unbounded search operator searches for the least natural number with a given property. Adding the μ-operator to the primitive recursive functions makes it possible to define all computable functions. Definition. Suppose that R("y", "x"1, ..., "x""k") is a fixed ("k"+1)-ary relation on the natural numbers. The μ-operator "μ"y"", in either the unbounded or bounded form, is a "number theoretic function" defined from the natural numbers to the natural numbers. However, "μ"y"" contains a "predicate" over the natural numbers, which can be thought of as a condition that evaluates to "true" when the predicate is satisfied and "false" when it is not. The "bounded" μ-operator appears earlier in Kleene (1952) "Chapter IX Primitive Recursive Functions, §45 Predicates, prime factor representation" as: "formula_0" (p. 225) Stephen Kleene notes that any of the six inequality restrictions on the range of the variable "y" is permitted, i.e. "y" &lt; "z", "y" ≤ "z", "w" &lt; "y" &lt; "z", "w" &lt; "y" ≤ "z", "w" ≤ "y" &lt; "z" and "w" ≤ "y" ≤ "z". "When the indicated range contains no "y" such that R("y") [is "true"], the value of the "μ"y"" expression is the cardinal number of the range" (p. 226); this is why the default "z" appears in the definition above. As shown below, the bounded μ-operator "μ"y""y"&lt;"z"" is defined in terms of two primitive recursive functions called the finite sum Σ and finite product Π, a predicate function that "does the test" and a representing function that converts {t, f} to {"0", "1"}. In Chapter XI §57 General Recursive Functions, Kleene defines the "unbounded" μ-operator over the variable "y" in the following manner, "formula_1" (p. 279, where "formula_2" means "there exists a "y" such that...") In this instance R itself, or its representing function, delivers "0" when it is satisfied (i.e. delivers "true"); the function then delivers the number "y". No upper bound exists on "y", hence no inequality expressions appear in its definition. For a given R("y") the unbounded μ-operator μ"y"R("y") (note no requirement for "(E"y")" ) is a partial function. Kleene makes it as a total function instead (cf. p. 317): formula_3 The total version of the unbounded μ-operator is studied in "higher-order" reverse mathematics () in the following form: formula_4 where the superscripts mean that "n" is zeroth-order, "f" is first-order, and μ is second-order. This axiom gives rise to the Big Five system ACA0 when combined with the usual base theory of higher-order reverse mathematics. Properties. (i) In context of the primitive recursive functions, where the search variable "y" of the μ-operator is bounded, e.g. "y" &lt; "z" in the formula below, if the predicate R is primitive recursive (Kleene Proof #E p. 228), then μ"y""y"&lt;"z"R("y", "x"1, ..., "x"n) is a primitive recursive function. (ii) In the context of the (total) recursive functions, where the search variable "y" is "unbounded" but guaranteed to exist for "all" values "x""i" of the total recursive predicate R's parameters, ("x"1)...,("x""n") (Ey) R("y", "x""i", ..., "x""n") implies that μ"y"R("y", "x""i", ..., "x""n") is a total recursive function. Here ("x""i") means "for all "x""i"" and E"y" means "there exists at least one value of "y" such that..." (cf Kleene (1952) p. 279.) then the five primitive recursive operators plus the unbounded-but-total μ-operator give rise to what Kleene called the "general" recursive functions (i.e. 
total functions defined by the six recursion operators). (iii) In the context of the partial recursive functions: Suppose that the relation R holds if and only if a partial recursive function converges to zero. And suppose that that partial recursive function converges (to something, not necessarily zero) whenever μ"y"R("y", "x"1, ..., "x""k") is defined and "y" is μ"y"R("y", "x"1, ..., "x""k") or smaller. Then the function μ"y"R("y", "x"1, ..., "x""k") is also a partial recursive function. The μ-operator is used in the characterization of the computable functions as the μ recursive functions. In constructive mathematics, the unbounded search operator is related to Markov's principle. "In the following" x "represents the string" "x""i", ..., "x""n". Examples. Example 1: The bounded μ-operator is a primitive recursive function. The "bounded" μ-operator can be expressed rather simply in terms of two primitive recursive functions (hereafter "prf") that also are used to define the CASE function—the product-of-terms Π and the sum-of-terms Σ (cf Kleene #B page 224). (As needed, any boundary for the variable such as "s" ≤ "t" or "t" &lt; "z", or 5 &lt; "x" &lt; 17 etc. is appropriate). For example: *Π"s"≤"t" f"s"(x, "s") = f0(x, 0) × f1(x, 1) × ... × ft(x, "t") *Σ"t"&lt;"z" g"t"(x, "t") = g0(x, 0) + g1(x, 1) + ... + gz-1(x, "z"-1) Before we proceed we need to introduce a function ψ called "the representing function" of predicate R. Function ψ is defined from inputs (t = "truth", f = "falsity") to outputs (0, 1) ("note the order!"). In this case the input to ψ. i.e. {t, f}. is coming from the output of R: * ψ(R = t) = 0 * ψ(R = f) = 1 Kleene demonstrates that μ"y""y"&lt;"z"R("y") is defined as follows; we see the product function Π is acting like a Boolean OR operator, and the sum Σ is acting somewhat like a Boolean AND but is producing {Σ≠0, Σ=0} rather than just {1, 0}: μ"y""y"&lt;"z"R("y") = Σ"t"&lt;"z"Π"s"≤"t" ψ(R(x, "t", "s")) = [ψ(x, 0, 0)] + [ψ(x, 1, 0) × ψ(x, 1, 1)] + [ψ(x, 2, 0) × ψ(x, 2, 1) × ψ(x, 2, 2)] + [ψ(x, "z"-1, 0) × ψ(x, "z"-1, 1) × ψ(x, "z"-1, 2) × . . . × ψ (x, "z"-1, "z"-1)] Note that "Σ is actually a primitive recursion with the base" Σ(x, 0) = 0 "and the induction step" Σ(x, "y"+1) = Σ(x, "y") + Π( x, "y"). "The product Π is also a primitive recursion with base step" Π(x, 0) = ψ(x, 0) "and induction step" Π(x, "y"+1) = Π(x, "y") × ψ(x, "y"+1). The equation is easier if observed with an example, as given by Kleene. He just made up the entries for the representing function ψ(R("y")). He designated the representing functions χ("y") rather than ψ(x, "y"): Example 2: The unbounded μ-operator is not primitive-recursive. The unbounded μ-operator—the function μ"y"—is the one commonly defined in the texts. But the reader may wonder why the unbounded μ-operator is searching for a function R(x, "y") to yield "zero", rather than some other natural number. "In a footnote Minsky does allow his operator to terminate when the function inside produces a match to the parameter" "k"; "this example is also useful because it shows another author's format:" "For μ"t"[φ("t") = "k"]" (p. 210) The reason for "zero" is that the unbounded operator μ"y" will be defined in terms of the function "product" Π with its index "y" allowed to "grow" as the μ-operator searches. 
As noted in the example above, the product Π"x"&lt;"y" of a string of numbers ψ(x, 0) *, ..., * ψ(x, "y") yields zero whenever one of its members ψ(x, "i") is zero: Π"s"&lt;"y" = ψ(x, 0) * , ..., * ψ(x, "y") = 0 if any ψ(x, "i") = 0 where 0≤"i"≤"s". Thus the Π is acting like a Boolean AND. The function μ"y" produces as "output" a single natural number "y" = {0, 1, 2, 3, ...}. However, inside the operator one of a couple "situations" can appear: (a) a "number-theoretic function" χ that produces a single natural number, or (b) a "predicate" R that produces either {t = true, f = false}. (And, in the context of "partial" recursive functions Kleene later admits a third outcome: "μ = undecided".) Kleene splits his definition of the unbounded μ-operator to handle the two situations (a) and (b). For situation (b), before the predicate R(x, "y") can serve in an arithmetic capacity in the product Π, its output {t, f} must first be "operated on" by its "representing function χ" to yield {0, 1}. And for situation (a) if one definition is to be used then the "number theoretic function χ" must produce zero to "satisfy" the μ-operator. With this matter settled, he demonstrates with single "Proof III" that either types (a) or (b) together with the five primitive recursive operators yield the (total) recursive functions, with this proviso for a total function: "For all parameters" x, "a demonstration must be provided to show that a "y" exists that satisfies (a)" μ"y"ψ(x, "y")" or (b)" μ"y"R(x, "y"). Kleene also admits a third situation (c) that does not require the demonstration of "for all x a "y" exists such that ψ(x, "y")." He uses this in his proof that more total recursive functions exist than can be enumerated; c.f. footnote Total function demonstration. Kleene's proof is informal and uses an example similar to the first example, but first he casts the μ-operator into a different form that uses the "product-of-terms" Π operating on function χ that yields a natural number "n", which can be any natural number, and 0 in the instance when the u-operator's test is "satisfied". The definition recast with the Π-function: μ"y""y"&lt;"z"χ("y") = *(i): π(x, "y") = Π"s"&lt;"y"χ(x, "s") *(ii): φ(x) = τ(π(x, "y"), π(x, "y' "), "y") *(iii): τ("z' ", 0, "y") = "y" ;τ("u", "v", "w") is undefined for "u" = 0 or "v" &gt; 0. This is subtle. At first glance the equations seem to be using primitive recursion. But Kleene has not provided us with a base step and an induction step of the general form: * base step: φ(0, x) = φ(x) * induction step: φ(0, x) = ψ(y, φ(0,x), x) To see what is going on, we first have to remind ourselves that we have assigned a parameter (a natural number) to every variable "x""i". Second, we do see a successor-operator at work iterating "y" (i.e. the "y' "). And third, we see that the function μ"y" "y"&lt;"z"χ("y", x) is just producing instances of χ("y",x) i.e. χ(0,x), χ(1,x), ... until an instance yields 0. Fourth, when an instance χ("n", x) yields 0 it causes the middle term of τ, i.e. v = π(x, "y' ") to yield 0. Finally, when the middle term "v" = 0, μ"y""y"&lt;"z"χ("y") executes line (iii) and "exits". Kleene's presentation of equations (ii) and (iii) have been exchanged to make this point that line (iii) represents an "exit"—an exit taken only when the search successfully finds a "y" to satisfy χ("y") and the middle product-term π(x, "y' ") is 0; the operator then terminates its search with τ("z' ", 0, "y") = "y". 
τ(π(x, "y"), π(x, "y' "), "y"), i.e.: * τ(π(x, 0), π(x, 1), 0), * τ(π(x, 1), π(x, 2), 1) * τ(π(x, 2), π(x, 3), 2) * τ(π(x, 3), π(x, 4), 3) * ... until a match occurs at "y"="n" and then: * τ("z' ", 0, "y") = τ("z' ", 0, "n") = "n" and the μ-operator's search is done. For the example Kleene "...consider[s] any fixed values of ("x""i", ..., "x""n") and write[s] simply 'χ("y")' for 'χ("x""i", ..., "x""n"), "y")'": Example 3: Definition of the unbounded μ-operator in terms of an abstract machine. Both Minsky (1967) p. 21 and Boolos-Burgess-Jeffrey (2002) p. 60-61 provide definitions of the μ-operator as an abstract machine; see footnote Alternative definitions of μ. The following demonstration follows Minsky without the "peculiarity" mentioned in the footnote. The demonstration will use a "successor" counter machine model closely related to the Peano Axioms and the primitive recursive functions. The model consists of (i) a finite state machine with a TABLE of instructions and a so-called 'state register' that we will rename "the Instruction Register" (IR), (ii) a few "registers" each of which can contain only a single natural number, and (iii) an instruction set of four "commands" described in the following table: "In the following, the symbolism " [ r ] " means "the contents of", and " →r " indicates an action with respect to register r." The algorithm for the minimization operator μ"y"[φ(x, "y")] will, in essence, create a sequence of instances of the function φ(x, "y") as the value of parameter "y" (a natural number) increases; the process will continue (see Note † below) until a match occurs between the output of function φ(x, "y") and some pre-established number (usually 0). Thus the evaluation of φ(x, "y") requires, at the outset, assignment of a natural number to each of its variables x and an assignment of a "match-number" (usually 0) to a register ""w", and a number (usually 0) to register "y". "Note †: The "unbounded" μ-operator will continue this attempt-to-match process ad infinitum or until a match occurs. Thus the "y"" register must be unbounded -- it must be able to "hold" a number of arbitrary size. Unlike a "real" computer model, abstract machine models allow this. In the case of a "bounded" μ-operator, a lower-bounded μ-operator would start with the contents of "y" set to a number other than zero. An upper-bounded μ-operator would require an additional register "ub" to contain the number that represents the upper bound plus an additional comparison operation; an algorithm could provide for both lower- and upper bounds." In the following we are assuming that the Instruction Register (IR) encounters the μ"y" "routine" at instruction number ""n". Its first action will be to establish a number in a dedicated "w"" register—an "example of" the number that function φ(x, "y") must produce before the algorithm can terminate (classically this is the number zero, but see the footnote about the use of numbers other than zero). The algorithm's next action at instructiton ""n"+1" will be to clear the ""y" register -- "y"" will act as an "up-counter" that starts from 0. Then at instruction ""n"+2" the algorithm evaluates its function φ(x, "y") -- we assume this takes "j" instructions to accomplish—and at the end of its evaluation φ(x, y) deposits its output in register "φ". At the ("n"+"j"+3)rd instruction the algorithm compares the number in the "w" register (e.g. 
0) to the number in the "φ" register—if they are the same the algorithm has succeeded and it escapes through "exit"; otherwise it increments the contents of the "y" register and "loops" back with this new y-value to test function φ(x, "y") again. Footnotes. Total function demonstration. What is "mandatory" if the function is to be a total function is a demonstration "by some other method" (e.g. induction) that for each and every combination of values of its parameters "x""i" some natural number "y" will satisfy the μ-operator so that the algorithm that represents the calculation can terminate: "...we must always hesitate to assume that a system of equations really defines a general-recursive (i.e. total) function. We normally require auxiliary evidence for this, e.g. in the form of an inductive proof that, for each argument value, the computation terminates with a unique value." (Minsky (1967) p.186) "In other words, we should not claim that a function is effectively calculable on the ground that it has been shown to be general (i.e. total) recursive, unless the demonstration that it is general recursive is effective."(Kleene (1952) p.319) For an example of what this means in practice see the examples at mu recursive functions—even the simplest truncated subtraction algorithm ""x" - "y" = "d"" can yield, for the undefined cases when "x" &lt; "y", (1) no termination, (2) no numbers (i.e. something wrong with the format so the yield is not considered a natural number), or (3) deceit: wrong numbers in the correct format. The "proper" subtraction algorithm requires careful attention to all the "cases" ("x", "y") = {(0, 0), ("a", 0), (0, "b"), ("a"≥"b", "b"), ("a"="b", "b"), ("a"&lt;"b", "b")}. But even when the algorithm has been shown to produce the expected output in the instances {(0, 0), (1, 0), (0, 1), (2, 1), (1, 1), (1, 2)}, we are left with an uneasy feeling until we can devise a "convincing demonstration" that the cases ("x", "y") = ("n", "m") "all" yield the expected results. To Kleene's point: is our "demonstration" (i.e. the algorithm that is our demonstration) convincing enough to be considered "effective"? Alternative abstract machine models of the unbounded μ-operator from Minsky (1967) and Boolos-Burgess-Jeffrey (2002). The unbounded μ-operator is defined by Minsky (1967) p. 210 but with a peculiar flaw: the-operator will not yield "t"=0 when its predicate (the IF-THEN-ELSE test) is satisfied; rather it yields "t"=2. In Minsky's version the counter is "t", and the function φ("t", x) deposits its number in register φ. In the usual μ definition register "w" will contain 0, but Minsky observes that it can contain any number "k". Minsky's instruction set is equivalent to the following where "JNE" = Jump to z if Not Equal: { CLR ("r"), INC ("r"), JNE ("r""j", "r""k", "z") } The unbounded μ-operator is also defined by Boolos-Burgess-Jeffrey (2002) p. 60-61 for a counter machine with an instruction set equivalent to the following: { CLR (r), INC (r), DEC (r), JZ (r, z), H } In this version the counter "y" is called "r2", and the function f( x, r2 ) deposits its number in register "r3". Perhaps the reason Boolos-Burgess-Jeffrey clear r3 is to facilitate an unconditional jump to "loop"; this is often done by use of a dedicated register "0" that contains "0": References. &lt;templatestyles src="Reflist/styles.css" /&gt; On pages 210-215 Minsky shows how to create the μ-operator using the register machine model, thus demonstrating its equivalence to the general recursive functions.
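The two forms of the operator translate directly into code. Below is a minimal Python sketch, not taken from the article: the bounded operator returns z itself when no witness below z exists (Kleene's convention described above), and the unbounded operator searches upward and, like the partial function it models, fails to terminate when no witness exists. The example predicates are arbitrary.

```python
def mu_bounded(z, R):
    """Least y < z with R(y); returns z itself if no such y exists (Kleene's convention)."""
    for y in range(z):
        if R(y):
            return y
    return z

def mu_unbounded(R):
    """Least y with R(y); loops forever if no such y exists (a partial function)."""
    y = 0
    while not R(y):
        y += 1
    return y

# Examples: the least y < 10 with y*y > 30, and the least y with 7*y ≡ 1 (mod 5).
print(mu_bounded(10, lambda y: y * y > 30))      # 6
print(mu_unbounded(lambda y: (7 * y) % 5 == 1))  # 3
```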
[ { "math_id": 0, "text": "\\mu y_{y<z} R(y). \\ \\ \\mbox{The least} \\ y<z \\ \\mbox{such that} \\ R(y), \\ \\mbox{if} \\ (\\exists y)_{y<z} R(y); \\ \\mbox{otherwise}, \\ z." }, { "math_id": 1, "text": "(\\exists y) \\mu y R(y) = \\{ \\mbox{the least (natural number)}\\ y \\ \\mbox{such that} \\ R(y)\\}" }, { "math_id": 2, "text": "(\\exists y)" }, { "math_id": 3, "text": " \\varepsilon yR(x,y) = \\begin{cases}\n\\text{the least } y \\text{ such that } R(x,y), &\\text{if } (Ey)R(x,y)\\\\\n0, &\\text{otherwise}.\n\\end{cases}" }, { "math_id": 4, "text": "(\\exists \\mu^2)(\\forall f^1)\\big( (\\exists n^0)(f(n)=0) \\rightarrow f(\\mu(f))=0 \\big)," } ]
https://en.wikipedia.org/wiki?curid=693197
69319810
Indicator function (complex analysis)
Notion from the theory of entire functions In the field of mathematics known as complex analysis, the indicator function of an entire function indicates the rate of growth of the function in different directions. Definition. Let us consider an entire function formula_0. Supposing that its growth order is formula_1, the indicator function of formula_2 is defined to be formula_3 The indicator function can also be defined for functions which are not entire but analytic inside an angle formula_4. Basic properties. By the very definition of the indicator function, we have that the indicator of the product of two functions does not exceed the sum of the indicators: formula_5 Similarly, the indicator of the sum of two functions does not exceed the larger of the two indicators: formula_6 Examples. Elementary calculations show that, if formula_7, then formula_8. Thus, formula_9 In particular, formula_10 Since the complex sine and cosine functions are expressible in terms of the exponential, it follows from the above result that formula_11 Another easily deducible indicator function is that of the reciprocal Gamma function. However, this function is of infinite type (and of order formula_12), therefore one needs to define the indicator function to be formula_13 Stirling's approximation of the Gamma function then yields that formula_14 Another example is that of the Mittag-Leffler function formula_15. This function is of order formula_16, and formula_17 The indicator of the Barnes G-function can be calculated easily from its asymptotic expression (which roughly says that formula_18): formula_19 Further properties of the indicator. Those formula_20 indicator functions which are of the form formula_21 are called formula_1-trigonometrically convex (formula_22 and formula_23 are real constants). If formula_12, we simply say that formula_20 is trigonometrically convex. Such indicators have some special properties. For example, the following statements are all true for an indicator function that is trigonometrically convex at least on an interval formula_24: Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Refbegin/styles.css" />
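The definition can be explored numerically. In the sketch below (illustrative only; the finite radius r = 200 is just a stand-in for the limit superior, and the choice of angles is arbitrary), log|f(re^(iθ))|/r is evaluated for f = exp and compared with the closed-form indicator cos θ derived above.

```python
import cmath
import math

def indicator_estimate(f, theta, r, rho=1.0):
    """Finite-r approximation of h_f(theta) = limsup log|f(r e^{i theta})| / r^rho."""
    z = r * cmath.exp(1j * theta)
    return math.log(abs(f(z))) / r ** rho

for theta in [0.0, math.pi / 3, math.pi / 2, 2 * math.pi / 3, math.pi]:
    approx = indicator_estimate(cmath.exp, theta, r=200.0)
    print(round(theta, 3), round(approx, 6), round(math.cos(theta), 6))
# For f = exp the two columns agree, since log|e^z| = Re(z) = r*cos(theta).
```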
[ { "math_id": 0, "text": "f : \\Complex \\to \\Complex" }, { "math_id": 1, "text": "\\rho" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "h_f(\\theta)=\\limsup_{r\\to\\infty}\\frac{\\log|f(re^{i\\theta})|}{r^\\rho}." }, { "math_id": 4, "text": "D = \\{z=re^{i\\theta}:\\alpha<\\theta<\\beta\\}" }, { "math_id": 5, "text": "h_{fg}(\\theta)\\le h_f(\\theta)+h_g(\\theta)." }, { "math_id": 6, "text": "h_{f+g}(\\theta)\\le \\max\\{h_f(\\theta),h_g(\\theta)\\}." }, { "math_id": 7, "text": "f(z)=e^{(A+iB)z^\\rho}" }, { "math_id": 8, "text": "|f(re^{i\\theta})|=e^{Ar^\\rho\\cos(\\rho\\theta)-Br^\\rho\\sin(\\rho\\theta)}" }, { "math_id": 9, "text": "h_f(\\theta) = A\\cos(\\rho\\theta)-B\\sin(\\rho\\theta)." }, { "math_id": 10, "text": "h_{\\exp}(\\theta) = \\cos(\\theta)." }, { "math_id": 11, "text": "\nh_{\\sin}(\\theta)=h_{\\cos}(\\theta)=\\begin{cases}\n \\sin(\\theta), & \\text{if } 0 \\le\\theta<\\pi \\\\\n -\\sin(\\theta), & \\text{if } \\pi \\le \\theta<2\\pi.\n \\end{cases}\n" }, { "math_id": 12, "text": "\\rho = 1" }, { "math_id": 13, "text": "h_{1/\\Gamma}(\\theta) = \\limsup_{r\\to\\infty}\\frac{\\log|1/\\Gamma(re^{i\\theta})|}{r\\log r}." }, { "math_id": 14, "text": "h_{1/\\Gamma}(\\theta)=-\\cos(\\theta)." }, { "math_id": 15, "text": "E_\\alpha" }, { "math_id": 16, "text": "\\rho = 1/\\alpha" }, { "math_id": 17, "text": "h_{E_\\alpha}(\\theta)=\\begin{cases}\\cos\\left(\\frac{\\theta}{\\alpha}\\right),&\\text{for }|\\theta|\\le\\frac 1 2 \\alpha\\pi;\\\\0,&\\text{otherwise}.\\end{cases}" }, { "math_id": 18, "text": "\n\\log G(z+1)\\sim \\frac{z^2}{2}\\log z" }, { "math_id": 19, "text": "h_G(\\theta)=\\frac{\\log(G(re^{i\\theta}))}{r^2\\log(r)} = \\frac12\\cos(2\\theta)." }, { "math_id": 20, "text": "h" }, { "math_id": 21, "text": "h(\\theta)=A\\cos(\\rho\\theta)+B\\sin(\\rho\\theta)" }, { "math_id": 22, "text": "A" }, { "math_id": 23, "text": "B" }, { "math_id": 24, "text": "(\\alpha,\\beta)" }, { "math_id": 25, "text": "h(\\theta_1)=-\\infty" }, { "math_id": 26, "text": "\\theta_1\\in(\\alpha,\\beta)" }, { "math_id": 27, "text": "h = -\\infty" }, { "math_id": 28, "text": "[\\alpha,\\beta]" }, { "math_id": 29, "text": "h(\\theta)+h(\\theta+\\pi/\\rho) \\ge 0" }, { "math_id": 30, "text": "\\alpha \\le \\theta < \\theta+\\pi/\\rho\\le\\beta" } ]
https://en.wikipedia.org/wiki?curid=69319810
6932317
Conoid
Ruled surface made of lines parallel to a plane and intersecting an axis In geometry a conoid (from Greek κωνος 'cone' and -ειδης 'similar') is a ruled surface whose rulings (lines) fulfill the additional conditions: (1) All rulings are parallel to a plane, the "directrix plane". (2) All rulings intersect a fixed line, the "axis". The conoid is a right conoid if its axis is perpendicular to its directrix plane. Hence all rulings are perpendicular to the axis. Because of (1) any conoid is a Catalan surface and can be represented parametrically by formula_0 Any curve x("u"0,"v") with fixed parameter "u" = "u"0 is a ruling, c("u") describes the "directrix" and the vectors r("u") are all parallel to the directrix plane. The planarity of the vectors r("u") can be represented by formula_1. If the directrix is a circle, the conoid is called a circular conoid. The term "conoid" was already used by Archimedes in his treatise "On Conoids and Spheroids". Examples. Right circular conoid. The parametric representation formula_2 describes a right circular conoid with the unit circle of the x-y-plane as directrix and a directrix plane parallel to the y-z-plane. Its axis is the line formula_3 "Special features": (1) The intersection with a horizontal plane is an ellipse. (2) formula_4 is an implicit representation, hence the right circular conoid is a surface of degree 4. (3) Kepler's rule gives for a right circular conoid with radius formula_5 and height formula_6 the exact volume formula_7. The implicit representation is fulfilled by the points of the line formula_8, too. For these points there exist no tangent planes. Such points are called "singular". Parabolic conoid. The parametric representation formula_9 formula_10 describes a "parabolic conoid" with the equation formula_11. The conoid has a parabola as directrix, the y-axis as axis and a plane parallel to the x-z-plane as directrix plane. It is used by architects as a roof surface (see below). The parabolic conoid has no singular points. Applications. Mathematics. There are many conoids with singular points, which are investigated in algebraic geometry. Architecture. Like other ruled surfaces, conoids are of great interest to architects, because they can be built using beams or bars. Right conoids can be manufactured easily: one threads bars onto an axis such that they can be rotated only around this axis. Afterwards one deflects the bars by a directrix and generates a conoid (see parabolic conoid).
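The parametric representation of the right circular conoid can be checked numerically; the following sketch (illustrative only, not part of the original article, with z0 = 1 as an arbitrary choice) verifies that every ruling is parallel to the directrix plane and meets the axis:

```python
import numpy as np

Z0 = 1.0  # height of the axis above the x-y-plane (illustrative choice)

def conoid_point(u, v):
    """Point on the right circular conoid x(u,v) = (cos u, sin u, 0) + v*(0, -sin u, z0)."""
    return np.array([np.cos(u), np.sin(u), 0.0]) + v * np.array([0.0, -np.sin(u), Z0])

if __name__ == "__main__":
    for u in np.linspace(0.0, 2.0 * np.pi, 5):
        direction = conoid_point(u, 1.0) - conoid_point(u, 0.0)  # ruling direction r(u)
        on_axis = conoid_point(u, 1.0)                           # the ruling at v = 1
        # Each ruling is parallel to the directrix plane (the x-component of r(u) vanishes) ...
        assert abs(direction[0]) < 1e-12
        # ... and meets the axis {(x, 0, z0)} at v = 1.
        assert abs(on_axis[1]) < 1e-12 and abs(on_axis[2] - Z0) < 1e-12
    print("all rulings are parallel to the directrix plane and intersect the axis")
```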
[ { "math_id": 0, "text": "\\mathbf x(u,v)= \\mathbf c(u) + v\\mathbf r(u)\\ " }, { "math_id": 1, "text": "\\det(\\mathbf r,\\mathbf \\dot r,\\mathbf \\ddot r)=0 " }, { "math_id": 2, "text": " \\mathbf x(u,v)=(\\cos u,\\sin u,0) + v (0,-\\sin u,z_0) \\ ,\\ 0\\le u <2\\pi, v\\in \\R" }, { "math_id": 3, "text": "(x,0,z_0) \\ x\\in \\R \\ ." }, { "math_id": 4, "text": "(1-x^2)(z-z_0)^2-y^2z_0^2=0" }, { "math_id": 5, "text": "r" }, { "math_id": 6, "text": "h" }, { "math_id": 7, "text": " V=\\tfrac{\\pi}{2}r^2h" }, { "math_id": 8, "text": "(x,0,z_0)" }, { "math_id": 9, "text": " \\mathbf x(u,v)=\\left(1,u,-u^2\\right)+ v\\left(-1,0,u^2\\right)" }, { "math_id": 10, "text": "=\\left(1-v,u,-(1-v)u^2\\right)\\ , u,v \\in \\R \\ ," }, { "math_id": 11, "text": " z=-xy^2" } ]
https://en.wikipedia.org/wiki?curid=6932317
693233
Affine logic
Affine logic is a substructural logic whose proof theory rejects the structural rule of contraction. It can also be characterized as linear logic with weakening. The name "affine logic" is associated with linear logic, from which it differs by allowing the weakening rule. Jean-Yves Girard introduced the name as part of the geometry of interaction semantics of linear logic, which characterizes linear logic in terms of linear algebra; here he alludes to affine transformations on vector spaces. Affine logic predated linear logic. V. N. Grishin used this logic in 1974, after observing that Russell's paradox cannot be derived in a set theory without contraction, even with an unbounded comprehension axiom. Likewise, the logic formed the basis of a decidable sub-theory of predicate logic, called 'Direct logic' (Ketonen &amp; Wehrauch, 1984; Ketonen &amp; Bellin, 1989). Affine logic can be embedded into linear logic by rewriting the affine arrow formula_0 as the linear arrow formula_1. Whereas full linear logic (i.e. propositional linear logic with multiplicatives, additives, and exponentials) is undecidable, full affine logic is decidable. Affine logic forms the foundation of ludics.
[ { "math_id": 0, "text": "A \\rightarrow B" }, { "math_id": 1, "text": "A \\multimap B \\otimes \\top" } ]
https://en.wikipedia.org/wiki?curid=693233
6933049
General frame
In logic, general frames (or simply frames) are Kripke frames with an additional structure, which are used to model modal and intermediate logics. The general frame semantics combines the main virtues of Kripke semantics and algebraic semantics: it shares the transparent geometrical insight of the former, and robust completeness of the latter. Definition. A modal general frame is a triple formula_0, where formula_1 is a Kripke frame (i.e., formula_2 is a binary relation on the set formula_3), and formula_4 is a set of subsets of formula_3 that is closed under the following: They are thus a special case of fields of sets with additional structure. The purpose of formula_4 is to restrict the allowed valuations in the frame: a model formula_7 based on the Kripke frame formula_1 is admissible in the general frame formula_8, if formula_9 for every propositional variable formula_10. The closure conditions on formula_4 then ensure that formula_11 belongs to formula_4 for "every" formula formula_12 (not only a variable). A formula formula_12 is valid in formula_8, if formula_13 for all admissible valuations formula_14, and all points formula_15. A normal modal logic formula_16 is valid in the frame formula_8, if all axioms (or equivalently, all theorems) of formula_16 are valid in formula_8. In this case we call formula_8 an formula_16-frame. A Kripke frame formula_1 may be identified with a general frame in which all valuations are admissible: i.e., formula_17, where formula_18 denotes the power set of formula_3. Types of frames. In full generality, general frames are hardly more than a fancy name for Kripke "models"; in particular, the correspondence of modal axioms to properties on the accessibility relation is lost. This can be remedied by imposing additional conditions on the set of admissible valuations. A frame formula_0 is called Kripke frames are refined and atomic. However, infinite Kripke frames are never compact. Every finite differentiated or atomic frame is a Kripke frame. Descriptive frames are the most important class of frames because of the duality theory (see below). Refined frames are useful as a common generalization of descriptive and Kripke frames. Operations and morphisms on frames. Every Kripke model formula_23 induces the general frame formula_24, where formula_4 is defined as formula_25 The fundamental truth-preserving operations of generated subframes, p-morphic images, and disjoint unions of Kripke frames have analogues on general frames. A frame formula_26 is a generated subframe of a frame formula_0, if the Kripke frame formula_27 is a generated subframe of the Kripke frame formula_1 (i.e., formula_28 is a subset of formula_3 closed upwards under formula_2, and formula_29), and formula_30 A p-morphism (or bounded morphism) formula_31 is a function from formula_3 to formula_28 that is a p-morphism of the Kripke frames formula_1 and formula_27, and satisfies the additional constraint formula_32 for every formula_33. The disjoint union of an indexed set of frames formula_34, formula_35, is the frame formula_0, where formula_3 is the disjoint union of formula_36, formula_2 is the union of formula_37, and formula_38 The refinement of a frame formula_0 is a refined frame formula_26 defined as follows. We consider the equivalence relation formula_39 and let formula_40 be the set of equivalence classes of formula_41. Then we put formula_42 formula_43 Completeness. Unlike Kripke frames, every normal modal logic formula_16 is complete with respect to a class of general frames. 
This is a consequence of the fact that formula_16 is complete with respect to a class of Kripke models formula_23: as formula_16 is closed under substitution, the general frame induced by formula_23 is an formula_16-frame. Moreover, every logic formula_16 is complete with respect to a single "descriptive" frame. Indeed, formula_16 is complete with respect to its canonical model, and the general frame induced by the canonical model (called the canonical frame of formula_16) is descriptive. Jónsson–Tarski duality. General frames bear close connection to modal algebras. Let formula_0 be a general frame. The set formula_4 is closed under Boolean operations, therefore it is a subalgebra of the power set Boolean algebra formula_44. It also carries an additional unary operation, formula_5. The combined structure formula_45 is a modal algebra, which is called the dual algebra of formula_46, and denoted by formula_47. In the opposite direction, it is possible to construct the dual frame formula_48 to any modal algebra formula_49. The Boolean algebra formula_50 has a Stone space, whose underlying set formula_3 is the set of all ultrafilters of formula_51. The set formula_4 of admissible valuations in formula_52 consists of the clopen subsets of formula_3, and the accessibility relation formula_2 is defined by formula_53 for all ultrafilters formula_54 and formula_55. A frame and its dual validate the same formulas; hence the general frame semantics and algebraic semantics are in a sense equivalent. The double dual formula_56 of any modal algebra is isomorphic to formula_51 itself. This is not true in general for double duals of frames, as the dual of every algebra is descriptive. In fact, a frame formula_46 is descriptive if and only if it is isomorphic to its double dual formula_57. It is also possible to define duals of p-morphisms on one hand, and modal algebra homomorphisms on the other hand. In this way the operators formula_58 and formula_59 become a pair of contravariant functors between the category of general frames, and the category of modal algebras. These functors provide a duality (called Jónsson–Tarski duality after Bjarni Jónsson and Alfred Tarski) between the categories of descriptive frames, and modal algebras. This is a special case of a more general duality between complex algebras and fields of sets on relational structures. Intuitionistic frames. The frame semantics for intuitionistic and intermediate logics can be developed in parallel to the semantics for modal logics. An intuitionistic general frame is a triple formula_60, where formula_61 is a partial order on formula_3, and formula_4 is a set of upper subsets ("cones") of formula_3 that contains the empty set, and is closed under Validity and other concepts are then introduced similarly to modal frames, with a few changes necessary to accommodate for the weaker closure properties of the set of admissible valuations. In particular, an intuitionistic frame formula_63 is called Tight intuitionistic frames are automatically differentiated, hence refined. The dual of an intuitionistic frame formula_63 is the Heyting algebra formula_66. The dual of a Heyting algebra formula_67 is the intuitionistic frame formula_68, where formula_3 is the set of all prime filters of formula_51, the ordering formula_61 is inclusion, and formula_4 consists of all subsets of formula_3 of the form formula_69 where formula_70. 
As in the modal case, formula_58 and formula_59 are a pair of contravariant functors, which make the category of Heyting algebras dually equivalent to the category of descriptive intuitionistic frames. It is possible to construct intuitionistic general frames from transitive reflexive modal frames and vice versa, see modal companion.
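The closure conditions and the modal operator of a general frame can be illustrated with a small finite example (not part of the original article; the frame and all names below are invented for illustration). Here the set of admissible valuations is the full power set, so the general frame is just a Kripke frame, and the operation formula_5 is computed directly from its definition:

```python
from itertools import chain, combinations

# A tiny Kripke frame: worlds F and accessibility relation R (illustrative choice).
F = {0, 1, 2}
R = {(0, 1), (1, 2), (2, 2)}

def box(A):
    """Box A = {x in F : every y with x R y lies in A}."""
    return frozenset(x for x in F if all(y in A for (x2, y) in R if x2 == x))

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))]

if __name__ == "__main__":
    V = set(powerset(F))  # identify the Kripke frame with the general frame <F, R, P(F)>
    # Check that the admissible sets are closed under union, complement and the box operation.
    closed = all((a | b) in V and (frozenset(F) - a) in V and box(a) in V
                 for a in V for b in V)
    print("V is closed under the general-frame operations:", closed)
    print("box({2}) =", set(box(frozenset({2}))))  # worlds all of whose successors lie in {2}
```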
[ { "math_id": 0, "text": "\\mathbf F=\\langle F,R,V\\rangle" }, { "math_id": 1, "text": "\\langle F,R\\rangle" }, { "math_id": 2, "text": "R" }, { "math_id": 3, "text": "F" }, { "math_id": 4, "text": "V" }, { "math_id": 5, "text": "\\Box" }, { "math_id": 6, "text": "\\Box A=\\{x\\in F \\mid \\forall y\\in F\\,(x\\,R\\,y\\to y\\in A)\\}" }, { "math_id": 7, "text": "\\langle F,R,\\Vdash\\rangle" }, { "math_id": 8, "text": "\\mathbf{F}" }, { "math_id": 9, "text": "\\{x\\in F \\mid x\\Vdash p\\}\\in V" }, { "math_id": 10, "text": "p" }, { "math_id": 11, "text": "\\{x\\in F \\mid x\\Vdash A\\}" }, { "math_id": 12, "text": "A" }, { "math_id": 13, "text": "x\\Vdash A" }, { "math_id": 14, "text": "\\Vdash" }, { "math_id": 15, "text": "x\\in F" }, { "math_id": 16, "text": "L" }, { "math_id": 17, "text": "\\langle F,R,\\mathcal{P}(F)\\rangle" }, { "math_id": 18, "text": "\\mathcal P(F)" }, { "math_id": 19, "text": "\\forall A\\in V\\,(x\\in A\\Leftrightarrow y\\in A)" }, { "math_id": 20, "text": "x=y" }, { "math_id": 21, "text": "\\forall A\\in V\\,(x\\in\\Box A\\Rightarrow y\\in A)" }, { "math_id": 22, "text": "x\\,R\\,y" }, { "math_id": 23, "text": "\\langle F,R,{\\Vdash}\\rangle" }, { "math_id": 24, "text": "\\langle F,R,V\\rangle" }, { "math_id": 25, "text": "V=\\big\\{\\{x\\in F \\mid x\\Vdash A\\} \\mid A\\hbox{ is a formula}\\big\\}." }, { "math_id": 26, "text": "\\mathbf G=\\langle G,S,W\\rangle" }, { "math_id": 27, "text": "\\langle G,S\\rangle" }, { "math_id": 28, "text": "G" }, { "math_id": 29, "text": "S=R\\cap G\\times G" }, { "math_id": 30, "text": "W=\\{A\\cap G \\mid A\\in V\\}." }, { "math_id": 31, "text": "f\\colon\\mathbf F\\to\\mathbf G" }, { "math_id": 32, "text": "f^{-1}[A]\\in V" }, { "math_id": 33, "text": "A\\in W" }, { "math_id": 34, "text": "\\mathbf F_i=\\langle F_i,R_i,V_i\\rangle" }, { "math_id": 35, "text": "i\\in I" }, { "math_id": 36, "text": "\\{F_i \\mid i\\in I\\}" }, { "math_id": 37, "text": "\\{R_i \\mid i\\in I\\}" }, { "math_id": 38, "text": "V=\\{A\\subseteq F \\mid \\forall i\\in I\\,(A\\cap F_i\\in V_i)\\}." }, { "math_id": 39, "text": "x\\sim y\\iff\\forall A\\in V\\,(x\\in A\\Leftrightarrow y\\in A)," }, { "math_id": 40, "text": "G=F/{\\sim}" }, { "math_id": 41, "text": "\\sim" }, { "math_id": 42, "text": "\\langle x/{\\sim},y/{\\sim}\\rangle\\in S\\iff\\forall A\\in V\\,(x\\in\\Box A\\Rightarrow y\\in A)," }, { "math_id": 43, "text": "A/{\\sim}\\in W\\iff A\\in V." 
}, { "math_id": 44, "text": "\\langle\\mathcal P(F),\\cap,\\cup,-\\rangle" }, { "math_id": 45, "text": "\\langle V,\\cap,\\cup,-,\\Box\\rangle" }, { "math_id": 46, "text": "\\mathbf F" }, { "math_id": 47, "text": "\\mathbf F^+" }, { "math_id": 48, "text": "\\mathbf A_+=\\langle F,R,V\\rangle" }, { "math_id": 49, "text": "\\mathbf A=\\langle A,\\wedge,\\vee,-,\\Box\\rangle" }, { "math_id": 50, "text": "\\langle A,\\wedge,\\vee,-\\rangle" }, { "math_id": 51, "text": "\\mathbf A" }, { "math_id": 52, "text": "\\mathbf A_+" }, { "math_id": 53, "text": "x\\,R\\,y\\iff\\forall a\\in A\\,(\\Box a\\in x\\Rightarrow a\\in y)" }, { "math_id": 54, "text": "x" }, { "math_id": 55, "text": "y" }, { "math_id": 56, "text": "(\\mathbf A_+)^+" }, { "math_id": 57, "text": "(\\mathbf F^+)_+" }, { "math_id": 58, "text": "(\\cdot)^+" }, { "math_id": 59, "text": "(\\cdot)_+" }, { "math_id": 60, "text": "\\langle F,\\le,V\\rangle" }, { "math_id": 61, "text": "\\le" }, { "math_id": 62, "text": "A\\to B=\\Box(-A\\cup B)" }, { "math_id": 63, "text": "\\mathbf F=\\langle F,\\le,V\\rangle" }, { "math_id": 64, "text": "x\\le y" }, { "math_id": 65, "text": "V\\cup\\{F-A \\mid A\\in V\\}" }, { "math_id": 66, "text": "\\mathbf F^+=\\langle V,\\cap,\\cup,\\to,\\emptyset\\rangle" }, { "math_id": 67, "text": "\\mathbf A=\\langle A,\\wedge,\\vee,\\to,0\\rangle" }, { "math_id": 68, "text": "\\mathbf A_+=\\langle F,\\le,V\\rangle" }, { "math_id": 69, "text": "\\{x\\in F \\mid a\\in x\\}," }, { "math_id": 70, "text": "a\\in A" } ]
https://en.wikipedia.org/wiki?curid=6933049
6933302
Sound from ultrasound
Sound transmission method Sound from ultrasound is the name given here to the generation of audible sound from modulated ultrasound without using an active receiver. This happens when the modulated ultrasound passes through a nonlinear medium which acts, intentionally or unintentionally, as a demodulator. Parametric array. Since the early 1960s, researchers have been experimenting with creating directive low-frequency sound from nonlinear interaction of an aimed beam of ultrasound waves produced by a parametric array using heterodyning. Ultrasound has much shorter wavelengths than audible sound, so that it propagates in a much narrower beam than any normal loudspeaker system using audio frequencies. Most of the work was performed in liquids (for underwater sound use). The first modern device for air acoustic use was created in 1998, and is now known by the trademark name "Audio Spotlight", a term first coined in 1983 by the Japanese researchers who abandoned the technology as infeasible in the mid-1980s. A transducer can be made to project a narrow beam of modulated ultrasound that is powerful enough, at 100 to 110 dBSPL, to substantially change the speed of sound in the air that it passes through. The air within the beam behaves nonlinearly and extracts the modulation signal from the ultrasound, resulting in sound that can be heard only along the path of the beam, or that appears to radiate from any surface that the beam strikes. This technology allows a beam of sound to be projected over a long distance to be heard only in a small well-defined area; for a listener outside the beam the sound pressure decreases substantially. This effect cannot be achieved with conventional loudspeakers, because sound at audible frequencies cannot be focused into such a narrow beam. There are some limitations with this approach. Anything that interrupts the beam will prevent the ultrasound from propagating, like interrupting a spotlight's beam. For this reason, most systems are mounted overhead, like lighting. Applications. Commercial advertising. A sound signal can be aimed so that only a particular passer-by, or somebody very close, can hear it. In commercial applications, it can target sound to a single person without the peripheral sound and related noise of a loudspeaker. Personal audio. It can be used for personal audio, either to have sounds audible to only one person, or that which a group wants to listen to. The navigation instructions for example are only interesting for the driver in a car, not for the passengers. Another possibility are future applications for true stereo sound, where one ear does not hear what the other is hearing. Train signaling device. Directional audio train signaling may be accomplished through the use of an ultrasonic beam which will warn of the approach of a train while avoiding the nuisance of loud train signals on surrounding homes and businesses. History. This technology was originally developed by the US Navy and Soviet Navy for underwater sonar in the mid-1960s, and was briefly investigated by Japanese researchers in the early 1980s, but these efforts were abandoned due to extremely poor sound quality (high distortion) and substantial system cost. These problems went unsolved until a paper published by Dr. F. Joseph Pompei of the Massachusetts Institute of Technology in 1998 fully described a working device that reduced audible distortion essentially to that of a traditional loudspeaker. Products. 
As of 2014, there were known to be five devices which have been marketed that use ultrasound to create an audible beam of sound. Audio Spotlight. F. Joseph Pompei of MIT developed technology he calls the "Audio Spotlight", and made it commercially available in 2000 through his company Holosonics, which, according to its website, has sold "thousands" of its "Audio Spotlight" systems. Disney was among the first major corporations to adopt it for use at the Epcot Center, and many other application examples are shown on the Holosonics website. Audio Spotlight is a narrow beam of sound that can be controlled with similar precision to light from a spotlight. It uses a beam of ultrasound as a "virtual acoustic source", enabling control of sound distribution. The ultrasound has wavelengths only a few millimeters long, much smaller than the source, and therefore naturally travels in an extremely narrow beam. The ultrasound, which contains frequencies far outside the range of human hearing, is completely inaudible. But as the ultrasonic beam travels through the air, the inherent properties of the air cause the ultrasound to change shape in a predictable way. This gives rise to frequency components in the audible band, which can be predicted and controlled. HyperSonic Sound. Elwood "Woody" Norris, founder and Chairman of American Technology Corporation (ATC), announced in 1996 that he had successfully created a device which achieved ultrasound transmission of sound. This device used piezoelectric transducers to send two ultrasonic waves of differing frequencies toward a point, giving the illusion that the audible sound from their interference pattern was originating at that point. ATC named and trademarked their device as "HyperSonic Sound" (HSS). In December 1997, HSS was one of the items in the Best of What's New issue of Popular Science. In December 2002, Popular Science named HyperSonic Sound the best invention of 2002. Norris received the 2005 Lemelson–MIT Prize for his invention of "hypersonic sound". ATC (now named LRAD Corporation) spun off the technology to Parametric Sound Corporation in September 2010 to focus on their long-range acoustic device (LRAD) products, according to their quarterly reports, press releases, and executive statements. Mitsubishi Electric Engineering Corporation. Mitsubishi apparently offers a sound-from-ultrasound product named the "MSP-50E", commercially available from Mitsubishi Electric Engineering Corporation. AudioBeam. German audio company Sennheiser Electronic once listed their "AudioBeam" product for about $4,500. There is no indication that the product has been used in any public applications. The product has since been discontinued. Literature survey. The first experimental systems were built over 30 years ago, although these first versions only played simple tones. It was not until much later (see above) that the systems were built for practical listening use. Experimental ultrasonic nonlinear acoustics. A chronological summary of the experimental approaches taken to examine Audio Spotlight systems in the past will be presented here. At the turn of the millennium working versions of an Audio Spotlight capable of reproducing speech and music could be bought from Holosonics, a company founded on Dr. Pompei's work in the MIT Media Lab. Related topics were researched almost 40 years earlier in the context of underwater acoustics. Both articles were supported by the U.S.
Office of Naval Research, specifically for the use of the phenomenon for underwater sonar pulses. The goal of these systems was not high directivity "per se", but rather higher usable bandwidth of a typically band-limited transducer. The 1970s saw some activity in experimental airborne systems, both in air and underwater. Again supported by the U.S. Office of Naval Research, the primary aim of the underwater experiments was to determine the range limitations of sonar pulse propagation due to nonlinear distortion. The airborne experiments were aimed at recording quantitative data about the directivity and propagation loss of both the ultrasonic carrier and demodulated waves, rather than developing the capability to reproduce an audio signal. In 1983 the idea was revisited experimentally, but this time with the firm intent to analyze the use of the system in air to form a more complex baseband signal in a highly directional manner. The signal processing used to achieve this was simple DSB-AM with no precompensation, and because of the lack of precompensation applied to the input signal, the THD (total harmonic distortion) levels of this system would have probably been satisfactory for speech reproduction, but prohibitive for the reproduction of music. An interesting feature of the experimental setup was the use of 547 ultrasonic transducers to produce a 40 kHz ultrasonic sound source of over 130 dB at 4 m, which would demand significant safety considerations. Even though this experiment clearly demonstrated the potential to reproduce audio signals using an ultrasonic system, it also showed that the system suffered from heavy distortion, especially when no precompensation was used. Theoretical ultrasonic nonlinear acoustics. The equations that govern nonlinear acoustics are quite complex and unfortunately they do not have general analytical solutions. They usually require the use of a computer simulation. However, as early as 1965, Berktay performed an analysis under some simplifying assumptions that allowed the demodulated SPL to be written in terms of the amplitude-modulated ultrasonic carrier wave pressure Pc and various physical parameters. Note that the demodulation process is extremely lossy, with a minimum loss in the order of 60 dB from the ultrasonic SPL to the audible wave SPL. A precompensation scheme can be based on Berktay's expression, shown in Equation 1, by taking the square root of the baseband signal envelope E and then integrating twice to invert the effect of the double partial-time derivative. The analogue electronic circuit equivalent of a square-root function is simply an op-amp with feedback, and an equalizer is analogous to an integration function. However, these topic areas lie outside the scope of this article. formula_0 where formula_1 is the demodulated audible pressure wave, formula_2 is a constant combining the physical parameters of the medium, formula_3 is the pressure amplitude of the ultrasonic carrier, and formula_4 is the envelope function of the modulated carrier. This equation says that the audible demodulated ultrasonic pressure wave (output signal) is proportional to the twice differentiated, squared version of the envelope function (input signal). Precompensation refers to the trick of anticipating these transforms and applying the inverse transforms on the input, hoping that the output is then closer to the untransformed input. By the 1990s, it was well known that the Audio Spotlight could work but suffered from heavy distortion. It was also known that the precompensation schemes placed an added demand on the frequency response of the ultrasonic transducers.
In effect the transducers needed to keep up with what the digital precompensation demanded of them, namely a broader frequency response. In 1998 the negative effects on THD of an insufficiently broad frequency response of the ultrasonic transducers was quantified with computer simulations by using a precompensation scheme based on Berktay's expression. In 1999 Pompei's article discussed how a new prototype transducer met the increased frequency response demands placed on the ultrasonic transducers by the precompensation scheme, which was once again based on Berktay's expression. In addition impressive reductions in the THD of the output when the precompensation scheme was employed were graphed against the case of using no precompensation. In summary, the technology that originated with underwater sonar 40 years ago has been made practical for reproduction of audible sound in air by Pompei's paper and device, which, according to his AES paper (1998), demonstrated that distortion had been reduced to levels comparable to traditional loudspeaker systems. Modulation scheme. The nonlinear interaction mixes ultrasonic tones in air to produce sum and difference frequencies. A DSB (double-sideband) amplitude-modulation scheme with an appropriately large baseband DC offset, to produce the demodulating tone superimposed on the modulated audio spectrum, is one way to generate the signal that encodes the desired baseband audio spectrum. This technique suffers from extremely heavy distortion as not only the demodulating tone interferes, but also all other frequencies present interfere with one another. The modulated spectrum is convolved with itself, doubling its bandwidth by the length property of the convolution. The baseband distortion in the bandwidth of the original audio spectrum is inversely proportional to the magnitude of the DC offset (demodulation tone) superimposed on the signal. A larger tone results in less distortion. Further distortion is introduced by the second order differentiation property of the demodulation process. The result is a multiplication of the desired signal by the function -ω² in frequency. This distortion may be equalized out with the use of preemphasis filtering (increase amplitude of high frequency signal). By the time-convolution property of the Fourier transform, multiplication in the time domain is a convolution in the frequency domain. Convolution between a baseband signal and a unity gain pure carrier frequency shifts the baseband spectrum in frequency and halves its magnitude, though no energy is lost. One half-scale copy of the replica resides on each half of the frequency axis. This is consistent with Parseval's theorem. The modulation depth "m" is a convenient experimental parameter when assessing the total harmonic distortion in the demodulated signal. It is inversely proportional to the magnitude of the DC offset. THD increases proportionally with "m"1². These distorting effects may be better mitigated by using another modulation scheme that takes advantage of the differential squaring device nature of the nonlinear acoustic effect. Modulation of the second integral of the square root of the desired baseband audio signal, without adding a DC offset, results in convolution in frequency of the modulated square-root spectrum, half the bandwidth of the original signal, with itself due to the nonlinear channel effects. This convolution in frequency is a multiplication in time of the signal by itself, or a squaring. 
This again doubles the bandwidth of the spectrum, reproducing the second time integral of the input audio spectrum. The double integration corrects for the -ω² filtering characteristic associated with the nonlinear acoustic effect. This recovers the scaled original spectrum at baseband. The harmonic distortion process has to do with the high frequency replicas associated with each squaring demodulation, for either modulation scheme. These iteratively demodulate and self-modulate, adding a spectrally smeared-out and time-exponentiated copy of the original signal to baseband and twice the original center frequency each time, with one iteration corresponding to one traversal of the space between the emitter and target. Only sound with parallel collinear phase velocity vectors interfere to produce this nonlinear effect. Even-numbered iterations will produce their modulation products, baseband and high frequency, as reflected emissions from the target. Odd-numbered iterations will produce their modulation products as reflected emissions off the emitter. This effect still holds when the emitter and the reflector are not parallel, though due to diffraction effects the baseband products of each iteration will originate from a different location each time, with the originating location corresponding to the path of the reflected high frequency self-modulation products. These harmonic copies are largely attenuated by the natural losses at those higher frequencies when propagating through air. Attenuation of ultrasound in air. The figure provided in provides an estimation of the attenuation that the ultrasound would suffer as it propagated through air. The figures from this graph correspond to completely linear propagation, and the exact effect of the nonlinear demodulation phenomena on the attenuation of the ultrasonic carrier waves in air was not considered. There is an interesting dependence on humidity. Nevertheless, a 50 kHz wave suffers an attenuation level in the order of 1 dB per meter at one atmosphere of pressure. Safe use of high-intensity ultrasound. For the nonlinear effect to occur, relatively high-intensity ultrasonics are required. The SPL involved was typically greater than 100 dB of ultrasound at a nominal distance of 1 m from the face of the ultrasonic transducer. Exposure to more intense ultrasound over 140 dB near the audible range (20–40 kHz) can lead to a syndrome involving manifestations of nausea, headache, tinnitus, pain, dizziness, and fatigue, but this is around 100 times the 100 dB level cited above, and is generally not a concern. Dr Joseph Pompei of Audio Spotlight has published data showing that their product generates ultrasonic sound pressure levels around 130 dB (at 60 kHz) measured at 3 meters. The UK's independent Advisory Group on Non-ionising Radiation (AGNIR) produced a 180-page report on the health effects of human exposure to ultrasound and infrasound in 2010. The UK Health Protection Agency (HPA) published their report, which recommended an exposure limit for the general public to airborne ultrasound sound pressure levels (SPL) of 100 dB (at 25 kHz and above). OSHA specifies a safe ceiling value of ultrasound as 145 dB SPL exposure at the frequency range used by commercial systems in air, as long as there is no possibility of contact with the transducer surface or coupling medium (i.e. submerged). This is several times the highest levels used by commercial Audio Spotlight systems, so there is a significant margin for safety. 
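Returning to the modulation schemes described above, the following rough sketch (not from the source; it grossly simplifies the acoustics and all numerical choices are arbitrary) models the self-demodulated output as the second time derivative of the squared envelope, in the spirit of Berktay's expression, and compares plain DSB-AM with a square-root precompensation in which the squared envelope is an offset double integral of the audio:

```python
import numpy as np

fs = 200_000                                  # sample rate in Hz (arbitrary choice)
t = np.arange(0, 0.02, 1.0 / fs)
audio = np.sin(2 * np.pi * 500 * t)           # desired baseband signal: a 500 Hz tone

def demodulated(envelope):
    """Berktay-style self-demodulation model: output ~ d^2/dt^2 of the squared envelope."""
    return np.gradient(np.gradient(envelope ** 2, t), t)

def distortion_proxy(output, reference):
    """Crude distortion measure: relative residual after removing the best-fit copy of the reference."""
    output = output - output.mean()
    gain = np.dot(output, reference) / np.dot(reference, reference)
    residual = output - gain * reference
    return np.sqrt(np.sum(residual ** 2) / np.sum((gain * reference) ** 2))

# Scheme 1: DSB-AM envelope with a DC offset; the squared modulation term causes distortion.
m = 0.8
env_dsb = 1.0 + m * audio

# Scheme 2: square-root precompensation -- the squared envelope is an offset double
# integral of the audio, so the double differentiation returns a copy of the audio.
double_integral = np.cumsum(np.cumsum(audio)) / fs ** 2
env_sqrt = np.sqrt(double_integral - double_integral.min() + 1e-12)

print("DSB-AM distortion proxy        :", distortion_proxy(demodulated(env_dsb), audio))
print("sqrt-precompensation distortion:", distortion_proxy(demodulated(env_sqrt), audio))
```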
In a review of international acceptable exposure limits Howard et al. (2005) noted the general agreement among standards organizations, but expressed concern with the decision by United States of America's Occupational Safety and Health Administration (OSHA) to increase the exposure limit by an additional 30 dB under some conditions (equivalent to a factor of 1000 in intensity). For frequencies of ultrasound from 25 to 50 kHz, a guideline of 110 dB had been recommended by Canada, Japan, the USSR, and the International Radiation Protection Agency, and 115 dB by Sweden in the late 1970s to early 1980s, but these were primarily based on subjective effects. The more recent OSHA guidelines above are based on ACGIH (American Conference of Governmental Industrial Hygienists) research from 1987. Lawton(2001) reviewed international guidelines for airborne ultrasound in a report published by the United Kingdom's Health and Safety Executive, this included a discussion of the guidelines issued by the American Conference of Governmental Industrial Hygienists (ACGIH), 1988. Lawton states "This reviewer believes that the ACGIH has pushed its acceptable exposure limits to the very edge of potentially injurious exposure". The ACGIH document also mentioned the possible need for hearing protection. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "p_2(x,t)=K \\cdot P_c^2 \\cdot \\frac{\\partial^2}{\\partial t^2} E^2(x,t)" }, { "math_id": 1, "text": "p_2(x,t)=\\," }, { "math_id": 2, "text": "K=\\," }, { "math_id": 3, "text": "P_c=\\," }, { "math_id": 4, "text": "E(x,t)=\\," } ]
https://en.wikipedia.org/wiki?curid=6933302
6934
Condition number
Function's sensitivity to argument change In numerical analysis, the condition number of a function measures how much the output value of the function can change for a small change in the input argument. This is used to measure how sensitive a function is to changes or errors in the input, and how much error in the output results from an error in the input. Very frequently, one is solving the inverse problem: given formula_0 one is solving for "x," and thus the condition number of the (local) inverse must be used. The condition number is derived from the theory of propagation of uncertainty, and is formally defined as the value of the asymptotic worst-case relative change in output for a relative change in input. The "function" is the solution of a problem and the "arguments" are the data in the problem. The condition number is frequently applied to questions in linear algebra, in which case the derivative is straightforward but the error could be in many different directions, and is thus computed from the geometry of the matrix. More generally, condition numbers can be defined for non-linear functions in several variables. A problem with a low condition number is said to be well-conditioned, while a problem with a high condition number is said to be ill-conditioned. In non-mathematical terms, an ill-conditioned problem is one where, for a small change in the inputs (the independent variables) there is a large change in the answer or dependent variable. This means that the correct solution/answer to the equation becomes hard to find. The condition number is a property of the problem. Paired with the problem are any number of algorithms that can be used to solve the problem, that is, to calculate the solution. Some algorithms have a property called "backward stability"; in general, a backward stable algorithm can be expected to accurately solve well-conditioned problems. Numerical analysis textbooks give formulas for the condition numbers of problems and identify known backward stable algorithms. As a rule of thumb, if the condition number formula_1, then you may lose up to formula_2 digits of accuracy on top of what would be lost to the numerical method due to loss of precision from arithmetic methods. However, the condition number does not give the exact value of the maximum inaccuracy that may occur in the algorithm. It generally just bounds it with an estimate (whose computed value depends on the choice of the norm to measure the inaccuracy). General definition in the context of error analysis. Given a problem formula_3 and an algorithm formula_4 with an input formula_5 and output formula_6 the "error" is formula_7 the "absolute" error is formula_8 and the "relative" error is formula_9 In this context, the "absolute" condition number of a problem formula_3 is formula_10 and the "relative" condition number is formula_11 Matrices. For example, the condition number associated with the linear equation "Ax" = "b" gives a bound on how inaccurate the solution "x" will be after approximation. Note that this is before the effects of round-off error are taken into account; conditioning is a property of the matrix, not the algorithm or floating-point accuracy of the computer used to solve the corresponding system. In particular, one should think of the condition number as being (very roughly) the rate at which the solution "x" will change with respect to a change in "b". Thus, if the condition number is large, even a small error in "b" may cause a large error in "x". 
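As a brief numerical illustration (not part of the article), the 2-norm condition number can be computed with numpy.linalg.cond, and it bounds the error amplification observed when the right-hand side of a linear system is perturbed; the notoriously ill-conditioned Hilbert matrix makes the effect visible:

```python
import numpy as np

def hilbert(n):
    """Hilbert matrix H[i, j] = 1 / (i + j + 1), a classic ill-conditioned example."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

rng = np.random.default_rng(0)
A = hilbert(8)
x_true = rng.standard_normal(8)
b = A @ x_true

# Perturb the right-hand side slightly and compare the observed error amplification
# with the condition number kappa_2(A) = sigma_max / sigma_min.
e = 1e-10 * rng.standard_normal(8)
x_pert = np.linalg.solve(A, b + e)

rel_err_x = np.linalg.norm(x_pert - x_true) / np.linalg.norm(x_true)
rel_err_b = np.linalg.norm(e) / np.linalg.norm(b)

print("kappa_2(A)            :", np.linalg.cond(A))
print("relative error in b   :", rel_err_b)
print("relative error in x   :", rel_err_x)
# In exact arithmetic this ratio is bounded by kappa_2(A).
print("amplification observed:", rel_err_x / rel_err_b)
```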
On the other hand, if the condition number is small, then the error in "x" will not be much bigger than the error in "b". The condition number is defined more precisely to be the maximum ratio of the relative error in "x" to the relative error in "b". Let "e" be the error in "b". Assuming that "A" is a nonsingular matrix, the error in the solution "A"−1"b" is "A"−1"e". The ratio of the relative error in the solution to the relative error in "b" is formula_12 The maximum value (for nonzero "b" and "e") is then seen to be the product of the two operator norms as follows: formula_13 The same definition is used for any consistent norm, i.e. one that satisfies formula_14 When the condition number is exactly one (which can only happen if "A" is a scalar multiple of a linear isometry), then a solution algorithm can find (in principle, meaning if the algorithm introduces no errors of its own) an approximation of the solution whose precision is no worse than that of the data. However, it does not mean that the algorithm will converge rapidly to this solution, just that it will not diverge arbitrarily because of inaccuracy on the source data (backward error), provided that the forward error introduced by the algorithm does not diverge as well because of accumulating intermediate rounding errors. The condition number may also be infinite, but this implies that the problem is ill-posed (does not possess a unique, well-defined solution for each choice of data; that is, the matrix is not invertible), and no algorithm can be expected to reliably find a solution. The definition of the condition number depends on the choice of norm, as can be illustrated by two examples. If formula_15 is the matrix norm induced by the (vector) Euclidean norm (sometimes known as the "L"2 norm and typically denoted as formula_16), then formula_17 where formula_18 and formula_19 are maximal and minimal singular values of formula_20 respectively. Hence: The condition number with respect to "L"2 arises so often in numerical linear algebra that it is given a name, the condition number of a matrix. If formula_15 is the matrix norm induced by the formula_25 (vector) norm and formula_20 is lower triangular non-singular (i.e. formula_26 for all formula_27), then formula_28 recalling that the eigenvalues of any triangular matrix are simply the diagonal entries. The condition number computed with this norm is generally larger than the condition number computed relative to the Euclidean norm, but it can be evaluated more easily (and this is often the only practicably computable condition number, when the problem to solve involves a "non-linear algebra", for example when approximating irrational and transcendental functions or numbers with numerical methods). If the condition number is not significantly larger than one, the matrix is well-conditioned, which means that its inverse can be computed with good accuracy. If the condition number is very large, then the matrix is said to be ill-conditioned. Practically, such a matrix is almost singular, and the computation of its inverse, or solution of a linear system of equations is prone to large numerical errors. A matrix that is not invertible is often said to have a condition number equal to infinity. Alternatively, it can be defined as formula_29, where formula_30 is the Moore-Penrose pseudoinverse. 
For square matrices, this unfortunately makes the condition number discontinuous, but it is a useful definition for rectangular matrices, which are never invertible but are still used to define systems of equations. Nonlinear. Condition numbers can also be defined for nonlinear functions, and can be computed using calculus. The condition number varies with the point; in some cases one can use the maximum (or supremum) condition number over the domain of the function or domain of the question as an overall condition number, while in other cases the condition number at a particular point is of more interest. One variable. The condition number of a differentiable function formula_3 in one variable as a function is formula_31. Evaluated at a point formula_5, this is formula_32 Note that this is the absolute value of the elasticity of a function in economics. Most elegantly, this can be understood as (the absolute value of) the ratio of the logarithmic derivative of formula_3, which is formula_33, and the logarithmic derivative of formula_5, which is formula_34, yielding a ratio of formula_35. This is because the logarithmic derivative is the infinitesimal rate of relative change in a function: it is the derivative formula_36 scaled by the value of formula_3. Note that if a function has a zero at a point, its condition number at the point is infinite, as infinitesimal changes in the input can change the output from zero to positive or negative, yielding a ratio with zero in the denominator, hence infinite relative change. More directly, given a small change formula_37 in formula_5, the relative change in formula_5 is formula_38, while the relative change in formula_39 is formula_40. Taking the ratio yields formula_41 The last term is the difference quotient (the slope of the secant line), and taking the limit yields the derivative. Condition numbers of common elementary functions are particularly important in computing significant figures and can be computed immediately from the derivative. A few important ones are given below: Several variables. Condition numbers can be defined for any function formula_3 mapping its data from some domain (e.g. an formula_42-tuple of real numbers formula_5) into some codomain (e.g. an formula_43-tuple of real numbers formula_39), where both the domain and codomain are Banach spaces. They express how sensitive that function is to small changes (or small errors) in its arguments. This is crucial in assessing the sensitivity and potential accuracy difficulties of numerous computational problems, for example, polynomial root finding or computing eigenvalues. The condition number of formula_3 at a point formula_5 (specifically, its relative condition number) is then defined to be the maximum ratio of the fractional change in formula_39 to any fractional change in formula_5, in the limit where the change formula_44 in formula_5 becomes infinitesimally small: formula_45 where formula_15 is a norm on the domain/codomain of formula_3. If formula_3 is differentiable, this is equivalent to: formula_46 where &amp;NoBreak;&amp;NoBreak; denotes the Jacobian matrix of partial derivatives of formula_3 at formula_5, and formula_47 is the induced norm on the matrix. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f(x) = y," }, { "math_id": 1, "text": "\\kappa(A) = 10^k" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "\\tilde{f}" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "\\tilde{f}(x)," }, { "math_id": 7, "text": "\\delta f(x) := f(x) - \\tilde{f}(x)," }, { "math_id": 8, "text": "\\|\\delta f(x)\\| = \\left\\|f(x) - \\tilde{f}(x)\\right\\|" }, { "math_id": 9, "text": "\\|\\delta f(x)\\| / \\|f(x)\\| = \\left\\|f(x) - \\tilde{f}(x)\\right\\| / \\|f(x)\\|." }, { "math_id": 10, "text": "\\lim_{\\varepsilon \\rightarrow 0^+}\\, \\sup_{\\|\\delta x\\| \\,\\leq\\, \\varepsilon} \\frac{\\|\\delta f(x)\\|}{\\|\\delta x\\|}" }, { "math_id": 11, "text": "\\lim_{\\varepsilon \\rightarrow 0^+}\\, \\sup_{\\|\\delta x \\| \\,\\leq\\, \\varepsilon} \\frac{\\|\\delta f(x)\\| / \\|f(x)\\|}{\\|\\delta x\\| / \\|x\\|}." }, { "math_id": 12, "text": "{\\frac{\\left\\|A^{-1} e\\right\\|}{\\left\\|A^{-1} b\\right\\|}}/{\\frac{\\|e\\|}{\\|b\\|}} = \\frac{\\left\\|A^{-1} e\\right\\|}{\\|e\\|} \\frac{\\|b\\|}{\\left\\|A^{-1} b\\right\\|}." }, { "math_id": 13, "text": "\\begin{align}\n \\max_{e,b \\neq 0} \\left\\{ \\frac{\\left\\| A^{-1}e \\right\\|}{\\| e \\|} \\frac{\\| b \\|}{\\left\\| A^{-1}b \\right\\|} \\right\\}\n &= \\max_{e \\neq 0} \\left\\{\\frac{\\left\\| A^{-1}e\\right\\| }{\\| e\\|} \\right\\} \\, \\max_{b \\neq 0} \\left\\{ \\frac {\\| b \\|}{\\left\\| A^{-1}b \\right\\|} \\right\\} \\\\\n &= \\max_{e \\neq 0} \\left\\{\\frac{\\left\\| A^{-1}e\\right\\|}{\\| e \\|}\\right\\} \\, \\max_{x \\neq 0} \\left \\{\\frac {\\| Ax \\| }{\\| x \\|} \\right\\} \\\\\n &= \\left\\| A^{-1} \\right \\| \\, \\|A\\|.\n\\end{align}" }, { "math_id": 14, "text": "\\kappa(A) = \\left\\| A^{-1} \\right\\| \\, \\left\\| A \\right\\| \\ge \\left\\| A^{-1} A \\right\\| = 1." }, { "math_id": 15, "text": "\\|\\cdot\\|" }, { "math_id": 16, "text": "\\|\\cdot\\|_2" }, { "math_id": 17, "text": "\\kappa(A) = \\frac{\\sigma_\\text{max}(A)}{\\sigma_\\text{min}(A)}," }, { "math_id": 18, "text": "\\sigma_\\text{max}(A)" }, { "math_id": 19, "text": "\\sigma_\\text{min}(A)" }, { "math_id": 20, "text": "A" }, { "math_id": 21, "text": "\\kappa(A) = \\frac{\\left|\\lambda_\\text{max}(A)\\right|}{\\left|\\lambda_\\text{min}(A)\\right|}," }, { "math_id": 22, "text": "\\lambda_\\text{max}(A)" }, { "math_id": 23, "text": "\\lambda_\\text{min}(A) " }, { "math_id": 24, "text": "\\kappa(A) = 1." }, { "math_id": 25, "text": "L^\\infty" }, { "math_id": 26, "text": "a_{ii} \\ne 0" }, { "math_id": 27, "text": "i" }, { "math_id": 28, "text": "\\kappa(A) \\geq \\frac{\\max_i\\big(|a_{ii}|\\big)}{\\min_i\\big(|a_{ii}|\\big)}" }, { "math_id": 29, "text": "\\kappa(A) = \\|A\\| \\|A^\\dagger\\|" }, { "math_id": 30, "text": "A^\\dagger" }, { "math_id": 31, "text": "\\left|xf'/f\\right|" }, { "math_id": 32, "text": "\\left|\\frac{xf'(x)}{f(x)}\\right|=\\left|\\frac{(\\log f)'}{(\\log x)'}\\right|." 
}, { "math_id": 33, "text": "(\\log f)' = f'/f" }, { "math_id": 34, "text": "(\\log x)' = x'/x = 1/x" }, { "math_id": 35, "text": "xf'/f" }, { "math_id": 36, "text": "f'" }, { "math_id": 37, "text": "\\Delta x" }, { "math_id": 38, "text": "[(x + \\Delta x) - x] / x = (\\Delta x) / x" }, { "math_id": 39, "text": "f(x)" }, { "math_id": 40, "text": "[f(x + \\Delta x) - f(x)] / f(x)" }, { "math_id": 41, "text": "\\frac{[f(x + \\Delta x) - f(x)] / f(x)}{(\\Delta x) / x} = \\frac{x}{f(x)} \\frac{f(x + \\Delta x) - f(x)}{(x + \\Delta x) - x} = \\frac{x}{f(x)} \\frac{f(x + \\Delta x) - f(x)}{\\Delta x}." }, { "math_id": 42, "text": "m" }, { "math_id": 43, "text": "n" }, { "math_id": 44, "text": "\\delta x" }, { "math_id": 45, "text": "\\lim_{\\varepsilon \\to 0^+} \\sup_{\\|\\delta x\\| \\leq \\varepsilon} \\left[ \\left. \\frac{\\left\\|f(x + \\delta x) - f(x)\\right\\|}{\\|f(x)\\|} \\right/ \\frac{\\|\\delta x\\|}{\\|x\\|} \\right]," }, { "math_id": 46, "text": "\\frac{\\|J(x)\\|}{\\|f(x) \\| / \\|x\\|}," }, { "math_id": 47, "text": "\\|J(x)\\|" } ]
https://en.wikipedia.org/wiki?curid=6934
69343459
Innovation Method
Statistical estimation method In statistics, the Innovation Method provides an estimator for the parameters of stochastic differential equations given a time series of (potentially noisy) observations of the state variables. In the framework of continuous-discrete state space models, the innovation estimator is obtained by maximizing the log-likelihood of the corresponding discrete-time innovation process with respect to the parameters. The innovation estimator can be classified as an M-estimator, a quasi-maximum likelihood estimator or a prediction error estimator depending on which inferential considerations are emphasized. The innovation method is a system identification technique for developing mathematical models of dynamical systems from measured data and for the optimal design of experiments. Background. Stochastic differential equations (SDEs) have become an important mathematical tool for describing the time evolution of random phenomena in natural, social and applied sciences. Statistical inference for SDEs is thus of great importance in applications for model building, model selection, model identification and forecasting. To carry out statistical inference for SDEs, measurements of the state variables of these random phenomena are indispensable. Usually, in practice, only a few state variables are measured by physical devices that introduce random measurement errors (observational errors). Mathematical model for inference. The innovation estimator for SDEs is defined in the framework of continuous-discrete state space models. These models arise as a natural mathematical representation of the temporal evolution of continuous random phenomena and their measurements in a succession of time instants. In the simplest formulation, these continuous-discrete models are expressed in terms of an SDE of the form formula_0 describing the time evolution of formula_1 state variables formula_2 of the phenomenon for all time instants formula_3, and an observation equation formula_4 describing the time series of measurements formula_5 of at least one of the variables formula_6 of the random phenomenon on formula_7 time instants formula_8. In the model (1)-(2), formula_9 and formula_10 are differentiable functions, formula_11 is an formula_12-dimensional standard Wiener process, formula_13 is a vector of formula_14 parameters, formula_15 is a sequence of formula_16-dimensional i.i.d. Gaussian random vectors independent of formula_17, formula_18 is an formula_19 positive definite matrix, and formula_20 is an formula_21 matrix. Statistical problem to solve. Once the dynamics of a phenomenon are described by a state equation such as (1) and the measurement of the state variables is specified by an observation equation such as (2), the inference problem to solve is the following: given formula_7 partial and noisy observations formula_5 of the stochastic process formula_6 on the observation times formula_22, estimate the unobserved state variables of formula_6 and the unknown parameters formula_23 in (1) that best fit the given observations. Discrete-time innovation process. Let formula_24 be the sequence of formula_7 observation times formula_25 of the states of (1), and formula_26 the time series of partial and noisy measurements of formula_6 described by the observation equation (2). Further, let formula_27 and formula_28 be the conditional mean and variance of formula_6 with formula_29, where formula_30 denotes the expected value of random vectors.
The random sequence formula_31 with formula_32 defines the discrete-time innovation process, where formula_33 is proved to be an independent normally distributed random vector with zero mean and variance formula_34 for small enough formula_35, with formula_36. In practice, this distribution for the discrete-time innovation is valid when, with a suitable selection of both, the number formula_7 of observations and the time distance formula_37 between consecutive observations, the time series of observations formula_5 of the SDE contains the main information about the continuous-time process formula_6. That is, when the sampling of the continuous-time process formula_6 has low distortion (aliasing) and when there is a suitable signal-noise ratio. Innovation estimator. The innovation estimator for the parameters of the SDE (1) is the one that maximizes the likelihood function of the discrete-time innovation process formula_38 with respect to the parameters. More precisely, given formula_7 measurements formula_39of the state space model (1)-(2) with formula_40 on formula_41 the innovation estimator for the parameters formula_42 of (1) is defined by formula_43 where formula_44 being formula_45the discrete-time innovation (3) and formula_46the innovation variance (4) of the model (1)-(2) at formula_47, for all formula_48 In the above expression for formula_49 the conditional mean formula_50 and variance formula_51 are computed by the continuous-discrete filtering algorithm for the evolution of the moments (Section 6.4 in), for all formula_48 Differences with the maximum likelihood estimator. The maximum likelihood estimator of the parameters formula_52 in the model (1)-(2) involves the evaluation of the - usually unknown - transition density function formula_53 between the states formula_54 and formula_55 of the diffusion process formula_6 for all the observation times formula_56 and formula_57. Instead of this, the innovation estimator (5) is obtained by maximizing the likelihood of the discrete-time innovation process formula_58 taking into account that formula_59 are Gaussian and independent random vectors. Remarkably, whereas the transition density function formula_60 changes when the SDE for formula_6 does, the transition density function formula_61 for the innovation process remains Gaussian independently of the SDEs for formula_6. Only in the case that the diffusion formula_6 is described by a linear SDE with additive noise, the density function formula_62 is Gaussian and equal to formula_63 and so the maximum likelihood and the innovation estimator coincide. Otherwise, the innovation estimator is an approximation to the maximum likelihood estimator and, in this sense, the innovation estimator is a Quasi-Maximum Likelihood estimator. In addition, the innovation method is a particular instance of the Prediction Error method according to the definition given in. Therefore, the asymptotic results obtained in for that general class of estimators are valid for the innovation estimators. Intuitively, by following the typical control engineering viewpoint, it is expected that the innovation process - viewed as a measure of the prediction errors of the fitted model - be approximately a white noise process when the models fit the data, which can be used as a practical tool for designing of models and for optimal experimental design. Properties. 
Properties. The innovation estimator (5) has a number of important attributes. In particular, the formula_65 confidence limits of the estimator formula_67 are given by formula_66 with formula_68 where formula_69 is the Student's t distribution with formula_65 significance level and formula_70 degrees of freedom. Here, formula_71 denotes the variance of the innovation estimator formula_67, where formula_72 is the Fisher information matrix of the innovation estimator formula_73 of formula_74 and formula_75 is the entry formula_76 of the matrix formula_77 with formula_78 and formula_79, for formula_80. Moreover, an observation equation defined through a nonlinear function formula_82 of the states, formula_83 can be transformed to the simpler one (2), and the innovation estimator (5) can be applied. Approximate Innovation estimators. In practice, closed-form expressions for computing formula_84 and formula_85 in (5) are only available for a few models (1)-(2). Therefore, approximate filtering algorithms such as the following are used in applications. Given formula_7 measurements formula_39 and the initial filter estimates formula_86, formula_87, the approximate Linear Minimum Variance (LMV) filter for the model (1)-(2) is iteratively defined at each observation time formula_88 by the prediction estimates formula_89 and formula_90 with initial conditions formula_91 and formula_92, and the filter estimates formula_93 and formula_94 with filter gain formula_95 for all formula_96, where formula_97 is an approximation to the solution formula_6 of (1) on the observation times formula_98. Given formula_99 measurements formula_100 of the state space model (1)-(2) with formula_101 on formula_98, the approximate innovation estimator for the parameters formula_102 of (1) is defined by formula_103 where formula_104 with formula_105 and formula_106 approximations to the discrete-time innovation (3) and innovation variance (4), respectively, resulting from the filtering algorithm (7)-(8). For models with complete observations free of noise (i.e., with formula_107 and formula_108 in (2)), the approximate innovation estimator (9) reduces to the known Quasi-Maximum Likelihood estimators for SDEs. Main conventional-type estimators. Conventional-type innovation estimators are those (9) derived from conventional-type continuous-discrete or discrete-discrete approximate filtering algorithms. Among those based on approximate continuous-discrete filters are the innovation estimators built on Local Linearization (LL) filters, on the extended Kalman filter, and on second-order filters. Approximate innovation estimators based on discrete-discrete filters result from the discretization of the SDE (1) by means of a numerical scheme. Typically, the effectiveness of these innovation estimators is directly related to the stability of the involved filtering algorithms. A shared drawback of these conventional-type filters is that, once the observations are given, the error between the approximate and the exact innovation process is fixed and completely settled by the time distance between observations. This might introduce a large bias in the approximate innovation estimators in some applications, a bias that cannot be corrected by increasing the number of observations. However, the conventional-type innovation estimators are useful in many practical situations for which only medium or low accuracy for the parameter estimation is required.
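As an illustration of the filtering recursion (7)-(8), the Python sketch below implements one conventional-type approximation for a scalar model, using an extended-Kalman-type propagation of the first two moments between observations followed by the standard gain update. The scalar setting, the Euler integration of the moment equations and the function names are illustrative assumptions rather than a prescription of any particular filter from the literature.

```python
import numpy as np

def approximate_lmv_filter(z, dt, f, df_dx, g, obs_noise_var, m0, P0, n_substeps=50):
    """Conventional-type approximation of the LMV filter (7)-(8) for a scalar SDE
    dx = f(x) dt + g(x) dw observed as z_k = x(t_k) + e_k (so C = 1 here).
    Between observations, the first two conditional moments are propagated with an
    extended-Kalman-type linearization integrated by Euler steps; at each
    observation time the usual gain update is applied."""
    m, P = m0, P0
    innovations, innovation_vars = [], []
    for k in range(1, len(z)):
        h = dt / n_substeps
        for _ in range(n_substeps):                 # prediction step, cf. eq. (7)
            m, P = m + f(m) * h, P + (2.0 * df_dx(m) * P + g(m) ** 2) * h
        nu = z[k] - m                               # approximate innovation, cf. (3)
        S = P + obs_noise_var                       # approximate innovation variance, cf. (4)
        K = P / S                                   # filter gain
        m, P = m + K * nu, (1.0 - K) * P            # filter step, cf. eq. (8)
        innovations.append(nu)
        innovation_vars.append(S)
    return np.array(innovations), np.array(innovation_vars)
```

Plugging the returned innovations and variances into the objective of (9) and minimizing over the parameters then yields the corresponding approximate innovation estimator.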
Order-β innovation estimators. Let us consider the finer time discretization formula_109 of the time interval formula_110 satisfying the condition formula_111. Further, let formula_112 be the approximate value of formula_113 obtained from a discretization of the equation (1) for all formula_114, and formula_115 for all formula_116 a continuous-time approximation to formula_2. An order-formula_117 LMV filter is an approximate LMV filter for which formula_118 is an order-formula_117 weak approximation to formula_6 satisfying (10) and the weak convergence condition formula_119 for all formula_120 and any formula_121 times continuously differentiable functions formula_122 for which formula_123 and all its partial derivatives up to order formula_121 have polynomial growth, where formula_124 is a positive constant. This order-formula_125 LMV filter converges with rate formula_125 to the exact LMV filter as formula_126 goes to zero, where formula_126 is the maximum stepsize of the time discretization formula_127 on which the approximation formula_118 to formula_6 is defined. An order-formula_125 innovation estimator is an approximate innovation estimator (9) for which the approximations to the discrete-time innovation (3) and the innovation variance (4) result from an order-formula_125 LMV filter. Approximations formula_118 of any kind converging to formula_6 in a weak sense (as, e.g., those in ) can be used to design an order-formula_125 LMV filter and, consequently, an order-formula_125 innovation estimator. These order-formula_125 innovation estimators are intended for the recurrent practical situation in which a diffusion process must be identified from a small number of observations distant in time, or in which high accuracy for the estimated parameters is required. Properties. An order-formula_125 innovation estimator formula_128 has a number of important properties; in particular, it converges to the exact innovation estimator formula_129 as the maximum stepsize formula_126 of the time discretization formula_130 goes to zero. Figure 1 presents the histograms of the differences formula_136 and formula_137 between the exact innovation estimator formula_138 and the conventional formula_139 and order-formula_140 formula_141 innovation estimators for the parameters formula_142 and formula_143 of the equation formula_144 obtained from 100 time series formula_145 of formula_7 noisy observations formula_146 of formula_147 on the observation times formula_148, formula_149, with formula_150 and formula_151. The classical and the order-formula_140 Local Linearization filters of the innovation estimators formula_139 and formula_141 are defined, respectively, on the uniform time discretizations formula_152 and formula_153, with formula_154. The number of stochastic simulations of the order-formula_140 Local Linearization filter is estimated via an adaptive sampling algorithm with moderate tolerance. Figure 1 illustrates the convergence of the order-formula_140 innovation estimator formula_141 to the exact innovation estimator formula_138 as formula_126 decreases, which substantially improves on the estimation provided by the conventional innovation estimator formula_155. Deterministic approximations. The order-formula_125 innovation estimators overcome the drawback of the conventional-type innovation estimators concerning the impossibility of reducing their bias. However, the achievable bias reduction of an order-formula_125 innovation estimator might eventually require that the associated order-formula_125 LMV filter performs a large number of stochastic simulations.
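The prediction step of such a filter can be organized, in the simplest case, as a plain Monte Carlo propagation of state samples with a weak order-1 scheme between two observation times, as in the Python sketch below. This is only a schematic illustration of where the stochastic simulations enter; the filters proposed in the literature use more elaborate constructions, and all function names and numerical values here are assumptions.

```python
import numpy as np

def order1_lmv_prediction(samples, f, g, t0, t1, h_max, rng):
    """Monte Carlo prediction step of an order-1 LMV filter: an ensemble of state
    samples is propagated from t0 to t1 with the Euler-Maruyama scheme (a weak
    order-1 approximation) on a refined grid with stepsize at most h_max; the
    prediction mean and variance are then estimated from the propagated ensemble."""
    n_steps = max(1, int(np.ceil((t1 - t0) / h_max)))
    h = (t1 - t0) / n_steps
    x = np.asarray(samples, dtype=float).copy()
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(h), size=x.shape)
        x = x + f(x) * h + g(x) * dw
    return x, x.mean(), x.var()

# Example: predicting over one observation interval for dx = -0.1 x dt + 0.5 dw.
rng = np.random.default_rng(1)
ensemble = rng.normal(1.0, 0.1, size=10_000)          # samples of the filtered state
_, m_pred, P_pred = order1_lmv_prediction(ensemble, lambda x: -0.1 * x,
                                           lambda x: 0.5, 0.0, 1.0, h_max=0.05, rng=rng)
```

Decreasing the maximum stepsize (and increasing the ensemble size) reduces the approximation error at the price of more stochastic simulations, which is the trade-off discussed above.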
In situations where only low or medium precision approximate estimators are needed, an alternative deterministic filter algorithm - called the deterministic order-formula_156 LMV filter - can be obtained by tracking the first two conditional moments formula_157 and formula_158 of the order-formula_125 weak approximation formula_159 at all the time instants formula_160 in between two consecutive observation times formula_161 and formula_162. That is, the values of the predictions formula_163 and formula_164 in the filtering algorithm are computed from the recursive formulas formula_165 and formula_166 with formula_167 and with formula_168. The approximate innovation estimators formula_169 defined with these deterministic order-formula_125 LMV filters no longer converge to the exact innovation estimator, but they allow a significant bias reduction in the estimated parameters for a given finite sample at a lower computational cost. Figure 2 presents the histograms and the confidence limits of the approximate innovation estimators formula_170 and formula_171 for the parameters formula_172 and formula_173 of the Van der Pol oscillator with random frequency formula_174 formula_175 obtained from 100 time series formula_145 of formula_7 partial and noisy observations formula_176 of formula_147 on the observation times formula_177, formula_149, with formula_178 and formula_179. The deterministic order-formula_140 Local Linearization filter of the innovation estimators formula_180 and formula_171 is defined, for each estimator, on uniform time discretizations formula_181, with formula_182 and on an adaptive time-stepping discretization formula_183 with moderate relative and absolute tolerances, respectively. Observe the bias reduction of the estimated parameters as formula_126 decreases. Software. A Matlab implementation of various approximate innovation estimators is provided by the SdeEstimation toolbox. This toolbox has Local Linearization filters, including deterministic and stochastic options with fixed step sizes and sample numbers. It also offers adaptive time stepping and sampling algorithms, along with local and global optimization algorithms for innovation estimation. For models with complete observations free of noise, various approximations to the Quasi-Maximum Likelihood estimator are implemented in R. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\qquad\\qquad \nd\\mathbf{x}(t) = \\mathbf{f} ( t,\\mathbf{x}(t);\\theta) dt +\\sum_{i=1}^m \\mathbf{g}_{i}\\ (t, \\mathbf{x}(t);\\theta)\\ d\\mathbf{w}^{i}(t) \n\\qquad \\qquad (1)\n" }, { "math_id": 1, "text": " d " }, { "math_id": 2, "text": " \\mathbf{x} " }, { "math_id": 3, "text": "t \\ge t_0" }, { "math_id": 4, "text": "\\qquad\\qquad \n\\mathbf{z}_{t_k} = \\mathbf{Cx}(t_k)+\\mathbf{e}_{t_k} \\qquad\\qquad (2)\n" }, { "math_id": 5, "text": "\\mathbf{z}_{t_0},...,\\mathbf{z}_{t_{M-1}}" }, { "math_id": 6, "text": "\\mathbf{x}" }, { "math_id": 7, "text": "M" }, { "math_id": 8, "text": "t_0 \\ ,..., \\ t_{M-1}" }, { "math_id": 9, "text": "\\mathbf{f}" }, { "math_id": 10, "text": "\\mathbf{g}_i" }, { "math_id": 11, "text": "\\mathbf{w}=(\\mathbf{w}^1,...,\\mathbf{w}^m)" }, { "math_id": 12, "text": "m" }, { "math_id": 13, "text": "\\theta \\in \\mathbb{R}^p" }, { "math_id": 14, "text": "p" }, { "math_id": 15, "text": "\\{\\mathbf{e}_{t_k} : \\mathbf{e}_{t_k}\\sim \\Nu ( 0,\\Pi_{t_k}) \\}_{k=0,...,M-1}" }, { "math_id": 16, "text": "r" }, { "math_id": 17, "text": "\\mathbf{w}" }, { "math_id": 18, "text": "\\Pi_{t_k}" }, { "math_id": 19, "text": " r \\times r " }, { "math_id": 20, "text": "\\mathbf{C}" }, { "math_id": 21, "text": " r \\times d " }, { "math_id": 22, "text": "t_0,...,t_{M-1}" }, { "math_id": 23, "text": "\\theta" }, { "math_id": 24, "text": "\\{t\\}_M" }, { "math_id": 25, "text": "t_0,\\ldots,t_{M-1}" }, { "math_id": 26, "text": "Z_\\rho =\\{ \\mathbf{z}_{t_k} : t_k\\leq \\rho, t_k \\in \\{ t \\}_M \\}" }, { "math_id": 27, "text": "\\mathbf{x}_{t/\\rho} = \\Epsilon (\\mathbf{x}(t)|Z_\\rho) " }, { "math_id": 28, "text": "\\mathbf{U}_{t/\\rho }=E(\\mathbf{x}(t)\\mathbf{x}^{\\intercal }(t)|Z_{\\rho })-\\mathbf{x}_{t/\\rho }\\mathbf{x}_{t/\\rho }^{\\intercal }" }, { "math_id": 29, "text": "\\rho \\leq t" }, { "math_id": 30, "text": "E (\\cdot)" }, { "math_id": 31, "text": "\\{ \\nu_{t_k}\\} _{k=1,\\ldots,M-1}, " }, { "math_id": 32, "text": "\\qquad \\qquad \\nu_{t_k} = \\mathbf{z}_{t_k}- \\mathbf{Cx}_{{t_k}/{t_{k-1}}}(\\theta), \\qquad \\qquad (3) " }, { "math_id": 33, "text": "\\nu_{t_k} " }, { "math_id": 34, "text": "\\qquad \\qquad \\Sigma_{t_k}= \\mathbf{CU}_{{t_k}/t_{k-1}}(\\theta)\\ \\mathbf{C}^\\intercal + \\Pi_{t_k}, \\qquad \\qquad (4) " }, { "math_id": 35, "text": "\\Delta=\\underset{k}{\\max } \\{t_{k+1}-t_k\\}" }, { "math_id": 36, "text": "t_k,t_{k+1} \\in \\{t\\}_{M}" }, { "math_id": 37, "text": "t_{k+1}-t_k" }, { "math_id": 38, "text": "\\{\\nu_{t_k}\\}_ {k=1,\\ldots,M-1}" }, { "math_id": 39, "text": "Z_{t_{M-1}}" }, { "math_id": 40, "text": "\\theta = \\theta_0 " }, { "math_id": 41, "text": "\\{t\\}_M," }, { "math_id": 42, "text": "\\theta_0" }, { "math_id": 43, "text": "\\qquad\\qquad\n\\hat{\\theta}_M = \\operatorname\\arg\\{\\underset{\\theta}{\\min} \\ U_M(\\theta, Z_{t_{M-1}})\\},\n\\qquad\\qquad (5)" }, { "math_id": 44, "text": "\\qquad\\qquad\nU_M(\\theta,Z_{t_{M-1}}) = (M-1)\\ln(2\\pi)+\\sum_{k=1}^{M-1} \\ln(\\det(\\Sigma_{t_k})) + \\nu_{t_k}^\\intercal\\Sigma_{t_k}^{-1}\\nu_{t_k},\n" }, { "math_id": 45, "text": "\\nu_{t{_k}}" }, { "math_id": 46, "text": "\\Sigma_{t_k}" }, { "math_id": 47, "text": "t_k" }, { "math_id": 48, "text": "k=1,...,M-1." 
}, { "math_id": 49, "text": "U_M (\\theta, Z_{t_{M-1}})," }, { "math_id": 50, "text": "\\mathbf{x}_{t_k/t_{k-1}}(\\theta)" }, { "math_id": 51, "text": "\\mathbf{U}_{t_k/t_{k-1}}(\\theta)" }, { "math_id": 52, "text": "\\theta " }, { "math_id": 53, "text": "p_\\theta (t_{k+1}- t_k, \\mathbf{x}(t_k), \\mathbf{x}(t_{k+1}))" }, { "math_id": 54, "text": "\\mathbf{x}(t_k)" }, { "math_id": 55, "text": " \\mathbf{x}(t_{k+1})" }, { "math_id": 56, "text": "t_k " }, { "math_id": 57, "text": " t_{k+1}" }, { "math_id": 58, "text": "\\{ \\nu_{t_k} \\}_{k=1,...,M-1} , " }, { "math_id": 59, "text": "\\nu _{t_1},...,\\nu _{t_{M-1}}" }, { "math_id": 60, "text": " p_{\\theta}(t_{k+1}-t_k, \\mathbf{x}(t_k),\\mathbf{x}(t_{k+1}))" }, { "math_id": 61, "text": "\\mathfrak{p}_\\theta(t_{k+1}-t_k, \\nu_{t_{k}},\\nu_{t_{k+1}} )" }, { "math_id": 62, "text": "p_\\theta(t_{k+1}-t_k, \\mathbf{x}(t_k),\\mathbf{x}(t_{k+1}))" }, { "math_id": 63, "text": "\\mathfrak{p}_\\theta(t_{k+1}-t_k, \\nu_{t_k}, \\nu_{t_{k+1}})," }, { "math_id": 64, "text": "-U_{M,h} ( \\widehat{\\theta}_M, Z_{t_{M-1}} )" }, { "math_id": 65, "text": "100 (1-\\alpha)\\%" }, { "math_id": 66, "text": "\\widehat{\\theta}_M\\pm\\bigtriangleup" }, { "math_id": 67, "text": "\\widehat{\\theta}_M" }, { "math_id": 68, "text": "\\ \\qquad \\qquad \n\\bigtriangleup = t_{1-\\alpha, M-\\rho-1}\\sqrt{\\frac{diag(Var(\\widehat{\\theta}_M))}{M-p}},\n" }, { "math_id": 69, "text": "t_{1-\\alpha, M-p-1}" }, { "math_id": 70, "text": "M-p-1" }, { "math_id": 71, "text": "\\text{V} ar (\\widehat{\\theta}_M)= (I (\\widehat{\\theta}_M))^{-1}" }, { "math_id": 72, "text": "\\ \\qquad \\qquad I(\\widehat{\\theta }_{M})=\\sum_{k=1}^{M-1}I_{k}(\\widehat{\\theta }_{M}) " }, { "math_id": 73, "text": " \\widehat{\\theta }_{M} " }, { "math_id": 74, "text": " \\theta _{0} " }, { "math_id": 75, "text": " \\qquad \\qquad \n\n\\lbrack I_{k}(\\widehat{\\theta }_{M})]_{m,n}=\\frac{\\partial \\mu ^{\\intercal }}{\\partial \\theta _{m}}\\Sigma ^{-1}\\frac{\\partial \\mu }{\\partial \\theta _{n}}\n+\\frac{1}{2}trace(\\Sigma ^{-1}\\frac{\\partial \\Sigma }{\\partial \\theta _{m}}\\Sigma ^{-1}\\frac{\\partial \\Sigma }{\\partial \\theta _{n}})\n\n" }, { "math_id": 76, "text": "(m,n)" }, { "math_id": 77, "text": "I_{k}(\\widehat{\\theta }_{M})" }, { "math_id": 78, "text": "\\mu = \\mathbf{Cx}_{t_{k}/t_{k-1}}(\\widehat{\\theta }_{M})" }, { "math_id": 79, "text": "\\Sigma =\\mathbf{\\Sigma }_{t_{k}}(\\widehat{\\theta }_{M})" }, { "math_id": 80, "text": "1\\leq m,n\\leq p" }, { "math_id": 81, "text": "\\{\\mathbf{\\nu }_{t_{k}}:\\mathbf{\\nu }_{t_{k}}=\\mathbf{z}_{t_{k}}-\\mathbf{Cx}\n_{t_{k}/t_{k-1}}(\\widehat{\\theta }_{M})\\}_{k=1,\\ldots M-1}" }, { "math_id": 82, "text": "\\mathbf{h}" }, { "math_id": 83, "text": " \\qquad \\qquad \n\\mathbf{z}_{t_{k}}=\\mathbf{h}(t_{k}\\text{, }\\mathbf{x}(t_{k}))+\\mathbf{e}_{t_{k}}, \\qquad \\qquad (6)\n" }, { "math_id": 84, "text": "\\mathbf{x}_{t_{k}/t_{k-1}}(\\theta )" }, { "math_id": 85, "text": "\\mathbf{U}_{t_{k}/t_{k-1}}(\\theta )" }, { "math_id": 86, "text": "\\mathbf{y}_{t_{0}/t_{0}}=\\mathbf{x}_{t_{0}/t_{0}}" }, { "math_id": 87, "text": " \\mathbf{V}_ {t_{0}/t_{0}}=\\mathbf{U}_{t_{0}/t_{0}}" }, { "math_id": 88, "text": "t_{k}\\in \\{t\\}_{M}" }, { "math_id": 89, "text": " \\qquad \\qquad \n\n\\mathbf{y}_{t_{k+1}/t_{k}}=E(\\mathbf{y}(t_{k+1})|Z_{t_{k}}) \\quad " }, { "math_id": 90, "text": " \\quad \\mathbf{V}_{t_{k+1}/t_{k}}=E(\\mathbf{y}(t_{k+1})\\mathbf{y}\n^{\\intercal 
}(t_{k+1})|Z_{t_{k}})-\\mathbf{y}_{t_{k+1}/t_{k}}\\mathbf{y}_{t_{k+1}/t_{k}}^{\\intercal }, \\qquad (7)\n\n" }, { "math_id": 91, "text": " \\mathbf{y}_{t_{k}/t_{k}} " }, { "math_id": 92, "text": " \\mathbf{V}_{t_{k}/t_{k}} " }, { "math_id": 93, "text": " \\qquad \\qquad \n\n\\mathbf{y}_{t_{k+1}/t_{k+1}}=\\mathbf{y}_{t_{k+1}/t_{k}}+\\mathbf{K}_{t_{k+1}} \\mathbf{(\\mathbf{z}}_{t_{k+1}}-\\mathbf{\\mathbf{C}y}_{t_{k+1}/t_{k}}\\mathbf{)} \n\\quad " }, { "math_id": 94, "text": " \\quad \\mathbf{V}_{t_{k+1}/t_{k+1}}=\\mathbf{V}_{t_{k+1}/t_{k}}-\\mathbf{K}_{t_{k+1}}\\mathbf{CV}_{t_{k+1}/t_{k}} \\qquad (8)\n\n" }, { "math_id": 95, "text": " \\qquad \\qquad \n\n\\mathbf{K}_{t_{k+1}}=\\mathbf{V}_{t_{k+1}/t_{k}}\\mathbf{C}^{\\intercal } \\mathbf{CV}_{t_{k+1}/t_{k}} (\\mathbf{C}^{\\intercal }+\\mathbf{\\Pi }_{t_{k+1}})^{-1}\n\n" }, { "math_id": 96, "text": " t_{k},t_{k+1}\\in \\{t\\}_{M} " }, { "math_id": 97, "text": " \\mathbf{y} " }, { "math_id": 98, "text": " \\{t\\}_{M} " }, { "math_id": 99, "text": " M " }, { "math_id": 100, "text": " Z_{t_{M-1}} " }, { "math_id": 101, "text": " \\mathbf{\\theta =\\theta }_{0} " }, { "math_id": 102, "text": " \\mathbf{\\theta }_{0} " }, { "math_id": 103, "text": " \\qquad \\qquad \n\n\\widehat{\\mathbf{\\vartheta }}_{M}=\\arg \\{\\underset{\\mathbf{\\theta \\in } \\mathcal{D}_{\\theta }}{\\mathbf{\\min }}\\text{ }\\widetilde{U}_{M}\\mathbf{(\\theta },Z_{t_{M-1}})\\}, \\qquad \\qquad (9)\n\n" }, { "math_id": 104, "text": " \\qquad \\qquad \n\n\\widetilde{U}_{M}(\\mathbf{\\theta },Z_{t_{M-1}})=(M-1)\\ln (2\\pi)+\\sum\\limits_{k=1}^{M-1}\\ln (\\det (\\widetilde{\\mathbf{\\Sigma }}_{t_{k}}))+\\widetilde{\\mathbf{\\nu }}_{t_{k}}^{\\intercal }(\\widetilde{\\mathbf{\\Sigma }}_{t_{k}})^{-1}\\widetilde{\\mathbf{\\nu }}_{t_{k}},\n\n" }, { "math_id": 105, "text": " \\qquad \\qquad \n\n\\widetilde{\\mathbf{\\nu }}_{t_{k}}=\\mathbf{z}_{t_{k}}-\\mathbf{Cy}_{t_{k}/t_{k-1}}(\\theta ) \\qquad " }, { "math_id": 106, "text": " \\qquad \\widetilde{\\mathbf{\\Sigma }}_{t_{k}}=\\mathbf{CV}_{t_{k}/t_{k-1}}(\\theta )\\mathbf{C}^{\\intercal }+\\mathbf{\\Pi }_{t_{k}}\n\n" }, { "math_id": 107, "text": " \\mathbf{C}=\\mathbf{I} " }, { "math_id": 108, "text": " \\mathbf{\\Pi }_{t_{k}}=0 " }, { "math_id": 109, "text": " \\left( \\tau \\right) _{h>0}=\\{\\tau _{n}:\\tau _{n+1}-\\tau _{n}\\leq h\\text{ for }n=0,1,\\ldots ,N\\} " }, { "math_id": 110, "text": "[t_{0},t_{M-1}]" }, { "math_id": 111, "text": "\\left( \\tau\\right) _{h}\\supset \\{t\\}_{M}" }, { "math_id": 112, "text": "\\mathbf{y}_n" }, { "math_id": 113, "text": "\\mathbf{x}(\\tau_n)" }, { "math_id": 114, "text": " \\left(\\tau \\right)_{h} " }, { "math_id": 115, "text": " \\qquad \\qquad \n\n\\mathbf{y}=\\{\\mathbf{y}(t),t\\in \\lbrack t_{0},t_{M-1}] : \\mathbf{y}(\\tau _{n})=\\mathbf{y}_{n}, \\quad " }, { "math_id": 116, "text": " \\quad \\tau _{n}\\in \\left( \\tau \\right) _{h}\\} \\qquad \\qquad (10) \n\n" }, { "math_id": 117, "text": "\\beta" }, { "math_id": 118, "text": "\\mathbf{y}" }, { "math_id": 119, "text": " \\qquad \\qquad \n\n\\underset{t_{k}\\leq t\\leq t_{k+1}}{\\sup }\\left\\vert E\\left( g(\\mathbf{x} (t))|Z_{t_{k}}\\right) -E\\left( g(\\mathbf{y}(t))|Z_{t_{k}}\\right) \\right\\vert\n\\leq L_{k}h^{\\beta } \n\n" }, { "math_id": 120, "text": "t_{k},t_{k+1}\\in \\{t\\}_{M}" }, { "math_id": 121, "text": "2(\\beta +1)" }, { "math_id": 122, "text": "g:\\mathbb{R}^{d}\\rightarrow \\mathbb{R}" }, { "math_id": 123, "text": "g" }, { "math_id": 124, "text": "L_{k}" }, { "math_id": 125, "text": "\\beta " }, { "math_id": 126, 
"text": "h" }, { "math_id": 127, "text": "(\\tau )_{h}\\supset \\{t\\}_{M}" }, { "math_id": 128, "text": "\\widehat{\\mathbf{\\theta }}_{M}(h)" }, { "math_id": 129, "text": "\\widehat{\\mathbf{\\theta }}_{M}" }, { "math_id": 130, "text": "\\left( \\tau \\right) _{h}\\supset \\{t\\}_{M}" }, { "math_id": 131, "text": "t_{k+1}-t_{k}" }, { "math_id": 132, "text": "\\mathbf{z}_{t_{k}}" }, { "math_id": 133, "text": "\\mathbf{z}_{t_{k+1}}" }, { "math_id": 134, "text": "(\\tau )_{h} \\supset \\{t\\}_{M}. " }, { "math_id": 135, "text": "\\{\\widetilde{\\mathbf{\\nu }}_{t_{k}}:\\widetilde{\\mathbf{\\nu }}_{t_{k}}=\\mathbf{z}_{t_{k}}-\\mathbf{Cy}_{t_{k}/t_{k-1}}(\\widehat{\\theta }_{M}(h))\\}_{k=1,\\ldots M-1}" }, { "math_id": 136, "text": "(\\widehat{\\alpha }_{M}-\\widehat{\\alpha}^D_{h,M},\\widehat{\\sigma}_{M}-\\widehat{\\sigma}^D_{h,M})" }, { "math_id": 137, "text": "(\\widehat{\\alpha }_{M}-\\widehat{\\alpha}_{h,M},\\widehat{\\sigma}_{M}-\\widehat{\\sigma}_{h,M})" }, { "math_id": 138, "text": "(\\widehat{\\alpha }_{M},\\widehat{\\sigma}_{M})" }, { "math_id": 139, "text": "(\\widehat{\\alpha}^D_{h,M},\\widehat{\\sigma}^D_{h,M})" }, { "math_id": 140, "text": "1" }, { "math_id": 141, "text": "(\\widehat{\\alpha}_{h,M},\\widehat{\\sigma}_{h,M})" }, { "math_id": 142, "text": "\\alpha =-0.1" }, { "math_id": 143, "text": "\\sigma = 0.1" }, { "math_id": 144, "text": " \\qquad \ndx = txdt + \\sigma \\sqrt{t}x dw \\quad (11)\n" }, { "math_id": 145, "text": "z_{t_{0}},..,z_{t_{M-1}}" }, { "math_id": 146, "text": " \\qquad \nz_{t_{k}}=x(t_{k})+e_{t_{k}},\\text{ for }k=0,1,..,M-1, \\quad (12)\n" }, { "math_id": 147, "text": "x" }, { "math_id": 148, "text": "\\{t\\}_{M=10}=\\{t_{k}=0.5+k\\Delta :k=0,\\ldots M-1" }, { "math_id": 149, "text": "\\Delta =1\\}" }, { "math_id": 150, "text": "x(0.5)=1" }, { "math_id": 151, "text": "\\Pi _{k}=0.0001" }, { "math_id": 152, "text": "\\left( \\tau \\right) _{h=\\Delta}\\equiv \\{t\\}_{M}" }, { "math_id": 153, "text": "\\left( \\tau\n\\right)_{h=\\Delta/2,\\Delta/8,\\Delta/32}=\\{\\tau _{n}:\\tau _{n}=0.5+nh" }, { "math_id": 154, "text": "n=0,1,\\ldots ,(M-1)/h\\}" }, { "math_id": 155, "text": "(\\widehat{\\alpha}^D_{\\Delta,M},\\widehat{\\sigma}^D_{\\Delta,M})" }, { "math_id": 156, "text": " \\beta " }, { "math_id": 157, "text": "\\mu" }, { "math_id": 158, "text": "\\Lambda" }, { "math_id": 159, "text": "\\mathbf{y} " }, { "math_id": 160, "text": "\\tau _{n}\\in \\left( \\tau \\right) _{h}" }, { "math_id": 161, "text": "t_{k}" }, { "math_id": 162, "text": "t_{k+1}" }, { "math_id": 163, "text": "\\mathbf{y}_{t_{k+1}/t_{k}}" }, { "math_id": 164, "text": "\\mathbf{P}_{t_{k+1}/t_{k}}" }, { "math_id": 165, "text": " \\qquad \\qquad \n\n\\mathbf{y}_{\\tau _{n+1}/t_{k}}=\\mu (\\tau _{n},\\mathbf{y}_{\\tau_{n}/t_{k}};h_{n})\\quad " }, { "math_id": 166, "text": " \\quad \\mathbf{P}_{\\tau_{n+1}/t_{k}}=\\Lambda (\\tau _{n},\\mathbf{P}_{\\tau\t_{n}/t_{k}};h_{n}),\\quad " }, { "math_id": 167, "text": " \\tau _{n},\\tau _{n+1}\\in (\\tau )_{h}\\cap \\lbrack t_{k,}t_{k+1}], \n\n" }, { "math_id": 168, "text": "h_n=\\tau _{n+1}-\\tau _{n}" }, { "math_id": 169, "text": "\\widehat{\\mathbf{\\theta }}_{h,M}" }, { "math_id": 170, "text": "(\\widehat{\\alpha }_{h,M},\\widehat{\\sigma }_{h,M})" }, { "math_id": 171, "text": "(\\widehat{\\alpha }_{\\cdot ,M},\\widehat{\\sigma }_{\\cdot ,M})" }, { "math_id": 172, "text": "\\alpha =1" }, { "math_id": 173, "text": "\\sigma =1" }, { "math_id": 174, "text": " \\qquad \ndx_{1} =x_{2} dt \\quad (13)\n" }, { "math_id": 175, "text": " \\qquad \ndx_{2} 
=(-(x_{1}^{2}-1)x_{2}-\\alpha x_{1})dt+\\sigma x_{1}dw \\quad (14)\n" }, { "math_id": 176, "text": " \\qquad \nz_{t_{k}}=x_{1}(t_{k})+e_{t_{k}},\\text{ for }k=0,1,..,M-1, \\quad (15)\n" }, { "math_id": 177, "text": "\\{t\\}_{M=30}=\\{t_{k}=k\\Delta :k=0,\\ldots M-1" }, { "math_id": 178, "text": "(x_{1}(0),x_{1}(0))=(1,1)" }, { "math_id": 179, "text": "\\Pi _{k}=0.001" }, { "math_id": 180, "text": "(\\widehat{\\alpha }_{h,,M},\\widehat{\\sigma }_{h,M})" }, { "math_id": 181, "text": "\\left( \\tau\\right) _{h}=\\{\\tau _{n}:\\tau _{n}=nh" }, { "math_id": 182, "text": " n=0,1,\\ldots ,(M-1)/h\\}" }, { "math_id": 183, "text": "\\left( \\tau \\right) _{\\cdot }" } ]
https://en.wikipedia.org/wiki?curid=69343459
693453
Skolem–Noether theorem
Theorem characterizing the automorphisms of simple rings In ring theory, a branch of mathematics, the Skolem–Noether theorem characterizes the automorphisms of simple rings. It is a fundamental result in the theory of central simple algebras. The theorem was first published by Thoralf Skolem in 1927 in his paper "Zur Theorie der assoziativen Zahlensysteme" (German: "On the theory of associative number systems") and later rediscovered by Emmy Noether. Statement. In a general formulation, let "A" and "B" be simple unitary rings, and let "k" be the center of "B". The center "k" is a field since given "x" nonzero in "k", the simplicity of "B" implies that the nonzero two-sided ideal "BxB" = ("x") is the whole of "B", and hence that "x" is a unit. If the dimension of "B" over "k" is finite, i.e. if "B" is a central simple algebra of finite dimension, and "A" is also a "k"-algebra, then given "k"-algebra homomorphisms "f", "g" : "A" → "B", there exists a unit "b" in "B" such that for all "a" in "A" "g"("a") = "b" · "f"("a") · "b"−1. In particular, every automorphism of a central simple "k"-algebra is an inner automorphism. Proof. First suppose formula_0. Then "f" and "g" define the actions of "A" on formula_1; let formula_2 denote the "A"-modules thus obtained. Since formula_3 the map "f" is injective by simplicity of "A", so "A" is also finite-dimensional. Hence two simple "A"-modules are isomorphic and formula_2 are finite direct sums of simple "A"-modules. Since they have the same dimension, it follows that there is an isomorphism of "A"-modules formula_4. But such "b" must be an element of formula_5. For the general case, formula_6 is a matrix algebra and that formula_7 is simple. By the first part applied to the maps formula_8, there exists formula_9 such that formula_10 for all formula_11 and formula_12. Taking formula_13, we find formula_14 for all "z". That is to say, "b" is in formula_15 and so we can write formula_16. Taking formula_17 this time we find formula_18, which is what was sought. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "B = \\operatorname{M}_n(k) = \\operatorname{End}_k(k^n)" }, { "math_id": 1, "text": "k^n" }, { "math_id": 2, "text": "V_f, V_g" }, { "math_id": 3, "text": "f(1) = 1 \\neq 0 " }, { "math_id": 4, "text": "b: V_g \\to V_f" }, { "math_id": 5, "text": "\\operatorname{M}_n(k) = B" }, { "math_id": 6, "text": "B \\otimes_k B^{\\text{op}}" }, { "math_id": 7, "text": "A \\otimes_k B^{\\text{op}}" }, { "math_id": 8, "text": "f \\otimes 1, g \\otimes1 : A \\otimes_k B^{\\text{op}} \\to B \\otimes_k B^{\\text{op}}" }, { "math_id": 9, "text": "b \\in B \\otimes_k B^{\\text{op}}" }, { "math_id": 10, "text": "(f \\otimes 1)(a \\otimes z) = b (g \\otimes 1)(a \\otimes z) b^{-1}" }, { "math_id": 11, "text": "a \\in A" }, { "math_id": 12, "text": "z \\in B^{\\text{op}}" }, { "math_id": 13, "text": "a = 1" }, { "math_id": 14, "text": "1 \\otimes z = b (1\\otimes z) b^{-1}" }, { "math_id": 15, "text": "Z_{B \\otimes B^{\\text{op}}}(k \\otimes B^{\\text{op}}) = B \\otimes k" }, { "math_id": 16, "text": "b = b' \\otimes 1" }, { "math_id": 17, "text": "z = 1" }, { "math_id": 18, "text": "f(a)= b' g(a) {b'^{-1}}" } ]
https://en.wikipedia.org/wiki?curid=693453
6934542
Prod
Prod or PROD may refer to: Topics referred to by the same term &lt;templatestyles src="Dmbox/styles.css" /&gt; This disambiguation page lists articles associated with the title Prod.
[ { "math_id": 0, "text": "\\prod" } ]
https://en.wikipedia.org/wiki?curid=6934542
6934687
Constant-weight code
Method for encoding data in communications In coding theory, a constant-weight code, also called an "m"-of-"n" code, is an error detection and correction code where all codewords share the same Hamming weight. The one-hot code and the balanced code are two widely used kinds of constant-weight code. The theory is closely connected to that of designs (such as "t"-designs and Steiner systems). Most of the work on this field of discrete mathematics is concerned with "binary" constant-weight codes. Binary constant-weight codes have several applications, including frequency hopping in GSM networks. Most barcodes use a binary constant-weight code to simplify automatically setting the brightness threshold that distinguishes black and white stripes. Most line codes use either a constant-weight code or a nearly-constant-weight paired disparity code. In addition to their use as error correction codes, the large space between code words can also be used in the design of asynchronous circuits such as delay-insensitive circuits. Constant-weight codes, like Berger codes, can detect all unidirectional errors. "A"("n", "d", "w"). The central problem regarding constant-weight codes is the following: what is the maximum number of codewords in a binary constant-weight code with length formula_0, Hamming distance formula_1, and weight formula_2? This number is called formula_3. Apart from some trivial observations, it is generally impossible to compute these numbers in a straightforward way. Upper bounds are given by several important theorems such as the first and second Johnson bounds, and better upper bounds can sometimes be found in other ways. Lower bounds are most often found by exhibiting specific codes, either with use of a variety of methods from discrete mathematics or through heavy computer searching. A large table of such record-breaking codes was published in 1990, and an extension to longer codes (but only for those values of formula_1 and formula_2 which are relevant for the GSM application) was published in 2006. 1-of-"N" codes. A special case of constant-weight codes are the one-of-"N" codes, which encode formula_4 bits in a code-word of formula_5 bits. The one-of-two code uses the code words 01 and 10 to encode the bits '0' and '1'. A one-of-four code can use the words 0001, 0010, 0100, 1000 in order to encode two bits 00, 01, 10, and 11. Examples are dual rail encoding and chain link, used in delay-insensitive circuits. For these codes, formula_6 and formula_7. Some of the more notable uses of one-hot codes include the biphase mark code, which uses a 1-of-2 code; pulse-position modulation, which uses a 1-of-"n" code; and address decoders. Balanced code. In coding theory, a balanced code is a binary forward error correction code for which each codeword contains an equal number of zero and one bits. Balanced codes were introduced by Donald Knuth; they are a subset of so-called unordered codes, which are codes having the property that the positions of ones in a codeword are never a subset of the positions of the ones in another codeword. Like all unordered codes, balanced codes are suitable for the detection of all unidirectional errors in an encoded message. Balanced codes allow for particularly efficient decoding, which can be carried out in parallel. Some of the more notable uses of balanced-weight codes include the biphase mark code, which uses a 1-of-2 code; 6b/8b encoding, which uses a 4-of-8 code; and the Hadamard code, which is a formula_8-of-formula_9 code (except for the zero codeword).
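A small Python sketch of the lower-bound idea mentioned above: greedily collecting weight-w words whose pairwise Hamming distance is at least d yields an explicit constant-weight code and hence a lower bound on A(n, d, w). The parameter values in the example are only illustrative.

```python
from itertools import combinations

def constant_weight_code_lower_bound(n, d, w):
    """Greedy construction of a binary constant-weight code: enumerate all
    length-n words of Hamming weight w and keep a word whenever its Hamming
    distance to every codeword kept so far is at least d.  The size of the
    resulting code is a (generally weak) lower bound on A(n, d, w)."""
    def hamming_distance(a, b):
        return bin(a ^ b).count("1")
    code = []
    for ones in combinations(range(n), w):               # all weight-w words
        word = sum(1 << i for i in ones)
        if all(hamming_distance(word, c) >= d for c in code):
            code.append(word)
    return code

# Example with n = 8, d = 4, w = 4; the exact value A(8, 4, 4) is known to be 14,
# and the greedy code below gives a lower bound on it.
code = constant_weight_code_lower_bound(8, 4, 4)
print(len(code), [format(c, "08b") for c in code])
```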
The 3-wire lane encoding used in MIPI C-PHY can be considered a generalization of constant-weight codes to a ternary alphabet: each wire transmits a ternary signal, and at any one instant one of the 3 wires is transmitting a low signal, one is transmitting a middle signal, and one is transmitting a high signal. "m"-of-"n" codes. An "m"-of-"n" code is a separable error detection code with a code word length of "n" bits, where each code word contains exactly "m" instances of a "one". A single bit error will cause the code word to have either "m" + 1 or "m" − 1 "ones". An example "m"-of-"n" code is the 2-of-5 code used by the United States Postal Service. The simplest implementation is to append a string of ones to the original data until it contains "m" ones, then append zeros to create a code of length "n". Example: to encode the 3-bit data word 101 with a 3-of-6 code, one additional "1" is appended to give 1011 (three ones), and zeros are then appended to give the codeword 101100. Some of the more notable uses of constant-weight codes, other than the one-hot and balanced-weight codes already mentioned above, include Code 39, which uses a 3-of-9 code; bi-quinary coded decimal, which uses a 2-of-7 code; and the 2-of-5 code. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "d" }, { "math_id": 2, "text": "w" }, { "math_id": 3, "text": "A(n,d,w)" }, { "math_id": 4, "text": "\\log_2 N" }, { "math_id": 5, "text": "N" }, { "math_id": 6, "text": "n=N,~ d=2,~ w=1" }, { "math_id": 7, "text": "A(n, d, w) = n" }, { "math_id": 8, "text": "2^{k-1}" }, { "math_id": 9, "text": "2^k" } ]
https://en.wikipedia.org/wiki?curid=6934687
69349499
Cerebrospinal fluid flow MRI
CSF Flow MRI overview, methodology, and application Cerebrospinal fluid (CSF) flow MRI is used to assess pulsatile CSF flow both qualitatively and quantitatively. Time-resolved 2D phase-contrast MRI with velocity encoding is the most common method for CSF analysis. CSF flow MRI detects the back-and-forth flow of cerebrospinal fluid that corresponds to the vascular pulsations of the choroid plexus, driven mostly by the cardiac cycle. Bulk transport of CSF, characterized by CSF circulation through the central nervous system, is not used because it is too slow to assess clinically. CSF would have to pass through the brain's lymphatic system and be absorbed by arachnoid granulations. Cerebrospinal fluid (CSF). CSF is a clear fluid that surrounds the brain and spinal cord. The rate of CSF formation in humans is about 0.3–0.4 ml per minute and the total CSF volume is 90–150 ml in adults. Traditionally, CSF was evaluated mainly using invasive procedures such as lumbar puncture, myelographies, radioisotope studies, and intracranial pressure monitoring. Recently, rapid advances in imaging techniques have provided non-invasive methods for flow assessment. One of the best-known methods is phase-contrast MRI, and it is the only imaging modality for both qualitative and quantitative evaluation. The constant progress of magnetic resonance sequences gives new opportunities to develop new applications and to elucidate poorly understood mechanisms of CSF flow. Phase contrast MRI. The study of CSF flow became one of phase-contrast MRI's major applications. The key to phase-contrast MRI (PC-MRI) is the use of a bipolar gradient. A bipolar gradient has equal positive and negative magnitudes that are applied for the same time duration. The bipolar gradient in PC-MRI is placed in the sequence after RF excitation but before data collection, during the echo time of the generic MRI modality. The bipolar lobe must be applied along all three axes to image flow in all three directions. Bipolar gradient. The basis of the bipolar gradient in PC-MRI is that, when this gradient is used to change frequencies, there will be no phase shift for stationary protons because they experience equal positive and negative magnitudes. However, moving protons will undergo various degrees of phase shift because, along the gradient direction, their locations are constantly changing. This notion can be applied to monitor protons that are moving through a plane. From the phase contrast, the flowing protons can be detected. In the equation for determining the phase, the influence of local susceptibility is not removed by this bipolar gradient. Thus, a second acquisition with the bipolar gradient inverted is necessary, and its signal must be subtracted from the original acquisition. The purpose of this step is to cancel out the signals of static areas and produce the characteristic appearance of static tissue in phase-contrast imaging. formula_0 where formula_1 = phase shift, formula_2 = gyromagnetic ratio, formula_3 is the proton velocity, and formula_4 is the change in magnetic moment "Equation 1." This is used to calculate the phase shift, which is directly proportional to the gradient strength through the change in magnetic moment. In phase-contrast imaging, there is a direct correlation between the degree of phase shift and the proton velocity in the direction of the gradient. However, because of the limitation of angles above 360°, the angle will wrap back to 0°, and only a specific range of proton velocities can be measured. For example, if a certain velocity leads to a 361° phase shift, it cannot be distinguished from a velocity that causes a 1° phase shift. This phenomenon is called aliasing. Because both the forward and the backward flow directions are important, phase angles are usually restricted to the range from −180° to 180°.
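The relationship between phase shift, velocity and the VENC described above can be illustrated with a short Python sketch. The linear mapping (a velocity equal to the VENC produces a 180° shift) follows from Equations 1 and 2; the example values are assumptions made only for illustration.

```python
import numpy as np

def velocity_to_phase(velocity, venc):
    """Map a velocity (cm/s) to the measured phase shift (degrees) in PC-MRI.
    A velocity equal to the VENC gives a 180 deg shift; faster flow wraps
    around into the (-180, 180] interval, which is aliasing."""
    phase = 180.0 * velocity / venc
    return (phase + 180.0) % 360.0 - 180.0          # wrap into (-180, 180]

def phase_to_velocity(phase, venc):
    """Invert the (unaliased) phase shift back to a velocity estimate."""
    return venc * phase / 180.0

venc = 10.0                                         # cm/s, chosen above the expected CSF velocity
for v in (5.0, 8.0, 12.0):                          # 12 cm/s exceeds the VENC and aliases
    p = velocity_to_phase(v, venc)
    print(f"true {v:5.1f} cm/s -> phase {p:6.1f} deg -> measured {phase_to_velocity(p, venc):5.1f} cm/s")
```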
Using the bipolar gradient, it is possible to create a phase shift for spins that move with a specific velocity along the axis direction. Spins moving towards the bipolar gradient have a positive net phase shift, whereas spins moving away from the gradient have a negative net phase shift. Positive phase shifts are generally shown as white, while negative phase shifts are black. The net phase shift is directly proportional to both the time of bipolar gradient application and the flow velocity. This is why it is important to pick a velocity parameter that is similar in magnitude and width to that of the bipolar gradient - this is denoted as velocity encoding. Velocity encoding. Velocity encoding (VENC), measured in cm/s, is directly related to the properties of the bipolar gradient. The VENC is set to the highest fluid velocity expected in PC-MRI. Underestimating the VENC leads to aliasing artifacts, as any velocity slightly higher than the VENC value has a phase shift of the opposite sign. However, overestimating the VENC value leads to a lower acquired flow signal and a lower SNR. Typical CSF flow is 5–8 cm/s; however, patients with hyper-dynamic circulation often require higher VENCs of up to 25 cm/s. An accurate VENC value helps generate the highest signal possible. formula_5 formula_6 "Equation 2." This is used to calculate the VENC, which is inversely proportional to the gradient strength. The variables are equivalent to those defined in "Equation 1". Images. PC-MRI produces a magnitude image and a phase image for each plane and VENC obtained. In the magnitude image, cerebrospinal fluid (CSF) that is flowing gives a brighter signal, and stationary tissues are suppressed and visualized as a black background. The phase image is phase-shift encoded, where white high signals represent forward-flowing CSF and black low signals represent backward flow. Since the phase image is phase-dependent, the velocity can be quantitatively estimated from the image. The background is mid-grey in color. There is also a re-phased image, which is the magnitude of the flow-compensated signal. It includes bright high-signal flow and a background that remains visible. The phase-contrast velocity image has greater sensitivity to CSF flow than the magnitude image, since the velocity image reflects the phase shifts of the protons. There are two sets of phase-contrast images used in evaluating CSF flow. The first is imaging of the axial plane, with through-plane velocity encoding that shows the craniocaudal direction of flow (from the cranial to the caudal end of the structure). The second image is in the sagittal plane, where the velocity is encoded in-plane and images the craniocaudal direction. The first technique allows for flow quantification, while the second allows for qualitative assessment. Through-plane analysis is usually done perpendicular to the aqueduct and is more accurate for quantitative evaluation because this minimizes the partial volume effect, a main limitation of PC-MRI. The partial volume effect occurs when a voxel includes a boundary of static and moving materials; this leads to an overestimate of phase, which results in inaccurate velocities at material boundaries.
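Because the phase image encodes velocity linearly up to the VENC, converting it into a velocity map and integrating over a region of interest is straightforward, as the Python sketch below illustrates with toy values (the array sizes, VENC and pixel area are assumptions for illustration, not acquisition parameters). Integrating such flow rates over the cardiac cycle gives the stroke volume discussed in the Quantification section below.

```python
import numpy as np

def phase_image_to_velocity(phase_image_deg, venc):
    """Convert a PC-MRI phase image (degrees, in the range -180 to 180) into a
    velocity map (cm/s), using the linear phase-velocity relation set by the VENC."""
    return venc * np.asarray(phase_image_deg) / 180.0

def roi_flow_rate(velocity_map, roi_mask, pixel_area_cm2):
    """Volumetric flow rate (ml/s) through a region of interest: the sum of the
    through-plane velocities over the ROI multiplied by the pixel area."""
    return float(np.sum(velocity_map[roi_mask]) * pixel_area_cm2)

# Toy example: a 4x4 phase image with a 2x2 'aqueduct' region of interest.
phase = np.zeros((4, 4))
phase[1:3, 1:3] = 90.0                                    # 90 deg, i.e. half the VENC
roi = np.zeros((4, 4), dtype=bool)
roi[1:3, 1:3] = True
velocity = phase_image_to_velocity(phase, venc=10.0)      # 5 cm/s inside the ROI
print(roi_flow_rate(velocity, roi, pixel_area_cm2=0.01))  # 4 * 5 cm/s * 0.01 cm^2 = 0.2 ml/s
```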
These quantitative and qualitative CSF flow images can be acquired in about 8–10 minutes in addition to a regular MRI. Choosing parameters. Factors that impact PC-MRI include the VENC, the repetition time (TR), and the signal-to-noise ratio (SNR). To capture CSF flow of 5–8 cm/s, it is necessary to use a strong bipolar gradient. The VENC is inversely proportional to the magnitude and the time of application of the gradient. This means that a lower VENC value needs a higher-magnitude bipolar gradient applied for a longer time. This results in a larger TR value; however, TR can only be increased to a certain extent, as a short repetition time is needed for higher temporal resolution, since the data are plotted relative to a full cardiac cycle. Therefore, it is important to balance these parameters to maximize resolution. Quantification. To quantify CSF flow, it is important to define the region of interest, which can be done using a cross-sectional area measurement, for example. Then, velocity versus time can be plotted. Velocity is typically pulsatile due to systole and diastole, and the area under the curve yields the amount of flow. Systole produces forward flow, while diastole produces backward flow. Applications. Clinical. CSF flow can be used in diagnosing and treating aqueduct stenosis, normal pressure hydrocephalus, and Chiari malformation. Aqueduct stenosis is the narrowing of the aqueduct of Sylvius, which blocks the flow of CSF and causes a fluid buildup in the brain called hydrocephalus. Decreased aqueduct stroke volume and peak systolic velocity can be detected with CSF flow imaging to diagnose a patient with aqueduct stenosis. In normal pressure hydrocephalus (NPH), CSF flow values and velocities are examined, which is important for diagnosis because NPH is idiopathic and its symptoms, including urinary incontinence, dementia, and gait disturbances, vary among patients. Increased aqueduct CSF stroke volume and velocity are indicators of NPH. It is critically important to recognize and treat NPH because it is one of the few potentially treatable causes of dementia. The treatment of choice in NPH is ventriculoperitoneal shunt surgery (VPS). This treatment uses a VP shunt, a catheter with a valve that provides a one-way outflow of the excess CSF from the ventricles. Shunt patency must be monitored because of possible complications such as infection and obstruction. With its development and widespread adoption, PC-MRI has superseded spin-echo (SE) imaging, the traditional way of selecting patients who might benefit from a VPS, and it has gradually become the most commonly used sequence to evaluate the CSF flow pattern in patients with NPH in relation to the cardiac cycle. Chiari malformation (CMI) is a condition in which the cerebellar tonsils push through the foramen magnum of the skull. CSF flow varies with the level of tonsil descent and the type of Chiari malformation, so the MRI can also be helpful in deciding the type of surgery to be performed and in monitoring progress. CSF flow is altered within different regions of the spinal cord and brain stem because of the changes in the morphology of the posterior fossa and craniocervical junction, which makes PC-MRI a fundamental technique in CMI research studies and clinical evaluation. Limitations. In PC-MRI, the quantitative analysis of stroke volume, mean peak velocity, and peak systolic velocity is possible only in the plane that is perpendicular to the unidirectional flow.
Additionally, it is not possible to calculate multidirectional flow in multiaxial planes in 2D or 3D PC-MRI. This means that it is not a useful technique in clinical applications that have turbulent flow. Future. Emerging 4D PC-MRI is showing promising results in the assessment of multidirectional flow. The 4D imaging modality adds time as a dimension to the 3D image. There are many applications of 4D PC-MRI, including the ability to examine blood flow patterns. This is particularly helpful for cardiac and aortic imaging, but the major limitation remains the image acquisition time. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Delta\\phi=\\gamma\\vec{\\nu}\\Delta M_1" }, { "math_id": 1, "text": "\\Delta\\phi" }, { "math_id": 2, "text": "\\gamma" }, { "math_id": 3, "text": "\\vec{\\nu}" }, { "math_id": 4, "text": "\\Delta M_1" }, { "math_id": 5, "text": "Since\\ \\Delta\\phi<\\pi, \\ \\vec{\\nu}<VENC\n" }, { "math_id": 6, "text": "VENC = \\frac{\\pi}{\\gamma\\Delta M_1}" } ]
https://en.wikipedia.org/wiki?curid=69349499
69352620
Rank-width
Graph width parameter used in graph theory Rank-width is a graph width parameter used in graph theory and parameterized complexity, defined using linear algebra. It is defined from hierarchical clusterings of the vertices of a given graph, which can be visualized as ternary trees having the vertices as their leaves. Removing any edge from such a tree disconnects it into two subtrees and partitions the vertices into two subsets. The graph edges that cross from one side of the partition to the other can be described by a biadjacency matrix; for the purposes of rank-width, this matrix is defined over the finite field GF(2) rather than over the real numbers. The rank-width of a graph is the maximum of the ranks of these biadjacency matrices, for a clustering chosen to minimize this maximum. Rank-width is closely related to clique-width: formula_0, where formula_1 is the clique-width. However, clique-width is NP-hard to compute for graphs of large clique-width, and its parameterized complexity is unknown. In contrast, testing whether the rank-width is at most a constant formula_2 takes polynomial time, and even when the rank-width is not constant it can be approximated, with a constant approximation ratio, in polynomial time. For this reason, rank-width can be used as a more easily computed substitute for clique-width. An example of a family of graphs with high rank-width is provided by the square grid graphs. For an formula_3 grid graph, the rank-width is exactly formula_4. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
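As a small illustration of the quantity being maximized and minimized in the definition, the following Python sketch computes the GF(2) rank of a 0/1 biadjacency matrix by Gaussian elimination with XOR row operations; the example matrix (a 4-cycle split into its two pairs of opposite vertices) is an illustrative choice.

```python
import numpy as np

def gf2_rank(matrix):
    """Rank of a 0/1 matrix over GF(2), computed by Gaussian elimination with
    XOR row operations.  For rank-width, the matrix would be the biadjacency
    matrix of the edges crossing one side of a vertex bipartition."""
    m = np.array(matrix, dtype=np.uint8) % 2
    rank, rows, cols = 0, m.shape[0], m.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]          # move the pivot row up
        for r in range(rows):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]                      # eliminate with XOR
        rank += 1
    return rank

# Biadjacency matrix of a 4-cycle split into two pairs of opposite vertices.
print(gf2_rank([[1, 1], [1, 1]]))                    # rank 1 over GF(2)
```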
[ { "math_id": 0, "text": " k \\leq c \\leq 2^{k+1}-1" }, { "math_id": 1, "text": "c" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "n\\times n" }, { "math_id": 4, "text": "n-1" } ]
https://en.wikipedia.org/wiki?curid=69352620
6935363
Invariant differential operator
In mathematics and theoretical physics, an invariant differential operator is a kind of mathematical map from some objects to an object of similar type. These objects are typically functions on formula_0, functions on a manifold, vector valued functions, vector fields, or, more generally, sections of a vector bundle. In an invariant differential operator formula_1, the term "differential operator" indicates that the value formula_2 of the map depends only on formula_3 and the derivatives of formula_4 in formula_5. The word "invariant" indicates that the operator contains some symmetry. This means that there is a group formula_6 with a group action on the functions (or other objects in question) and this action is preserved by the operator: formula_7 Usually, the action of the group has the meaning of a change of coordinates (change of observer) and the invariance means that the operator has the same expression in all admissible coordinates. Invariance on homogeneous spaces. Let "M" = "G"/"H" be a homogeneous space for a Lie group G and a Lie subgroup H. Every representation formula_8 gives rise to a vector bundle formula_9 Sections formula_10 can be identified with formula_11 In this form the group "G" acts on sections via formula_12 Now let "V" and "W" be two vector bundles over "M". Then a differential operator formula_13 that maps sections of "V" to sections of "W" is called invariant if formula_14 for all sections formula_15 in formula_16 and elements "g" in "G". All linear invariant differential operators on homogeneous parabolic geometries, i.e. when "G" is semi-simple and "H" is a parabolic subgroup, are given dually by homomorphisms of generalized Verma modules. Invariance in terms of abstract indices. Given two connections formula_17 and formula_18 and a one form formula_19, we have formula_20 for some tensor formula_21. Given an equivalence class of connections formula_22, we say that an operator is invariant if the form of the operator does not change when we change from one connection in the equivalence class to another. For example, if we consider the equivalence class of all torsion free connections, then the tensor Q is symmetric in its lower indices, i.e. formula_23. Therefore we can compute formula_24 where brackets denote skew symmetrization. This shows the invariance of the exterior derivative when acting on one forms. Equivalence classes of connections arise naturally in differential geometry, for example: Conformal invariance. Given a metric formula_28 on formula_29, we can write the sphere formula_30 as the space of generators of the nil cone formula_31 In this way, the flat model of conformal geometry is the sphere formula_32 with formula_33 and P the stabilizer of a point in formula_29. A classification of all linear conformally invariant differential operators on the sphere is known (Eastwood and Rice, 1987).
[ { "math_id": 0, "text": "\\mathbb{R}^n" }, { "math_id": 1, "text": "D" }, { "math_id": 2, "text": "Df" }, { "math_id": 3, "text": "f(x)" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "G" }, { "math_id": 7, "text": "D(g\\cdot f)=g\\cdot (Df)." }, { "math_id": 8, "text": "\\rho:H\\rightarrow\\mathrm{Aut}(\\mathbb{V})" }, { "math_id": 9, "text": "V=G\\times_{H}\\mathbb{V}\\;\\text{where}\\;(gh,v)\\sim(g,\\rho(h)v)\\;\\forall\\;g\\in G,\\;h\\in H\\;\\text{and}\\;v\\in\\mathbb{V}." }, { "math_id": 10, "text": "\\varphi\\in\\Gamma(V)" }, { "math_id": 11, "text": "\\Gamma(V)=\\{\\varphi:G\\rightarrow\\mathbb{V}\\;:\\;\\varphi(gh)=\\rho(h^{-1})\\varphi(g)\\;\\forall\\;g\\in G,\\; h\\in H\\}." }, { "math_id": 12, "text": "(\\ell_g \\varphi)(g')=\\varphi(g^{-1}g')." }, { "math_id": 13, "text": "d:\\Gamma(V)\\rightarrow\\Gamma(W)" }, { "math_id": 14, "text": "d(\\ell_g \\varphi) = \\ell_g (d\\varphi)." }, { "math_id": 15, "text": "\\varphi" }, { "math_id": 16, "text": "\\Gamma(V)" }, { "math_id": 17, "text": "\\nabla" }, { "math_id": 18, "text": "\\hat{\\nabla}" }, { "math_id": 19, "text": "\\omega" }, { "math_id": 20, "text": "\\nabla_{a}\\omega_{b}=\\hat{\\nabla}_{a}\\omega_{b}-Q_{ab}{}^{c}\\omega_{c}" }, { "math_id": 21, "text": "Q_{ab}{}^{c}" }, { "math_id": 22, "text": "[\\nabla]" }, { "math_id": 23, "text": "Q_{ab}{}^{c}=Q_{(ab)}{}^{c}" }, { "math_id": 24, "text": "\\nabla_{[a}\\omega_{b]}=\\hat{\\nabla}_{[a}\\omega_{b]}," }, { "math_id": 25, "text": "d=\\sum_j \\partial_j \\, dx_j" }, { "math_id": 26, "text": "d:\\Omega^n(M)\\rightarrow\\Omega^{n+1}(M)" }, { "math_id": 27, "text": "X^a \\mapsto \\nabla_{(a}X_{b)}-\\frac{1}{n}\\nabla_c X^c g_{ab}" }, { "math_id": 28, "text": "g(x,y)=x_{1}y_{n+2}+x_{n+2}y_{1}+\\sum_{i=2}^{n+1}x_{i}y_{i}" }, { "math_id": 29, "text": "\\mathbb{R}^{n+2}" }, { "math_id": 30, "text": "S^{n}" }, { "math_id": 31, "text": "S^{n}=\\{[x]\\in\\mathbb{RP}_{n+1}\\; :\\; g(x,x)=0 \\}." }, { "math_id": 32, "text": "S^{n}=G/P" }, { "math_id": 33, "text": "G=SO_{0}(n+1,1)" } ]
https://en.wikipedia.org/wiki?curid=6935363
69356423
Cluster prime
In number theory, a cluster prime is a prime number p such that every even positive integer "k" ≤ p − 3 can be written as the difference between two prime numbers not exceeding p (OEIS: ). For example, the number 23 is a cluster prime because 23 − 3 = 20, and every even integer from 2 to 20, inclusive, is the difference of at least one pair of prime numbers not exceeding 23: 2 = 5 − 3, 4 = 7 − 3, 6 = 11 − 5, 8 = 11 − 3, 10 = 13 − 3, 12 = 17 − 5, 14 = 17 − 3, 16 = 19 − 3, 18 = 23 − 5, and 20 = 23 − 3. On the other hand, 149 is not a cluster prime because 140 < 146 = 149 − 3, and there is no way to write 140 as the difference of two primes that are less than or equal to 149. By convention, 2 is not considered to be a cluster prime. The first 23 odd primes (up to 89) are all cluster primes. The first few odd primes that are not cluster primes are 97, 127, 149, 191, 211, 223, 227, 229, ... OEIS:  It is not known if there are infinitely many cluster primes. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: Are there infinitely many cluster primes? References. &lt;templatestyles src="Reflist/styles.css" /&gt;
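The definition can be checked directly by brute force for small primes; the following Python sketch (using sympy's prime utilities) does this and reproduces 97 as the smallest odd prime that is not a cluster prime.

```python
from sympy import primerange

def is_cluster_prime(p):
    """Check whether the odd prime p is a cluster prime: every even k <= p - 3
    must be a difference of two primes that do not exceed p."""
    primes = list(primerange(2, p + 1))
    differences = {q - r for q in primes for r in primes if q > r}
    return all(k in differences for k in range(2, p - 2, 2))

# The odd primes below 100 that are not cluster primes (expected output: [97]).
print([p for p in primerange(3, 100) if not is_cluster_prime(p)])
```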
[ { "math_id": 0, "text": "p_n" }, { "math_id": 1, "text": "{{p_{n} - p_{n - 1}}}" }, { "math_id": 2, "text": "C(x) < {x \\over ln(x)^m}" } ]
https://en.wikipedia.org/wiki?curid=69356423
69361192
Diamagnetic inequality
Mathematical inequality relating the derivative of a function to its covariant derivative In mathematics and physics, the diamagnetic inequality relates the Sobolev norm of the absolute value of a section of a line bundle to its covariant derivative. The diamagnetic inequality has an important physical interpretation: a charged particle in a magnetic field has more energy in its ground state than it would in a vacuum. To precisely state the inequality, let formula_0 denote the usual Hilbert space of square-integrable functions, and formula_1 the Sobolev space of square-integrable functions with square-integrable derivatives. Let formula_2 be measurable functions on formula_3 and suppose that formula_4 is real-valued, formula_5 is complex-valued, and formula_6. Then for almost every formula_7, formula_8 In particular, formula_9. Proof. For this proof we follow Elliott H. Lieb and Michael Loss. From the assumptions, formula_10 when viewed in the sense of distributions and formula_11 for almost every formula_12 such that formula_13 (and formula_14 if formula_15). Moreover, formula_16 So formula_17 for almost every formula_12 such that formula_13. The case that formula_15 is similar. Application to line bundles. Let formula_18 be a U(1) line bundle, and let formula_19 be a connection 1-form for formula_20. In this situation, formula_19 is real-valued, and the covariant derivative formula_21 satisfies formula_22 for every section formula_5. Here formula_23 are the components of the trivial connection for formula_20. If formula_4 and formula_6, then for almost every formula_7, it follows from the diamagnetic inequality that formula_24 The above case is of the most physical interest. We view formula_3 as Minkowski spacetime. Since the gauge group of electromagnetism is formula_25, connection 1-forms for formula_20 are nothing more than the valid electromagnetic four-potentials on formula_3. If formula_26 is the electromagnetic tensor, then the massless Maxwell–Klein–Gordon system for a section formula_27 of formula_20 is formula_28 and the energy of this physical system is formula_29 The diamagnetic inequality guarantees that the energy is minimized in the absence of electromagnetism, that is, when formula_30. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "L^2(\\mathbb R^n)" }, { "math_id": 1, "text": "H^1(\\mathbb R^n)" }, { "math_id": 2, "text": "f, A_1, \\dots, A_n" }, { "math_id": 3, "text": "\\mathbb R^n" }, { "math_id": 4, "text": "A_j \\in L^2_{\\text{loc}} (\\mathbb R^n)" }, { "math_id": 5, "text": "f" }, { "math_id": 6, "text": "f , (\\partial_1 + iA_1)f, \\dots, (\\partial_n + iA_n)f \\in L^2(\\mathbb R^n)" }, { "math_id": 7, "text": "x \\in \\mathbb R^n" }, { "math_id": 8, "text": "|\\nabla |f|(x)| \\leq |(\\nabla + iA)f(x)|." }, { "math_id": 9, "text": "|f| \\in H^1(\\mathbb R^n)" }, { "math_id": 10, "text": "\\partial_j |f| \\in L^1_{\\text{loc}}(\\mathbb R^n)" }, { "math_id": 11, "text": "\\partial_j |f|(x) = \\operatorname{Re}\\left(\\frac{\\overline f(x)}{|f(x)|} \\partial_j f(x)\\right)" }, { "math_id": 12, "text": "x" }, { "math_id": 13, "text": "f(x) \\neq 0" }, { "math_id": 14, "text": "\\partial_j |f|(x) = 0" }, { "math_id": 15, "text": "f(x) = 0" }, { "math_id": 16, "text": "\\operatorname{Re}\\left(\\frac{\\overline f(x)}{|f(x)|} i A_j f(x)\\right) = \\operatorname{Im}(A_jf) = 0." }, { "math_id": 17, "text": "\\nabla |f|(x) = \\operatorname{Re}\\left(\\frac{\\overline f(x)}{|f(x)|} \\mathbf D f(x)\\right) \\leq \\left|\\frac{\\overline f(x)}{|f(x)|} \\mathbf D f(x)\\right| = |\\mathbf D f(x)|" }, { "math_id": 18, "text": "p: L \\to \\mathbb R^n" }, { "math_id": 19, "text": "A" }, { "math_id": 20, "text": "L" }, { "math_id": 21, "text": "\\mathbf D" }, { "math_id": 22, "text": "\\mathbf Df_j = (\\partial_j + iA_j)f" }, { "math_id": 23, "text": "\\partial_j" }, { "math_id": 24, "text": "|\\nabla |f|(x)| \\leq |\\mathbf Df(x)|." }, { "math_id": 25, "text": "U(1)" }, { "math_id": 26, "text": "F = dA" }, { "math_id": 27, "text": "\\phi" }, { "math_id": 28, "text": "\\begin{cases} \\partial^\\mu F_{\\mu\\nu} = \\operatorname{Im}(\\phi \\mathbf D_\\nu \\phi) \\\\\n\\mathbf D^\\mu \\mathbf D_\\mu \\phi = 0\\end{cases}" }, { "math_id": 29, "text": "\\frac{||F(t)||_{L^2_x}^2}{2} + \\frac{||\\mathbf D \\phi(t)||_{L^2_x}^2}{2}." }, { "math_id": 30, "text": "A = 0" } ]
https://en.wikipedia.org/wiki?curid=69361192
69364493
Probability of direction
Mathematical index used in Bayesian statistics In Bayesian statistics, the probability of direction ("pd") is a measure of effect existence representing the certainty with which an effect is positive or negative. This index is numerically similar to the frequentist "p"-value. Definition. It is mathematically defined as the proportion of the posterior distribution that has the same sign as the posterior median. It typically varies between 50% and 100%. History. The original formulation of this index and its usage in Bayesian statistics can be found in the psycho software documentation by Dominique Makowski under the appellation "Maximum Probability of Effect" (MPE). It was later renamed "Probability of Direction" and implemented in the easystats collection of software. Similar formulations have also been described in the context of the interpretation of bootstrapped parameters. Properties. The probability of direction is typically independent of the statistical model, as it is solely based on the posterior distribution and does not require any additional information from the data or the model. Contrary to indices related to the Region of Practical Equivalence (ROPE), it is robust to the scale of both the response variable and the predictors. However, similarly to its frequentist counterpart, the "p"-value, this index is not able to quantify evidence "in favor" of the null hypothesis. Advantages and limitations of the probability of direction have been studied by comparing it to other indices, including the Bayes factor and the Bayesian equivalence test. Relationship with "p"-value. The probability of direction has a direct correspondence with the frequentist one-sided "p"-value through the formula formula_0 and with the two-sided "p"-value through the formula formula_1. Thus, a two-sided "p"-value of respectively .1, .05, .01 and .001 corresponds approximately to a "pd" of 95%, 97.5%, 99.5% and 99.95%. The proximity between the "pd" and the "p"-value is in line with the interpretation of the former as an index of effect existence, as it follows the original definition of the "p"-value. Interpretation. The bayestestR package for R suggests the following rule-of-thumb guidelines: &lt;templatestyles src="alternating rows table/styles.css" /&gt;
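A direct way to compute the index from posterior draws, together with the two-sided "p"-value correspondence given above, is sketched below in Python; the synthetic posterior is only an illustration.

```python
import numpy as np

def probability_of_direction(posterior_samples):
    """Probability of direction (pd): the share of posterior samples having the
    same sign as the posterior median, a proportion between 0.5 and 1."""
    samples = np.asarray(posterior_samples)
    median_sign = np.sign(np.median(samples))
    return np.mean(np.sign(samples) == median_sign)

# Illustration with a synthetic posterior centred away from zero.
rng = np.random.default_rng(0)
draws = rng.normal(loc=0.4, scale=0.3, size=10_000)
pd = probability_of_direction(draws)
print(pd, 2 * (1 - pd))            # pd and the corresponding two-sided p-value
```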
[ { "math_id": 0, "text": "p_\\text{one-sided} = 1 - pd" }, { "math_id": 1, "text": "p_\\text{two-sided} = 2 \\left(1 - pd\\right)" } ]
https://en.wikipedia.org/wiki?curid=69364493
6936536
Prandtl–Meyer expansion fan
Phenomenon in fluid dynamics A supersonic expansion fan, technically known as a Prandtl–Meyer expansion fan (a two-dimensional simple wave), is a centered expansion process that occurs when a supersonic flow turns around a convex corner. The fan consists of an infinite number of Mach waves, diverging from a sharp corner. When a flow turns around a smooth, circular corner, these waves can be extended backwards to meet at a point. Each wave in the expansion fan turns the flow gradually (in small steps). It is physically impossible for the flow to turn through a single "shock" wave because this would violate the second law of thermodynamics. Across the expansion fan, the flow accelerates (velocity increases) and the Mach number increases, while the static pressure, temperature and density decrease. Since the process is isentropic, the stagnation properties (e.g. the total pressure and total temperature) remain constant across the fan. The theory was described by Theodor Meyer in his 1908 doctoral thesis, together with his advisor Ludwig Prandtl, who had already discussed the problem a year earlier. Flow properties. The expansion fan consists of an infinite number of expansion waves or Mach lines. The first Mach line is at an angle formula_1 with respect to the flow direction, and the last Mach line is at an angle formula_2 with respect to the final flow direction. Since the flow turns through small angles and the changes across each expansion wave are small, the whole process is isentropic. This simplifies the calculations of the flow properties significantly. Since the flow is isentropic, the stagnation properties such as stagnation pressure (formula_3), stagnation temperature (formula_4) and stagnation density (formula_5) remain constant. The final static properties are a function of the final flow Mach number (formula_6) and can be related to the initial flow conditions as follows, where formula_0 is the heat capacity ratio of the gas (1.4 for air): formula_7 The Mach number after the turn (formula_6) is related to the initial Mach number (formula_8) and the turn angle (formula_9) by formula_10 where formula_11 is the Prandtl–Meyer function. This function determines the angle through which a sonic flow (M = 1) must turn to reach a particular Mach number (M). Mathematically, formula_12 By convention, formula_13 Thus, given the initial Mach number (formula_14), one can calculate formula_15 and, using the turn angle, find formula_16. From the value of formula_16 one can obtain the final Mach number (formula_17) and the other flow properties. The velocity field in the expansion fan, expressed in polar coordinates formula_18, is given by formula_19 where formula_20 is the specific enthalpy and formula_21 is the stagnation specific enthalpy. Maximum turn angle. As the Mach number varies from 1 to formula_22, formula_23 takes values from 0 to formula_24, where formula_25 This places a limit on how much a supersonic flow can turn through, with the maximum turn angle given by formula_26 One can also look at it as follows. A flow has to turn so that it can satisfy the boundary conditions. In an ideal flow, there are two kinds of boundary condition that the flow has to satisfy: the velocity boundary condition (the flow cannot pass through the wall, so it must turn parallel to it) and the pressure boundary condition (the static pressure must match the surrounding pressure). If the flow turns enough so that it becomes parallel to the wall, we do not need to worry about the pressure boundary condition. However, as the flow turns, its static pressure decreases (as described earlier). If there is not enough pressure to start with, the flow won't be able to complete the turn and will not be parallel to the wall.
This shows up as the maximum angle through which a flow can turn. The lower the Mach number is to start with (i.e. small formula_8), the greater the maximum angle through which the flow can turn. The streamline which separates the final flow direction and the wall is known as a slipstream (shown as the dashed line in the figure). Across this line there is a jump in the temperature, density and tangential component of the velocity (normal component being zero). Beyond the slipstream the flow is stagnant (which automatically satisfies the velocity boundary condition at the wall). In case of real flow, a shear layer is observed instead of a slipstream, because of the additional no-slip boundary condition.
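A minimal numerical sketch of the relations above, assuming a calorically perfect gas with heat capacity ratio 1.4 (the example Mach number and turn angle are arbitrary): it evaluates the Prandtl–Meyer function, inverts it by bisection to obtain the downstream Mach number, and computes the static temperature and pressure ratios across the fan.

```python
# Sketch: Prandtl-Meyer expansion for a calorically perfect gas (gamma assumed 1.4).
import math

GAMMA = 1.4

def prandtl_meyer(M, g=GAMMA):
    """Prandtl-Meyer function nu(M) in radians, with nu(1) = 0."""
    a = math.sqrt((g + 1) / (g - 1))
    b = math.sqrt(M * M - 1)
    return a * math.atan(b / a) - math.atan(b)

def mach_after_turn(M1, theta_deg, g=GAMMA):
    """Solve nu(M2) = nu(M1) + theta for M2 by bisection (theta below the maximum turn angle)."""
    target = prandtl_meyer(M1, g) + math.radians(theta_deg)
    lo, hi = M1, 100.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if prandtl_meyer(mid, g) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M1, theta = 2.0, 10.0          # example: Mach 2 flow turning through 10 degrees
M2 = mach_after_turn(M1, theta)
T_ratio = (1 + 0.5 * (GAMMA - 1) * M1**2) / (1 + 0.5 * (GAMMA - 1) * M2**2)
p_ratio = T_ratio ** (GAMMA / (GAMMA - 1))
print(f"M2 = {M2:.3f}, T2/T1 = {T_ratio:.3f}, p2/p1 = {p_ratio:.3f}")
```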
[ { "math_id": 0, "text": "\\gamma" }, { "math_id": 1, "text": "\\mu_1 = \\arcsin \\left( \\frac{1}{M_1} \\right)" }, { "math_id": 2, "text": "\\mu_2 = \\arcsin \\left( \\frac{1}{M_2} \\right)" }, { "math_id": 3, "text": "p_0" }, { "math_id": 4, "text": "T_0" }, { "math_id": 5, "text": "\\rho_0" }, { "math_id": 6, "text": "M_2" }, { "math_id": 7, "text": "\\begin{align}\n \\frac{T_2}{T_1} &= \\left( \\frac{1 + \\frac{\\gamma - 1}{2}M_1^2}{1 + \\frac{\\gamma - 1}{2}M_2^2} \\right) \\\\[3pt]\n \\frac{p_2}{p_1} &= \\left( \\frac{1 + \\frac{\\gamma - 1}{2}M_1^2}{1 + \\frac{\\gamma - 1}{2}M_2^2} \\right)^\\frac{\\gamma}{\\gamma - 1} \\\\[3pt]\n \\frac{\\rho_2}{\\rho_1} &= \\left( \\frac{1 + \\frac{\\gamma - 1}{2}M_1^2}{1 + \\frac{\\gamma - 1}{2}M_2^2} \\right)^\\frac{1}{\\gamma - 1}.\n\\end{align}" }, { "math_id": 8, "text": "M_1" }, { "math_id": 9, "text": "\\theta" }, { "math_id": 10, "text": " \\theta = \\nu(M_2) - \\nu(M_1) \\," }, { "math_id": 11, "text": "\\nu(M) \\," }, { "math_id": 12, "text": "\\begin{align}\n \\nu(M) \n &= \\int \\frac{\\sqrt{M^2 - 1}}{1 + \\frac{\\gamma - 1}{2}M^2}\\frac{\\,dM}{M} \\\\\n &= \\sqrt{\\frac{\\gamma + 1}{\\gamma - 1}} \\arctan \\sqrt{\\frac{\\gamma - 1}{\\gamma + 1} \\left(M^2 - 1\\right)} - \\arctan\\sqrt{M^2 - 1}. \\\\\n\\end{align} " }, { "math_id": 13, "text": "\\nu(1) = 0. \\," }, { "math_id": 14, "text": " M_1 " }, { "math_id": 15, "text": "\\nu(M_1) \\," }, { "math_id": 16, "text": "\\nu(M_2) \\," }, { "math_id": 17, "text": " M_2 " }, { "math_id": 18, "text": "(r,\\phi)" }, { "math_id": 19, "text": "v_r = \\sqrt{2(h_0-h)-c^2}, \\quad v_\\phi=c, \\quad \\text{where} \\quad \\phi = - \\int \\frac{d(\\rho c)}{\\rho \\sqrt{2(h_0-h)-c^2}}," }, { "math_id": 20, "text": "h" }, { "math_id": 21, "text": "h_0" }, { "math_id": 22, "text": "\\infty" }, { "math_id": 23, "text": "\\nu \\," }, { "math_id": 24, "text": "\\nu_\\text{max} \\," }, { "math_id": 25, "text": "\\nu_\\text{max} = \\frac{\\pi}{2} \\left( \\sqrt{\\frac{\\gamma + 1}{\\gamma - 1}} - 1 \\right)." }, { "math_id": 26, "text": "\\theta_\\text{max} = \\nu_\\text{max} - \\nu(M_1). \\," } ]
https://en.wikipedia.org/wiki?curid=6936536
69365391
Aaron Robertson (mathematician)
American mathematician (born 1971) Aaron Robertson (born November 8, 1971) is an American mathematician who specializes in Ramsey theory. He is a professor at Colgate University. Life and education. Aaron Robertson was born in Torrance, California, and moved with his parents to Midland, Michigan at the age of 4. He studied actuarial science as an undergraduate at the University of Michigan, and went on to graduate school in mathematics at Temple University in Philadelphia, where he was supervised by Doron Zeilberger. Robertson received his Ph.D. in 1999 with his thesis titled "Some New Results in Ramsey Theory". Following his Ph.D., Robertson became an assistant professor of mathematics at Colgate University, where he is currently a full professor. Mathematical work. Robertson's work in mathematics since 1998 has consisted predominantly of topics related to Ramsey theory. One of Robertson's earliest publications is a paper, co-authored with his supervisor Doron Zeilberger, which came out of his Ph.D. work. The authors determine "the minimum number (asymptotically) of monochromatic Schur Triples that a 2-colouring of formula_0 can have", showing it to be formula_1. After completing his dissertation, Robertson worked on multicolored Ramsey numbers for 3-term arithmetic progressions, finding new lower bounds close to the best-known values, in the paper "New Lower Bounds for Some Multicolored Ramsey Numbers". Another notable piece of Robertson's research is a paper co-authored with Doron Zeilberger and Herbert Wilf titled "Permutation Patterns and Continued Fractions". In the paper, they "find a generating function for the number of (132)-avoiding permutations that have a given number of (123) patterns", with the result being "in the form of a continued fraction". Robertson's contribution to this specific paper includes discussion of permutations that avoid a certain pattern but contain others. A notable paper Robertson wrote, titled "A Probabilistic Threshold For Monochromatic Arithmetic Progressions", explores the function formula_2 (where formula_3 is fixed) and the r-colourings of formula_4. Robertson analyzes the threshold function for formula_5-term arithmetic progressions and improves the bounds found previously. In 2004, Robertson and Bruce M. Landman published the book "Ramsey Theory on the Integers", of which a second expanded edition appeared in 2014. The book introduced new topics such as rainbow Ramsey theory, an "inequality" version of Schur's theorem, monochromatic solutions of recurrence relations, Ramsey results involving both sums and products, monochromatic sets avoiding certain differences, Ramsey properties for polynomial progressions, generalizations of the Erdős–Ginzburg–Ziv theorem, and the number of arithmetic progressions under arbitrary colourings. More recently, in 2021, Robertson published a book titled "Fundamentals of Ramsey Theory". Robertson's goal in writing this book was to "help give an overview of Ramsey theory from several points of view, adding intuition and detailed proofs as we go, while being, hopefully, a bit gentler than most of the other books on Ramsey theory". Throughout the book, Robertson discusses several theorems including Ramsey's Theorem, Van der Waerden's Theorem, Rado's Theorem, and the Hales–Jewett Theorem.
[ { "math_id": 0, "text": "[1,n]" }, { "math_id": 1, "text": "n^2/22 + O(n)" }, { "math_id": 2, "text": "f_r(k) = \\sqrt k \\cdot r^{k/2}" }, { "math_id": 3, "text": "r\\geq 2" }, { "math_id": 4, "text": "[1,n_k] = \\{1,2,\\ldots,n_k\\}" }, { "math_id": 5, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=69365391
693673
Transition radiation
Transition radiation (TR) is a form of electromagnetic radiation emitted when a charged particle passes through inhomogeneous media, such as a boundary between two different media. This is in contrast to Cherenkov radiation, which occurs when a charged particle passes through a homogeneous dielectric medium at a speed greater than the phase velocity of electromagnetic waves in that medium. History. Transition radiation was demonstrated theoretically by Ginzburg and Frank in 1945. They showed the existence of transition radiation when a charged particle passes perpendicularly through a boundary between two different homogeneous media. The frequency of radiation emitted in the backwards direction relative to the particle was mainly in the range of visible light. The intensity of radiation was logarithmically proportional to the Lorentz factor of the particle. After the first observation of transition radiation in the optical region, many early studies indicated that the application of optical transition radiation for the detection and identification of individual particles seemed to be severely limited due to the inherent low intensity of the radiation. Interest in transition radiation was renewed when Garibian showed that the radiation should also appear in the x-ray region for ultrarelativistic particles. His theory predicted some remarkable features for transition radiation in the x-ray region. In 1959 Garibian showed theoretically that the energy losses of an ultrarelativistic particle, when emitting TR while passing the boundary between a medium and vacuum, were directly proportional to the Lorentz factor of the particle. This theoretical discovery of x-ray transition radiation, whose intensity is directly proportional to the Lorentz factor, made possible further use of TR in high-energy physics. Thus, from 1959, intensive theoretical and experimental research of TR, and of x-ray TR in particular, began. Transition radiation in the x-ray region. Transition radiation in the x-ray region (TR) is produced by relativistic charged particles when they cross the interface of two media of different dielectric constants. The emitted radiation is the homogeneous difference between the two inhomogeneous solutions of Maxwell's equations for the electric and magnetic fields of the moving particle in each medium separately. In other words, since the electric field of the particle is different in each medium, the particle has to "shake off" the difference when it crosses the boundary. The total energy loss of a charged particle on the transition depends on its Lorentz factor "γ" = "E"/"mc"² and is mostly directed forward, peaking at an angle of the order of 1/"γ" relative to the particle's path. The intensity of the emitted radiation is roughly proportional to the particle's energy "E". Optical transition radiation is emitted both in the forward direction and reflected by the interface surface. In the case of a foil at an angle of 45 degrees with respect to the particle beam, the particle beam's shape can be seen visually at an angle of 90 degrees. More elaborate analysis of the emitted visible radiation may allow for the determination of "γ" and the emittance. In the approximation of relativistic motion (formula_0), small angles (formula_1) and high frequency (formula_2), the energy spectrum can be expressed as: formula_3 where formula_4 is the atomic charge, formula_5 is the charge of an electron, formula_6 is the Lorentz factor, and formula_7 is the plasma frequency.
This expression diverges at low frequencies, where the approximations fail. The total energy emitted is: formula_8 The characteristics of this electromagnetic radiation make it suitable for particle discrimination, particularly of electrons and hadrons in the momentum range between and . The transition radiation photons produced by electrons have wavelengths in the x-ray range, with energies typically in the range from 5 to . However, the number of produced photons per interface crossing is very small: for particles with γ = 2×10^3, about 0.8 x-ray photons are detected. Usually several layers of alternating materials or composites are used to collect enough transition radiation photons for an adequate measurement; for example, one layer of inert material followed by one layer of detector (e.g. microstrip gas chamber), and so on. By placing interfaces (foils) with very precise thickness and foil separation, coherence effects can be exploited to modify the transition radiation's spectral and angular characteristics. This allows a much higher number of photons to be obtained in a smaller angular "volume". Applications of this x-ray source are limited by the fact that the radiation is emitted in a cone, with a minimum intensity at the center. X-ray focusing devices (crystals/mirrors) are not easy to build for such radiation patterns. A special type of transition radiation is diffusive radiation. It is emitted provided that a charged particle crosses a medium with randomly inhomogeneous dielectric permittivity.[9][10][11] References. 9. S. R. Atayan and Zh. S. Gevorkian, "Pseudophoton diffusion and radiation of a charged particle in a randomly inhomogeneous medium", Sov. Phys. JETP, v. 71(5), 862 (1990). 10. Zh. S. Gevorkian, "Radiation of a relativistic charged particle in a system with one-dimensional randomness", Phys. Rev. E, v. 57, 2338 (1998). 11. Zh. S. Gevorkian, C. P. Chen and Chin-Kun Hu, "New mechanism of X-ray radiation from a relativistic charged particle in a dielectric random medium", Phys. Rev. Lett., v. 86, 3324 (2001).
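As a rough numerical illustration of the total-energy formula formula_8 above (a sketch, not from the article): writing the squared charge as e² = αħc, the energy radiated per interface becomes z²·α·γ·ħω_p/3. The plasma energy of 20 eV assumed below is only a typical order of magnitude for a light foil material; the resulting small fraction of an x-ray photon per crossing is consistent with the need to stack many foils.

```python
# Rough sketch (values assumed, not from the article): total transition-radiation
# energy per interface, I = z^2 e^2 gamma omega_p / (3c) = z^2 * alpha * gamma * (hbar*omega_p) / 3.
ALPHA = 1 / 137.036             # fine-structure constant

def tr_energy_per_interface_eV(gamma, hbar_omega_p_eV, z=1):
    """Total TR energy per boundary crossing, in eV."""
    return z * z * ALPHA * gamma * hbar_omega_p_eV / 3

gamma = 2e3                     # electron with Lorentz factor ~2000
hbar_omega_p = 20.0             # assumed plasma energy of a plastic foil, in eV
print(f"{tr_energy_per_interface_eV(gamma, hbar_omega_p):.0f} eV per interface")
# With photon energies of several keV this is well below one photon per crossing,
# which is why detectors use many alternating foil/gas layers.
```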
[ { "math_id": 0, "text": "\\gamma \\gg 1" }, { "math_id": 1, "text": "\\theta \\ll 1" }, { "math_id": 2, "text": "\\omega \\gg \\omega_p" }, { "math_id": 3, "text": "\\frac{dI}{d \\nu} \\approx \\frac{z^2 e^2 \\gamma \\omega_p}{\\pi c} \\bigg( ( 1 + 2 \\nu^2) \\ln(1 + \\frac{1}{\\nu^2}) - 2\\bigg )" }, { "math_id": 4, "text": "z" }, { "math_id": 5, "text": "e" }, { "math_id": 6, "text": "\\gamma" }, { "math_id": 7, "text": "\\omega_p" }, { "math_id": 8, "text": "I = \\frac{z^2 e^2 \\gamma \\omega_p}{3 c}" } ]
https://en.wikipedia.org/wiki?curid=693673
6937695
Band emission
Band emission is the fraction of the total emission from a blackbody that lies in a certain wavelength interval or band. For a prescribed temperature T and spectral interval from 0 to λ, it is the ratio of the total emissive power of a blackbody from 0 to λ to the total emissive power over the entire spectrum: formula_0 As the last equality shows, the band emission fraction is a function of the product λT alone.
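A minimal numerical sketch (assuming the standard radiation constants in SI units; not part of the original article) evaluates this band fraction by integrating Planck's spectral emissive power and normalizing by σT⁴. Since the result depends only on λT, the example checks the familiar value F ≈ 0.25 at the Wien peak λT ≈ 2898 μm·K.

```python
# Sketch: band emission fraction F_(0->lambda) from Planck's law (SI units).
import math

C1 = 3.7418e-16    # 2*pi*h*c^2  [W m^2]
C2 = 1.4388e-2     # h*c/k       [m K]
SIGMA = 5.6704e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def planck(lam, T):
    """Blackbody spectral emissive power E_lambda,b [W m^-3]."""
    x = C2 / (lam * T)
    if x > 700.0:              # avoid overflow; the spectral power is negligible there
        return 0.0
    return C1 / (lam**5 * (math.exp(x) - 1.0))

def band_fraction(lam, T, steps=20000):
    """F_(0->lambda): fraction of total blackbody emission below wavelength lam."""
    h = lam / steps
    total = sum(planck((i + 0.5) * h, T) for i in range(steps)) * h   # midpoint rule
    return total / (SIGMA * T**4)

# At the Wien peak, lambda*T = 2898 um*K; e.g. T = 1000 K, lambda = 2.898 um.
print(band_fraction(2.898e-6, 1000))   # ~0.25
```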
[ { "math_id": 0, "text": "F_{0,\\lambda} = \\frac{\\int_0^\\lambda E_{\\lambda,b} d\\lambda}{\\int_0^\\infty E_{\\lambda,b} d\\lambda} =\\frac{\\int_0^\\lambda E_{\\lambda,b} d\\lambda}{\\sigma T ^4} =\\int_0^{\\lambda T} \\frac{E_{\\lambda,b}}{\\sigma T ^5} d(\\lambda T) = f(\\lambda T) " } ]
https://en.wikipedia.org/wiki?curid=6937695
69378275
Flag algebra
Technique in graph theory Flag algebras are an important computational tool in the field of graph theory which have a wide range of applications in homomorphism density and related topics. Roughly, they formalize the notion of adding and multiplying homomorphism densities and set up a framework to solve graph homomorphism inequalities with computers by reducing them to semidefinite programming problems. Originally introduced by Alexander Razborov in a 2007 paper, the method has since come to solve numerous difficult, previously unresolved graph theoretic questions. These include the formula_0 question regarding the region of feasible edge density, triangle density pairs and the maximum number of pentagons in triangle free graphs. Motivation. The motivation of the theory of flag algebras is credited to John Adrian Bondy and his work on the Caccetta-Haggkvist conjecture, where he illustrated his main ideas via a graph homomorphism flavored proof to Mantel's Theorem. This proof is an adaptation on the traditional proof of Mantel via double counting, except phrased in terms of graph homomorphism densities and shows how much information can be encoded with just density relationships. Theorem (Mantel): The edge density in a triangle-free graph formula_1 is at most formula_2. In other words, formula_3 As the graph is triangle-free, among 3 vertices in formula_1, they can either form an independent set, a single induced edge formula_4, or a path of length 2 formula_5. Denoting formula_6 as the induced density of a subgraph formula_7 in formula_1, double counting gives: formula_8 Intuitively, formula_9 since a formula_5 just consists of two formula_10s connected together, and there are 3 ways to label the common vertex among a set of 3 points. In fact, this can be rigorously proven by double counting the number of induced formula_5s. Letting formula_11 denote the number of vertices of formula_1, we have: formula_12 where formula_13 is the path of length 2 with its middle vertex labeled, and formula_14 represents the density of formula_13s subject to the constraint that the labeled vertex is used, and that formula_13 is counted as a proper induced subgraph only when its labeled vertex coincides with formula_15. Now, note that formula_16 since the probability of choosing two formula_17s where the unlabeled vertices coincide is small (to be rigorous, a limit as formula_18 should be taken, so formula_6 acts as a limit function on a sequence of larger and larger graphs formula_1. This idea will be important for the actual definition of flag algebras.) To finish, apply the Cauchy–Schwarz inequality to get formula_19 Plugging this back into our original relation proves what was hypothesized intuitively. Finally, note that formula_20 so formula_21 The important ideas from this proof which will be generalized in the theory of flag algebras are substitutions such as formula_22, the use of labeled graph densities, considering only the "limit case" of the densities, and applying Cauchy at the end to get a meaningful result. Definition. Fix a collection of forbidden subgraphs formula_23 and consider the set of graphs formula_24 of formula_23-free graphs. Now, define a type of size formula_25 to be a graph formula_26 with labeled vertices formula_27. The type of size 0 is typically denoted as formula_28. 
First, we define a formula_29-flag, a partially labeled graph which will be crucial for the theory of flag algebras: Definition: A formula_29-flag is a pair formula_30 where formula_31 is an underlying, unlabeled, formula_32-free graph, while formula_33 defines a labeled graph embedding of formula_29 onto the vertices formula_34. Denote the set of formula_29-flags to be formula_35 and the set of formula_29-flags of size formula_36 to be formula_37. As an example, formula_13 from the proof of Mantel's Theorem above is a formula_29-flag where formula_29 is a type of size 1 corresponding to a single vertex. For formula_29-flags formula_38 satisfying formula_39, we can define the density of the formula_29-flags onto the underlying graph formula_1 in the following way: Definition: The density formula_40 of the formula_29-flags formula_41 in formula_1 is defined to be the probability of successfully randomly embedding formula_41 into formula_42 such that they are nonintersecting on formula_43 and are all labeled in the exact same way as formula_1 on formula_44. More precisely, choose pairwise disjoint formula_45 at random and define formula_46 to be the probability that the formula_29-flag formula_47 is isomorphic to formula_48 for all formula_49. Note that, when embedding formula_34 into formula_1, where formula_50 are formula_29-flags, it can be done by first embedding formula_34 into a formula_29-flag formula_51 of size formula_52 and then embedding formula_51 into formula_1, which gives the formula: formula_53. Extending this to sets of formula_29-flags gives the Chain Rule: Theorem (Chain Rule): If formula_54 are formula_29-flags, formula_55 are naturals such that formula_41 fit in formula_1, formula_56 fit in a formula_29-flag of size formula_36, and a formula_29-flag of size formula_36 combined with formula_57 fit in formula_1, then formula_58. Recall that the previous proof for Mantel's involved linear combinations of terms of the form formula_6. The relevant ideas were slightly imprecise with letting formula_11 tend to infinity, but explicitly there is a sequence formula_59 such that formula_60 converges to some formula_61 for all formula_7, where formula_62 is called a limit functional. Thus, all references to formula_6 really refer to the limit functional. Now, graph homomorphism inequalities can be written as linear combinations of formula_62 with different formula_7s, but it would be convenient to express them as a single term. This motivates defining formula_63, the set of formal linear combinations of formula_29-flags over formula_64, and now formula_62 can be extended to a linear function over formula_63. However, using the full space formula_63 is wasteful when investigating just limit functionals, since there exist nontrivial relations between densities of certain formula_29-flags. In particular, the Chain Rule shows that formula_65 is always true. Rather than dealing with all of these elements of the kernel, let the set of expressions of the above form (i.e. those obtained from Chain Rule with a single formula_29-flag) as formula_66 and quotient them out in our final analysis. These ideas combine to form the definition for a flag algebra: Definition (Flag Algebras): A flag algebra is defined on the space of linear combinations of formula_29-flags formula_67 equipped with bilinear operator formula_68 for formula_69 and any natural formula_36 such that formula_50 fit in a formula_29-flag of size formula_36, extending the operator linearly to formula_63. 
It remains to check that the choice of formula_36 does not matter for a pair formula_50 provided it is large enough (this can be proven with Chain Rule) as well as that if formula_70 then formula_71, meaning that the operator respects the quotient and thus forms a well-defined algebra on the desired space. One important result of this definition for the operator is that multiplication is respected on limit functionals. In particular, for a limit functional formula_62, the identity formula_72 holds true. For example, it was shown that formula_73 in our proof for Mantel's, and this result is just a corollary of this statement. More generally, the fact that formula_62 is multiplicative means that all limit functionals are algebra homomorphisms between formula_74 and formula_64. The downward operator. The definition above provides a framework for dealing with formula_29-flags, which are partially labeled graphs. However, most of the time, unlabeled graphs, or formula_28-flags, are of greatest interest. To get from the former to the latter, define the downward operator. The downward operator is defined in the most natural way: given a formula_29-flag formula_34, let formula_75 to be the formula_28-flag resulting from forgetting the labels assigned to formula_29. Now, to define a natural mapping between formula_29-flags and unlabeled graphs, let formula_76 be the probability that an injective map formula_33 taken at random has image isomorphic to formula_29, and define formula_77. Extending formula_78 linearly to formula_63 gives a valid linear map which sends combinations of formula_29-flags to combinations of unlabeled ones. The most important result regarding formula_78 is its averaging properties. In particular, fix a formula_29-flag formula_34 and unlabeled graph formula_1 with formula_79, then choosing an embedding formula_80 of formula_29 on formula_1 at random defines random variable formula_81. It can be shown that formula_82 Optimization with flag algebras. All linear functionals, formula_62 are algebra homomorphisms formula_83. Furthermore, by definition, formula_84 for any formula_29-flag formula_34 since formula_62 represents a density limit. Thus, say that a homomorphism formula_85 is positive if and only if formula_86, and let formula_87 be the set of positive homomorphisms. One can show that the set of limit functionals formula_88 is exactly the set of positive homomorphisms formula_87, so it suffices to understand the latter definition of the set. In order for a linear combination formula_89 to yield a valid graph homomorphism inequality, it needs to be nonnegative over all possible linear functionals, which will then imply that it is true for all graphs. With this in mind, define the semantic cone of type formula_29, a set formula_90 such that formula_91 Once again, formula_92 is the case of most interest, which corresponds to the case of unlabeled graphs. However, the downward operator has the property of mapping formula_93 to formula_94, and it can be shown that the image of formula_93 under formula_95 is a subset of formula_94, meaning that any results on the type formula_29 semantic cone readily generalize to unlabeled graphs as well. Just by naively manipulating elements of formula_74, numerous elements of the semantic cone formula_93 can be generated. For example, since elements of formula_96 are nonnegative for formula_29-flags, any conical combination of elements of formula_35 will yield an element of formula_93. 
Perhaps more non-trivially, any conical combination of squares of elements of formula_97 will also yield an element of the semantic cone. Though one can find squares of flags which sum to nontrivial results manually, it is often simpler to automate the process. In particular, it is possible to adapt the ideas in sum-of-squares optimization for polynomials to flag algebras. Define the degree of a vector formula_98 to be the largest flag with nonzero coefficient in the expansion of formula_99, and let the degree of formula_100 to be the minimum degree of a vector over all choices in formula_101. Also, define formula_102 as the canonical embedding sending formula_34 to itself for all formula_103. These definitions give rise to the following flag-algebra analogue: Theorem: Given formula_89, formula_104, then there exist formula_105 for some formula_106 if and only if there is a positive semidefinite matrix formula_107 such that formula_108. With this theorem, graph homomorphism problems can now be relaxed into semidefinite programming ones which can be solved via computer. For example, Mantel's Theorem can be rephrased as finding the smallest formula_109 such that formula_110. As formula_111 is poorly understood, it is difficult to make progress on the question in this form, but note that conic combinations of formula_28-flags and squares of vectors lie in formula_111, so instead take a semidefinite relaxation. In particular, minimize formula_112 under the constraint that formula_113 where formula_114 is a conic combination of formula_28-flags and formula_115 is positive semi-definite. This new optimization problem can be transformed into a semidefinite-programming problem which is then solvable with standard algorithms. Generalizations. The method of flag algebras readily generalizes to numerous graph-like constructs. As Razborov wrote in his original paper, flags can be described with finite model theory instead. Instead of graphs, models of some nondegenerate universal first-order theory formula_116 with equality in a finite relational signature formula_117 with only predicate symbols can be used. A model formula_118, which replaces our previous notion of a graph, has ground set formula_119, whose elements are called vertices. Now, defining sub-models and model embeddings in an analogous way to subgraphs and graph embeddings, all of the definitions and theorems above can be nearly directly translated into the language of model theory. The fact that the theory of flag algebras generalizes well means that it can be used not only to solve problems in simple graphs, but also similar constructs such as, but not limited to, directed graphs and hypergraphs. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
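As a small, self-contained illustration (a sketch, not from the article), the chain rule stated earlier can be checked numerically in the simplest unlabeled setting (type formula_28, with formula_1 an arbitrary graph): the edge density of formula_1 equals the sum of the densities of the four 3-vertex isomorphism classes, each weighted by its own edge density.

```python
# Sketch: verifying an instance of the chain rule for unlabeled flags on a random graph.
import itertools
import random

random.seed(1)
n, p = 40, 0.3
vertices = range(n)
edges = {frozenset(e) for e in itertools.combinations(vertices, 2) if random.random() < p}

def edge_density():
    return len(edges) / (n * (n - 1) / 2)

# p(H; G) for 3-vertex graphs H, indexed by the number of edges (0, 1, 2 or 3).
counts = [0, 0, 0, 0]
for triple in itertools.combinations(vertices, 3):
    k = sum(frozenset(pair) in edges for pair in itertools.combinations(triple, 2))
    counts[k] += 1
densities = [c / sum(counts) for c in counts]

# Chain rule: p(K2; G) = sum_H p(K2; H) * p(H; G), with p(K2; H) = (#edges of H) / 3.
lhs = edge_density()
rhs = sum((k / 3) * densities[k] for k in range(4))
print(lhs, rhs)   # the two numbers agree exactly (up to floating point)
```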
[ { "math_id": 0, "text": "D_{2,3}" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "\\frac{1}{2}" }, { "math_id": 3, "text": "\\mathrm{ex}(K_2,\\{K_3\\})\\le\\frac{1}{2}." }, { "math_id": 4, "text": "\\tilde{P_3}" }, { "math_id": 5, "text": "P_3" }, { "math_id": 6, "text": "d(H,G)" }, { "math_id": 7, "text": "H" }, { "math_id": 8, "text": "\\binom{n}{3}\\left(d(\\tilde{P_3},G)+2d(P_3,G)\\right)=\\binom{n}{2}(n-2)d(K_2,G)\\implies d(\\tilde{P_3},G)+2d(P_3,G)=3d(K_2,G)." }, { "math_id": 9, "text": "d(P_3,G)\\approx 3d(K_2,G)^2" }, { "math_id": 10, "text": "K_2" }, { "math_id": 11, "text": "|G|" }, { "math_id": 12, "text": "d(P_3,G)=\\binom{|G|}{3}^{-1}\\sum_{v\\in V(G)}\\binom{d(v)}{2}=\\binom{|G|}{3}^{-1}\\binom{|G|-1}{2}\\sum_{v\\in V(G)}d(P_3^b,G^v)=\\frac{3}{|G|}\\sum_{v\\in V(G)}d(P_3^b,G^v)," }, { "math_id": 13, "text": "P_3^b" }, { "math_id": 14, "text": "d(P_3^b,G^v)" }, { "math_id": 15, "text": "v" }, { "math_id": 16, "text": "d(P_3^b,G^v)\\approx d(K_2^b,G^v)^2" }, { "math_id": 17, "text": "K_2^b" }, { "math_id": 18, "text": "|G|\\to\\infty" }, { "math_id": 19, "text": "\\sum_{v\\in V(G)}d(P_3^b,G^v)\\approx\\sum_{v\\in V(G)}d(K_2^b,G^v)^2\\ge\\frac{1}{|G|}\\left(\\sum_{v\\in V(G)}d(K_2^b,G^v)\\right)^2=\\frac{1}{|G|}\\left(\\frac{1}{|G|-1}\\binom{|G|}{2}2d(K_2,G)\\right)^2=|G|d(K_2,G)^2." }, { "math_id": 20, "text": "d(\\tilde{P_3},G)\\ge 0" }, { "math_id": 21, "text": "6d(K_2,G)^2\\le 2d(P_3,G)\\le 3d(K_2,G)\\implies d(K_2,G)\\le\\frac{1}{2}." }, { "math_id": 22, "text": "d(P_3,G)\\to 3d(K_2,G)^2" }, { "math_id": 23, "text": "\\mathcal{H}" }, { "math_id": 24, "text": "\\mathcal{G}" }, { "math_id": 25, "text": "k" }, { "math_id": 26, "text": "\\sigma\\in\\mathcal G" }, { "math_id": 27, "text": "V(\\sigma)=[k]" }, { "math_id": 28, "text": "\\varnothing" }, { "math_id": 29, "text": "\\sigma" }, { "math_id": 30, "text": "(F,\\theta)" }, { "math_id": 31, "text": "F\\in\\mathcal G" }, { "math_id": 32, "text": "\\mathcal H" }, { "math_id": 33, "text": "\\theta:[k]\\to V(F)" }, { "math_id": 34, "text": "F" }, { "math_id": 35, "text": "\\mathcal F^\\sigma" }, { "math_id": 36, "text": "n" }, { "math_id": 37, "text": "\\mathcal F^\\sigma_n" }, { "math_id": 38, "text": "F_1,F_2,\\ldots,F_t,(G,\\theta)" }, { "math_id": 39, "text": "|G|-|\\sigma|\\ge \\sum_{i=1}^t |F_i|-|\\sigma|" }, { "math_id": 40, "text": "p(F_1,F_2,\\ldots,F_t;G)" }, { "math_id": 41, "text": "F_1,\\ldots,F_t" }, { "math_id": 42, "text": "V(G)" }, { "math_id": 43, "text": "V(G)\\setminus\\mathrm{im}(\\theta)" }, { "math_id": 44, "text": "\\mathrm{im}(\\theta)" }, { "math_id": 45, "text": "U_1,U_2,\\ldots,U_t\\subseteq V(G)\\setminus\\mathrm{im}(\\theta)" }, { "math_id": 46, "text": "p(F_1,\\ldots,F_t;G)" }, { "math_id": 47, "text": "(G[U_i\\cup\\mathrm{im}(\\theta)],\\theta)" }, { "math_id": 48, "text": "F_i" }, { "math_id": 49, "text": "i\\in[t]" }, { "math_id": 50, "text": "F,G" }, { "math_id": 51, "text": "F'" }, { "math_id": 52, "text": "n\\in[|F|,|G|]" }, { "math_id": 53, "text": "p(F;G)=\\sum_{F'\\in \\mathcal F^\\sigma_n}p(F;F')p(F';G)" }, { "math_id": 54, "text": "F_1,F_2,\\ldots,F_t,G" }, { "math_id": 55, "text": "s,n" }, { "math_id": 56, "text": "F_1,\\ldots,F_s" }, { "math_id": 57, "text": "F_{s+1},\\ldots,F_t" }, { "math_id": 58, "text": "p(F_1,F_2,\\ldots,F_t;G)=\\sum_{F\\in\\mathcal F^\\sigma_n}p(F_1,\\ldots,F_s;F)p(F,F_{s+1},\\ldots,F_t;G)" }, { "math_id": 59, "text": "G_1,G_2,\\ldots" }, { "math_id": 60, "text": "d(H,G_i)" }, { "math_id": 61, "text": "\\phi(H)" }, { "math_id": 62, "text": "\\phi" 
}, { "math_id": 63, "text": "\\mathbb R\\mathcal F^\\sigma" }, { "math_id": 64, "text": "\\mathbb R" }, { "math_id": 65, "text": "\\phi(F)=\\phi\\left(\\sum_{F'\\in\\mathcal F^\\sigma_n}p(F;F')F'\\right)\\implies F-\\sum_{F'\\in\\mathcal F^\\sigma_n}p(F;F')F'\\in\\ker\\phi" }, { "math_id": 66, "text": "\\mathcal K^\\sigma" }, { "math_id": 67, "text": "\\mathcal A^\\sigma = \\mathbb R\\mathcal F^\\sigma/\\mathcal K^\\sigma" }, { "math_id": 68, "text": "F\\cdot G=\\sum_{H\\in \\mathcal F^\\sigma_n}p(F,G;H)H" }, { "math_id": 69, "text": "F,G\\in \\mathcal F^\\sigma" }, { "math_id": 70, "text": "f\\in \\mathcal K^\\sigma" }, { "math_id": 71, "text": "f\\cdot g\\in\\mathcal K^\\sigma" }, { "math_id": 72, "text": "\\phi(f\\cdot g)=\\phi(f)\\cdot\\phi(g)" }, { "math_id": 73, "text": "\\phi(P_3^b)=\\phi(K_2^b)^2" }, { "math_id": 74, "text": "\\mathcal A^\\sigma" }, { "math_id": 75, "text": "\\downarrow\\!\\!F" }, { "math_id": 76, "text": "q_\\sigma(F)" }, { "math_id": 77, "text": "[\\![F]\\!]_\\sigma=q_\\sigma(F)\\downarrow\\!\\!F" }, { "math_id": 78, "text": "[\\![\\cdot]\\!]" }, { "math_id": 79, "text": "|G|\\ge |F|" }, { "math_id": 80, "text": "\\theta" }, { "math_id": 81, "text": "p(F;(G,\\theta))" }, { "math_id": 82, "text": "\\mathbb E[p(F;(G,\\theta)]=\\frac{q_\\sigma(F)p(\\downarrow\\!\\!F;G)}{q_\\sigma(\\sigma)p(\\downarrow\\!\\!\\sigma;G)}." }, { "math_id": 83, "text": "\\phi:\\mathcal A^\\sigma\\to\\mathbb R" }, { "math_id": 84, "text": "\\phi(F)\\ge 0" }, { "math_id": 85, "text": "\\psi\\in\\operatorname{Hom}(\\mathcal A^\\sigma,\\mathbb R)" }, { "math_id": 86, "text": "\\psi(F)\\ge 0\\forall F\\in\\mathcal F^\\sigma" }, { "math_id": 87, "text": "\\operatorname{Hom}^+(A^\\sigma,\\mathbb R)" }, { "math_id": 88, "text": "\\Phi" }, { "math_id": 89, "text": "f\\in\\mathcal A^\\sigma" }, { "math_id": 90, "text": "\\mathcal S^\\sigma\\subset \\mathcal A^\\sigma" }, { "math_id": 91, "text": "\\mathcal S^\\sigma=\\{f\\in \\mathcal A^\\sigma \\mid \\phi(f)\\ge 0\\,\\,\\,\\forall \\phi\\in\\operatorname{Hom}^+(\\mathcal A^\\sigma,\\mathbb R)\\}." 
}, { "math_id": 92, "text": "\\sigma=\\varnothing" }, { "math_id": 93, "text": "\\mathcal S^\\sigma" }, { "math_id": 94, "text": "\\mathcal S^\\varnothing" }, { "math_id": 95, "text": "[\\![\\cdot]\\!]_\\sigma" }, { "math_id": 96, "text": "\\operatorname{Hom}^+(\\mathcal A^\\sigma,\\mathbb R)" }, { "math_id": 97, "text": "A^\\sigma" }, { "math_id": 98, "text": "f\\in\\mathbb R\\mathcal F^\\sigma" }, { "math_id": 99, "text": "f" }, { "math_id": 100, "text": "f^*\\in\\mathcal A^\\sigma" }, { "math_id": 101, "text": "f^*+\\mathcal K^\\sigma" }, { "math_id": 102, "text": "v_{\\sigma,n}=\\mathcal F^\\sigma_n\\to\\mathcal A^\\sigma" }, { "math_id": 103, "text": "F\\in\\mathcal F^\\sigma_n" }, { "math_id": 104, "text": "n\\ge|\\sigma|" }, { "math_id": 105, "text": "g_1,\\ldots,g_t\\in\\mathcal A^\\sigma" }, { "math_id": 106, "text": "t\\ge 1" }, { "math_id": 107, "text": "Q:\\mathcal F^\\sigma_n\\times\\mathcal F^\\sigma_n\\to\\mathbb R" }, { "math_id": 108, "text": "f=v_{\\sigma,n}^\\intercal Qv_{\\sigma,n}" }, { "math_id": 109, "text": "\\lambda\\in\\mathbb R" }, { "math_id": 110, "text": "\\lambda\\varnothing-K_2\\in S^\\varnothing" }, { "math_id": 111, "text": "S^\\varnothing" }, { "math_id": 112, "text": "\\lambda" }, { "math_id": 113, "text": "\\lambda\\varnothing-K_2=r+[\\![v_{1,2}^\\intercal Qv_{1,2}]\\!]" }, { "math_id": 114, "text": "r" }, { "math_id": 115, "text": "Q" }, { "math_id": 116, "text": "T" }, { "math_id": 117, "text": "L" }, { "math_id": 118, "text": "M" }, { "math_id": 119, "text": "V(M)" } ]
https://en.wikipedia.org/wiki?curid=69378275
69380218
Hypergraph regularity method
Mathematical method in extremal graph theory In mathematics, the hypergraph regularity method is a powerful tool in extremal graph theory that refers to the combined application of the hypergraph regularity lemma and the associated counting lemma. It is a generalization of the graph regularity method, which refers to the use of Szemerédi's regularity and counting lemmas. Very informally, the hypergraph regularity lemma decomposes any given formula_0-uniform hypergraph into a random-like object with bounded parts (with appropriate notions of boundedness and randomness) that is usually easier to work with. On the other hand, the hypergraph counting lemma estimates the number of hypergraphs of a given isomorphism class in some collections of the random-like parts. This is an extension of Szemerédi's regularity lemma, which partitions any given graph into a bounded number of parts such that edges between the parts behave almost randomly. Similarly, the hypergraph counting lemma is a generalization of the graph counting lemma, which estimates the number of copies of a fixed graph as a subgraph of a larger graph. There are several distinct formulations of the method, all of which imply the hypergraph removal lemma and a number of other powerful results, such as Szemerédi's theorem, as well as some of its multidimensional extensions. The following formulations are due to V. Rödl, B. Nagle, J. Skokan, M. Schacht, and Y. Kohayakawa; for alternative versions see Tao (2006) and Gowers (2007). Definitions. In order to state the hypergraph regularity and counting lemmas formally, we need to define several rather technical terms to formalize appropriate notions of pseudo-randomness (random-likeness) and boundedness, as well as to describe the random-like blocks and partitions. Notation. The following defines an important notion of relative density, which roughly describes, among the formula_2-tuples spanned by formula_10-edges, the fraction that are formula_2-edges of the hypergraph. For example, when formula_11, the quantity formula_12 is equal to the fraction of triangles formed by 2-edges in the subhypergraph that are 3-edges. Definition [Relative density]. For formula_13, fix some classes formula_14 of formula_15 with formula_16. Suppose formula_17 is an integer. Let formula_18 be a subhypergraph of the induced formula_2-partite graph formula_19. Define the relative density formula_20. What follows is the appropriate notion of pseudorandomness that the regularity method will use. Informally, by this concept of regularity, formula_10-edges (formula_21) have some control over formula_2-edges (formula_3). More precisely, this defines a setting where the density of formula_22-edges in large subhypergraphs is roughly the same as one would expect based on the relative density alone. Formally, Definition [(formula_23)-regularity]. Suppose formula_24 are positive real numbers and formula_17 is an integer. formula_3 is (formula_23)-regular with respect to formula_21 if for any choice of classes formula_14 and any collection of subhypergraphs formula_18 of formula_19 satisfying formula_25 we have formula_26. Roughly speaking, the following describes the pseudorandom blocks into which the hypergraph regularity lemma decomposes any large enough hypergraph. In Szemerédi regularity, 2-edges are regularized versus 1-edges (vertices). In this generalized notion, formula_2-edges are regularized versus formula_10-edges for all formula_27.
More precisely, this defines a notion of regular hypergraph called formula_28-complex, in which existence of formula_2-edge implies existence of all underlying formula_10-edges, as well as their relative regularity. For example, if formula_29 is a 3-edge then formula_30,formula_31, and formula_32 are 2-edges in the complex. Moreover, the density of 3-edges over all possible triangles made by 2-edges is roughly the same in every collection of subhypergraphs.Definition [formula_33-regular formula_28-complex]. An formula_34-complex formula_35 is a system formula_36 of formula_4-partite formula_2 graphs formula_3 satisfying formula_37. Given vectors of positive real numbers formula_38, formula_39, and an integer formula_17, we say formula_28-complex is formula_33-regular if The following describes the equitable partition that the hypergraph regularity lemma will induce. A formula_45-equitable family of partition is a sequence of partitions of 1-edges (vertices), 2-edges (pairs), 3-edges (triples), etc. This is an important distinction from the partition obtained by Szemerédi's regularity lemma, where only vertices are being partitioned. In fact, Gowers demonstrated that solely vertex partition can not give a sufficiently strong notion of regularity to imply Hypergraph counting lemma. Definition [formula_45-equitable partition]. Let formula_46 be a real number, formula_17 be an integer, and formula_47, formula_48 be vectors of positive reals. Let formula_49 be a vector of positive integers and formula_50 be an formula_51-element vertex set. We say that a family of partitions formula_52 on formula_50 is formula_45-equitable if it satisfies the following: Statements. Hypergraph regularity lemma. For all positive real formula_77, formula_78, and functions formula_79, formula_80 for formula_81 there exists formula_82 and formula_83 so that the following holds. For any formula_0-uniform hypergraph formula_68 on formula_84 vertices, there exists a family of partitions formula_85 and a vector formula_86 so that, for formula_87 and formula_88 where formula_89 for all formula_2, the following holds. Hypergraph counting lemma. For all integers formula_93 the following holds: formula_94 and there are integers formula_95 and formula_96 so that, with formula_97, formula_98, and formula_99, if formula_100 is a formula_101-regular formula_102 complex with vertex partition formula_103 and formula_104, then formula_105. Applications. The main application through which most others follow is the hypergraph removal lemma, which roughly states that given fixed formula_106 and large formula_68 formula_0-uniform hypergraphs, if formula_68 contains few copies of formula_106, then one can delete few hyperedges in formula_68 to eliminate all of the copies of formula_106. To state it more formally, Hypergraph removal lemma. For all formula_107 and every formula_46, there exists formula_108 and formula_109 so that the following holds. Suppose formula_106 is a formula_0-uniform hypergraph on formula_4 vertices and formula_68 is that on formula_84 vertices. If formula_68 contains at most formula_110 copies of formula_106, then one can delete formula_61 hyperedges in formula_68 to make it formula_106-free. One of the original motivations for graph regularity method was to prove Szemerédi's theorem, which states that every dense subset of formula_111 contains an arithmetic progression of arbitrary length. 
In fact, by a relatively simple application of the triangle removal lemma, one can prove that every dense subset of formula_111 contains an arithmetic progression of length 3. The hypergraph regularity method and the hypergraph removal lemma can prove high-dimensional and ring analogues of the density version of Szemerédi's theorem, originally proved by Furstenberg and Katznelson. In fact, this approach yields the first quantitative bounds for these theorems. This theorem roughly implies that any dense subset of formula_112 contains any finite pattern of formula_112. The case when formula_113 and the pattern is an arithmetic progression of some length is equivalent to Szemerédi's theorem. Furstenberg and Katznelson Theorem. Let formula_114 be a finite subset of formula_115 and let formula_116 be given. Then there exists a finite subset formula_117 such that every formula_118 with formula_119 contains a homothetic copy of formula_114 (i.e. a set of the form formula_120, for some formula_121 and formula_122). Moreover, if formula_123 for some formula_124, then there exists formula_125 such that formula_126 has this property for all formula_127. Another possible generalization that can be proven by the removal lemma is when the dimension is allowed to grow. Tengan, Tokushige, Rödl, and Schacht Theorem. Let formula_128 be a finite ring. For every formula_116, there exists formula_129 such that, for formula_130, any subset formula_131 with formula_132 contains a coset of an isomorphic copy of formula_128 (as a left formula_128-module). In other words, there are some formula_133 such that formula_134, where formula_135, formula_136 is an injection.
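As a concrete numerical illustration of the relative-density notion defined earlier (an assumed toy setup, not from the article): if the 3-edges of a 3-uniform hypergraph are chosen among the triangles of an underlying graph of 2-edges with probability d, then the measured relative density should be close to d.

```python
# Sketch: relative density of a 3-uniform hypergraph with respect to an underlying graph,
# i.e. the fraction of triangles spanned by the 2-edges that are also 3-edges.
import itertools
import random

random.seed(0)
n, p2, d3 = 30, 0.5, 0.4
V = range(n)

# Q: the underlying graph of 2-edges.
Q = {frozenset(e) for e in itertools.combinations(V, 2) if random.random() < p2}

# Triangles of Q: triples whose three pairs all lie in Q.
triangles = [t for t in itertools.combinations(V, 3)
             if all(frozenset(pair) in Q for pair in itertools.combinations(t, 2))]

# G3: keep each triangle as a 3-edge independently with probability d3.
G3 = {t for t in triangles if random.random() < d3}

relative_density = len(G3) / len(triangles)
print(f"{len(triangles)} triangles, relative density ~ {relative_density:.3f} (target {d3})")
```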
[ { "math_id": 0, "text": " k " }, { "math_id": 1, "text": " K_j^{(k)} " }, { "math_id": 2, "text": " j " }, { "math_id": 3, "text": " \\mathcal{G}^{(j)} " }, { "math_id": 4, "text": " l " }, { "math_id": 5, "text": " \\mathcal{G}^{(1)} = V_1 \\sqcup \\ldots \\sqcup V_l " }, { "math_id": 6, "text": " \\mathcal{K}_j(\\mathcal{G}^{(i)}) " }, { "math_id": 7, "text": " K_j^{(i)} " }, { "math_id": 8, "text": " \\mathcal{G}^{(i)} " }, { "math_id": 9, "text": " \\mathcal{K}_j(\\mathcal{G}^{(1)}) = K_l^{(j)}(V_1, \\ldots, V_l) " }, { "math_id": 10, "text": " (j-1) " }, { "math_id": 11, "text": " j=3 " }, { "math_id": 12, "text": "d(\\mathcal{G}^{(3)} \\vert \\mathbf{Q}^{(2)})" }, { "math_id": 13, "text": " j \\geq 3 " }, { "math_id": 14, "text": " V_{i_1}, \\ldots, V_{i_j} " }, { "math_id": 15, "text": " \\mathcal{G}^{(1)} " }, { "math_id": 16, "text": " 1 \\leq i_1 < \\ldots < i_j \\leq l " }, { "math_id": 17, "text": " r \\geq 1 " }, { "math_id": 18, "text": " \\mathbf{Q}^{(j-1)} = \\{ Q_1^{(j-1)}, \\ldots, Q_r^{(j-1)} \\} " }, { "math_id": 19, "text": " \\mathcal{G}^{(j-1)}[V_{i_1}, \\ldots, V_{i_j}] " }, { "math_id": 20, "text": "d\\left(\\mathcal{G}^{(j)} \\vert \\mathbf{Q}^{(j-1)}\\right) = \\frac{\\left|\\mathcal{G}^{(j)} \\cap \\cup_{s \\in [r]} \\mathcal{K}_j(Q_s^{j-1})\\right|}{\\left|\\cup_{s \\in [r]} \\mathcal{K}_j(Q_s^{j-1})\\right|}" }, { "math_id": 21, "text": " \\mathcal{G}^{(j-1)} " }, { "math_id": 22, "text": "j" }, { "math_id": 23, "text": " \\delta_j, d_j, r " }, { "math_id": 24, "text": " \\delta_j, d_j " }, { "math_id": 25, "text": "\\left|\\cup_{s \\in [r]}\\mathcal{K}_j(Q_s)^{(j-1)}\\right| \\geq \\delta_j \\left|\\mathcal{K}_j(\\mathcal{G}^{(j-1)}[V_{i_1}, \\ldots, V_{i_j}])\\right|" }, { "math_id": 26, "text": " d(\\mathcal{G}^{(j)} \\vert \\mathbf{Q}^{(j-1)}) = d_j \\pm \\delta_j " }, { "math_id": 27, "text": " 2\\leq j\\leq h " }, { "math_id": 28, "text": " (l, h) " }, { "math_id": 29, "text": " \\{x,y,z\\} " }, { "math_id": 30, "text": " \\{x,y\\} " }, { "math_id": 31, "text": " \\{x,z\\} " }, { "math_id": 32, "text": " \\{y,z\\} " }, { "math_id": 33, "text": " (\\delta, \\mathbf{d},r) " }, { "math_id": 34, "text": " (l,h) " }, { "math_id": 35, "text": " \\mathbf{G} " }, { "math_id": 36, "text": " \\{\\mathcal{G}^{(j)}\\}_{j=1}^h " }, { "math_id": 37, "text": " \\mathcal{G}^{(j)} \\subset \\mathcal{K}_j(\\mathcal{G}^{(j-1)}) " }, { "math_id": 38, "text": " \\delta = (\\delta_2, \\ldots, \\delta_h) " }, { "math_id": 39, "text": " \\mathbf{d} = (d_2, \\ldots, d_h) " }, { "math_id": 40, "text": " 1 \\leq i_1 < i_2 \\leq l " }, { "math_id": 41, "text": " \\mathcal{G}^{(2)}[V_{i_1},V_{i_2}] " }, { "math_id": 42, "text": " \\delta_2 " }, { "math_id": 43, "text": " d_2 \\pm \\delta_2 " }, { "math_id": 44, "text": " 3 \\leq j \\leq h " }, { "math_id": 45, "text": " (\\mu, \\delta, \\mathbf{d}, r) " }, { "math_id": 46, "text": " \\mu > 0 " }, { "math_id": 47, "text": " \\delta = (\\delta_2, \\ldots, \\delta_{k-1}) " }, { "math_id": 48, "text": " \\mathbf{d}=(d_2, \\ldots, d_{k-1}) " }, { "math_id": 49, "text": " \\mathbf{a} = (a_1, \\ldots, a_{k-1}) " }, { "math_id": 50, "text": " V " }, { "math_id": 51, "text": " n " }, { "math_id": 52, "text": " \\mathcal{P} = \\mathcal{P}(k-1, \\mathbf{a}) = \\{\\mathcal{P}^{(1)}\\, \\ldots, \\mathcal{P}^{(k-1)}\\} " }, { "math_id": 53, "text": " \\mathcal{P}^{(1)} = \\{V_i \\colon i \\in [a_1]\\} " }, { "math_id": 54, "text": " |V_1| \\leq \\ldots \\leq |V_{a_1}| \\leq | V_1| + 1 " }, { "math_id": 55, "text": " \\mathcal{P}^{(j)} " }, { 
"math_id": 56, "text": " \\mathcal{K}_j(\\mathcal{G}^{(1)}) = K_{a_1}^{(j)}(V_1, \\ldots, V_{a_1}) " }, { "math_id": 57, "text": " P_1^{(j-1)}, \\ldots, P_j^{(j-1)} \\in \\mathcal{P}^{(j-1)} " }, { "math_id": 58, "text": " \\mathcal{K}_j(\\cup_{i=1}^jP_i{(j-1)}) \\neq \\emptyset " }, { "math_id": 59, "text": " \\mathcal{K}_j(\\cup_{i=1}^jP_i{(j-1)}) " }, { "math_id": 60, "text": " a_j " }, { "math_id": 61, "text": " \\mu n^k " }, { "math_id": 62, "text": " K \\in \\binom{V}{k} " }, { "math_id": 63, "text": " (k, k-1) " }, { "math_id": 64, "text": " \\mathbf{P} = \\{P^{(j)}\\}_{j=1}^{k-1} " }, { "math_id": 65, "text": " P^{(j)} " }, { "math_id": 66, "text": " \\binom{k}{j} " }, { "math_id": 67, "text": " K \\in \\mathcal{K}_k(P^{(k-1)}) \\subset \\ldots \\subset \\mathcal{K}_k(P^{(1)}) " }, { "math_id": 68, "text": " \\mathcal{H}^{(k)} " }, { "math_id": 69, "text": " (\\delta_k, r) " }, { "math_id": 70, "text": " \\mathcal{P} " }, { "math_id": 71, "text": " \\delta_k n^k " }, { "math_id": 72, "text": " k- " }, { "math_id": 73, "text": " K " }, { "math_id": 74, "text": " K \\in \\mathcal{K}_k(\\mathcal{G}^{(1)}) " }, { "math_id": 75, "text": " K \\in \\mathcal{K}_k(P^{(k-1)}) " }, { "math_id": 76, "text": " (\\delta_k, d(\\mathcal{H}^{(k)} \\vert P^{(k-1)}), r) " }, { "math_id": 77, "text": " \\mu " }, { "math_id": 78, "text": " \\delta_k " }, { "math_id": 79, "text": " r \\, \\colon \\mathbb{N} \\times (0, 1]^{k-2} \\to \\mathbb{N} " }, { "math_id": 80, "text": "\\delta_j \\, \\colon (0,1]^{k-j} \\to (0,1]" }, { "math_id": 81, "text": " j = 2, \\ldots, k-1 " }, { "math_id": 82, "text": " T_0 " }, { "math_id": 83, "text": " n_0 " }, { "math_id": 84, "text": " n \\geq n_0 " }, { "math_id": 85, "text": " \\mathcal{P} = \\mathcal{P}(k-1, \\mathbf{a}) " }, { "math_id": 86, "text": " \\mathbf{d} = (d_2, \\ldots, d_{k-1}) " }, { "math_id": 87, "text": " r = r(a_1, \\mathbf{d}) " }, { "math_id": 88, "text": " \\mathbf{\\delta} = (\\delta_2, \\ldots, \\delta_{k-1}) " }, { "math_id": 89, "text": " \\delta_j = \\delta_j(d_j, \\ldots, d_{k-1}) " }, { "math_id": 90, "text": " (\\mu, \\mathbf{\\delta}, \\mathbf{d}, r) " }, { "math_id": 91, "text": " a_i \\leq T_0 " }, { "math_id": 92, "text": " i = 1, \\ldots, k-1 " }, { "math_id": 93, "text": " 2 \\leq k \\leq l " }, { "math_id": 94, "text": " \\forall \\gamma > 0 \\; \\; \\forall d_k > 0 \\; \\; \\exists \\delta_k > 0 \\;\\; \\forall d_{k-1} > 0 \\; \\; \\exists \\delta_{k-1} > 0 \\; \\cdots \\; \\forall d_2 > 0 \\; \\; \\exists \\delta_2>0 " }, { "math_id": 95, "text": " r " }, { "math_id": 96, "text": " m_0 " }, { "math_id": 97, "text": " \\mathbf{d} = (d_2, \\ldots, d_k) " }, { "math_id": 98, "text": " \\delta = (\\delta_2, \\ldots, \\delta_k) " }, { "math_id": 99, "text": " m \\geq m_0 " }, { "math_id": 100, "text": " \\mathbf{G} = \\{ \\mathcal{G}^{(j)}\\}_{j=1}^k " }, { "math_id": 101, "text": " (\\delta, \\mathbf{d}, r) " }, { "math_id": 102, "text": " (l,k) " }, { "math_id": 103, "text": " \\mathcal{G}^{(1)} = V_1 \\cup \\cdots \\cup V_l " }, { "math_id": 104, "text": " |V_i| = m " }, { "math_id": 105, "text": "\\left|\\mathcal{K}_l(\\mathcal{G}^{(k)})\\right| = (1 \\pm \\gamma) \\prod_{h = 2}^kd_h^{\\binom{l}{h}}\\times m^l" }, { "math_id": 106, "text": " \\mathcal{F}^{(k)} " }, { "math_id": 107, "text": " l \\geq k \\geq 2 " }, { "math_id": 108, "text": " \\zeta > 0 " }, { "math_id": 109, "text": " n_0 > 0 " }, { "math_id": 110, "text": " \\zeta n^l " }, { "math_id": 111, "text": " \\mathbb{Z} " }, { "math_id": 112, "text": " \\mathbb{Z}^d " 
}, { "math_id": 113, "text": " d = 1 " }, { "math_id": 114, "text": " T " }, { "math_id": 115, "text": " \\mathbb{R}^d " }, { "math_id": 116, "text": " \\delta > 0 " }, { "math_id": 117, "text": " C \\subset \\mathbb{R}^d " }, { "math_id": 118, "text": " Z \\subset C " }, { "math_id": 119, "text": " |Z| > \\delta |C| " }, { "math_id": 120, "text": " z + \\lambda T " }, { "math_id": 121, "text": " z \\in \\mathbb{R}^d " }, { "math_id": 122, "text": " t \\in \\mathbb{R} " }, { "math_id": 123, "text": " T \\subset [-t; t]^d " }, { "math_id": 124, "text": " t \\in \\mathbb{N} " }, { "math_id": 125, "text": " N_0 \\in \\mathbb{N} " }, { "math_id": 126, "text": " C = [-N,N]^d " }, { "math_id": 127, "text": " N \\geq N_0 " }, { "math_id": 128, "text": " A " }, { "math_id": 129, "text": " M_0 " }, { "math_id": 130, "text": " M \\geq M_0 " }, { "math_id": 131, "text": " Z \\subset A^M " }, { "math_id": 132, "text": " |Z| > \\delta |A^M| " }, { "math_id": 133, "text": " \\mathbf{r}, \\mathbf{u} \\in A^M " }, { "math_id": 134, "text": " r + \\varphi(A) \\subset Z " }, { "math_id": 135, "text": " \\varphi \\colon A \\to A^M " }, { "math_id": 136, "text": " \\varphi(a)=a \\mathbf{u} " } ]
https://en.wikipedia.org/wiki?curid=69380218
693816
Fejér kernel
In mathematics, the Fejér kernel is a summability kernel used to express the effect of Cesàro summation on Fourier series. It is a non-negative kernel, giving rise to an approximate identity. It is named after the Hungarian mathematician Lipót Fejér (1880–1959). Definition. The Fejér kernel has many equivalent definitions. We outline three such definitions below: 1) The traditional definition expresses the Fejér kernel formula_0 in terms of the Dirichlet kernel: formula_1 where formula_2 is the "k"th order Dirichlet kernel. 2) The Fejér kernel formula_0 may also be written in a closed form expression as follows formula_3 This closed form expression may be derived from the definitions used above. The proof of this result goes as follows. First, we use the fact that the Dirichlet kernel may be written as: formula_4 Hence, using the definition of the Fejér kernel above we get: formula_5 Using the trigonometric identity: formula_6 formula_7 Hence it follows that: formula_8 3) The Fejér kernel can also be expressed as: formula_9 Properties. The Fejér kernel is a positive summability kernel. An important property of the Fejér kernel is formula_10 with average value of formula_11. Convolution. The convolution "Fn" is positive: for formula_12 of period formula_13 it satisfies formula_14 Since formula_15, we have formula_16, which is Cesàro summation of Fourier series. By Young's convolution inequality, formula_17 Additionally, if formula_18, then formula_19 a.e. Since formula_20 is finite, formula_21, so the result holds for other formula_22 spaces, formula_23 as well. If formula_24 is continuous, then the convergence is uniform, yielding a proof of the Weierstrass theorem. Applications. The Fejér kernel is used in signal processing and Fourier analysis.
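The equivalence of the first two definitions can also be checked numerically. The short sketch below (not part of the original article) compares the average of Dirichlet kernels with the closed-form expression on a grid of points away from x = 0, where both forms take the limiting value n.

```python
# Sketch: the average-of-Dirichlet-kernels definition of the Fejer kernel agrees
# with the closed form (1/n) * (sin(n x / 2) / sin(x / 2))^2.
import numpy as np

def dirichlet(k, x):
    return np.sin((k + 0.5) * x) / np.sin(x / 2)

def fejer_from_dirichlet(n, x):
    return sum(dirichlet(k, x) for k in range(n)) / n

def fejer_closed_form(n, x):
    return (np.sin(n * x / 2) / np.sin(x / 2)) ** 2 / n

n = 10
x = np.linspace(0.1, np.pi, 200)     # avoid x = 0, where the kernel takes the limiting value n
print(np.max(np.abs(fejer_from_dirichlet(n, x) - fejer_closed_form(n, x))))   # ~1e-15
```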
[ { "math_id": 0, "text": "F_n(x)" }, { "math_id": 1, "text": " F_n(x) = \\frac{1}{n} \\sum_{k=0}^{n-1}D_k(x) " }, { "math_id": 2, "text": "D_k(x)=\\sum_{s=-k}^k {\\rm e}^{isx}" }, { "math_id": 3, "text": " F_n(x) = \\frac{1}{n} \\left(\\frac{\\sin( \\frac{nx}{2})}{\\sin( \\frac{x}{2})}\\right)^2 = \\frac{1}{n} \\left(\\frac{1 - \\cos(nx)}{1 - \\cos (x)}\\right)" }, { "math_id": 4, "text": "D_k(x)=\\frac{\n\\sin(k\n+\\frac{1}{2})x}{\\sin\\frac{x}{2}}" }, { "math_id": 5, "text": "F_n(x) = \\frac{1}{n} \\sum_{k=0}^{n-1}D_k(x) = \\frac{1}{n} \\sum_{k=0}^{n-1} \\frac{\n\\sin((k\n+\\frac{1}{2})x)}{\\sin(\\frac{x}{2})} \n= \\frac{1}{n} \\frac{1}{\\sin(\\frac{x}{2})}\\sum_{k=0}^{n-1} \n\\sin((k\n+\\frac{1}{2})x) = \\frac{1}{n} \\frac{1}{\\sin^2(\\frac{x}{2})}\\sum_{k=0}^{n-1} \n[\\sin((k\n+\\frac{1}{2})x) \\cdot \\sin(\\frac{x}{2})] " }, { "math_id": 6, "text": "\\sin(\\alpha)\\cdot\\sin(\\beta)=\\frac{1}{2}(\\cos(\\alpha-\\beta)-\\cos(\\alpha+\\beta))" }, { "math_id": 7, "text": "F_n(x) =\\frac{1}{n} \\frac{1}{\\sin^2(\\frac{x}{2})}\\sum_{k=0}^{n-1} \n[\\sin((k\n+\\frac{1}{2})x) \\cdot \\sin(\\frac{x}{2})] = \\frac{1}{n} \\frac{1}{2\\sin^2(\\frac{x}{2})}\\sum_{k=0}^{n-1} \n[\\cos(kx)-\\cos((k+1)x)] " }, { "math_id": 8, "text": "F_n(x) = \\frac{1}{n} \\frac{1}{\\sin^2(\\frac{x}{2})}\\frac{1-\\cos(nx)}2=\\frac{1}{n} \\frac{1}{\\sin^2(\\frac{x}{2})}\\sin^2(\\frac{nx}2) =\\frac{1}{n} (\\frac{\\sin(\\frac{nx}2)}{\\sin(\\frac{x}{2})})^2 " }, { "math_id": 9, "text": " F_n(x)=\\sum_{ |k| \\leq n-1} \\left(1-\\frac{ |k| }{n}\\right)e^{ikx} " }, { "math_id": 10, "text": "F_n(x) \\ge 0" }, { "math_id": 11, "text": "1 " }, { "math_id": 12, "text": "f \\ge 0" }, { "math_id": 13, "text": "2 \\pi" }, { "math_id": 14, "text": "0 \\le (f*F_n)(x)=\\frac{1}{2\\pi}\\int_{-\\pi}^\\pi f(y) F_n(x-y)\\,dy." }, { "math_id": 15, "text": "f*D_n=S_n(f)=\\sum_{|j|\\le n}\\widehat{f}_je^{ijx}" }, { "math_id": 16, "text": "f*F_n=\\frac{1}{n}\\sum_{k=0}^{n-1}S_k(f)" }, { "math_id": 17, "text": "\\|F_n*f \\|_{L^p([-\\pi, \\pi])} \\le \\|f\\|_{L^p([-\\pi, \\pi])} \\text{ for every } 1 \\le p \\le \\infty \\text{ for } f\\in L^p." }, { "math_id": 18, "text": "f\\in L^1([-\\pi,\\pi])" }, { "math_id": 19, "text": "f*F_n \\rightarrow f" }, { "math_id": 20, "text": "[-\\pi,\\pi]" }, { "math_id": 21, "text": "L^1([-\\pi,\\pi])\\supset L^2([-\\pi,\\pi])\\supset\\cdots\\supset L^\\infty([-\\pi,\\pi])" }, { "math_id": 22, "text": "L^p" }, { "math_id": 23, "text": "p\\ge1" }, { "math_id": 24, "text": "f" }, { "math_id": 25, "text": "f,g\\in L^1" }, { "math_id": 26, "text": "\\hat{f}=\\hat{g}" }, { "math_id": 27, "text": "f=g" }, { "math_id": 28, "text": "f*F_n=\\sum_{|j|\\le n}\\left(1-\\frac{|j|}{n}\\right)\\hat{f}_je^{ijt}" }, { "math_id": 29, "text": "\\lim_{n\\to\\infty}S_n(f)" }, { "math_id": 30, "text": "\\lim_{n\\to\\infty}F_n(f)=f" }, { "math_id": 31, "text": "F_n*f" } ]
https://en.wikipedia.org/wiki?curid=693816
69381645
Quasirandom group
A group with no large product-free subset In mathematics, a quasirandom group is a group that does not contain a large product-free subset. Such groups are precisely those without a small non-trivial irreducible representation. The name of these groups stems from their connection to graph theory: bipartite Cayley graphs over any subset of a quasirandom group are always bipartite quasirandom graphs. Motivation. The notion of quasirandom groups arises when considering subsets of groups for which no two elements in the subset have a product in the subset; such subsets are termed product-free. László Babai and Vera Sós asked about the existence of a constant formula_0 for which every finite group formula_1 with order formula_2 has a product-free subset with size at least formula_3. A well-known result of Paul Erdős about sum-free sets of integers can be used to prove that formula_4 suffices for abelian groups, but it turns out that such a constant does not exist for non-abelian groups. Both non-trivial lower and upper bounds are now known for the size of the largest product-free subset of a group with order formula_2. A lower bound of formula_5 can be proved by taking a large subset of a union of sufficiently many cosets, and an upper bound of formula_6 is given by considering the projective special linear group formula_7 for any prime formula_8. In the process of proving the upper bound, Timothy Gowers defined the notion of a quasirandom group to encapsulate the product-free condition and proved equivalences involving quasirandomness in graph theory. Graph quasirandomness. Formally, it does not make sense to talk about whether or not a single group is quasirandom. The strict definition of quasirandomness will apply to sequences of groups, but first bipartite graph quasirandomness must be defined. The motivation for considering sequences of groups stems from its connections with graphons, which are defined as limits of graphs in a certain sense. Fix a real number formula_9 A sequence of bipartite graphs formula_10 (here formula_2 is allowed to skip integers as long as formula_2 tends to infinity) with formula_11 having formula_2 vertices, vertex parts formula_12 and formula_13, and formula_14 edges is quasirandom if any of several equivalent conditions hold; it is a result of Chung–Graham–Wilson that these conditions are equivalent. Such graphs are termed quasirandom because each condition asserts that the quantity being considered is approximately what one would expect if the bipartite graph was generated according to the Erdős–Rényi random graph model; that is, generated by including each possible edge between formula_12 and formula_13 independently with probability formula_34 Though quasirandomness can only be defined for sequences of graphs, a notion of formula_0-quasirandomness can be defined for a specific graph by allowing an error tolerance in any of the above definitions of graph quasirandomness. To be specific, given any of the equivalent definitions of quasirandomness, the formula_21 term can be replaced by a small constant formula_35, and any graph satisfying that particular modified condition can be termed formula_0-quasirandom. It turns out that formula_0-quasirandomness under any condition is equivalent to formula_36-quasirandomness under any other condition for some absolute constant formula_37 The next step for defining group quasirandomness is the Cayley graph.
Bipartite Cayley graphs give a way of translating quasirandomness in the graph-theoretic setting to the group-theoretic setting. Given a finite group formula_38 and a subset formula_39, the bipartite Cayley graph formula_40 is the bipartite graph with vertex sets formula_18 and formula_19, each labeled by elements of formula_1, whose edges formula_41 are between vertices whose ratio formula_42 is an element of formula_43 Definition. With the tools defined above, one can now define group quasirandomness. A sequence of groups formula_44 with formula_45 (again, formula_2 is allowed to skip integers) is quasirandom if for every real number formula_46 and choice of subsets formula_47 with formula_48, the sequence of graphs formula_49 is quasirandom. Though quasirandomness can only be defined for sequences of groups, the concept of formula_0-quasirandomness can be extended to specific groups using the definition of formula_0-quasirandomness for specific graphs. Properties. As proved by Gowers, group quasirandomness turns out to be equivalent to a number of different conditions. To be precise, given a sequence of groups formula_44, the following are equivalent: Cayley graphs generated from quasirandom groups have strong mixing properties; that is, formula_52 is a bipartite formula_53-graph for some formula_54 tending to zero as formula_2 tends to infinity. (Recall that an formula_53 graph is a graph with formula_2 vertices, each with degree formula_55, whose adjacency matrix has a second largest eigenvalue of at most formula_56) In fact, it can be shown that for any formula_0-quasirandom group formula_38, the number of solutions to formula_57 with formula_58, formula_59, and formula_60 is approximately equal to what one might expect if formula_61 was chosen randomly; that is, approximately equal to formula_62 This result follows from a direct application of the expander mixing lemma. Examples. There are several notable families of quasirandom groups. In each case, the quasirandomness properties are most easily verified by checking the dimension of the smallest non-trivial representation. References.
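As an illustration of the bipartite Cayley graph construction above (a numeric sketch with assumed parameters, not taken from the literature), one can build BiCay(Gamma, S) for the cyclic group Z_101 with a structured subset S and inspect the spectrum of its biadjacency matrix. Since all non-trivial irreducible representations of an abelian group are 1-dimensional, abelian groups are far from quasirandom, and this shows up as a second singular value comparable to the degree |S|.

import numpy as np

n = 101
S = set(range(1, n // 3))   # a structured (interval) subset of Z_n

# Biadjacency matrix of BiCay(Z_n, S): a in the first part is joined to b in the
# second part when the "ratio" a - b (additive notation) lies in S.
M = np.array([[1.0 if (a - b) % n in S else 0.0 for b in range(n)] for a in range(n)])

sv = np.linalg.svd(M, compute_uv=False)
print(len(S), round(sv[0], 1), round(sv[1], 1))
# degree |S| = 32; the second singular value comes out around 27, comparable to the
# degree, which is the spectral signature of poor mixing / non-quasirandomness.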
[ { "math_id": 0, "text": "c" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "cn" }, { "math_id": 4, "text": "c = \\frac{1}{3}" }, { "math_id": 5, "text": "cn^{\\frac{11}{14}}" }, { "math_id": 6, "text": "cn^{\\frac{8}{9}}" }, { "math_id": 7, "text": "\\operatorname{PSL}(2, p)" }, { "math_id": 8, "text": "p" }, { "math_id": 9, "text": "p \\in [0, 1]." }, { "math_id": 10, "text": "(G_n)" }, { "math_id": 11, "text": "G_n" }, { "math_id": 12, "text": "A_n" }, { "math_id": 13, "text": "B_n" }, { "math_id": 14, "text": "(p + o(1)) | A_n | | B_n |" }, { "math_id": 15, "text": "H" }, { "math_id": 16, "text": "A'" }, { "math_id": 17, "text": "B'" }, { "math_id": 18, "text": "A" }, { "math_id": 19, "text": "B" }, { "math_id": 20, "text": "\\left(p^{e(H)} + o(1)\\right) | A | ^ {| A' |} | B | ^ {| B' |}." }, { "math_id": 21, "text": "o(1)" }, { "math_id": 22, "text": "H." }, { "math_id": 23, "text": "(p^4 + o(1))| A_n |^2 | B_n |^2." }, { "math_id": 24, "text": "p | A' | | B' | + n^2 o(1)" }, { "math_id": 25, "text": "A' \\subseteq A_n" }, { "math_id": 26, "text": "B' \\subseteq B_n." }, { "math_id": 27, "text": "\\sum\\limits_{a_1, a_2 \\in A_n} N(a_1, a_2)^2 = (p^4 + o(1)) | A_n |^2 | B_n |^2" }, { "math_id": 28, "text": "N(u, v)" }, { "math_id": 29, "text": "u" }, { "math_id": 30, "text": "v." }, { "math_id": 31, "text": "\\sum\\limits_{b_1, b_2 \\in B_n} N(b_1, b_2)^2 = (p^4 + o(1)) | A_n |^2 | B_n |^2." }, { "math_id": 32, "text": "(p + o(1)) \\sqrt{| A | | B |}" }, { "math_id": 33, "text": "\\sqrt{| A | | B |}o(1)." }, { "math_id": 34, "text": "p." }, { "math_id": 35, "text": "c > 0" }, { "math_id": 36, "text": "c^k" }, { "math_id": 37, "text": "k \\geq 1." }, { "math_id": 38, "text": "\\Gamma" }, { "math_id": 39, "text": "S \\subseteq \\Gamma" }, { "math_id": 40, "text": "\\operatorname{BiCay}(\\Gamma, S)" }, { "math_id": 41, "text": "a \\sim b" }, { "math_id": 42, "text": "ab^{-1}" }, { "math_id": 43, "text": "S." }, { "math_id": 44, "text": "(\\Gamma_n)" }, { "math_id": 45, "text": "| \\Gamma_n | = n" }, { "math_id": 46, "text": "p \\in [0, 1]" }, { "math_id": 47, "text": "S_n \\subseteq \\Gamma_n" }, { "math_id": 48, "text": "| S_n | = (p + o(1)) | \\Gamma_n |" }, { "math_id": 49, "text": "\\operatorname{BiCay}(\\Gamma_n, S_n)" }, { "math_id": 50, "text": "\\Gamma_n" }, { "math_id": 51, "text": "o(| \\Gamma_n |)." }, { "math_id": 52, "text": "\\operatorname{BiCay}(\\Gamma_n, S)" }, { "math_id": 53, "text": "(n, d, \\lambda)" }, { "math_id": 54, "text": "\\lambda" }, { "math_id": 55, "text": "d" }, { "math_id": 56, "text": "\\lambda." }, { "math_id": 57, "text": "xy=z" }, { "math_id": 58, "text": "x \\in X" }, { "math_id": 59, "text": "y \\in Y" }, { "math_id": 60, "text": "z \\in Z" }, { "math_id": 61, "text": "S" }, { "math_id": 62, "text": "\\tfrac{| X | | Y | | Z |}{| \\Gamma |}." }, { "math_id": 63, "text": "\\tfrac{1}{2}(p-1)." }, { "math_id": 64, "text": "(A_n)" }, { "math_id": 65, "text": "n-1." }, { "math_id": 66, "text": "\\tfrac{1}{2}\\sqrt{\\log n}" } ]
https://en.wikipedia.org/wiki?curid=69381645
69381702
Ramsey-Turán theory
Ramsey-Turán theory is a subfield of extremal graph theory. It studies common generalizations of Ramsey's theorem and Turán's theorem. In brief, Ramsey-Turán theory asks for the maximum number of edges a graph which satisfies constraints on its subgraphs and structure can have. The theory organizes many natural questions which arise in extremal graph theory. The first authors to formalize the central ideas of the theory were Erdős and Sós in 1969, though mathematicians had previously investigated many Ramsey-Turán-type problems. Ramsey's theorem and Turán's theorem. Ramsey's theorem for two colors and the complete graph, proved in its original form in 1930, states that for any positive integer "k" there exists an integer "n" large enough that for any coloring of the edges of the complete graph formula_0 using two colors has a monochoromatic copy of formula_1. More generally, for any graphs formula_2, there is a threshold formula_3 such that if formula_4 and the edges of formula_0 are colored arbitrarily with formula_5 colors, then for some formula_6 there is a formula_7 in the formula_8th color. Turán's theorem, proved in 1941, characterizes the graph with the maximal number of edges on formula_9 vertices which does not contain a formula_10. Specifically, the theorem states that for all positive integers formula_11, the number of edges of an formula_9-vertex graph which does not contain formula_10 as a subgraph is at most formula_12 and that the maximum is attained uniquely by the Turán graph formula_13. Both of these classic results ask questions about how large a graph can be before it possesses a certain property. There is a notable stylistic difference, however. The extremal graph in Turán's theorem has a very strict structure, having a small chromatic number and containing a small number of large independent sets. On the other hand, the graph considered in Ramsey problems is the complete graph, which has large chromatic number and no nontrivial independent set. A natural way to combine these two kinds of problems is to ask the following question, posed by Andrásfai: Problem 1: For a given positive integer formula_14, let formula_15 be an formula_9-vertex graph not containing formula_10 and having independence number formula_16. What is the maximum number of edges such a graph can have? Essentially, this question asks for the answer to the Turán problem in a Ramsey setting; it restricts Turán's problem to a subset of graphs with less orderly, more randomlike structure. The following question combines the problems in the opposite direction: Problem 2: Let formula_17 be fixed graphs. What is the maximum number of edges an formula_5-edge colored graph on formula_9 vertices can have under the condition that it does not contain an formula_7 in the "i"th color? General problem. The backbone of Ramsey-Turán theory is the common generalization of the above problems. Problem 3: Let formula_17 be fixed graphs. Let formula_15 be an formula_5-edge-colored formula_9-vertex graph satisfying (1) formula_16 (2) the subgraph formula_18 defined by the formula_8th color contains no formula_7. What is the maximum number of edges formula_15 can have? We denote the maximum by formula_19. Ramsey-Turán-type problems are special cases of problem 3. Many cases of this problem remain open, but several interesting cases have been resolved with precise asymptotic solutions. Notable results. Problem 3 can be divided into three different cases, depending on the restriction on the independence number. 
There is the restriction-free case, where formula_20, which reduces to the classic Ramsey problem. There is the "intermediate" case, where formula_21 for a fixed formula_22. Lastly, there is the formula_23 case, which contains the richest problems. The most basic nontrivial problem in the formula_24 range is when formula_25 and formula_26 Erdős and Sós determined the asymptotic value of the Ramsey-Turán number in this situation in 1969: formula_27 The case of the complete graph on an even number of vertices is much more challenging, and was resolved by Erdős, Hajnal, Sós and Szemerédi in 1983: formula_28 Note that in both cases, the problem can be viewed as adding the extra condition to Turán's theorem that formula_29. In both cases, it can be seen that asymptotically, the effect is the same as if we had excluded formula_1 instead of formula_30 or formula_31. References.
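As a worked instance of the two asymptotic results above (an illustration, not from the original sources), taking k = 2 gives the leading coefficients, as multiples of n^2/2, for the forbidden cliques K_5 and K_4 respectively:

k = 2
coeff_odd = 1 - 1 / k                      # K_{2k+1} = K_5: RT ~ (1/2) * n^2/2 = n^2/4
coeff_even = (6 * k - 10) / (6 * k - 4)    # K_{2k}   = K_4: RT ~ (1/4) * n^2/2 = n^2/8
print(coeff_odd, coeff_even)               # 0.5 and 0.25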
[ { "math_id": 0, "text": "K_n" }, { "math_id": 1, "text": "K_k" }, { "math_id": 2, "text": "L_1,\\dots,L_r" }, { "math_id": 3, "text": "R=R(L_1,\\dots,L_k)" }, { "math_id": 4, "text": "n \\geq R" }, { "math_id": 5, "text": "r" }, { "math_id": 6, "text": "1 \\leq i \\leq r" }, { "math_id": 7, "text": "L_i" }, { "math_id": 8, "text": "i" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "K_{r+1}" }, { "math_id": 11, "text": "r,n" }, { "math_id": 12, "text": "\\bigg(1 - \\frac{1}{r}\\bigg) \\frac{n^2}{2}" }, { "math_id": 13, "text": "T_{n,r}" }, { "math_id": 14, "text": "m" }, { "math_id": 15, "text": "G" }, { "math_id": 16, "text": "\\alpha(G) < m" }, { "math_id": 17, "text": "L_1,\\dots, L_r" }, { "math_id": 18, "text": "G_i" }, { "math_id": 19, "text": "\\mathbf{RT}(n;L_1,\\dots,L_r,m)" }, { "math_id": 20, "text": "m=n" }, { "math_id": 21, "text": "m=cn" }, { "math_id": 22, "text": "0<c<1" }, { "math_id": 23, "text": "m = o(n)" }, { "math_id": 24, "text": "m=o(n)" }, { "math_id": 25, "text": "r=1" }, { "math_id": 26, "text": "L_1=K_{2k+1}." }, { "math_id": 27, "text": "\\mathbf{RT}(n;K_{2k+1},o(n)) = \\bigg(1-\\frac{1}{k}\\bigg)\\frac{n^2}{2} + o(n^2)." }, { "math_id": 28, "text": "\\mathbf{RT}(n;K_{2k},o(n)) = \\frac{6k-10}{6k-4}\\frac{n^2}{2} + o(n^2)." }, { "math_id": 29, "text": "\\alpha(G) < o(n)" }, { "math_id": 30, "text": "K_{2k+1}" }, { "math_id": 31, "text": "K_{2k}" } ]
https://en.wikipedia.org/wiki?curid=69381702
693848
Hypergeometric identity
Equalities involving sums over the coefficients occurring in hypergeometric series In mathematics, hypergeometric identities are equalities involving sums over hypergeometric terms, i.e. the coefficients occurring in hypergeometric series. These identities occur frequently in solutions to combinatorial problems, and also in the analysis of algorithms. These identities were traditionally found 'by hand'. There now exist several algorithms which can find and "prove" all hypergeometric identities. Examples. formula_0 formula_1 formula_2 formula_3 Definition. There are two definitions of hypergeometric terms, both used in different cases as explained below. See also hypergeometric series. A term "tk" is a hypergeometric term if formula_4 is a rational function in "k". A term "F(n,k)" is a hypergeometric term if formula_5 is a rational function in "k". There exist two types of sums over hypergeometric terms, the definite and indefinite sums. A definite sum is of the form formula_6 The indefinite sum is of the form formula_7 Proofs. Although in the past proofs have been found for many specific identities, there exist several general algorithms to find and prove identities. These algorithms first find a "simple expression" for a sum over hypergeometric terms and then provide a certificate which anyone can use to check and prove the correctness of the identity. For each of the hypergeometric sum types there exist one or more methods to find a "simple expression". These methods also provide the certificate to check the identity's proof. The book A = B by Marko Petkovšek, Herbert Wilf and Doron Zeilberger describes the three main approaches mentioned above.
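As an illustration (not part of the original article), the four example identities listed above can be checked numerically for small parameters using exact integer arithmetic:

from math import comb

for n in range(1, 10):
    assert sum(comb(n, i) for i in range(n + 1)) == 2 ** n
    assert sum(comb(n, i) ** 2 for i in range(n + 1)) == comb(2 * n, n)
    assert sum(k * comb(n, k) for k in range(n + 1)) == n * 2 ** (n - 1)
    for N in range(n, n + 5):
        assert (sum(i * comb(i, n) for i in range(n, N + 1))
                == (n + 1) * comb(N + 2, n + 2) - comb(N + 1, n + 1))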
[ { "math_id": 0, "text": " \\sum_{i=0}^{n} {n \\choose i} = 2^{n} " }, { "math_id": 1, "text": " \\sum_{i=0}^{n} {n \\choose i}^2 = {2n \\choose n} " }, { "math_id": 2, "text": " \\sum_{k=0}^{n} k {n \\choose k} = n2^{n-1} " }, { "math_id": 3, "text": " \\sum_{i=n}^{N} i{i \\choose n} = (n+1){N+2\\choose n+2}-{N+1\\choose n+1} " }, { "math_id": 4, "text": "\\frac{t_{k+1}}{t_k} " }, { "math_id": 5, "text": "\\frac{F(n,k+1)}{F(n,k)} " }, { "math_id": 6, "text": " \\sum_{k} t_k." }, { "math_id": 7, "text": " \\sum_{k=0}^{n} F(n,k)." } ]
https://en.wikipedia.org/wiki?curid=693848
69384997
Monitored neutrino beam
Monitored neutrino beams are facilities for the production of neutrinos with unprecedented control of the flux of particles created inside and outside the facility. Accelerator neutrinos. Accelerator neutrino beams are beams of neutrinos produced by particle accelerators. Since neutrinos are neutral particles that feebly interact with matter, monitoring the production rate of neutrinos at accelerators is a major experimental challenge. M. Schwartz and B. Pontecorvo proposed to exploit accelerators to produce neutrinos in 1960. Their ideas led to the first neutrino experiment, in which neutrinos were produced by an accelerator from the scattering of protons in a beryllium target. The scattering produces a wealth of particles and, in particular, pions, which decay producing muons and neutrinos. The experiment, first carried out by Lederman, Schwartz, Steinberger and collaborators, demonstrated the existence of two neutrino flavors. At that time, protons were not even steered outside the accelerator but the target was inserted close to the proton orbit. The protons in the AGS accelerator of the Brookhaven National Laboratory were brought to strike an internal Be target in a short straight section of the accelerator. Modern experiments steer the protons outside the accelerator and focus the particles produced after the target by magnetic horns or a static focusing system based on quadrupoles and dipoles. The focusing system increases the flux of pions pointing toward the neutrino detector and selects the charge and momentum of these pions. After focusing, pions propagate along a tunnel and decay by reactions like formula_0. All undecayed pions and all muons are stopped at the end of the tunnel while the neutrinos cross the wall of the tunnel because their interaction probability is very small. At large distances from the end of the tunnel, no particles are present except for an intense flux of neutrinos. Diagnostics and flux determination. In early experiments, the flux of neutrinos was estimated by measuring the number of pions produced after the target and monitoring the muons produced at the end of the tunnel. After the discovery of neutrino oscillation, the need for high precision beams fostered the construction of sophisticated monitoring systems. They are based on dedicated experiments to measure the number of particles produced by proton interactions on solid-state targets (beryllium, graphite). The beamline comprises the proton beam, target, focusing system, and decay tunnel, and it is simulated by Monte Carlo methods. Variations of the flux are monitored in real-time by measuring the number of protons impinging on the target and the rate of muons. All these techniques are the basic toolkit of accelerator neutrino physicists and are inherited by beam diagnostics. Modern monitored neutrino beams. Monitored neutrino beams are beams where diagnostics can directly measure the flux of neutrinos because the experimenters monitor the production of the lepton associated with the neutrino at the single-particle level. For instance, if a muon neutrino is produced by a formula_1 decay, its appearance is signaled by the observation of the corresponding antimuon. This is a direct estimate because the number of antimuons produced by those decays is equal to the number of muon neutrinos. Similarly, an electron neutrino produced by a kaon decay, for instance formula_2, is signaled by the observation of a positron.
Monitoring the production of leptons in the decay tunnel of an accelerator neutrino beam is a challenge because the number of leptons and background particles is huge. In the 1980s, monitored neutrino beams were built in the USSR in the framework of the "tagged neutrino beam facility". This facility did not reach a flux sufficient to feed neutrino experiments and was later descoped to a tagged kaon beam facility. Current neutrino beams record muons but they have not reached single-particle sensitivity. Their precision on flux (15%) cannot yet beat conventional techniques. The most advanced monitored neutrino beam project is ENUBET, which aims at designing a monitored neutrino beam for high precision neutrino cross-section measurements. Tagged neutrino beams. Monitored neutrino beams detect the charged leptons produced in the decay tunnel but the experimenters do not attempt to identify simultaneously the charged lepton and the neutrino produced by the decay of the parent particle. For example, a formula_3 decay creates an antimuon formula_4 that can be detected inside the tunnel by a particle detector. The vast majority of neutrinos cross the tunnel without interacting but a handful of them interacts in the neutrino detector, which is generally located far from the tunnel. If the time resolution of the particle detector in the tunnel and the neutrino detector outside the tunnel is very good (below 1 ns), the experimenters can associate unambiguously the neutrino observed in the detector with the charged lepton recorded in the tunnel. These facilities are called (time) tagged neutrino beams and were proposed by L.N. Hand and B. Pontecorvo in the 1960s. To date, an intense and time-tagged neutrino facility has never been built. References.
[ { "math_id": 0, "text": "\\pi \\rightarrow \\mu \\nu" }, { "math_id": 1, "text": "\\pi^{+} \\rightarrow \\mu^+ \\nu_{\\mu}" }, { "math_id": 2, "text": "K^{+} \\rightarrow e^+ \\pi^0 \\nu_{e}" }, { "math_id": 3, "text": "\\pi^+ \\rightarrow \\mu^+ \\nu_\\mu" }, { "math_id": 4, "text": "\\mu^+ " } ]
https://en.wikipedia.org/wiki?curid=69384997
69389162
Common graph
Concept in extremal graph theory In graph theory, an area of mathematics, common graphs belong to a branch of extremal graph theory concerning inequalities in homomorphism densities. Roughly speaking, formula_0 is a common graph if it "commonly" appears as a subgraph, in the sense that the total number of copies of formula_0 in any graph formula_1 and its complement formula_2 is a large fraction of all possible copies of formula_0 on the same vertices. Intuitively, if formula_1 contains few copies of formula_0, then its complement formula_2 must contain lots of copies of formula_0 in order to compensate for it. Common graphs are closely related to other graph notions dealing with homomorphism density inequalities. For example, common graphs are a more general case of Sidorenko graphs. Definition. A graph formula_0 is common if the inequality formula_3 holds for any graphon formula_4, where formula_5 is the number of edges of formula_0 and formula_6 is the homomorphism density. The inequality is tight because the lower bound is always reached when formula_4 is the constant graphon formula_7. Interpretations of definition. For a graph formula_1, we have formula_8 and formula_9 for the associated graphon formula_10, since the graphon associated to the complement formula_2 is formula_11. Hence, this formula provides us with the very informal intuition to take a close enough approximation, whatever that means, formula_4 to formula_10, and see formula_6 as roughly the fraction of labeled copies of graph formula_0 in "approximate" graph formula_1. Then, we can assume the quantity formula_12 is roughly formula_13 and interpret the latter as the combined number of copies of formula_0 in formula_1 and formula_2. Hence, we see that formula_14 holds. This, in turn, means that a common graph formula_0 commonly appears as a subgraph. In other words, if we think of the edges and non-edges as a 2-coloring of the edges of the complete graph on the same vertices, then at least a formula_15 fraction of all possible copies of formula_0 are monochromatic. Note that in an Erdős–Rényi random graph formula_16 with each edge drawn with probability formula_17, each graph homomorphism from formula_0 to formula_1 has probability formula_18 of being monochromatic. So, a common graph formula_0 is a graph that attains its minimum number of appearances as a monochromatic subgraph of graph formula_1 at the random graph formula_19 with formula_20. The above definition using the generalized homomorphism density can be understood in this way. Proofs. Sidorenko graphs are common. A graph formula_0 is a Sidorenko graph if it satisfies formula_27 for all graphons formula_4. In that case, formula_28. Furthermore, formula_29, which follows from the definition of homomorphism density. Combining this with Jensen's inequality for the function formula_30: formula_31 Thus, the conditions for a common graph are met. The triangle graph is common. Expand the integral expression for formula_32 and take into account the symmetry between the variables: formula_33 Each term in the expression can be written in terms of homomorphism densities of smaller graphs. By the definition of homomorphism densities: formula_34 formula_35 formula_36 where formula_37 denotes the complete bipartite graph on formula_38 vertex on one part and formula_39 vertices on the other. It follows: formula_40.
formula_41 can be related to formula_42 thanks to the symmetry between the variables formula_43 and formula_44: formula_45 where the last step follows from the integral Cauchy–Schwarz inequality. Finally: formula_46. This proof can be obtained by taking the continuous analog of Theorem 1 in "On Sets Of Acquaintances And Strangers At Any Party". References.
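As a numerical illustration of the triangle case (a Monte Carlo sketch, not part of the original article), the quantity t(K_3, W) + t(K_3, 1 - W) can be estimated for a few graphons and compared against the bound 1/4 derived above; both the constant graphon 1/2 and a two-clique graphon attain the bound, while other graphons exceed it.

import random

def triangle_density(w, samples=200000):
    total = 0.0
    for _ in range(samples):
        x, y, z = random.random(), random.random(), random.random()
        total += w(x, y) * w(y, z) * w(z, x)
    return total / samples

graphons = [
    ("constant 1/2", lambda x, y: 0.5),
    ("two cliques", lambda x, y: 1.0 if (x < 0.5) == (y < 0.5) else 0.0),
    ("W(x, y) = x*y", lambda x, y: x * y),
]
for name, w in graphons:
    s = triangle_density(w) + triangle_density(lambda x, y: 1 - w(x, y))
    print(name, round(s, 3))   # approximately 0.25, 0.25 and 0.5, all at least 1/4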
[ { "math_id": 0, "text": "F" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "\\overline{G}" }, { "math_id": 3, "text": "t(F, W) + t(F, 1 - W) \\ge 2^{-e(F)+1}" }, { "math_id": 4, "text": "W" }, { "math_id": 5, "text": "e(F)" }, { "math_id": 6, "text": "t(F, W)" }, { "math_id": 7, "text": "W \\equiv 1/2" }, { "math_id": 8, "text": "t(F, G) = t(F, W_{G}) " }, { "math_id": 9, "text": "t(F, \\overline{G})=t(F, 1 - W_G)" }, { "math_id": 10, "text": "W_G" }, { "math_id": 11, "text": "W_{\\overline{G}}=1 - W_G" }, { "math_id": 12, "text": "t(F, W) + t(F, 1 - W)" }, { "math_id": 13, "text": "t(F, G) + t(F, \\overline{G})" }, { "math_id": 14, "text": "t(F, G) + t(F, \\overline{G}) \\gtrsim 2^{-e(F)+1}" }, { "math_id": 15, "text": "2^{-e(F)+1}" }, { "math_id": 16, "text": "G = G(n, p)" }, { "math_id": 17, "text": "p=1/2 " }, { "math_id": 18, "text": "2 \\cdot 2^{-e(F)} = 2^ {-e(F) +1}" }, { "math_id": 19, "text": "G=G(n, p)" }, { "math_id": 20, "text": "p=1/2" }, { "math_id": 21, "text": "K_{3}" }, { "math_id": 22, "text": "K_4 ^{-}" }, { "math_id": 23, "text": "K_4" }, { "math_id": 24, "text": "K_{t}" }, { "math_id": 25, "text": "t \\ge 4" }, { "math_id": 26, "text": "K_{4} ^{-}" }, { "math_id": 27, "text": "t(F, W) \\ge t(K_2, W)^{e(F)}" }, { "math_id": 28, "text": "t(F, 1 - W) \\ge t(K_2, 1 - W)^{e(F)}" }, { "math_id": 29, "text": "t(K_2, W) + t(K_2, 1 - W) = 1 " }, { "math_id": 30, "text": "f(x) = x^{e(F)}" }, { "math_id": 31, "text": "t(F, W) + t(F, 1 - W) \\ge t(K_2, W)^{e(F)} + t(K_2, 1 - W)^{e(F)} \n\\ge 2 \\bigg( \\frac{t(K_2, W) + t(K_2, 1 - W)}{2} \\bigg)^{e(F)} = 2^{-e(F) + 1}" }, { "math_id": 32, "text": "t(K_3, 1 - W)" }, { "math_id": 33, "text": "\\int_{[0, 1]^3} (1 - W(x, y))(1 - W(y, z))(1 - W(z, x)) dx dy dz \n= 1 - 3 \\int_{[0, 1]^2} W(x, y) + 3 \\int_{[0, 1]^3} W(x, y) W(x, z) dx dy dz - \\int_{[0, 1]^3} W(x, y) W(y, z) W(z, x) dx dy dz" }, { "math_id": 34, "text": "\\int_{[0, 1]^2} W(x, y) dx dy = t(K_2, W) " }, { "math_id": 35, "text": "\\int{[0, 1]^3} W(x, y) W(x, z) dx dy dz = t(K_{1, 2}, W) " }, { "math_id": 36, "text": "\\int_{[0, 1]^3} W(x, y) W(y, z) W(z, x) dx dy dz = t(K_3, W)" }, { "math_id": 37, "text": "K_{1, 2}" }, { "math_id": 38, "text": "1" }, { "math_id": 39, "text": "2" }, { "math_id": 40, "text": "t(K_3, W) + t(K_3, 1 - W) = 1 - 3 t(K_2, W) + 3 t(K_{1, 2}, W) " }, { "math_id": 41, "text": "t(K_{1, 2}, W)" }, { "math_id": 42, "text": "t(K_2, W)" }, { "math_id": 43, "text": "y " }, { "math_id": 44, "text": "z" }, { "math_id": 45, "text": "\\begin{alignat}{4}\nt(K_{1, 2}, W) &= \\int_{[0, 1]^3} W(x, y) W(x, z) dx dy dz && \\\\\n&= \\int_{x \\in [0, 1]} \\bigg( \\int_{y \\in [0, 1]} W(x, y) \\bigg) \\bigg( \\int_{z \\in [0, 1]} W(x, z) \\bigg) && \\\\\n&= \\int_{x \\in [0, 1]} \\bigg( \\int_{y \\in [0, 1]} W(x, y) \\bigg)^2 && \\\\\n&\\ge \\bigg( \\int_{x \\in [0, 1]} \\int_{y \\in [0, 1]} W(x, y) \\bigg)^2 = t(K_2, W)^2\n\\end{alignat}" }, { "math_id": 46, "text": "t(K_3, W) + t(K_3, 1 - W) \\ge 1 - 3 t(K_2, W) + 3 t(K_{2}, W)^2 \n= 1/4 + 3 \\big( t(K_2, W) - 1/2 \\big)^2 \\ge 1/4" } ]
https://en.wikipedia.org/wiki?curid=69389162
69392163
Blow-up lemma
Important lemma in extremal graph theory The blow-up lemma, proved by János Komlós, Gábor N. Sárközy, and Endre Szemerédi in 1997, is an important result in extremal graph theory, particularly within the context of the regularity method. It states that the regular pairs in the statement of Szemerédi's regularity lemma behave like complete bipartite graphs in the context of embedding spanning graphs of bounded degree. Definitions and Statement. To formally state the blow-up lemma, we first need to define the notion of a super-regular pair. Super-regular pairs. A pair formula_0 of subsets of the vertex set is called formula_1-super-regular if for every formula_2 and formula_3 satisfying formula_4 and formula_5 we have formula_6 and furthermore, formula_7 for all formula_8 and formula_9 for all formula_10. Here formula_11 denotes the number of pairs formula_12 with formula_13 and formula_14 such that formula_15 is an edge. Statement of the Blow-up Lemma. Given a graph formula_16 of order formula_17 and positive parameters formula_18, there exists a positive formula_19 such that the following holds. Let formula_20 be arbitrary positive integers and let us replace the vertices formula_21 of formula_16 with pairwise disjoint sets formula_22 of sizes formula_23 (blowing up). We construct two graphs on the same vertex set formula_24. The first graph formula_25 is obtained by replacing each edge formula_26 of formula_16 with the complete bipartite graph between the corresponding vertex sets formula_27 and formula_28. A sparser graph G is constructed by replacing each edge formula_26 with an formula_1-super-regular pair between formula_27 and formula_28. If a graph formula_29 with formula_30 is embeddable into formula_25 then it is already embeddable into G. Proof Sketch. The proof of the blow-up lemma is based on using a randomized greedy algorithm (RGA) to embed the vertices of formula_29 into formula_31 sequentially. The argument then proceeds by bounding the failure rate of the algorithm such that it is less than 1 (and in fact formula_32) for an appropriate choice of parameters. This means that there is a non-zero chance for the algorithm to succeed, so an embedding must exist. Attempting to directly embed all the vertices of formula_29 in this manner does not work because the algorithm may get stuck when only a small number of vertices are left. Instead, we set aside a small fraction of the vertex set, called buffer vertices, and attempt to embed the rest of the vertices. The buffer vertices are subsequently embedded by using Hall's marriage theorem to find a perfect matching between the buffer vertices and the remaining vertices of formula_31. Notation. We borrow all notation introduced in previous sections. Let formula_33. Since formula_29 can be embedded into formula_25, we can write formula_34 with formula_35 for all formula_36. For a vertex formula_37, let formula_38 denote formula_39. For formula_40, formula_41 denotes the density of edges between the corresponding vertex sets of formula_42. formula_43 is the embedding that we wish to construct. formula_44 is the final time after which the algorithm concludes. Outline of the algorithm. Phase 2: Kőnig-Hall matching for remaining vertices. Consider the set of vertices left to be embedded, which is precisely formula_58, and the set of free spots formula_59. Form a bipartite graph between these two sets, joining each formula_10 to formula_60, and find a perfect matching in this bipartite graph. Embed according to this matching. 
Proof of correctness. The proof of correctness is technical and quite involved, so we omit the details. The core argument proceeds as follows: Step 1: most vertices are good, and enough vertices are free. Prove simultaneously by induction on formula_61 that if formula_62 is the vertex embedded at time formula_61, then Step 2: the "main lemma". Consider formula_66, and formula_67 such that formula_68 is not too small. Consider the event formula_69 where Then, we prove that the probability of formula_69 happening is low. Step 3: phase 1 succeeds with high probability. The only way that the first phase could fail is if it aborts, since by the first step we know that there is always a sufficient choice of good vertices. The program aborts only when the queue is too long. The argument then proceeds by union-bounding over all modes of failure, noting that for any particular choice of formula_73, formula_74 and formula_75 with formula_76 representing a subset of the queue that failed, the triple formula_77 satisfy the conditions of the "main lemma", and thus have a low probability of occurring. Step 4: no queue in initial phase. Recall that the list was set up so that neighbors of vertices in the buffer get embedded first. The time until all of these vertices get embedded is called the initial phase. Prove by induction on formula_57 that no vertices get added to the queue during the initial phase. It follows that all of the neighbors of the buffer vertices get added before the rest of the vertices. Step 5: buffer vertices have enough free spots. For any formula_78 and formula_79, we can find a sufficiently large lower bound on the probability that formula_80, conditional on the assumption that formula_81 was free before any of the vertices in formula_82 were embedded. Step 6: phase 2 succeeds with high probability. By Hall's marriage theorem, phase 2 fails if and only if Hall's condition is violated. For this to happen, there must be some formula_73 and formula_83 such that formula_84. formula_85 cannot be too small by largeness of free sets (step 1). If formula_85 is too large, then with high probability formula_86, so the probability of failure in such a case would be low. If formula_85 is neither too small nor too large, then noting that formula_87 is a large set of unused vertices, we can use the main lemma and union-bound the failure probability. Applications. The blow-up lemma has a number of applications in embedding dense graphs. Pósa-Seymour Conjecture. In 1962, Lajos Pósa conjectured that every formula_88-vertex graph with minimum degree at least formula_89 contains the square of a Hamiltonian cycle, generalizing Dirac's theorem. The conjecture was further extended by Paul Seymour in 1974 to the following: Every graph on formula_88 vertices with minimum degree at least formula_90 contains the formula_91-th power of a Hamiltonian cycle. The blow-up lemma was used by Komlós, Sárközy, and Szemerédi to prove the conjecture for all sufficiently large values of formula_88 (for a fixed formula_91) in 1998. Alon-Yuster Conjecture. In 1995, Noga Alon and Raphael Yuster considered the generalization of the well-known Hajnal–Szemerédi theorem to arbitrary formula_29-factors (instead of just complete graphs), and proved the following statement: For every fixed graph formula_92 with formula_93 vertices, any graph G with n vertices and with minimum degree formula_94 contains formula_95 vertex disjoint copies of H. 
They also conjectured that the result holds with only a constant (instead of linear) error: For every integer formula_96 there exists a constant formula_97 such that for every graph formula_92 with formula_93 vertices, any graph formula_42 with formula_98 vertices and with minimum degree formula_94 contains at least formula_99 vertex disjoint copies of formula_92. This conjecture was proven by Komlós, Sárközy, and Szemerédi in 2001 using the blow-up lemma. History and Variants. The blow-up lemma, first published in 1997 by Komlós, Sárközy, and Szemerédi, emerged as a refinement of existing proof techniques using the regularity method to embed spanning graphs, as in the proof of the Bollobás conjecture on spanning trees, work on the Pósa-Seymour conjecture about the minimum degree necessary to contain the k-th graph power of a Hamiltonian cycle, and the proof of the Alon-Yuster conjecture on the minimum degree needed for a graph to have a perfect H-factor. The proofs of all of these theorems relied on using a randomized greedy algorithm to embed the majority of vertices, and then using a Kőnig-Hall like argument to find an embedding for the remaining vertices. The first proof of the blow-up lemma also used a similar argument. Later in 1997, however, the same authors published another paper that found an improvement to the randomized algorithm to make it deterministic. Peter Keevash found a generalization of the blow-up lemma to hypergraphs in 2010. Stefan Glock and Felix Joos discovered a variant of the blow-up lemma for rainbow graphs in 2018. In 2019, Peter Allen, Julia Böttcher, Hiep Hàn, Yoshiharu Kohayakawa, and Yury Person found sparse analogues of the blow-up lemma for embedding bounded degree graphs into random and pseudorandom graphs. References.
[ { "math_id": 0, "text": "(A,B)" }, { "math_id": 1, "text": "(\\varepsilon, \\delta)" }, { "math_id": 2, "text": " X \\subset A " }, { "math_id": 3, "text": " Y \\subset B " }, { "math_id": 4, "text": " |X| > \\varepsilon |A| " }, { "math_id": 5, "text": " |Y| > \\varepsilon |B| " }, { "math_id": 6, "text": " e(X,Y) > \\delta |X| |Y| " }, { "math_id": 7, "text": " \\deg(a) > \\delta |B| " }, { "math_id": 8, "text": " a \\in A " }, { "math_id": 9, "text": " \\deg(b) > \\delta |A| " }, { "math_id": 10, "text": " b \\in B " }, { "math_id": 11, "text": " e(X, Y) " }, { "math_id": 12, "text": " (x,y) " }, { "math_id": 13, "text": " x \\in X " }, { "math_id": 14, "text": " y \\in Y " }, { "math_id": 15, "text": "\\{x,y\\}" }, { "math_id": 16, "text": "R" }, { "math_id": 17, "text": "r" }, { "math_id": 18, "text": "\\delta, \\Delta" }, { "math_id": 19, "text": "\\varepsilon = \\varepsilon(\\delta, \\Delta, r)" }, { "math_id": 20, "text": "n_1, n_2,\\dots,n_r" }, { "math_id": 21, "text": "v_1, v_2, \\dots,v_r" }, { "math_id": 22, "text": "V_1, V_2, \\dots, V_r" }, { "math_id": 23, "text": "n_1, n_2, \\dots, n_r" }, { "math_id": 24, "text": "V = \\bigcup V_i" }, { "math_id": 25, "text": "\\mathbf R" }, { "math_id": 26, "text": "\\{v_i, v_j\\}" }, { "math_id": 27, "text": "V_i" }, { "math_id": 28, "text": "V_j" }, { "math_id": 29, "text": "H" }, { "math_id": 30, "text": "\\Delta(H) \\le \\Delta" }, { "math_id": 31, "text": "G" }, { "math_id": 32, "text": "o(1)" }, { "math_id": 33, "text": " n = |V(G)| = \\sum n_i " }, { "math_id": 34, "text": "V(H) = X = \\bigcup_{i \\le r} X_i" }, { "math_id": 35, "text": " |X_i| = |V_i| " }, { "math_id": 36, "text": "i" }, { "math_id": 37, "text": " x \\in X_i " }, { "math_id": 38, "text": " V_x " }, { "math_id": 39, "text": " V_i " }, { "math_id": 40, "text": " x \\in X_i, y \\in X_j " }, { "math_id": 41, "text": " d_{xy} = \\frac{e(V_i,V_j)}{|V_i| |V_j|}" }, { "math_id": 42, "text": " G " }, { "math_id": 43, "text": "\\phi:V(G) \\to V(H)" }, { "math_id": 44, "text": " T " }, { "math_id": 45, "text": "B" }, { "math_id": 46, "text": "4" }, { "math_id": 47, "text": "H \\setminus B" }, { "math_id": 48, "text": "L" }, { "math_id": 49, "text": "q" }, { "math_id": 50, "text": "F_z" }, { "math_id": 51, "text": "z" }, { "math_id": 52, "text": "V_z" }, { "math_id": 53, "text": "x" }, { "math_id": 54, "text": "\\phi(x)" }, { "math_id": 55, "text": "F_z(t)" }, { "math_id": 56, "text": "X_i" }, { "math_id": 57, "text": "t" }, { "math_id": 58, "text": " B " }, { "math_id": 59, "text": " \\bigcup_{b \\in B} F_b(T) " }, { "math_id": 60, "text": " F_b(T) " }, { "math_id": 61, "text": " t " }, { "math_id": 62, "text": " x " }, { "math_id": 63, "text": " F_x(t) " }, { "math_id": 64, "text": " F_z(t+1) " }, { "math_id": 65, "text": " z " }, { "math_id": 66, "text": " 1 \\le i \\le r, Y \\subseteq X_i " }, { "math_id": 67, "text": " A \\subseteq V_i " }, { "math_id": 68, "text": "|A|" }, { "math_id": 69, "text": " E_{A,Y} " }, { "math_id": 70, "text": " A " }, { "math_id": 71, "text": " t_y " }, { "math_id": 72, "text": " y " }, { "math_id": 73, "text": " 1 \\le i \\le r " }, { "math_id": 74, "text": " Y \\subseteq X_i, |Y| \\ge \\delta_Q |X_i| " }, { "math_id": 75, "text": " A = V_i " }, { "math_id": 76, "text": " Y " }, { "math_id": 77, "text": "(i,Y,A)" }, { "math_id": 78, "text": " x \\in B " }, { "math_id": 79, "text": " v \\in V_x " }, { "math_id": 80, "text": "\\phi(N_H(x)) \\subseteq N_G(v)" }, { "math_id": 81, "text": " v " }, { "math_id": 82, "text": " N_H(x) " }, { 
"math_id": 83, "text": " S \\subseteq X_i " }, { "math_id": 84, "text": " |\\bigcup_{z \\in S} F_z(T)| < |S| " }, { "math_id": 85, "text": " |S| " }, { "math_id": 86, "text": " \\bigcup_{z \\in S} F_z(T) = V_i(T) " }, { "math_id": 87, "text": " A := V_i(T) \\setminus \\bigcup_{z \\in S} F_z(T)" }, { "math_id": 88, "text": "n" }, { "math_id": 89, "text": "\\frac{2n}3" }, { "math_id": 90, "text": "\\frac{kn}{k+1}" }, { "math_id": 91, "text": "k" }, { "math_id": 92, "text": " H " }, { "math_id": 93, "text": " h " }, { "math_id": 94, "text": " d \\ge \\frac{\\chi(H)-1}{\\chi(H)}n " }, { "math_id": 95, "text": " (1-o(1))n/h " }, { "math_id": 96, "text": "h" }, { "math_id": 97, "text": " c(h) " }, { "math_id": 98, "text": " n " }, { "math_id": 99, "text": "n/h-c(h) " } ]
https://en.wikipedia.org/wiki?curid=69392163
693935
Trinomial expansion
In mathematics, a trinomial expansion is the expansion of a power of a sum of three terms into monomials. The expansion is given by formula_0 where "n" is a nonnegative integer and the sum is taken over all combinations of nonnegative indices "i", "j", and "k" such that "i" + "j" + "k" = "n". The trinomial coefficients are given by formula_1 This formula is a special case of the multinomial formula for "m" = 3. The coefficients can be defined with a generalization of Pascal's triangle to three dimensions, called Pascal's pyramid or Pascal's tetrahedron. Derivation. The trinomial expansion can be calculated by applying the binomial expansion twice, setting formula_2, which leads to formula_3 Above, the resulting formula_4 in the second line is evaluated by the second application of the binomial expansion, introducing another summation over the index formula_5. The product of the two binomial coefficients is simplified by cancelling formula_6, formula_7 Comparing the index combinations here with the ones in the exponents, they can be relabelled to formula_8, which provides the expression given in the first paragraph. Properties. The number of terms of an expanded trinomial is the triangular number formula_9 where "n" is the exponent to which the trinomial is raised. Example. An example of a trinomial expansion with formula_10 is: formula_11 References.
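As a small illustration (not part of the original article), both the expansion and the count of terms can be verified numerically for a concrete exponent:

from math import factorial

def trinomial_terms(n):
    # all (i, j, k, n!/(i! j! k!)) with i + j + k = n
    return [(i, j, n - i - j,
             factorial(n) // (factorial(i) * factorial(j) * factorial(n - i - j)))
            for i in range(n + 1) for j in range(n + 1 - i)]

a, b, c, n = 2, 3, 5, 4
expanded = sum(coef * a ** i * b ** j * c ** k for i, j, k, coef in trinomial_terms(n))
assert expanded == (a + b + c) ** n                        # multinomial expansion agrees
assert len(trinomial_terms(n)) == (n + 1) * (n + 2) // 2   # the triangular number of terms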
[ { "math_id": 0, "text": "(a+b+c)^n = \\sum_{{i,j,k}\\atop{i+j+k=n}} {n \\choose i,j,k}\\, a^i \\, b^{\\;\\! j} \\;\\! c^k, " }, { "math_id": 1, "text": " {n \\choose i,j,k} = \\frac{n!}{i!\\,j!\\,k!} \\,." }, { "math_id": 2, "text": "d = b+c" }, { "math_id": 3, "text": "\n\\begin{align}\n(a+b+c)^n &= (a+d)^n = \\sum_{r=0}^{n} {n \\choose r}\\, a^{n-r}\\, d^{r} \\\\\n\t&= \\sum_{r=0}^{n} {n \\choose r}\\, a^{n-r}\\, (b+c)^{r} \\\\\n\t&= \\sum_{r=0}^{n} {n \\choose r}\\, a^{n-r}\\, \\sum_{s=0}^{r} {r \\choose s}\\, b^{r-s}\\,c^{s}.\n\\end{align}\n" }, { "math_id": 4, "text": "(b+c)^{r}" }, { "math_id": 5, "text": "s" }, { "math_id": 6, "text": "r!" }, { "math_id": 7, "text": "\n{n \\choose r}\\,{r \\choose s} = \\frac{n!}{r!(n-r)!} \\frac{r!}{s!(r-s)!}\n= \\frac{n!}{(n-r)!(r-s)!s!},\n" }, { "math_id": 8, "text": "i=n-r, ~ j=r-s, ~ k = s" }, { "math_id": 9, "text": " t_{n+1} = \\frac{(n+2)(n+1)}{2}, " }, { "math_id": 10, "text": "n=2" }, { "math_id": 11, "text": "(a+b+c)^2=a^2+b^2+c^2+2ab+2bc+2ca" } ]
https://en.wikipedia.org/wiki?curid=693935
6939387
Delta-ring
Ring closed under countable intersections In mathematics, a non-empty collection of sets formula_0 is called a δ-ring (pronounced "delta-ring") if it is closed under union, relative complementation, and countable intersection. The name "delta-ring" originates from the German word for intersection, "Durchschnitt", which is meant to highlight the ring's closure under countable intersection, in contrast to a 𝜎-ring which is closed under countable unions. Definition. A family of sets formula_0 is called a δ-ring if it has all of the following properties: (1) formula_1 whenever formula_2 (2) formula_3 whenever formula_2 and (3) formula_4 whenever formula_5 for every formula_6 If only the first two properties are satisfied, then formula_0 is a ring of sets but not a δ-ring. Every 𝜎-ring is a δ-ring, but not every δ-ring is a 𝜎-ring. δ-rings can be used instead of σ-algebras in the development of measure theory if one does not wish to allow sets of infinite measure. Examples. The family formula_7 is a δ-ring but not a 𝜎-ring because formula_8 is not bounded. References.
[ { "math_id": 0, "text": "\\mathcal{R}" }, { "math_id": 1, "text": "A \\cup B \\in \\mathcal{R}" }, { "math_id": 2, "text": "A, B \\in \\mathcal{R}," }, { "math_id": 3, "text": "A - B \\in \\mathcal{R}" }, { "math_id": 4, "text": "\\bigcap_{n=1}^{\\infty} A_n \\in \\mathcal{R}" }, { "math_id": 5, "text": "A_n \\in \\mathcal{R}" }, { "math_id": 6, "text": "n \\in \\N." }, { "math_id": 7, "text": "\\mathcal{K} = \\{ S \\subseteq \\mathbb{R} : S \\text{ is bounded} \\}" }, { "math_id": 8, "text": "\\bigcup_{n=1}^{\\infty} [0, n]" } ]
https://en.wikipedia.org/wiki?curid=6939387
69395777
Palindrome tree
Data structure for processing palindromes In computer science, a palindrome tree, also called an EerTree, is a type of search tree that allows for fast access to all palindromes contained in a string. They can be used to solve the longest palindromic substring problem, the "k"-factorization problem (can a given string be divided into exactly "k" palindromes), palindromic length of a string (what is the minimum number of palindromes needed to construct the string), and finding and counting all distinct sub-palindromes. Palindrome trees do this in an online manner, that is, they do not require the entire string at the start and can be extended character by character. Description. Like most trees, a palindrome tree consists of vertices and directed edges. Each vertex in the tree represents a palindrome (e.g. 'tacocat') but only stores the length of the palindrome, and each edge represents either a character or a suffix. The character edges represent that when the character is appended to both ends of the palindrome represented by the source vertex, the palindrome in the destination vertex is created (e.g. an edge labeled 't' would connect the source vertex 'acoca' to the destination vertex 'tacocat'). The suffix edge connects each palindrome to the largest palindrome suffix it possesses (in the previous example 'tacocat' would have a suffix edge to 't', and 'atacocata' would have a suffix link to 'ata'). Where palindrome trees differ from regular trees is that they have two roots (as they are in fact two separate trees). The two roots represent palindromes of length −1, and 0. That is, if the character 'a' is appended to both roots the tree will produce 'a' and 'aa' respectively. Since each edge adds (or removes) an even number of characters, the two trees are only ever connected by suffix edges. Operations. Add. Since a palindrome tree follows an online construction, it maintains a pointer to the last palindrome added to the tree. To add the next character to the palindrome tree, codice_0 first checks if the first character before the palindrome matches the character being added; if it does not, the suffix links are followed until a palindrome can be added to the tree. Once a palindrome has been found, if it already exists in the tree, no new vertex needs to be added. Otherwise, a new vertex is added with a link from the suffix to the new vertex, and a suffix link for the new vertex is added. If the length of the new palindrome is 1, the suffix link points to the root of the palindrome tree that represents a length of −1.

def add(x: int) -> bool:
    """Add character to the palindrome tree."""
    global current  # the module-level pointer to the last palindrome is reassigned below
    while True:
        if x - 1 - current.length >= 0 and S[x - 1 - current.length] == S[x]:
            break
        current = current.suffix
    if current.add[S[x]] is not None:
        current = current.add[S[x]]  # palindrome already present: just move the pointer to it
        return False
    suffix = current
    current = Palindrome_Vertex()
    current.length = suffix.length + 2
    suffix.add[S[x]] = current
    if current.length == 1:
        current.suffix = root
        return True
    while True:
        suffix = suffix.suffix
        if x - 1 - suffix.length >= 0 and S[x - 1 - suffix.length] == S[x]:
            current.suffix = suffix.add[S[x]]
            return True

Joint trees. Finding palindromes that are common to multiple strings or unique to a single string can be done with formula_0 additional space where formula_1 is the number of strings being compared. This is accomplished by adding an array of length formula_1 to each vertex, and setting the flag to 1 at index formula_1 if that vertex was reached when adding string formula_1.
The only other modification needed is to reset the current pointer to the root at the end of each string. By joining trees in such a manner, such problems can be solved for several strings at once. Complexity. Time. Constructing a palindrome tree takes formula_2 time, where formula_3 is the length of the string and formula_4 is the size of the alphabet. With formula_3 calls to codice_0, each call takes formula_5 amortized time. This is a result of the fact that each call to codice_0 increases the depth of the current vertex (the last palindrome in the tree) by at most one, and searching all possible character edges of a vertex takes formula_5 time. By assigning the cost of moving up and down the tree to each call to codice_0, the cost of moving up the tree more than once is 'paid for' by an equal number of calls to codice_0 when moving up the tree did not occur. Space. A palindrome tree takes formula_6 space: at most formula_7 vertices to store the sub-palindromes and two roots, formula_3 edges linking the vertices, and formula_7 suffix edges. Space–time tradeoff. If instead of storing only the add edges that exist for each palindrome an array of length formula_4 edges is stored, finding the correct edge can be done in constant time, reducing construction time to formula_8 while increasing space to formula_9, where formula_10 is the number of palindromes. References.
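The codice_0 routine shown earlier relies on some surrounding state: the input string, the two roots, and the pointer to the last palindrome. The following driver is a minimal sketch of one possible way to set that state up (the class layout and any names beyond those used in the routine above are assumptions made for illustration, not part of the original description):

from collections import defaultdict

class Palindrome_Vertex:
    def __init__(self, length=0, suffix=None):
        self.length = length                   # length of the palindrome at this vertex
        self.suffix = suffix                   # suffix edge: longest proper palindromic suffix
        self.add = defaultdict(lambda: None)   # character edges; None when the edge is absent

imaginary_root = Palindrome_Vertex(length=-1)               # root representing length -1
imaginary_root.suffix = imaginary_root
root = Palindrome_Vertex(length=0, suffix=imaginary_root)   # root representing length 0

S = "eertree"
current = root
distinct = sum(add(x) for x in range(len(S)))
print(distinct)   # 7 distinct sub-palindromes: e, ee, r, t, rtr, ertre, eertree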
[ { "math_id": 0, "text": "O(n*i)" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "O(n \\log{\\sigma})" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "\\sigma" }, { "math_id": 5, "text": "O(\\log{\\sigma})" }, { "math_id": 6, "text": "O(n)" }, { "math_id": 7, "text": "n+2" }, { "math_id": 8, "text": "O(n + p*\\sigma)" }, { "math_id": 9, "text": "O(p*\\sigma)" }, { "math_id": 10, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=69395777
69404632
Dipole glass
A dipole glass is an analog of a glass where the dipoles are frozen below a given freezing temperature "Tf", introducing randomness and thus resulting in a lack of long-range ferroelectric order. A dipole glass is very similar to the concept of a spin glass, where the atomic spins don't all align in the same direction (like in a ferromagnetic material) and thus result in a net-zero magnetization. The randomness of dipoles in a dipole glass creates local fields resulting in short-range order but no long-range order. The dipole-glass-like state was first observed in alkali halide crystal-type dielectrics containing dipole impurities. The dipole impurities in these materials result in off-center ions, which lead to anomalies in certain properties like the specific heat and thermal conductivity, as well as some spectroscopic properties. Other materials which show a dipolar glass phase include Rb(1-x)(NH4)xH2PO4 (RADP) and Rb(1-x)(ND4)xD2PO4 (DRADP). In materials like DRADP the dipole moment is introduced due to the deuteron O-D--O bond. Dipole-glass-like behavior is also observed in materials like ceramics, 3D water frameworks and perovskites. Random-bond-random-field Ising model (RBRF). The model describing the pseudo-spins (dipole moments) is given by the Hamiltonian: formula_0, where formula_1 are the Ising dipole moments. The formula_2 are the random bond interactions, which are described by a Gaussian probability distribution with mean formula_3 and variance formula_4. The second term provides a description of the interactions of the pseudo-spins in the presence of random local fields, where the formula_5 are represented by an independent Gaussian distribution with zero mean and variance formula_6. The final term denotes the interaction in the presence of an external electric field formula_7. The replica method is used to obtain the glass order parameter: formula_8 where formula_9 is the Gaussian measure and, under the assumption that formula_10, the free energy is given by: formula_11 where formula_12 and formula_13 with formula_14. The formula_5 term is zero in the case of magnetic spin glasses, and with no external electric field this model reduces to the Edwards–Anderson model which is used to describe spin glasses. This model has been used to give a quantitative description of DRADP-type systems. References.
[ { "math_id": 0, "text": "\\mathcal{H}=-\\frac{1}{2}\\sum_{ij}{J_{ij}}{S_i}^{z}{S_j}^{z}-\\sum_{i}{f_i}{S_i}^{z}-E\\sum_{i}{S_i}^{z}" }, { "math_id": 1, "text": "{S_i}^{z}" }, { "math_id": 2, "text": "{J_{ij}}" }, { "math_id": 3, "text": "{J_{0}}" }, { "math_id": 4, "text": "{J^{2}/N}" }, { "math_id": 5, "text": "{f_{i}}" }, { "math_id": 6, "text": "\\Delta" }, { "math_id": 7, "text": "{E}" }, { "math_id": 8, "text": "q=\\int Dz{\\tanh}^{2}[J(q+\\frac{\\Delta}{J^{2}})^{1/2}\\frac{z}{T}]" }, { "math_id": 9, "text": "\\int Dz" }, { "math_id": 10, "text": "{E}=0" }, { "math_id": 11, "text": "\\beta{f}=-(\\beta{J}/2)^{2}[(1-q_{1})^{2}-m(q_{1}^{2}-q_{0}^{2})]-m^{-1}\\int Dz \\ln\\int Dy Z^{m}(y,z)" }, { "math_id": 12, "text": "\\beta = 1/T" }, { "math_id": 13, "text": "Z^{m}(y,z) = 2\\cosh[\\beta{h(y,z)}]" }, { "math_id": 14, "text": "h(x,y)= J[(q_{1}-q_{0})^{1/2}y+(q_{0}+\\Delta/J^{2})^{1/2}z]" } ]
https://en.wikipedia.org/wiki?curid=69404632
69415319
Neuberg cubic
Plane curve associated with any triangle In Euclidean geometry, the Neuberg cubic is a special cubic plane curve associated with a reference triangle with several remarkable properties. It is named after Joseph Jean Baptiste Neuberg (30 October 1840 – 22 March 1926), a Luxembourger mathematician, who first introduced the curve in a paper published in 1884. The curve appears as the first item, with identification number K001, in Bernard Gibert's Catalogue of Triangle Cubics which is a compilation of extensive information about more than 1200 triangle cubics. Definitions. The Neuberg cubic can be defined as a locus in many different ways. One way is to define it as the locus of a point P in the plane of the reference triangle △"ABC" such that, if the reflections of P in the sidelines of triangle △"ABC" are Pa, Pb, Pc, then the lines APa, BPb, CPc are concurrent. However, it needs to be proved that the locus so defined is indeed a cubic curve. A second way is to define it as the locus of point P such that if Oa, Ob, Oc are the circumcenters of triangles △"BPC", △"CPA", △"APB", then the lines AOa, BOb, COc are concurrent. Yet another way is to define it as the locus of P satisfying the following property known as the "quadrangles involutifs" (this was the way in which Neuberg introduced the curve): formula_0 Equation. Let a, b, c be the side lengths of the reference triangle △"ABC". Then the equation of the Neuberg cubic of △"ABC" in barycentric coordinates "x" : "y" : "z" is formula_1 Other terminology: 21-point curve, 37-point curve. In the older literature the Neuberg curve was commonly referred to as the 21-point curve. The terminology refers to the property of the curve discovered by Neuberg himself that it passes through 21 special points associated with the reference triangle. These 21 points include the vertices of the reference triangle △"ABC" as well as several classical triangle centers. In a paper published in 1925, B. H. Brown reported his discovery of 16 additional special points on the Neuberg cubic making the total number of then known special points on the cubic 37. Because of this, the Neuberg cubic is also sometimes referred to as the 37-point cubic. Currently, a huge number of special points are known to lie on the Neuberg cubic. Gibert's Catalogue has a special page dedicated to a listing of such special points which are also triangle centers. Some properties of the Neuberg cubic. Neuberg cubic as a circular cubic. The equation in trilinear coordinates of the line at infinity in the plane of the reference triangle is formula_2 There are two special points on this line called the circular points at infinity. Every circle in the plane of the triangle passes through these two points and every conic which passes through these points is a circle. The trilinear coordinates of these points are formula_3 where formula_4. Any cubic curve which passes through the two circular points at infinity is called a circular cubic. The Neuberg cubic is a circular cubic. Neuberg cubic as a pivotal isogonal cubic. The isogonal conjugate of a point P with respect to a triangle △"ABC" is the point of concurrence of the reflections of the lines PA, PB, PC about the angle bisectors of A, B, C respectively. The isogonal conjugate of P is sometimes denoted by P*. The isogonal conjugate of P* is P. A self-isogonal cubic is a triangle cubic that is invariant under isogonal conjugation.
Neuberg cubic as a pivotal isogonal cubic. The isogonal conjugate of a point P with respect to a triangle △"ABC" is the point of concurrence of the reflections of the lines PA, PB, PC about the angle bisectors of A, B, C respectively. The isogonal conjugate of P is sometimes denoted by P*. The isogonal conjugate of P* is P. A self-isogonal cubic is a triangle cubic that is invariant under isogonal conjugation. A pivotal isogonal cubic is a cubic in which points P lying on the cubic and their isogonal conjugates are collinear with a fixed point Q known as the pivot point of the cubic. The Neuberg cubic is a pivotal isogonal cubic having its pivot at the intersection of the Euler line with the line at infinity. In Kimberling's Encyclopedia of Triangle Centers, this point is denoted by X(30). Neuberg cubic as a pivotal orthocubic. Let P be a point in the plane of triangle △"ABC". The perpendicular lines at P to AP, BP, CP intersect BC, CA, AB respectively at Pa, Pb, Pc, and these points lie on a line LP. Let the trilinear pole of LP be "P"⊥. An isopivotal cubic is a triangle cubic having the property that there is a fixed point P such that, for any point M on the cubic, the points "P, M, M"⊥ are collinear. The fixed point P is called the orthopivot of the cubic. The Neuberg cubic is an orthopivotal cubic with orthopivot at the triangle's circumcenter. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\begin{vmatrix}\n1 & BC^2+AP^2 & BC^2\\times AP^2 \\\\ \n1 & CA^2+BP^2 & CA^2\\times BP^2\\\\\n1 & AB^2+CP^2& AB^2\\times CP^2\n\\end{vmatrix} = 0" }, { "math_id": 1, "text": " \\sum_{\\text{cyclic}} [a^2(b^2+c^2)- (b^2-c^2)^2 -2a^4]x(c^2y^2 - b^2z^2)=0" }, { "math_id": 2, "text": "ax+by+cz=0" }, { "math_id": 3, "text": "\\begin{align}\n& \\cos B + i\\sin B : \\cos A - i\\sin A : -1 \\\\\n& \\cos B-i\\sin B : \\cos A+i\\sin A: -1\n\\end{align}" }, { "math_id": 4, "text": "i=\\sqrt{-1}" } ]
https://en.wikipedia.org/wiki?curid=69415319
69432561
Deep learning speech synthesis
Method of speech synthesis that uses deep neural networks&lt;templatestyles src="Machine learning/styles.css"/&gt; Deep learning speech synthesis refers to the application of deep learning models to generate natural-sounding human speech from written text (text-to-speech) or spectrum (vocoder). Deep neural networks (DNN) are trained using a large amount of recorded speech and, in the case of a text-to-speech system, the associated labels and/or input text. Formulation. Given an input text or some sequence of linguistic units formula_0, the target speech formula_1 can be derived by formula_2 where formula_3 denotes the model parameters. Typically, the input text is first passed to an acoustic feature generator, and the acoustic features are then passed to the neural vocoder. For the acoustic feature generator, the loss function is typically an L1 or L2 loss. These loss functions impose a constraint that the output acoustic feature distributions must be Gaussian or Laplacian. In practice, since the human voice band ranges from approximately 300 to 4000 Hz, the loss function is designed to place more penalty on this range: formula_4 where formula_5 is the loss from the human voice band and formula_6 is a scalar, typically around 0.5. The acoustic features are typically spectrograms or spectrograms on the mel scale. These features capture the time-frequency relation of the speech signal, and it is thus sufficient to generate intelligible outputs from these acoustic features. The mel-frequency cepstrum feature used in speech recognition tasks is not suitable for speech synthesis because it discards too much information. History. In September 2016, DeepMind proposed WaveNet, a deep generative model of raw audio waveforms, demonstrating that deep learning-based models are capable of modeling raw waveforms and generating speech from acoustic features like spectrograms or mel-spectrograms. Although WaveNet was initially considered too computationally expensive and slow to be used in consumer products at the time, a year after its release, DeepMind unveiled a modified version of WaveNet known as "Parallel WaveNet," a production model 1,000 times faster than the original. In early 2017, Mila proposed char2wav, a model that produces raw waveforms in an end-to-end fashion. In the same year, Google and Facebook proposed Tacotron and VoiceLoop, respectively, to generate acoustic features directly from the input text; months later, Google proposed Tacotron2, which combined the WaveNet vocoder with the revised Tacotron architecture to perform end-to-end speech synthesis. Tacotron2 can generate high-quality speech approaching the human voice. Semi-supervised learning. Currently, self-supervised learning has gained much attention through better use of unlabelled data. Research has shown that, with the aid of self-supervised losses, the need for paired data decreases. Zero-shot speaker adaptation. Zero-shot speaker adaptation is promising because a single model can generate speech with various speaker styles and characteristics. In June 2018, Google proposed using pre-trained speaker verification models as speaker encoders to extract speaker embeddings. The speaker encoders then become part of the neural text-to-speech model, so that it can determine the style and characteristics of the output speech. This procedure has shown the community that it is possible to use only a single model to generate speech with multiple styles.
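The following sketch is a rough, illustrative outline of the zero-shot speaker adaptation idea described above: an utterance-level embedding from a speaker encoder is broadcast to every text-encoder frame and concatenated onto it before decoding. The mean-pooling "encoder", the dimensions, and the function names are assumptions made for illustration and do not correspond to any specific published system.

```python
import numpy as np

rng = np.random.default_rng(0)

def speaker_encoder(reference_mel):
    """Stand-in for a pre-trained speaker-verification encoder: collapse a
    (frames x mel_bins) reference utterance into one fixed-size embedding."""
    return reference_mel.mean(axis=0)                  # shape: (mel_bins,)

def condition_text_states(text_states, speaker_embedding):
    """Tile the utterance-level speaker embedding over all text frames and
    concatenate it with the text-encoder states, so the decoder sees both
    the linguistic content and the speaker identity."""
    tiled = np.tile(speaker_embedding, (text_states.shape[0], 1))
    return np.concatenate([text_states, tiled], axis=1)

text_states = rng.normal(size=(42, 256))    # 42 phoneme states, 256-dim each
reference_mel = rng.normal(size=(500, 80))  # 500 mel frames of reference speech
embedding = speaker_encoder(reference_mel)
decoder_input = condition_text_states(text_states, embedding)
print(decoder_input.shape)                  # (42, 336)
```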
Neural vocoder. In deep learning-based speech synthesis, neural vocoders play an important role in generating high-quality speech from acoustic features. The WaveNet model proposed in 2016 achieves excellent performance on speech quality. WaveNet factorised the joint probability of a waveform formula_7 as a product of conditional probabilities as follows: formula_8 where formula_3 denotes the model parameters, comprising many dilated convolution layers. Thus, each audio sample formula_9 is conditioned on the samples at all previous timesteps. However, the auto-regressive nature of WaveNet makes the inference process dramatically slow. To solve this problem, Parallel WaveNet was proposed. Parallel WaveNet is an inverse autoregressive flow-based model which is trained by knowledge distillation with a pre-trained teacher WaveNet model. Since such inverse autoregressive flow-based models are non-auto-regressive when performing inference, the inference speed is faster than real-time. Meanwhile, Nvidia proposed a flow-based WaveGlow model, which can also generate speech faster than real-time. However, despite the high inference speed, Parallel WaveNet has the limitation of needing a pre-trained teacher WaveNet model, while WaveGlow takes many weeks to converge with limited computing devices. Both issues have been addressed by Parallel WaveGAN, which learns to produce speech through a multi-resolution spectral loss and GAN learning strategies. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Y" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "X=\\arg\\max P(X|Y, \\theta)" }, { "math_id": 3, "text": "\\theta" }, { "math_id": 4, "text": "loss=\\alpha \\text{loss}_{\\text{human}} + (1 - \\alpha) \\text{loss}_{\\text{other}}" }, { "math_id": 5, "text": "\\text{loss}_{\\text{human}}" }, { "math_id": 6, "text": "\\alpha" }, { "math_id": 7, "text": "\\mathbf{x}=\\{x_1,...,x_T\\}" }, { "math_id": 8, "text": "p_{\\theta}(\\mathbf{x})=\\prod_{t=1}^{T}p(x_t|x_1,...,x_{t-1})" }, { "math_id": 9, "text": "x_t" } ]
https://en.wikipedia.org/wiki?curid=69432561
69432773
Photoacoustic flow cytometry
Photoacoustic flow cytometry Photoacoustic flow cytometry or PAFC is a biomedical imaging modality that utilizes photoacoustic imaging to perform flow cytometry. A flow of cells passes a photoacoustic system, producing individual signal responses. Each signal is counted to produce a quantitative evaluation of the input sample. Description. Traditional flow cytometry uses cells in a laminar single-file stream which then passes through a light source. Quantifying the light scattered from the cells enables the system to measure cellular size and complexity, which can ultimately be translated into a quantification of the cell composition within a sample. Photoacoustic flow cytometry operates on similar principles, but utilizes a photoacoustic signal to differentiate cellular patterns. Conventional flow cytometry works well for ex-vivo analysis, but because its source is purely optical, its penetration depth is limited, which restricts in-vivo analysis. Photoacoustics may provide an advantage over conventional flow cytometry because it receives an acoustic signal rather than an optical one and can therefore penetrate to greater depths, as discussed further under operating principles and mathematics. The photoacoustic (PA) effect, discovered by Alexander Graham Bell in 1880, occurs when a photon source is absorbed by an optically receptive substance, producing an ultrasonic wave. The strength of the ultrasonic wave produced is a function of the intensity of the photons absorbed and the innate properties of the substance illuminated. Each substance of interest absorbs photons at specific wavelengths; as a result, only certain substances will innately produce a PA signal at a given wavelength. For example, hemoglobin and melanin are two common biological substances that produce strong PA signals in response to laser pulses around the 680 nm wavelength range. The absorption spectrum for the PA effect lies within the visible electromagnetic spectrum, making PA imaging non-ionizing in nature. The specific absorption spectrum can be both a limitation and an exploitation of PA imaging (see more in applications). Systems commonly use a pulsed neodymium-doped yttrium aluminum garnet (Nd:YAG) laser or LED system to penetrate the biological tissue of interest. With each pulse that comes in contact with tissue, a PA signal in the form of an ultrasound wave is produced. This ultrasound wave propagates through the tissue until it reaches an ultrasound transducer to produce an a-line. The maximum amplitude of each a-line is extracted and its value is plotted on a time vs amplitude graph, producing a cytometry graphic. Operating principles and mathematics. Heat production. Photoacoustic flow cytometry operates on the principle of the photoacoustic effect, whereby a laser in the visible spectrum produces a temperature rise and thus a thermal expansion. The thermal expansion equation in relation to laser intensity for a pulsed laser is given below: formula_0 where formula_1 is the absorption coefficient of the illuminated substance, formula_2 is the intensity of the laser, ω is the frequency of the laser pulse, and t is time. formula_3 is the exponential expression of a sinusoidal function determined by Euler's formula. It is important to note that the penetration depth of the laser is limited by the diffusive regime, which depends on the attenuation through the tissue before the light reaches the biological target to be irradiated. Photoacoustic wave relationship. The equation below establishes the heat-pressure relationship for a photoacoustic signal.
formula_4 where ∇² is the Laplacian (the spatial derivative operator), formula_5 is the speed of sound in the substance of interest, t is time, formula_6 is the pressure as a function of both time and space, β is the thermal expansion coefficient, formula_7 is the specific heat capacity, and formula_8 is the partial time derivative of the heat function described above. The left side of the equation is the standard wave operator used to model an ultrasonic pressure wave. The right side of the equation relates heat production to the thermal expansion that results in a pressure wave. Pressure wave solution. While in reality a three-dimensional wave propagates through the tissue, for the purposes of PAFC only a one-dimensional analysis is needed. Below is the one-dimensional solution due to a pulsed laser. formula_9 where formula_1 is the absorption coefficient, β is the thermal expansion coefficient, formula_7 is the specific heat, F is the fluence of the laser, formula_10 is the speed of sound in a given material, and formula_11 represents the energy derived from the laser pulse (as a function of depth and time). It is important to note that for long durations of laser exposure, the wave equation becomes largely a function of laser intensity. For the purposes of analyzing the PA signal, the laser pulse must be short in time so that the signal's value depends on the properties of the irradiated substance, allowing the targets of interest to be differentiated. The differences in the pressure wave produced are the basis for signal separation in PAFC. Signal detection. The pressure wave created is in the form of an ultrasound wave. The wave propagates through the material and is detected by an ultrasound transducer. The pressure is sensed via piezoelectric crystals which convert the pressure into a voltage change, i.e., the amplitude of the signal is proportional to the value of the pressure at any given time. This voltage is plotted as a function of time and results in the formation of the a-line described previously. The temporal data is important for other types of photoacoustic imaging, but for the purposes of PAFC, the maximum amplitude within an a-line is extracted as the data point. For each laser pulse this maximum amplitude value is plotted versus time, producing a flow cytometry signal tracing. Each line represents a laser pulse and its amplitude reflects the target irradiated. By selecting an amplitude range that is representative of a particular cell type, the signals can be counted, thus quantifying cell types within a given sample. Figure 1 shows an animation of cells flowing and its representative PAFC signal tracing. Applications. Bacteria. Over two million bacterial infections occur annually in the United States. With antibiotic resistance increasing, treatment of these infections is becoming increasingly difficult, making correct antibiotic selection ever more important. Optimal antibiotic selection depends on the ability to identify the offending bacteria. Traditionally, bacterial speciation is determined by culturing and PCR technologies. These technologies take at least 48 hours and sometimes more. Due to the prolonged timeframe for speciation, providers must select broad-spectrum antibiotics. PAFC can be used to detect bacteria in the blood for more timely antibiotic selection. The first step in detection with PAFC is marking the bacteria so they have a PA signal to detect.
Typically, this consists of a dye and a method to attach the dye to the bacteria of interest. Although antibodies have been used in the past, bacteriophages have proven to be cheaper and more stable to produce. Multiple studies have shown the specificity of bacteriophage selection for a bacterium of interest, particularly MRSA, "E. coli", and "Salmonella". Dyes vary, but most commonly utilized are gold nanoparticles, indocyanine green (ICG), and red dye 81. The dyes produce an enhanced signal to enable more sensitive detection. The detection limits found in one study showed approximately 1 bacterial cell per 0.6 μm3. Specific dyes have been tested on animals for toxicity and have not resulted in any clear damage. Although human studies for the detection of bacteria in the blood have yet to be attempted, PAFC may play a role in future applications of bacterial detection. Malaria. Malaria causes the deaths of 0.4 million people yearly worldwide. With current medications, early detection is key to preventing these deaths. Current methods include microscopic detection on blood film, serology, or PCR. Lab technicians may lack the experience, or the technologies may be too expensive for certain facilities, so diagnoses are inevitably missed. Furthermore, current methods generally cannot detect malaria at parasite densities below 50 per microliter and need 3–4 days post-infection before detection can occur. Thus, there is a need for a more automated and sensitive detection method to improve patient outcomes. PAFC has demonstrated detection limits much lower than those of current methods. One study demonstrated a sensitivity of one parasite in 0.16 mL of circulating blood, so that parasites can be detected on days 1–2 post-inoculation. Furthermore, studies have demonstrated the feasibility of in vivo detection, removing the possibility of a missed diagnosis due to cells damaged by blood extraction and in-vitro analysis. PAFC detects malaria via the surrogate marker hemozoin, a breakdown product produced by malaria in the merozoite stage. Hemozoin is a strong photoacoustic target and responds strongly at wavelengths in the 671 nm and 820 nm range. Although background signals are produced by hemoglobin within RBCs, infected RBCs (iRBCs) containing hemozoin produce a strong signal above that of hemoglobin at these wavelengths. In vitro methods utilize 50 micrometer capillary tubes with a flow of 1 cm/s for detection. Conversely, Menyaev et al. demonstrated the detection of malaria in vivo. Detection was performed on superficial and deep vessels of mice. The superficial vessels provide a higher signal-to-noise ratio (SNR), but are less comparable to human vessels. Mouse jugular veins and carotid arteries are similar in size to small human vessels; these showed more artifacts due to blood pulsation and respiratory variation, but the artifacts could be accounted for. Although PAFC provides a more sensitive detection limit, this method does come with some limitations. As mentioned previously, the detection of hemozoin only occurs when the parasites are in the merozoite stage. This limits the detection time frame compared with detection in the trophozoite stage, but still provides earlier detection than current methods. Second, the vessel sizes tested thus far have only been in mice. Artifacts from deeper vessel analysis in humans may decrease the sensitivity of PAFC, making the detection limit less useful than currently suggested. Although challenges still exist, PAFC may play a role in improving the diagnosis of malaria in humans.
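The counting step described in the Signal detection section above can be sketched as follows: for each laser pulse, only the peak amplitude of the recorded a-line is kept, and pulses whose peak falls inside an amplitude window assigned to a given cell type are counted. The synthetic traces, pulse shape, and thresholds below are illustrative assumptions, not values from any published PAFC system.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_a_line(n_samples=1024, cell_present=False):
    """One transducer trace per laser pulse: background noise plus, optionally,
    a short ultrasonic pulse from an absorbing cell passing the laser focus."""
    trace = 0.05 * rng.normal(size=n_samples)
    if cell_present:
        start = rng.integers(200, 800)
        trace[start:start + 20] += np.hanning(20)      # toy PA pulse shape
    return trace

n_pulses = 500
cell_hits = rng.random(n_pulses) < 0.1                 # ~10% of pulses hit a cell
peaks = np.array([simulate_a_line(cell_present=hit).max() for hit in cell_hits])

# Count events whose peak amplitude falls in the window chosen for this cell type.
low, high = 0.5, 2.0
detected = np.count_nonzero((peaks >= low) & (peaks <= high))
print(f"{detected} events detected out of {cell_hits.sum()} simulated cells")
```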
Circulating Tumor Cells (CTCs). Circulating tumor cells or CTCs are tumor cells that have broken off from their primary tumor and travel in the blood. These CTCs then seed distant sites, resulting in metastases. Metastases cause 90% of cancer-related deaths, and as such, detection of CTCs is critical to the prevention of metastases. Studies have shown that earlier detection of CTCs improves treatment and thus prolongs survival times. Current detection methods include RT-PCR, flow cytometry, optical sensing, and cell size filtration, among others. These methods are limited by the sampling size of extracted blood (~5–10 mL), which results in a CTC detection limit of ~10 CTC/mL. These methods take hours to days to get results, which can result in delayed initiation of treatment. PAFC may play a role in the future detection of CTCs. In order to avoid the limitation of small-volume sampling through the extraction of blood from the patient, PAFC utilizes an in vivo method to monitor a larger volume of blood (i.e. the entire volume). One study demonstrated that, by monitoring a mouse aorta, the entire mouse blood volume could be visualized within 1 minute of detection. CTCs such as melanomas contain an intrinsic chromophore and do not require labeling for detection above the background of hemoglobin. Other tumor cells (such as cancerous squamous cells) can be tagged with nanoparticles to produce a larger PA signal over RBCs for their detection. These methods resulted in an improved detection limit of CTCs. De la Zerda et al. detected CTCs only 4 days after inoculation of the cancer cells. Their detection limit was determined to be 1 CTC/mL, a 10-fold improvement in sensitivity. Furthermore, the nanoparticle labeling was found to be non-toxic and only took 10 minutes to optimally tag the CTCs. This CTC detection can be used for metastatic screening, but also has therapeutic implications. It has been determined that tumor resection or manipulation releases CTCs. PAFC can be used as a way to monitor for the release of these CTCs, which then may require treatment in a systemic manner. Due to the non-linear thermoelastic effect of the laser on CTCs/nanoparticles, a higher laser fluence can cause the CTC to rupture without damaging the local RBCs. By reducing CTCs in this way, PAFC could improve treatment with systemic methods or remove the need for them altogether. Although there is a large potential for application, there are still areas for improvement. First, PAFC is depth limited and has only been tested in the superficial skin of humans, which may pose a difficulty for more centrally located tumors such as those of the lung or bowel. Second, although initial mouse models have shown efficacy with nanoparticle labeling, specific cancer-type labeling and dye side-effects need to be more deeply studied to ensure the safety of this imaging modality. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "H(z,t)=\\mu_a I_0 e^{-\\omega t}" }, { "math_id": 1, "text": "\\mu_a\n" }, { "math_id": 2, "text": "I_0" }, { "math_id": 3, "text": "e^{-i \\omega t}" }, { "math_id": 4, "text": "[\\nabla^2-{1 \\over \\nu_s}{\\partial \\over \\partial t^2}]p(z,t)=-{\\beta \\over C_p}{\\partial H \\over \\partial t}" }, { "math_id": 5, "text": "\\nu_s\n" }, { "math_id": 6, "text": "p(z,t)" }, { "math_id": 7, "text": "C_p" }, { "math_id": 8, "text": "{\\partial H \\over \\partial t}" }, { "math_id": 9, "text": "p(z,t)= {\\mu_a \\beta F \\nu_s^2 \\over 2C_p}\\hat{h}(z,t)" }, { "math_id": 10, "text": "\\nu_s" }, { "math_id": 11, "text": "\\hat{h}(z,t)" } ]
https://en.wikipedia.org/wiki?curid=69432773
69434578
1 Samuel 8
First Book of Samuel chapter 1 Samuel 8 is the eighth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter records the request from the elders of Israel to Samuel for a king, part of a section comprising 1 Samuel 7–15 which records the rise of the monarchy in Israel and the account of the first years of King Saul. Text. This chapter was originally written in the Hebrew language. It is divided into 22 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 7, 9–14, 16–20. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. This chapter records the elders of Israel's request for a king and reports their persistence despite the warning from Samuel regarding the 'oppressive ways of kings'. One reason for the quest for a king was that Samuel's sons were unfit to succeed him (verses 3, 5), because they perverted justice in Beersheba, recalling the behavior of Eli's sons. A more explicit reason was that the people wished to be governed 'like other nations' (cf. Deuteronomy 17:14), with supposedly better military advantages (verse 20), rather than by a new line of judges. The antimonarchial stance was given in three different sections of this chapter. The narrative further harmonized two opposing views: (1) the monarchy was not approved by Yahweh, but (2) Yahweh himself was responsible for selecting the first kings of Israel. Samuel's sons (8:1–3). When Samuel was in his old age (verse 1), his sons, who were appointed as judges, became corrupt (verse 2). This draws a parallel to Samuel's mentor, Eli, whose sons became corrupt in Eli's old age (1 Samuel 2:22), leading to prophetic judgments on his family, Israel's defeat, and the loss of the ark to the Philistines (1 Samuel 4). In the case of Samuel, the corruption of his sons led to the elders of Israel requesting a king. "Now the name of his firstborn was Joel; and the name of his second, Abiah: they were judges in Beersheba." The demand for a king (8:4–22). The elders of Israel point to the corrupt ways of Samuel's sons and Samuel's old age as reasons to have a king like all 'other nations' (verse 5), contrary to God's declaration that Israel is 'above all the nations' (Deuteronomy 26:19) because they have YHWH as their king. This had once been brought out in Judges 8, when the people asked Gideon to rule over them, but Gideon declined by saying that "the Lord will rule over you" (Judges 8:23). Samuel was deeply offended by the request, as verse 6 states the request "displeased" him (in Hebrew: 'this thing is evil in Samuel's eye'), because the request in Hebrew was literally for "a king to "judge" them", thereby attacking his lifelong role (and that of his sons).
When Samuel 'prayed to the Lord' (that is, 'he laid the matter before the Lord in prayer'), God assured Samuel that the people did not reject Samuel personally but were rejecting God's kingship over them. God did not seem surprised or offended; instead, he quickly agreed to give the people a human king (verse 7), while explaining to Samuel that this behavior had been consistent ever since God delivered the people from Egypt in the Exodus, as the people tended to forsake God for false gods (verse 8). In fact, the Torah had already anticipated and prepared specific instructions for this occasion (Deuteronomy 17:14–15). "And the Lord said to Samuel, "Heed the voice of the people in all that they say to you; for they have not rejected you, but they have rejected Me, that I should not reign over them."" Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=69434578
69434896
Nicole De Grande-De Kimpe
Belgian mathematician Nicole Leonie Jean Marie De Grande-De Kimpe (7 September 1936 – 23 July 2008) was a Belgian mathematician known as a pioneer of p-adic functional analysis, and particularly for her work on locally convex topological vector spaces over fields with non-Archimedean valuations. Early life and education. De Grande-De Kimpe was born on 7 September 1936 in Antwerp, the only child of a dockworker living in Hoboken, Antwerp, where she grew up, went to high school, and learned to play the violin. She studied mathematics on a scholarship to Ghent University, finished her degree with a specialty in mathematical analysis there in 1958, and took a job as a high school mathematics teacher. In 1963 she began a research fellowship, funded by the National Center for Algebra and Topology, which she used to study under Guy Hirsch at the Free University of Brussels. Following that, from 1965 to 1970 she worked as a graduate assistant in analysis for Piet Wuyts at the Free University of Brussels. During this time, she married, had a daughter, and divorced. She completed her Ph.D. in 1970, supervised by Hans Freudenthal at Utrecht University. Later life and career. After a year of postdoctoral research with Freudenthal at Utrecht, De Grande-De Kimpe took a position in 1971 at the Vrije Universiteit Brussel, the Flemish half of the newly-split Free University of Brussels. There, she and Lucien Van Hamme organized a long-running seminar on formula_0-adic analysis beginning in 1978, and hosted an international conference on the subject in 1986. Recognizing that there was enough critical mass for a more specialized international conference on formula_0-adic functional analysis, De Grande-De Kimpe founded the series of International Conferences on formula_0-adic Functional Analysis, with cofounders Javier Martínez Maurica and José Manuel Bayod, beginning with the first conference in 1990 in Spain. She also served a term as head of the mathematics department at her university. After retiring to her home in Willebroek in 2001, she remained mathematically active and continued to teach the history of mathematics. She died on 23 July 2008. Recognition. In 2002, a festschrift was published as a special volume of the Bulletin of the Belgian Mathematical Society, Simon Stevin, honoring both De Grande-De Kimpe and Lucien Van Hamme, who retired at the same time. The 11th International Conference on formula_0-adic Functional Analysis, held in 2010, was dedicated to the memory of De Grande-De Kimpe. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=69434896
69441901
Maria Silvia Lucido
Italian mathematician (1963–2008) Maria Silvia Lucido (22 April 1963 – 4 March 2008) was an Italian mathematician specializing in group theory, and a researcher in mathematics at the University of Udine. Life, education and career. Lucido was originally from Vicenza, where she was born on 22 April 1963. After working for a bank and a travel agency, she entered mathematical study at the University of Padua in 1986, graduating in 1991. Already as an undergraduate she began research into the theory of finite groups, and wrote an undergraduate thesis on the subject under the supervision of Franco Napolitani. She completed a Ph.D. at Padua in 1996 with the dissertation "Il Prime Graph dei gruppi finiti" ["the prime graphs of finite groups"], supervised by Napolitani and co-advised by Carlo Casolo. After postdoctoral research at the University of Padua and as a Fulbright scholar at Michigan State University, she obtained a permanent position as a researcher at the University of Udine in 1999. She was killed in an automobile accident on 4 March 2008, survived by her husband and two sons. Research. Lucido was particularly known for her research on prime graphs of finite groups. These are undirected graphs that have a vertex for each prime factor of the order of a group, and that have an edge between two primes p and q whenever the given group has an element of order formula_0. Her work in this area included several results on the structure of these graphs. Lucido founded a series of annual summer schools on the theory of finite groups, held in Venice and sponsored by the University of Udine, beginning in 2004. After her death, the three subsequent offerings of the summer schools in 2010, 2011, and 2013 were dedicated in her honor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "pq" } ]
https://en.wikipedia.org/wiki?curid=69441901
69442178
Cubic equations of state
Class of thermodynamic models Cubic equations of state are a specific class of thermodynamic models for modeling the pressure of a gas as a function of temperature and density and which can be rewritten as a cubic function of the molar volume. Equations of state are generally applied in the fields of physical chemistry and chemical engineering, particularly in the modeling of vapor–liquid equilibrium and chemical engineering process design. Van der Waals equation of state. The van der Waals equation of state may be written as formula_0 where formula_1 is the absolute temperature, formula_2 is the pressure, formula_3 is the molar volume and formula_4 is the universal gas constant. Note that formula_5, where formula_6 is the volume, and formula_7, where formula_8 is the number of moles, formula_9 is the number of particles, and formula_10 is the Avogadro constant. These definitions apply to all equations of state below as well. Proposed in 1873, the van der Waals equation of state was one of the first to perform markedly better than the ideal gas law. In this equation, usually formula_11 is called the attraction parameter and formula_12 the repulsion parameter (or the effective molecular volume). While the equation is definitely superior to the ideal gas law and does predict the formation of a liquid phase, the agreement with experimental data for vapor-liquid equilibria is limited. The van der Waals equation is commonly referenced in textbooks and papers for historical and other reasons, but since its development other equations of only slightly greater complexity have been developed, many of which are far more accurate. The van der Waals equation may be considered as an ideal gas law which has been "improved" by the inclusion of two non-ideal contributions to the equation. Consider the van der Waals equation in the form formula_13 as compared to the ideal gas equation formula_14 The form of the van der Waals equation can be motivated as follows: because the molecules themselves occupy a finite volume, the volume available for molecular motion is reduced from formula_3 to formula_15; and because attractive forces act between pairs of molecules, the pressure is reduced by an amount proportional to the square of the molar density formula_16, that is, proportional to formula_17, which is the term containing formula_11. The substance-specific constants formula_11 and formula_12 can be calculated from the critical properties formula_18 and formula_19 (noting that formula_19 is the molar volume at the critical point and formula_18 is the critical pressure) as: formula_20 formula_21 Expressions for formula_22 written as functions of formula_23 may also be obtained and are often used to parameterize the equation because the critical temperature and pressure are readily accessible to experiment. They are formula_24 formula_25 With the reduced state variables, i.e. formula_26, formula_27 and formula_28, the reduced form of the van der Waals equation can be formulated: formula_29 The benefit of this form is that for given formula_30 and formula_31, the reduced volume of the liquid and gas can be calculated directly using Cardano's method for the reduced cubic form: formula_32 For formula_33 and formula_34, the system may be in a state of vapor–liquid equilibrium. In that situation, the reduced cubic equation of state yields 3 solutions. The largest and the smallest solutions are the gas and liquid reduced volumes, respectively. In this situation, the Maxwell construction is sometimes used to model the pressure as a function of molar volume. The compressibility factor formula_35 is often used to characterize non-ideal behavior. For the van der Waals equation in reduced form, this becomes formula_36 At the critical point, formula_37.
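As a brief illustration of the reduced-form calculation described above, the sketch below finds the real roots of the reduced cubic for a subcritical state and reads off the liquid (smallest) and vapor (largest) reduced volumes; a numerical root finder is used in place of Cardano's explicit formulas. The chosen values of formula_30 and formula_31 are arbitrary illustrative inputs.

```python
import numpy as np

def reduced_volumes(T_r, P_r):
    """Real roots of V_r^3 - (1/3 + 8*T_r/(3*P_r))*V_r^2 + (3/P_r)*V_r - 1/P_r = 0."""
    coeffs = [1.0, -(1.0 / 3.0 + 8.0 * T_r / (3.0 * P_r)), 3.0 / P_r, -1.0 / P_r]
    roots = np.roots(coeffs)
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

T_r, P_r = 0.9, 0.6
vols = reduced_volumes(T_r, P_r)
if len(vols) == 3:
    v_liq, v_vap = vols[0], vols[-1]     # the middle root has no physical meaning
    # Compressibility factor in reduced variables: Z = (3/8) * P_r * V_r / T_r.
    print(f"liquid: V_r = {v_liq:.4f}, Z = {3 * P_r * v_liq / (8 * T_r):.4f}")
    print(f"vapor:  V_r = {v_vap:.4f}, Z = {3 * P_r * v_vap / (8 * T_r):.4f}")
else:
    print("only one real root at this state:", vols)
```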
Redlich–Kwong equation of state. Introduced in 1949, the Redlich–Kwong equation of state was considered to be a notable improvement over the van der Waals equation. It is still of interest primarily due to its relatively simple form. While superior to the van der Waals equation in some respects, it performs poorly with respect to the liquid phase and thus cannot be used for accurately calculating vapor–liquid equilibria. However, it can be used in conjunction with separate liquid-phase correlations for this purpose. The equation is given below, as are relationships between its parameters and the critical constants: formula_38 Another, equivalent form of the Redlich–Kwong equation is the expression of the model's compressibility factor: formula_39 The Redlich–Kwong equation is adequate for calculation of gas phase properties when the reduced pressure (defined in the previous section) is less than about one-half of the reduced temperature: formula_40 The Redlich–Kwong equation is consistent with the theorem of corresponding states. When the equation is expressed in reduced form, an identical equation is obtained for all gases: formula_41 where formula_42 is: formula_43 In addition, the compressibility factor at the critical point is the same for every substance: formula_44 This is an improvement over the van der Waals equation's prediction of the critical compressibility factor, which is formula_45. Typical experimental values are formula_46 (carbon dioxide), formula_47 (water), and formula_48 (nitrogen). Soave modification of Redlich–Kwong. A modified form of the Redlich–Kwong equation was proposed by Soave. It takes the form formula_49 formula_50 formula_51 formula_52 formula_53 formula_54 formula_55 where "ω" is the acentric factor for the species. The formulation for formula_56 above is actually due to Graboski and Daubert. The original formulation from Soave is: formula_57 for hydrogen: formula_58 By substituting the variables in the reduced form and the compressibility factor at the critical point formula_59 we obtain formula_60 formula_61 thus leading to formula_62 Thus, in reduced form the Soave–Redlich–Kwong equation depends on the substance's "ω" and formula_63, in contrast to both the VdW and RK equations, which are consistent with the theorem of corresponding states and whose reduced form is the same for all substances. The reduced SRK equation is: formula_64 We can also write it in the polynomial form, with: formula_65 formula_66 In terms of the compressibility factor, we have: formula_67. This equation may have up to three roots. The maximal root of the cubic equation generally corresponds to a vapor state, while the minimal root is for a liquid state. This should be kept in mind when using cubic equations in calculations, e.g., of vapor-liquid equilibrium. In 1972 G. Soave replaced the formula_68 term of the Redlich–Kwong equation with a function "α"("T","ω") involving the temperature and the acentric factor (the resulting equation is also known as the Soave–Redlich–Kwong equation of state; SRK EOS). The "α" function was devised to fit the vapor pressure data of hydrocarbons and the equation does fairly well for these materials. Note especially that this replacement changes the definition of "a" slightly, as the formula_69 is now to the second power.
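The polynomial form above lends itself to a direct numerical solution. The sketch below assembles A and B from the critical constants and acentric factor (using the Graboski–Daubert formula_56 expression given above) and returns the real roots of the compressibility-factor cubic, with the smallest root taken as liquid-like and the largest as vapor-like. The propane-like critical constants and the chosen temperature and pressure are illustrative values only.

```python
import numpy as np

R = 8.314462618  # J/(mol K)

def srk_compressibility(T, P, Tc, Pc, omega):
    """Real roots of Z^3 - Z^2 + (A - B - B^2) Z - A B = 0 for the SRK EOS."""
    a = 0.42748 * R**2 * Tc**2 / Pc
    b = 0.08664 * R * Tc / Pc
    m = 0.48508 + 1.55171 * omega - 0.15613 * omega**2
    alpha = (1.0 + m * (1.0 - np.sqrt(T / Tc)))**2
    A = a * alpha * P / (R * T)**2
    B = b * P / (R * T)
    roots = np.roots([1.0, -1.0, A - B - B**2, -A * B])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

# Propane-like critical constants (illustrative); a state near saturation at 300 K.
Z = srk_compressibility(T=300.0, P=10.0e5, Tc=369.8, Pc=42.48e5, omega=0.152)
print(Z)   # smallest root: liquid-like, largest root: vapor-like
```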
Volume translation of Peneloux et al. (1982). The SRK EOS may be written as formula_70 where formula_71 where formula_56 and the other parts of the SRK EOS are defined in the section on the SRK EOS above. A downside of the SRK EOS, and of other cubic EOS, is that the liquid molar volume is significantly less accurate than the gas molar volume. Peneloux et al. (1982) proposed a simple correction for this by introducing a volume translation formula_72 where formula_73 is an additional fluid component parameter that translates the molar volume slightly. On the liquid branch of the EOS, a small change in molar volume corresponds to a large change in pressure. On the gas branch of the EOS, a small change in molar volume corresponds to a much smaller change in pressure than for the liquid branch. Thus, the perturbation of the molar gas volume is small. Unfortunately, there are two versions that occur in science and industry. In the first version only formula_74 is translated, and the EOS becomes formula_75 In the second version both formula_74 and formula_76 are translated, or the translation of formula_74 is followed by a renaming of the composite parameter "b" − "c". This gives formula_77 The "c"-parameter of a fluid mixture is calculated by formula_78 The "c"-parameter of the individual fluid components in a petroleum gas and oil can be estimated by the correlation formula_79 where the Rackett compressibility factor formula_80 can be estimated by formula_81 A nice feature of the volume translation method of Peneloux et al. (1982) is that it does not affect the vapor–liquid equilibrium calculations. This method of volume translation can also be applied to other cubic EOSs if the "c"-parameter correlation is adjusted to match the selected EOS. Peng–Robinson equation of state. The Peng–Robinson equation of state (PR EOS) was developed in 1976 at The University of Alberta by Ding-Yu Peng and Donald Robinson in order to satisfy the following goals: the parameters should be expressible in terms of the critical properties and the acentric factor; the model should provide reasonable accuracy near the critical point, particularly for calculations of the compressibility factor and liquid density; the mixing rules should not employ more than a single binary interaction parameter, which should be independent of temperature, pressure and composition; and the equation should be applicable to all calculations of all fluid properties in natural gas processes. The equation is given as follows: formula_82 formula_83 formula_84 formula_85 formula_86 formula_87 In polynomial form: formula_88 formula_89 formula_90 For the most part the Peng–Robinson equation exhibits performance similar to the Soave equation, although it is generally superior in predicting the liquid densities of many materials, especially nonpolar ones. Detailed performance of the original Peng-Robinson equation has been reported for density, thermal properties, and phase equilibria. Briefly, the original form exhibits deviations in vapor pressure and phase equilibria that are roughly three times as large as the updated implementations. The departure functions of the Peng–Robinson equation are given in a separate article. The analytic values of its characteristic constants are: formula_91 formula_92 formula_93 Peng–Robinson–Stryjek–Vera equations of state. PRSV1. A modification to the attraction term in the Peng–Robinson equation of state published by Stryjek and Vera in 1986 (PRSV) significantly improved the model's accuracy by introducing an adjustable pure component parameter and by modifying the polynomial fit of the acentric factor. The modification is: formula_94 where formula_95 is an adjustable pure component parameter. Stryjek and Vera published pure component parameters for many compounds of industrial interest in their original journal article. At reduced temperatures above 0.7, they recommend setting formula_96 and simply using formula_97. For alcohols and water the value of formula_98 may be used up to the critical temperature and set to zero at higher temperatures. PRSV2.
A subsequent modification published in 1986 (PRSV2) further improved the model's accuracy by introducing two additional pure component parameters to the previous attraction term modification. The modification is: formula_99 where formula_95, formula_100, and formula_101 are adjustable pure component parameters. PRSV2 is particularly advantageous for VLE calculations. While PRSV1 does offer an advantage over the Peng–Robinson model for describing thermodynamic behavior, it is still not accurate enough, in general, for phase equilibrium calculations. The highly non-linear behavior of phase-equilibrium calculation methods tends to amplify what would otherwise be acceptably small errors. It is therefore recommended that PRSV2 be used for equilibrium calculations when applying these models to a design. However, once the equilibrium state has been determined, the phase-specific thermodynamic values at equilibrium may be determined by one of several simpler models with a reasonable degree of accuracy. One thing to note is that in the PRSV equation, the parameter fit is done in a particular temperature range which is usually below the critical temperature. Above the critical temperature, the PRSV alpha function tends to diverge and become arbitrarily large instead of tending towards 0. Because of this, alternate equations for alpha should be employed above the critical point. This is especially important for systems containing hydrogen, which is often found at temperatures far above its critical point. Several alternate formulations have been proposed. Some well-known ones are by Twu et al. and by Mathias and Copeman. An extensive treatment of over 1700 compounds using the Twu method has been reported by Jaubert and coworkers. Detailed performance of the updated Peng-Robinson equation by Jaubert and coworkers has been reported for density, thermal properties, and phase equilibria. Briefly, the updated form exhibits deviations in vapor pressure and phase equilibria that are roughly a third as large as the original implementation. Peng–Robinson–Babalola-Susu equation of state (PRBS). Babalola and Susu modified the Peng–Robinson equation of state as: formula_102 The attractive force parameter ‘a’ was considered to be a constant with respect to pressure in the Peng–Robinson equation of state. In the modification, the parameter ‘a’ was treated as a variable with respect to pressure for multicomponent, multi-phase, high-density reservoir systems, in order to improve accuracy in the prediction of properties of complex reservoir fluids for PVT modeling. The variation was represented with a linear equation, where a1 and a2 were the slope and the intercept, respectively, of the straight line obtained when values of the parameter ‘a’ are plotted against pressure. This modification increases the accuracy of the Peng–Robinson equation of state for heavier fluids, particularly at high pressure ranges (&gt;30 MPa), and eliminates the need for tuning the original Peng–Robinson equation of state; this tuning is captured inherently in the modification. The Peng-Robinson-Babalola-Susu (PRBS) equation of state (EoS) was developed in 2005, and for about two decades it has been applied to numerous reservoir field data at varied temperature (T) and pressure (P) conditions, where it has been shown to rank among the few promising EoS for accurate prediction of reservoir fluid properties, especially for the more challenging ultra-deep reservoirs at high-temperature, high-pressure (HTHP) conditions.
These works have been published in reputable journals. While the widely used Peng-Robinson (PR) EoS of 1976 can predict fluid properties of conventional reservoirs with good accuracy up to pressures of about 27 MPa (4,000 psi), it fails as pressure increases further; the newer Peng-Robinson-Babalola-Susu (PRBS) EoS can accurately model the PVT behavior of ultra-deep reservoir complex fluid systems at very high pressures of up to 120 MPa (17,500 psi). Elliott–Suresh–Donohue equation of state. The Elliott–Suresh–Donohue (ESD) equation of state was proposed in 1990. The equation corrects the inaccurate van der Waals repulsive term that is also applied in the Peng–Robinson EOS. The attractive term includes a contribution that relates to the second virial coefficient of square-well spheres, and also shares some features of the Twu temperature dependence. The EOS accounts for the effect of the shape of any molecule and can be directly extended to polymers with molecular parameters characterized in terms of solubility parameter and liquid volume instead of using critical properties (as shown here). The EOS itself was developed through comparisons with computer simulations and should capture the essential physics of size, shape, and hydrogen bonding as inferred from straight chain molecules (like "n"-alkanes). formula_103 where: formula_104 formula_105 and formula_73 is a "shape factor", with formula_106 for spherical molecules. For non-spherical molecules, the following relation between the shape factor and the acentric factor is suggested: formula_107. The reduced number density formula_108 is defined as formula_109, where formula_12 is the characteristic size parameter [cm3/mol], and formula_110 is the molar density [mol/cm3]. The characteristic size parameter is related to formula_73 through formula_111 where formula_112 formula_113 formula_114 formula_115 formula_116 The shape parameter formula_117 appearing in the attraction term and the term formula_118 are given by formula_119 (and is hence also equal to 1 for spherical molecules). formula_120 where formula_121 is the depth of the square-well potential and is given by formula_122 formula_123, formula_124, formula_125 and formula_126 are constants in the equation of state: formula_127, formula_128, formula_129, formula_130 The model can be extended to associating components and mixtures with non-associating components. Details are in the paper by J.R. Elliott, Jr. "et al." (1990). Noting that formula_131 = 1.900, formula_132 can be rewritten in the SAFT form as: formula_133 If preferred, the formula_117 can be replaced by formula_134 in SAFT notation and the ESD EOS can be written: formula_135 In this form, SAFT's segmental perspective is evident and all the results of Michael Wertheim are directly applicable and relatively succinct. In SAFT's segmental perspective, each molecule is conceived as comprising "m" spherical segments floating in space with their own spherical interactions, but then corrected for bonding into a tangent sphere chain by the ("m" − 1) term. When "m" is not an integer, it is simply considered as an "effective" number of tangent sphere segments. Solving the equations in Wertheim's theory can be complicated, but simplifications can make their implementation less daunting. Briefly, a few extra steps are needed to compute formula_136 given density and temperature.
For example, when the number of hydrogen bonding donors is equal to the number of acceptors, the ESD equation becomes: formula_137 where: formula_138 formula_10 is the Avogadro constant, formula_139 and formula_140 are stored input parameters representing the volume and energy of hydrogen bonding. Typically, formula_141 and formula_142 are stored. formula_143 is the number of acceptors (equal to the number of donors for this example). For example, formula_143 = 1 for alcohols like methanol and ethanol. formula_143 = 2 for water. formula_143 = degree of polymerization for polyvinylphenol. The density and temperature are thus used to calculate formula_144, and formula_144 is then used to calculate the other quantities. Technically, the ESD equation is no longer cubic when the association term is included, but no artifacts are introduced so there are only three roots in density. The extension to efficiently treat any number of electron acceptors (acids) and donors (bases), including mixtures of self-associating, cross-associating, and non-associating compounds, has been presented here. Detailed performance of the ESD equation has been reported for density, thermal properties, and phase equilibria. Briefly, the ESD equation exhibits deviations in vapor pressure and vapor-liquid equilibria that are roughly twice as large as the Peng-Robinson form as updated by Jaubert and coworkers, but deviations in liquid-liquid equilibria are roughly 40% smaller. Cubic-plus-association. The cubic-plus-association (CPA) equation of state combines the Soave–Redlich–Kwong equation with the association term from SAFT, based on Chapman's extensions and simplifications of a theory of associating molecules due to Michael Wertheim. The development of the equation began in 1995 as a research project that was funded by Shell, and it was published in 1996. formula_145 In the association term, formula_146 is the mole fraction of molecules not bonded at site A. Cubic-plus-chain equation of state. The cubic-plus-chain (CPC) equation of state hybridizes the classical cubic equation of state with the SAFT chain term. The addition of the chain term allows the model to capture the physics of both short-chain and long-chain non-associating components, ranging from alkanes to polymers. The CPC monomer term is not restricted to one classical cubic EOS form; instead, many forms can be used within the same framework. The cubic-plus-chain (CPC) equation of state is written in terms of the reduced residual Helmholtz energy (formula_147) as: formula_148 where formula_149 is the residual Helmholtz energy, formula_134 is the chain length, and "rep" and "att" are the monomer repulsive and attractive contributions of the cubic equation of state, respectively. The "chain" term accounts for the monomer bead-bonding contribution from the SAFT equation of state. Using Redlich−Kwong (RK) for the monomer term, CPC can be written as: formula_150 where A is the molecular interaction energy parameter, B is the co-volume parameter, formula_151 is the mole-average chain length, "g(β)" is the radial distribution function (RDF) evaluated at contact, and "β" is the reduced volume. The CPC model combines simplicity and speed relative to other, more complex models used to model polymers. Sisco et al. applied the CPC equation of state to model different well-defined systems and polymer mixtures. They analyzed different factors including elevated pressure, temperature, solvent types, polydispersity, etc.
The CPC model proved capable of modeling different systems, as verified by comparing the results with experimental data. Alajmi et al. incorporated short-range soft repulsion into the CPC framework to enhance vapor pressure and liquid density predictions. They provided a database for more than 50 components from different chemical families, including "n"-alkanes, alkenes, branched alkanes, cycloalkanes, benzene derivatives, gases, etc. This CPC version uses a temperature-dependent co-volume parameter based on perturbation theory to describe the short-range soft repulsion between molecules. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\left(p + \\frac{a}{V_\\text{m}^2}\\right)\\left(V_\\text{m} - b\\right) = RT" }, { "math_id": 1, "text": "T" }, { "math_id": 2, "text": "p" }, { "math_id": 3, "text": "V_\\text{m}" }, { "math_id": 4, "text": "R" }, { "math_id": 5, "text": "V_\\text{m} = V / n" }, { "math_id": 6, "text": "V" }, { "math_id": 7, "text": "n=N/N_\\text{A}" }, { "math_id": 8, "text": "n" }, { "math_id": 9, "text": "N" }, { "math_id": 10, "text": "N_\\text{A}" }, { "math_id": 11, "text": "a" }, { "math_id": 12, "text": "b" }, { "math_id": 13, "text": "p = \\frac{RT}{V_\\text{m}-b} - \\frac{a}{V_\\text{m}^2} " }, { "math_id": 14, "text": "p = \\frac{RT}{V_\\text{m}} " }, { "math_id": 15, "text": "V_\\text{m} - b" }, { "math_id": 16, "text": "\\rho" }, { "math_id": 17, "text": "\\rho^2" }, { "math_id": 18, "text": "p_\\text{c}" }, { "math_id": 19, "text": "V_\\text{c}" }, { "math_id": 20, "text": "a = 3 p_\\text{c} V_\\text{c}^2" }, { "math_id": 21, "text": "b = \\frac{V_\\text{c}}{3}." }, { "math_id": 22, "text": "(a,b)" }, { "math_id": 23, "text": "(T_\\text{c},p_\\text{c})" }, { "math_id": 24, "text": "a = \\frac{27(R T_\\text{c})^2}{64p_\\text{c}}" }, { "math_id": 25, "text": "b = \\frac{R T_\\text{c}}{8p_\\text{c}}." }, { "math_id": 26, "text": "V_\\text{r}=V_\\text{m}/V_\\text{c}" }, { "math_id": 27, "text": "P_\\text{r}=p/p_\\text{c}" }, { "math_id": 28, "text": "T_\\text{r}=T/T_\\text{c}" }, { "math_id": 29, "text": "\\left(P_\\text{r} + \\frac{3}{V_\\text{r}^2}\\right)\\left(3V_\\text{r} - 1\\right) = 8T_\\text{r}" }, { "math_id": 30, "text": "T_\\text{r}" }, { "math_id": 31, "text": "P_\\text{r}" }, { "math_id": 32, "text": "V_\\text{r}^3 - \\left(\\frac{1}{3} + \\frac{8T_\\text{r}}{3P_\\text{r}}\\right)V_\\text{r}^2 + \\frac{3V_\\text{r}}{P_\\text{r}} - \\frac{1}{P_\\text{r}} = 0" }, { "math_id": 33, "text": "P_\\text{r}<1" }, { "math_id": 34, "text": "T_\\text{r}<1" }, { "math_id": 35, "text": "Z=PV_\\text{m}/RT" }, { "math_id": 36, "text": "Z = \\frac{V_\\text{r}}{V_\\text{r}-\\frac{1}{3}} - \\frac{9}{8 V_\\text{r} T_\\text{r}} " }, { "math_id": 37, "text": " Z_\\text{c} = 3/8 = 0.375 " }, { "math_id": 38, "text": "\\begin{align}\n p &= \\frac{R\\,T}{V_\\text{m} - b} - \\frac{a}{\\sqrt{T}\\,V_\\text{m}\\left(V_\\text{m} + b\\right)} \\\\[3pt]\n a &= \\frac{\\Omega_a\\,R^2 T_\\text{c}^\\frac{5}{2}}{p_\\text{c}} \\approx 0.42748\\frac{R^2\\,T_\\text{c}^\\frac{5}{2}}{P_\\text{c}} \\\\[3pt]\n b &= \\frac{\\Omega_b\\,R T_\\text{c}}{P_\\text{c}} \\approx 0.08664\\frac{R\\,T_\\text{c}}{p_\\text{c}} \\\\[3pt]\n \\Omega_a &= \\left[9\\left(2^{1/3}-1\\right)\\right]^{-1} \\approx 0.42748 \\\\[3pt]\n \\Omega_b &= \\frac{2^{1/3}-1}{3} \\approx 0.08664\n\\end{align}" }, { "math_id": 39, "text": "Z=\\frac{p V_\\text{m}}{RT} = \\frac{V_\\text{m}}{V_\\text{m} - b} - \\frac{a}{R T^{3/2} \\left(V_\\text{m} + b\\right)} " }, { "math_id": 40, "text": "P_\\text{r} < \\frac{T}{2T_\\text{c}}." 
}, { "math_id": 41, "text": "P_\\text{r} = \\frac{3 T_\\text{r}}{V_\\text{r} - b'} - \\frac{1}{b' \\sqrt{T_\\text{r}} V_\\text{r} \\left(V_\\text{r}+b'\\right)} " }, { "math_id": 42, "text": "b'" }, { "math_id": 43, "text": "b' = 2^{1/3}-1 \\approx 0.25992" }, { "math_id": 44, "text": "Z_\\text{c}=\\frac{p_\\text{c} V_\\text{c}}{R T_\\text{c}}=1/3 \\approx 0.33333" }, { "math_id": 45, "text": "Z_\\text{c} = 3/8 = 0.375" }, { "math_id": 46, "text": "Z_\\text{c} = 0.274" }, { "math_id": 47, "text": "Z_\\text{c} = 0.235" }, { "math_id": 48, "text": "Z_\\text{c} = 0.29" }, { "math_id": 49, "text": "p = \\frac{R\\,T}{V_\\text{m}-b} - \\frac{a \\alpha}{V_\\text{m}\\left(V_\\text{m}+b\\right)}" }, { "math_id": 50, "text": "a = \\frac{\\Omega_a\\,R^2 T_\\text{c}^2}{P_\\text{c}} = \\frac{0.42748\\,R^2 T_\\text{c}^2}{P_\\text{c}}" }, { "math_id": 51, "text": "b = \\frac{\\Omega_b\\,R T_\\text{c}}{P_\\text{c}} = \\frac{0.08664\\,R T_\\text{c}}{P_\\text{c}}" }, { "math_id": 52, "text": "\\alpha = \\left(1 + \\left(0.48508 + 1.55171\\,\\omega - 0.15613\\,\\omega^2\\right) \\left(1-T_\\text{r}^{0.5}\\right)\\right)^2" }, { "math_id": 53, "text": "T_\\text{r} = \\frac{T}{T_\\text{c}}" }, { "math_id": 54, "text": "\\Omega_a = \\left[9\\left(2^{1/3}-1\\right)\\right]^{-1} \\approx 0.42748" }, { "math_id": 55, "text": "\\Omega_b = \\frac{2^{1/3}-1}{3} \\approx 0.08664" }, { "math_id": 56, "text": "\\alpha" }, { "math_id": 57, "text": "\\alpha = \\left(1 + \\left(0.480 + 1.574\\,\\omega - 0.176\\,\\omega^2\\right) \\left(1-T_\\text{r}^{0.5}\\right)\\right)^2" }, { "math_id": 58, "text": "\\alpha = 1.202 \\exp\\left(-0.30288\\,T_\\text{r}\\right)." }, { "math_id": 59, "text": "\\{p_\\text{r}=p/P_\\text{c}, T_\\text{r}=T/T_\\text{c}, V_\\text{r}=V_\\text{m}/V_\\text{c}, Z_\\text{c}=\\frac{P_\\text{c} V_\\text{c}}{R T_\\text{c}}\\}" }, { "math_id": 60, "text": "p_\\text{r} P_\\text{c} = \\frac{R\\,T_\\text{r} T_\\text{c}}{V_\\text{r} V_\\text{c}-b} - \\frac{a \\alpha\\left(\\omega, T_\\text{r}\\right)}{V_\\text{r} V_\\text{c}\\left(V_\\text{r} V_\\text{c+}b\\right)} \n = \\frac{R\\,T_\\text{r} T_\\text{c}}{V_\\text{r} V_\\text{c}-\\frac{\\Omega_b\\,R T_\\text{c}}{P_\\text{c}}} - \\frac{\\frac{\\Omega_a\\,R^2 T_\\text{c}^2}{P_\\text{c}} \\alpha\\left(\\omega, T_\\text{r}\\right)}{V_\\text{r} V_\\text{c}\\left(V_\\text{r} V_\\text{c}+\\frac{\\Omega_b\\,R T_\\text{c}}{P_\\text{c}}\\right)} =" }, { "math_id": 61, "text": "= \\frac{R\\,T_\\text{r} T_\\text{c}}{V_\\text{c}\\left(V_\\text{r}-\\frac{\\Omega_b\\,R T_\\text{c}}{P_\\text{c} V_\\text{c}}\\right)} - \\frac{\\frac{\\Omega_a\\,R^2 T_\\text{c}^2}{P_\\text{c}} \\alpha\\left(\\omega, T_\\text{r}\\right)}{V_\\text{r} V_\\text{c}^2\\left(V_\\text{r}+\\frac{\\Omega_b\\,R T_\\text{c}}{P_\\text{c} V_\\text{c}}\\right)}\n = \\frac{R\\,T_\\text{r} T_\\text{c}}{V_\\text{c}\\left(V_\\text{r}-\\frac{\\Omega_b}{Z_\\text{c}}\\right)} - \\frac{\\frac{\\Omega_a\\,R^2 T_\\text{c}^2}{P_\\text{c}} \\alpha\\left(\\omega, T_\\text{r}\\right)}{V_\\text{r} V_\\text{c}^2\\left(V_\\text{r}+\\frac{\\Omega_b}{Z_\\text{c}}\\right)}" }, { "math_id": 62, "text": "p_\\text{r} = \\frac{R\\,T_\\text{r} T_\\text{c}}{P_\\text{c} V_\\text{c}\\left(V_\\text{r}-\\frac{\\Omega_b}{Z_\\text{c}}\\right)} - \\frac{\\frac{\\Omega_a\\,R^2 T_\\text{c}^2}{P_\\text{c}^2} \\alpha\\left(\\omega, T_\\text{r}\\right)}{V_\\text{r} V_\\text{c}^2\\left(V_\\text{r}+\\frac{\\Omega_b}{Z_\\text{c}}\\right)} = \n \\frac{T_\\text{r}}{Z_\\text{c}\\left(V_\\text{r}-\\frac{\\Omega_b}{Z_\\text{c}}\\right)} - 
\\frac{\\frac{\\Omega_a}{Z_\\text{c}^2} \\alpha\\left(\\omega, T_\\text{r}\\right)}{V_\\text{r} \\left(V_\\text{r}+\\frac{\\Omega_b}{Z_\\text{c}}\\right)} " }, { "math_id": 63, "text": "Z_\\text{c}" }, { "math_id": 64, "text": "p_\\text{r} = \\frac{T_\\text{r}}{Z_\\text{c}\\left(V_\\text{r}-\\frac{\\Omega_b}{Z_\\text{c}}\\right)} - \\frac{\\frac{\\Omega_a}{Z_\\text{c}^2} \\alpha\\left(\\omega, T_\\text{r}\\right)}{V_\\text{r} \\left(V_\\text{r}+\\frac{\\Omega_b}{Z_\\text{c}}\\right)} " }, { "math_id": 65, "text": "A = \\frac{a \\alpha P}{R^2 T^2}" }, { "math_id": 66, "text": "B = \\frac{bP}{RT}" }, { "math_id": 67, "text": "0 = Z^3-Z^2+Z\\left(A-B-B^2\\right) - AB" }, { "math_id": 68, "text": "\\frac{1}{\\sqrt{T}}" }, { "math_id": 69, "text": "T_\\text{c}" }, { "math_id": 70, "text": "p = \\frac{R\\,T}{V_{m,\\text{SRK}} - b} - \\frac{a}{V_{m,\\text{SRK}} \\left(V_{m,\\text{SRK}} + b\\right)}" }, { "math_id": 71, "text": "\\begin{align}\n a &= a_\\text{c}\\, \\alpha \\\\\n a_\\text{c} &\\approx 0.42747\\frac{R^2\\,T_\\text{c}^2}{P_\\text{c}} \\\\\n b &\\approx 0.08664\\frac{R\\,T_\\text{c}}{P_\\text{c}}\n\\end{align}" }, { "math_id": 72, "text": "V_{\\text{m},\\text{SRK}} = V_\\text{m} + c" }, { "math_id": 73, "text": "c" }, { "math_id": 74, "text": "V_{\\text{m},\\text{SRK}}" }, { "math_id": 75, "text": "p = \\frac{R\\,T}{V_\\text{m} + c - b} - \\frac{a}{\\left(V_\\text{m} + c\\right) \\left(V_\\text{m} + c + b\\right)}" }, { "math_id": 76, "text": "b_\\text{SRK}" }, { "math_id": 77, "text": "\\begin{align}\n b_\\text{SRK} &= b + c \\quad \\text{or} \\quad b - c \\curvearrowright b \\\\\n p &= \\frac{R\\,T}{V_\\text{m} - b} - \\frac{a}{\\left(V_\\text{m} + c\\right) \\left(V_\\text{m} + 2c + b\\right)}\n\\end{align}" }, { "math_id": 78, "text": "c = \\sum_{i=1}^n z_i c_i" }, { "math_id": 79, "text": "c_i \\approx 0.40768\\ \\frac{RT_{ci}}{P_{ci}} \\left(0.29441 - Z_{\\text{RA},i}\\right) " }, { "math_id": 80, "text": "Z_{\\text{RA},i}" }, { "math_id": 81, "text": "Z_{\\text{RA},i} \\approx 0.29056 - 0.08775\\ \\omega_i" }, { "math_id": 82, "text": "p = \\frac{R\\,T}{V_\\text{m} - b} - \\frac{a\\,\\alpha}{V_\\text{m}^2 + 2bV_\\text{m} - b^2}" }, { "math_id": 83, "text": "a = \\Omega_a \\frac{R^2\\,T_\\text{c}^2}{p_\\text{c}}; \\Omega_a = \\frac{8+40\\eta_c}{49-37\\eta_c} \\approx 0.45724 \n" }, { "math_id": 84, "text": "b = \\Omega_b \\frac{R\\,T_\\text{c}}{p_\\text{c}}; \\Omega_b = \\frac{\\eta_c}{3+\\eta_c} \\approx 0.07780 " }, { "math_id": 85, "text": "\\eta_c = [1+(4-\\sqrt{8})^{1/3}+(4+\\sqrt{8})^{1/3}]^{-1}" }, { "math_id": 86, "text": "\\alpha = \\left(1 + \\kappa \\left(1 - \\sqrt{T_\\text{r}}\\right)\\right)^2; T_\\text{r} = \\frac{T}{T_\\text{c}}" }, { "math_id": 87, "text": "\\kappa \\approx 0.37464 + 1.54226\\omega - 0.26992\\omega^2" }, { "math_id": 88, "text": "A = \\frac{\\alpha a p}{R^2\\,T^2}" }, { "math_id": 89, "text": "B = \\frac{bp}{RT}" }, { "math_id": 90, "text": "Z^3 - (1 - B)Z^2 + \\left(A - 2B - 3B^2\\right)Z - \\left(AB - B^2 - B^3\\right) = 0" }, { "math_id": 91, "text": "Z_\\text{c} = \\frac{1}{32} \\left( 11 - 2\\sqrt{7} \\sinh\\left(\\frac{1}{3} \\operatorname{arsinh}\\left(\\frac{13}{7 \\sqrt{7}}\\right)\\right) \\right) \\approx 0.307401" }, { "math_id": 92, "text": "b' = \\frac{b}{V_{\\text{m},\\text{c}}} = \\frac{1}{3} \\left( \\sqrt{8} \\sinh\\left(\\frac{1}{3} \\operatorname{arsinh}\\left(\\sqrt{8}\\right) \\right) - 1 \\right) \\approx 0.253077 \\approx \\frac{0.07780}{Z_\\text{c}} " }, { "math_id": 93, "text": " \\frac{P_\\text{c} 
V_{\\text{m},\\text{c}}^2}{a\\,b'} = \\frac{3}{8} \\left( 1 + \\cosh\\left(\\frac{1}{3} \\operatorname{arcosh}(3) \\right) \\right) \\approx 0.816619 \\approx \\frac{Z_\\text{c}^2}{0.45724 \\, b'} " }, { "math_id": 94, "text": "\\begin{align}\n \\kappa &= \\kappa_0 + \\kappa_1 \\left(1 + T_\\text{r}^\\frac{1}{2}\\right) \\left(0.7 - T_\\text{r}\\right) \\\\\n \\kappa_0 &= 0.378893+1.4897153\\,\\omega - 0.17131848\\,\\omega^2 + 0.0196554\\,\\omega^3\n\\end{align}" }, { "math_id": 95, "text": "\\kappa_1" }, { "math_id": 96, "text": "\\kappa_1 = 0 " }, { "math_id": 97, "text": "\\kappa = \\kappa_0 " }, { "math_id": 98, "text": " \\kappa_1 " }, { "math_id": 99, "text": "\\begin{align}\n \\kappa &= \\kappa_0 + \\left[\\kappa_1 + \\kappa_2\\left(\\kappa_3 - T_\\text{r}\\right)\\left(1 - T_\\text{r}^\\frac{1}{2}\\right)\\right]\\left(1 + T_\\text{r}^\\frac{1}{2}\\right) \\left(0.7 - T_\\text{r}\\right) \\\\\n \\kappa_0 &= 0.378893 + 1.4897153\\,\\omega - 0.17131848\\,\\omega^2 + 0.0196554\\,\\omega^3\n\\end{align}" }, { "math_id": 100, "text": "\\kappa_2" }, { "math_id": 101, "text": "\\kappa_3" }, { "math_id": 102, "text": "P =\\left ( \\frac{RT}{V_\\mathrm{m}-b} \\right ) -\\left [ \\frac{(a_1P+a_2)\\alpha}{V_\\mathrm{m}(V_\\mathrm{m}+b)+b(V_\\mathrm{m}-b)} \\right ]" }, { "math_id": 103, "text": "\\frac{p V_\\text{m}}{RT}=Z=1 + Z^{\\rm{rep}} + Z^{\\rm{att}}" }, { "math_id": 104, "text": "Z^{\\rm{rep}} = \\frac{4 c \\eta}{1-1.9 \\eta}" }, { "math_id": 105, "text": "Z^{\\rm{att}} = -\\frac{z_\\text{m} q \\eta Y}{1+ k_1 \\eta Y}" }, { "math_id": 106, "text": "c=1" }, { "math_id": 107, "text": "c=1+3.535\\omega+0.533\\omega^2" }, { "math_id": 108, "text": "\\eta" }, { "math_id": 109, "text": "\\eta=b \\rho" }, { "math_id": 110, "text": "\\rho = \\frac{1}{V_\\text{m}}= N/(N_\\text{A}V)" }, { "math_id": 111, "text": "b=\\frac{RT_\\text{c}}{P_\\text{c}}\\Phi" }, { "math_id": 112, "text": "\\Phi=\\frac{Z_\\text{c}^2}{2A_q}{[-B_q+\\sqrt{B_q^2+4A_qC_q }] }" }, { "math_id": 113, "text": "3Z_\\text{c}=([(-0.173/\\sqrt{c}+0.217)/\\sqrt{c}-0.186]/\\sqrt{c}+0.115)/\\sqrt{c}+1\n" }, { "math_id": 114, "text": "A_q=[1.9(9.5q-k_1)+4ck_1](4c-1.9)\n" }, { "math_id": 115, "text": "B_q=1.9k_1Z_\\text{c}+3A_q/(4c-1.9)\n" }, { "math_id": 116, "text": "C_q=(9.5q-k_1)/Z_\\text{c}\n" }, { "math_id": 117, "text": "q" }, { "math_id": 118, "text": "Y" }, { "math_id": 119, "text": "q=1+k_3(c-1)" }, { "math_id": 120, "text": "Y=\\exp\\left(\\frac{\\epsilon}{kT}\\right) - k_2" }, { "math_id": 121, "text": "\\epsilon" }, { "math_id": 122, "text": "Y_\\text{c} =(\\frac{R T_\\text{c}}{b P_\\text{c}})^2 \\frac{Z_\\text{c}^3}{A_q}" }, { "math_id": 123, "text": "z_\\text{m}" }, { "math_id": 124, "text": "k_1" }, { "math_id": 125, "text": "k_2" }, { "math_id": 126, "text": "k_3" }, { "math_id": 127, "text": "z_\\text{m} = 9.5" }, { "math_id": 128, "text": "k_1 = 1.7745" }, { "math_id": 129, "text": "k_2 = 1.0617" }, { "math_id": 130, "text": "k_3 = 1.90476." 
}, { "math_id": 131, "text": "4(k_3-1)/k_3" }, { "math_id": 132, "text": "Z^\\text{rep}" }, { "math_id": 133, "text": "Z^{\\rm{rep}} = 4q \\eta g-(q-1)\\frac{\\eta}{g}\\frac{dg}{d\\eta}= \\frac{4q\\eta}{1-1.9\\eta}-\\frac{(q-1)1.9\\eta}{1-1.9\\eta};g=\\frac{1}{1-1.9\\eta}" }, { "math_id": 134, "text": "m" }, { "math_id": 135, "text": "Z=1 + m(\\frac{4\\eta}{1-1.9\\eta} - \\frac{9.5Y\\eta}{1+k_1Y\\eta})-\\frac{(m-1)1.9\\eta}{1-1.9\\eta}\n" }, { "math_id": 136, "text": "Z^{\\rm{assoc}}" }, { "math_id": 137, "text": "\\frac{p V_\\text{m}}{RT}=Z=1 + Z^{\\rm{rep}} + Z^{\\rm{att}}+ Z^{\\rm{assoc}}" }, { "math_id": 138, "text": "Z^{\\rm{assoc}} = -gN^\\text{AD}(1-X^\\text{AD});X^\\text{AD}=2/[1+\\sqrt{1+4N^\\text{AD}\\alpha^\\text{AD}}];\\alpha^\\text{AD}=\\rho N_\\text{A}K^\\text{AD}[\\exp{(\\epsilon^\\text{AD}/kT)-1]}" }, { "math_id": 139, "text": "K^\\text{AD}" }, { "math_id": 140, "text": "\\epsilon^\\text{AD}" }, { "math_id": 141, "text": "K^\\text{AD} = \\mathrm{0.001\\ nm^3}" }, { "math_id": 142, "text": "\\epsilon^\\text{AD}/k_\\text{B}=\\mathrm{2000\\ K}" }, { "math_id": 143, "text": "N^\\text{AD}" }, { "math_id": 144, "text": "\\alpha^\\text{AD}" }, { "math_id": 145, "text": "p = \\frac{RT}{(V_\\mathrm{m} - b)} - \\frac{a}{V_\\mathrm{m} (V_\\mathrm{m} + b)} + \\frac{RT}{V_\\mathrm{m}} \\rho \\sum_{A} \\left[ \\frac{1}{X^\\text{A}} - \\frac{1}{2} \\right] \\frac{\\partial X^\\text{A}}{\\partial \\rho}" }, { "math_id": 146, "text": "X^\\text{A}" }, { "math_id": 147, "text": "F^\\mathrm{CPC}" }, { "math_id": 148, "text": "F^\\mathrm{CPC} = \\frac{A^\\mathrm{R} (T,V,{\\textbf{n}})}{RT}= m(F^\\mathrm{rep}+F^\\mathrm{att})+F^\\mathrm{chain} " }, { "math_id": 149, "text": "A^\\mathrm{R}" }, { "math_id": 150, "text": "p = \\frac{nRT}{V} \\Biggl(1+\\frac{\\bar{m}^2B}{V -\\bar{m}B}\\Biggr) - \\frac{\\bar{m}^2A}{V(V +\\bar{m}B )} - \\frac{ nRT }{V} \\left[\\sum_{i} n_i(m_i-1)\\beta \\frac{g'(\\beta)}{g(\\beta) } \\right] " }, { "math_id": 151, "text": " \\bar{m} " } ]
https://en.wikipedia.org/wiki?curid=69442178
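The formula list above includes a cubic equation of state written in the compressibility factor Z (the entries tagged math_id 82 through 90; the constants 0.45724 and 0.07780 and the kappa expression are those of the Peng–Robinson form). As a minimal sketch of how those pieces fit together, the following Python snippet assembles the dimensionless groups A and B from the critical constants and solves the cubic numerically. The critical temperature, pressure and acentric factor in the demo call are assumed, roughly methane-like values included only for illustration; they are not taken from the article.

import numpy as np

R = 8.314462618  # universal gas constant, J/(mol K)

def peng_robinson_Z(T, p, Tc, pc, omega):
    # a and b from the critical constants (cf. math_id 83, 84)
    a = 0.45724 * R**2 * Tc**2 / pc
    b = 0.07780 * R * Tc / pc
    # kappa and the temperature-dependent alpha (cf. math_id 86, 87)
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    # dimensionless groups (cf. math_id 88, 89)
    A = a * alpha * p / (R * T)**2
    B = b * p / (R * T)
    # cubic in Z (cf. math_id 90): Z^3 - (1-B)Z^2 + (A-2B-3B^2)Z - (AB-B^2-B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 2.0 * B - 3.0 * B**2, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = sorted(r.real for r in roots if abs(r.imag) < 1e-10)
    # when three real roots exist, the largest is the vapour-phase Z and the smallest the liquid-phase Z
    return real

# Assumed, methane-like demo values: Tc ~ 190.6 K, pc ~ 4.60 MPa, omega ~ 0.011
print(peng_robinson_Z(T=300.0, p=1.0e6, Tc=190.6, pc=4.60e6, omega=0.011))

Using numpy.roots keeps the sketch short; a more careful implementation would also select the physically relevant root (for example by comparing fugacities) rather than simply reporting all real roots.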
69443414
Prime graph
Undirected graph defined from a group In the mathematics of graph theory and finite groups, a prime graph is an undirected graph defined from a group. These graphs were introduced in a 1981 paper by J. S. Williams, who credited them to unpublished 1975 work by K. W. Gruenberg and O. Kegel. Definition. The prime graph of a group has a vertex for each prime number that divides the order (number of elements) of the group, and an edge connecting each pair of primes formula_0 and formula_1 for which the group has an element of order formula_2. Equivalently, there is an edge between formula_0 and formula_1 whenever the group contains commuting elements of order formula_0 and of order formula_1, or whenever it contains a cyclic group of order formula_2 as a subgroup. A short computational illustration of this definition is given after this entry. Properties. Certain finite simple groups can be recognized by the degrees of the vertices in their prime graphs. The connected components of a prime graph have diameter at most five, and at most three for solvable groups. When a prime graph is a tree, it has at most eight vertices, and at most four for solvable groups. Related graphs. Variations of the prime graph have also been studied in which adjacency is defined by the existence of a subgroup of some other type, in place of a cyclic subgroup of order formula_2. Similar results have been obtained from a related family of graphs, derived from a finite group through the degrees of its characters rather than the orders of its elements.
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "q" }, { "math_id": 2, "text": "pq" } ]
https://en.wikipedia.org/wiki?curid=69443414
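As a concrete illustration of the definition in the entry above, the sketch below computes the prime graph of the symmetric group on 5 points directly from element orders: the vertices are the primes dividing the group order 120, and an edge joins primes p and q exactly when some element has order divisible by p·q (such an element has a power of order exactly p·q). The choice of the symmetric group and the brute-force enumeration are illustrative assumptions, not part of the article.

from itertools import permutations, combinations
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def element_orders(n):
    # Orders of all elements of S_n: the lcm of the cycle lengths of each permutation.
    orders = set()
    for perm in permutations(range(n)):
        seen, lengths = set(), []
        for start in range(n):
            if start in seen:
                continue
            length, j = 0, start
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            lengths.append(length)
        orders.add(reduce(lcm, lengths, 1))
    return orders

def prime_factors(m):
    factors, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            factors.add(d)
            m //= d
        d += 1
    if m > 1:
        factors.add(m)
    return factors

def prime_graph(n):
    orders = element_orders(n)
    group_order = 1
    for k in range(2, n + 1):
        group_order *= k
    vertices = sorted(prime_factors(group_order))
    # Edge p-q iff some element order is divisible by p*q,
    # equivalently iff an element of order exactly p*q exists.
    edges = {(p, q) for p, q in combinations(vertices, 2)
             if any(o % (p * q) == 0 for o in orders)}
    return vertices, edges

# Vertices [2, 3, 5]; the only edge is (2, 3), since S_5 has elements of order 6
# but none of order 10 or 15.
print(prime_graph(5))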
69449760
Chandrasekhar–Fermi method
Astrophysics equation The Chandrasekhar–Fermi method (CF method), also known as the Davis–Chandrasekhar–Fermi method, is used to estimate the mean strength of the interstellar magnetic field projected on the plane of the sky. The method was described by Leverett Davis Jr. in 1951 and independently by Subrahmanyan Chandrasekhar and Enrico Fermi in 1953. According to this method, the magnetic field formula_0 in the plane of the sky is given by formula_1 where formula_2 is the mass density, formula_3 is the line-of-sight velocity dispersion, formula_4 is the dispersion of polarization angles, and formula_5 is a factor of order unity, typically taken to be formula_6. The method is also employed for prestellar molecular clouds. A short numerical illustration follows this entry.
[ { "math_id": 0, "text": "B" }, { "math_id": 1, "text": "B = Q\\sqrt{4\\pi\\rho} \\frac{\\delta v}{\\delta \\phi}" }, { "math_id": 2, "text": "\\rho" }, { "math_id": 3, "text": "\\delta v" }, { "math_id": 4, "text": "\\delta \\phi" }, { "math_id": 5, "text": "Q" }, { "math_id": 6, "text": "Q\\approx 0.5" } ]
https://en.wikipedia.org/wiki?curid=69449760
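The relation in the entry above is straightforward to evaluate once the three observables are in hand. The sketch below does so in Gaussian (CGS) units, in which the formula yields the field in gauss when the density is in g cm^-3, the velocity dispersion in cm s^-1 and the angle dispersion in radians. The numerical inputs are assumed values loosely typical of a molecular cloud, chosen only for illustration.

import math

def cf_field_strength(rho_g_cm3, dv_cm_s, dphi_rad, Q=0.5):
    # Plane-of-sky field from the Davis-Chandrasekhar-Fermi relation
    # B = Q * sqrt(4*pi*rho) * (dv / dphi), evaluated in CGS so the result is in gauss.
    return Q * math.sqrt(4.0 * math.pi * rho_g_cm3) * dv_cm_s / dphi_rad

# Assumed, illustrative cloud parameters:
# molecular hydrogen number density ~ 1e4 cm^-3  ->  rho ~ 1e4 * 2 * 1.67e-24 g cm^-3,
# line-of-sight velocity dispersion ~ 0.5 km/s, polarization-angle dispersion ~ 10 degrees.
rho = 1.0e4 * 2.0 * 1.67e-24   # g cm^-3
dv = 0.5e5                     # cm s^-1
dphi = math.radians(10.0)      # radians
B_gauss = cf_field_strength(rho, dv, dphi)
print(f"B ~ {B_gauss * 1e6:.0f} microgauss")

With these assumed inputs the estimate comes out at roughly a hundred microgauss, which is the order of magnitude typically quoted for such clouds; the point of the sketch is only to show how the three measured dispersions combine, not to reproduce any particular published value.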